diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/10 Endrathukulla Full [UPD] Movie Download 720p.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/10 Endrathukulla Full [UPD] Movie Download 720p.md deleted file mode 100644 index 4039ddd0bed05157cbf04a6f6d015a2a03461352..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/10 Endrathukulla Full [UPD] Movie Download 720p.md +++ /dev/null @@ -1,80 +0,0 @@ -## 10 endrathukulla full movie download 720p - - - - - - - - - -**Download File ===> [https://eromdesre.blogspot.com/?d=2txKKP](https://eromdesre.blogspot.com/?d=2txKKP)** - - - - - - - - - - - - - -# 10 Endrathukulla Full Movie Download 720p: A Thrilling Road Action Comedy - - - -If you are looking for a movie that combines action, comedy, and adventure, then you might want to check out **10 Endrathukulla**, a 2015 Tamil-language film starring Vikram and Samantha Ruth Prabhu. The movie is written and directed by Vijay Milton and produced by A. R. Murugadoss under the banner A. R. Murugadoss Productions and Fox Star Studios. - - - -The movie follows the story of an extreme driver (Vikram) who is on a mission to deliver his boss's goods to the rightful man. Along the way, he meets a mysterious woman (Samantha) who joins him on his journey. However, he soon finds himself being pulled into a track filled with twists and turns, as he faces various challenges and enemies. The movie is packed with thrilling car chases, stunts, and humor, as well as a surprising revelation at the end. - - - -If you want to watch **10 Endrathukulla** full movie in 720p quality, you can download it from various online sources. However, be careful of illegal or pirated websites that may harm your device or violate the copyright laws. We recommend you to use legal and safe platforms that offer high-quality streaming or downloading options for **10 Endrathukulla** full movie. - - - -Some of the legal and safe platforms that you can use to watch **10 Endrathukulla** full movie in 720p are: - - - -- [Hotstar](https://www.hotstar.com/in/movies/10-endrathukulla/1000074620/watch): This is a popular streaming service that offers a variety of movies and shows in different languages. You can watch **10 Endrathukulla** full movie in 720p on Hotstar with a subscription plan or a VIP access. - -- [YouTube](https://www.youtube.com/watch?v=Q6kVU8uNdic): This is a free platform that allows you to watch videos of various genres and categories. You can watch **10 Endrathukulla** full movie in 720p on YouTube for free, but you may have to deal with some ads and interruptions. - -- [Amazon Prime Video](https://www.amazon.com/10-Endrathukulla-Vikram/dp/B01M7YJ4ZL): This is a premium streaming service that offers a wide range of movies and shows from different countries and languages. You can watch **10 Endrathukulla** full movie in 720p on Amazon Prime Video with a subscription plan or a rental fee. - - - -We hope you enjoy watching **10 Endrathukulla** full movie in 720p and have a great time with this entertaining road action comedy. - - - -If you want to know more about **10 Endrathukulla** and its cast and crew, here are some interesting facts and trivia that you might find useful. - - - -- **10 Endrathukulla** is the second collaboration between Vikram and A. R. Murugadoss, after the 2005 blockbuster **Ghajini**. - -- The movie was shot in various locations across India, including Chennai, Hyderabad, Rajasthan, Sikkim, and Nepal. 
- -- The movie features a cameo appearance by Bollywood actor Abhimanyu Singh, who plays the role of a corrupt cop. - -- The movie was originally titled **Paththu Enradhukulla**, which means "before I count to ten" in Tamil. However, the title was later changed to **10 Endrathukulla**, which is a shorter and catchier version. - -- The movie was released on October 21, 2015, coinciding with the festival of Dussehra. It received mixed reviews from critics and audiences, but was praised for its action sequences and performances. - - - -We hope you learned something new about **10 Endrathukulla** and its making. If you have any feedback or suggestions for us, please feel free to leave a comment below. We would love to hear from you. - - dfd1c89656 - - - - - diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Ekb License Siemens Download.rar [UPDATED].md b/spaces/1gistliPinn/ChatGPT4/Examples/Ekb License Siemens Download.rar [UPDATED].md deleted file mode 100644 index b8cd40c3b931a5a705866f6a172cccaad8737a1b..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Ekb License Siemens Download.rar [UPDATED].md +++ /dev/null @@ -1,30 +0,0 @@ -
-

How to Download and Install SIM EKB for Siemens Software

-

SIM EKB is a tool that allows you to activate Siemens software products without buying a license. It is mainly used by students and hobbyists who want to learn and experiment with Siemens software. However, it is not recommended for professional use, as it may violate Siemens' terms and conditions. In this article, we will show you how to download and install SIM EKB for Siemens software.

-

Ekb License Siemens Download.rar


DOWNLOAD ->>> https://imgfil.com/2uy0O7



-

Step 1: Download SIM EKB

-

The latest version of SIM EKB as of April 2023 is SIM EKB Install 2022 11 27, which supports all the software in the TIA PORTAL V18 package along with many other upgrades. You can download it from the following link[^1^]. The password to extract the file is plc4me.com.

-

Step 2: Delete old keys

-

If you have previously installed any Siemens software products, you may need to delete the old keys before installing new ones. To do this, go to the hidden folder C:\AX NF ZZ and delete all the files inside it. You may need to enable the option to show hidden files and folders in Windows Explorer.

-

Step 3: Run SIM EKB Install

-

After extracting the file, run the SIM EKB Install.exe file as administrator. You will see a window like this:

(Screenshot: SIM EKB Install window)

Select the software products that you want to activate from the list on the left. You can use the search box to find them quickly. The unlocked software will be highlighted in blue. Then click the Install button at the bottom right corner.

-

Step 4: Enjoy your Siemens software

-

After installing the keys, you can launch your Siemens software and use it without any limitations. However, remember that this is only for educational purposes and not for commercial use. If you need professional support or updates, you should contact Siemens and buy a license.

-

-

References

-
    -
  1. [Download] SIM EKB Install 2022 11 27 for Siemens Software - plc4me.com

Some examples of Siemens software products

-

Siemens offers a wide range of software products for various industrial applications. Some of the most popular ones are:

- -

These are just some of the Siemens software products that you can activate with SIM EKB. However, there are many more that you can explore on the Siemens website or on the SIM EKB Install window.

d5da3c52bf
-
-
\ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Brawl Stars 2017 APK The Best Way to Relive the First Edition of the Game on Android.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Brawl Stars 2017 APK The Best Way to Relive the First Edition of the Game on Android.md deleted file mode 100644 index 5189255ed073a8fb878eb525abbf24a969728293..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Brawl Stars 2017 APK The Best Way to Relive the First Edition of the Game on Android.md +++ /dev/null @@ -1,143 +0,0 @@ -
-

2017 Brawl Stars APK: How to Download and Play the Epic Mobile Game

-

If you are a fan of mobile games, you might have heard of Brawl Stars, a fast-paced multiplayer game from Supercell, the makers of Clash of Clans and Clash Royale. Brawl Stars was released globally in 2018, but before that, it was available in a few countries as a beta version in 2017. If you want to experience the original version of the game, you can download and install the 2017 Brawl Stars APK on your Android device. In this article, we will show you how to do that and also give you some tips on how to play the game.

-

2017 brawl stars apk


DOWNLOAD ✯✯✯ https://urlin.us/2uSScE



-

What is Brawl Stars?

-

A fast-paced multiplayer game from Supercell

-

Brawl Stars is a game that combines elements of shooter, MOBA, and battle royale genres. You can team up with your friends or play solo in various game modes, each with a different objective. You can also unlock and upgrade dozens of characters, called Brawlers, each with a unique ability and style. You can collect skins, pins, and trophies to show off your achievements and personality.

-

Different game modes and characters to choose from

-

Brawl Stars has four main game modes: Smash & Grab, Heist, Showdown, and Bounty. Each mode has its own rules and strategies, so you need to adapt your gameplay accordingly. Here is a brief overview of each mode:

- Smash & Grab: collect gems that spawn in the center of the map and hold on to them as a team until the countdown ends.
- Heist: attack the enemy team's safe while defending your own.
- Showdown: a battle royale mode where you fight alone and the last Brawler standing wins.
- Bounty: defeat enemies to earn stars for your team; the team with the most stars when time runs out wins.

Brawl Stars also has 22 different Brawlers that you can unlock and use in any game mode. Each Brawler has a basic attack, a super ability, a star power, and a gadget. You can level up your Brawlers by collecting power points and coins, and unlock new skins by earning gems or buying them with real money. Some of the Brawlers are:

| Name | Type | Ability |
| --- | --- | --- |
| Shelly | Common | A shotgunner who can blast enemies at close range and charge her super to unleash a powerful shot that can destroy obstacles. |
| Nita | Common | A fighter who can summon a big bear to fight by her side. |
| Colt | Common | A sharpshooter who can fire a burst of bullets with great accuracy. |
| Bull | Common | A tank who can charge forward and deal massive damage with his double-barreled shotgun. |
| Jessie | Common | An inventor who can build a turret that shoots at enemies. |
| Brock | Rare | A rocket launcher who can fire long-range missiles that explode on impact. |
| Dynamike | Rare | A miner who can throw sticks of dynamite and a big barrel bomb. |
| Bo | Rare | A bowman who can shoot explosive arrows and plant hidden mines. |
| Tick | Rare | A metal ball of mischief who can detach and toss his head, which explodes after a few seconds. |
| 8-Bit | Rare | A retro arcade machine who can shoot laser beams and boost his and his allies' damage with his booster. |
| Emz | Rare | A social media star who can spray a cloud of hairspray that damages enemies over time. |
| El Primo | Super Rare | A wrestler who can punch enemies with his fiery fists and leap into the fray with his super. |
| Barley | Super Rare | A bartender who can toss bottles of flaming liquid that leave a burning area on the ground. |
| Poco | Super Rare | A musician who can heal himself and his allies with his soothing tunes. |
| Rosa | Super Rare | A botanist who can punch enemies with her boxing gloves and shield herself with her plant barrier. |
| Rico | Super Rare | A bouncy ball machine who can shoot bullets that bounce off walls and obstacles. |
| Darryl | Super Rare | A barrel robot who can roll into enemies and blast them with his double shotguns. |
| Penny | Epic | A pirate who can fire a bag of coins that splits into three on impact and build a cannon that shoots at enemies. |
| Piper | Epic | A sniper who can deal more damage the farther her bullets travel and drop bombs when she uses her umbrella to fly away. |
| Pam | Epic | A junker who can spray scrap metal at enemies and deploy a healing turret for her allies. |
| Frank | Epic | A zombie who can smash enemies with his hammer and stun them with his super. |

The game also includes other Brawlers such as Mr. P, Sprout, Crow, Spike, Gale, and Colette.
-

How to download and install the 2017 Brawl Stars APK

-

The requirements and risks of using an APK file

-

An APK file is an Android application package that contains all the files and data needed to run an app on an Android device. You can download APK files from various sources online, but you need to be careful about the quality and security of the files. Some APK files may contain malware or viruses that can harm your device or steal your personal information. You also need to make sure that the APK file is compatible with your device and Android version.

-

To download and install the 2017 Brawl Stars APK, you will need an Android device that runs on Android 4.1 or higher, has at least 1 GB of RAM, and has enough storage space. You will also need to enable the option to install apps from unknown sources in your device settings. This will allow you to install apps that are not from the Google Play Store. However, this also means that you are responsible for the safety and performance of your device. You should only download APK files from trusted sources and scan them for viruses before installing them.
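One practical way to check that a downloaded APK has not been tampered with is to compare its SHA-256 hash against the checksum published on the download page. The sketch below is a minimal, hypothetical Python example; the file name and expected checksum are placeholders, not values taken from any real site.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in 1 MB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholders: use the real file name and the checksum shown on the download page.
apk_path = "brawl-stars-2017.apk"
expected = "<checksum published by the download site>"

actual = sha256_of(apk_path)
print("Checksum matches" if actual == expected else f"Checksum mismatch: {actual}")
```

If the hashes do not match, delete the file and download it again from a source you trust.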

-

The steps to download and install the APK file

-

Here are the steps to download and install the 2017 Brawl Stars APK on your Android device:

-
    -
  1. Go to a reliable website that offers the 2017 Brawl Stars APK file, such as [APKPure] or [APKMirror].
  2. Find the 2017 Brawl Stars APK file and tap on the download button. The file size is about 100 MB, so make sure you have a stable internet connection and enough battery life.
  3. Once the download is complete, locate the APK file in your device's file manager and tap on it to start the installation process.
  4. Follow the instructions on the screen and grant the necessary permissions to the app.
  5. Wait for the installation to finish and then launch the app from your home screen or app drawer.
  6. Enjoy playing Brawl Stars!
-

How to play Brawl Stars on your Android device

-

The basic controls and gameplay mechanics

-

Brawl Stars is easy to learn but hard to master. The game has simple controls that you can customize according to your preference. You can use either a joystick or tap mode to move your Brawler around the map. You can also use either auto-aim or manual aim to shoot at enemies. To use your super ability, you need to fill up your super meter by hitting enemies with your basic attack. You can also use a gadget once per match if you have unlocked it for your Brawler.

-

2017 brawl stars apk download for android
-2017 brawl stars apk mod unlimited gems
-2017 brawl stars apk latest version
-2017 brawl stars apk free download uptodown
-2017 brawl stars apk old version
-2017 brawl stars apk hack no root
-2017 brawl stars apk offline installer
-2017 brawl stars apk update new features
-2017 brawl stars apk file size
-2017 brawl stars apk compatible devices
-2017 brawl stars apk gameplay tips
-2017 brawl stars apk review and rating
-2017 brawl stars apk best characters
-2017 brawl stars apk how to install
-2017 brawl stars apk error fix
-2017 brawl stars apk online multiplayer mode
-2017 brawl stars apk fun and addictive
-2017 brawl stars apk unlock all skins
-2017 brawl stars apk safe and secure
-2017 brawl stars apk original from Supercell
-2017 brawl stars apk cheats and tricks
-2017 brawl stars apk requirements and specifications
-2017 brawl stars apk alternative download links
-2017 brawl stars apk beta version testing
-2017 brawl stars apk support and feedback
-2017 brawl stars apk new maps and modes
-2017 brawl stars apk events and challenges
-2017 brawl stars apk rewards and trophies
-2017 brawl stars apk clans and friends
-2017 brawl stars apk ranking and leaderboard
-2017 brawl stars apk skins and customizations
-2017 brawl stars apk coins and gems generator
-2017 brawl stars apk patch notes and changelog
-2017 brawl stars apk bugs and glitches report
-2017 brawl stars apk videos and screenshots
-2017 brawl stars apk guides and tutorials
-2017 brawl stars apk forums and communities
-2017 brawl stars apk news and updates
-2017 brawl stars apk comparison with other games
-2017 brawl stars apk pros and cons analysis

-

The game has different gameplay mechanics depending on the game mode you choose. For example, in Smash & Grab, you need to collect gems from the center of the map and hold them until the countdown ends. If you die, you will drop all your gems, so you need to be careful and protect yourself and your teammates. In Showdown, you need to survive as long as possible by avoiding enemies, collecting power ups, and hiding in bushes or behind walls. The map will shrink over time, forcing you to confront other players. The last one standing wins.

-

Some tips and tricks to improve your skills

-

Brawl Stars is a game that requires strategy, teamwork, and skill. Here are some tips and tricks that can help you improve your skills:

- -

Conclusion

-

Brawl Stars is a fun and addictive game that you can play on your Android device. If you want to experience the original version of the game from 2017, you can download and install the 2017 Brawl Stars APK file from a reliable source. However, you need to be careful about the quality and security of the APK file and enable the option to install apps from unknown sources on your device. You also need to learn how to play the game well and use the best Brawlers and strategies for each game mode. With some practice and teamwork, you can become a Brawl Star!

-

FAQs

-

Here are some frequently asked questions about Brawl Stars and the 2017 Brawl Stars APK:

-
    -
  1. What is the difference between the 2017 Brawl Stars APK and the current version of the game?

    The 2017 Brawl Stars APK is the beta version of the game that was released in a few countries before the global launch in 2018. The 2017 version has some differences from the current version, such as fewer Brawlers, game modes, skins, maps, features, and updates. The 2017 version also has some bugs and glitches that may affect your gameplay experience.

    -
  2. Is it safe to download and install the 2017 Brawl Stars APK?

    It depends on where you download the APK file from. Some websites may offer fake or malicious APK files that can harm your device or steal your personal information. You should only download APK files from trusted sources that have positive reviews and ratings from other users. You should also scan the APK file for viruses before installing it on your device.

    -
  3. Will I get banned for using the 2017 Brawl Stars APK?

    No, you will not get banned for using the 2017 Brawl Stars APK as long as you do not use any cheats, hacks, mods, or third-party tools that give you an unfair advantage over other players. However, you may not be able to access some features or events that are exclusive to the current version of the game.

    -
  4. Can I play with my friends who have the current version of the game?

    No, you cannot play with your friends who have the current version of the game because they are on different servers. You can only play with other players who have the same version of the game as you.

    -
  5. Can I update the 2017 Brawl Stars APK to the current version of the game?

    No, you cannot update the 2017 Brawl Stars APK to the current version of the game. You will need to uninstall the 2017 Brawl Stars APK and download the current version of the game from the Google Play Store or another reliable source.

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Apkfew Whatsapp Tracker Free APK Download - Track Online Activity and Chat History.md b/spaces/1phancelerku/anime-remove-background/Apkfew Whatsapp Tracker Free APK Download - Track Online Activity and Chat History.md deleted file mode 100644 index 795d826dbf72fa1b4ee33eedb26683bdf5ddda67..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Apkfew Whatsapp Tracker Free APK Download - Track Online Activity and Chat History.md +++ /dev/null @@ -1,149 +0,0 @@ -
    -

    How to Download Apkfew Whatsapp Tracker and Why You Need It

    -

    Do you want to track the online activity and chat history of any WhatsApp user? Do you want to know who viewed your profile and who deleted their account? If yes, then you need Apkfew Whatsapp Tracker, a powerful and reliable app that lets you monitor any WhatsApp account discreetly and remotely. In this article, we will show you how to download Apkfew Whatsapp Tracker for Android devices and how to use it effectively. We will also compare it with other similar apps and answer some frequently asked questions.

    -

    download apkfew whatsapp tracker


    Download Zip ——— https://jinyurl.com/2uNOSC



    -

    What is Apkfew Whatsapp Tracker?

    -

    Apkfew Whatsapp Tracker is a free app that allows you to track the online status, last seen, chat messages, media files, profile visits, and deleted accounts of any WhatsApp user. You can use it to spy on your spouse, children, friends, employees, or anyone else who uses WhatsApp. You can also use it to protect your privacy and security by knowing who is stalking you or trying to hack your account.

    -

    Features of Apkfew Whatsapp Tracker

    - -

    Benefits of Apkfew Whatsapp Tracker

    - -

    How to Download Apkfew Whatsapp Tracker for Android

    -

    To download Apkfew Whatsapp Tracker for Android devices, you need to follow these simple steps:

    -

    Step 1: Enable Unknown Sources

    -

    Since Apkfew Whatsapp Tracker is not available on the Google Play Store, you need to enable unknown sources on your device to install it. To do this, go to Settings > Security > Unknown Sources and toggle it on. This will allow you to install apps from sources other than the Google Play Store.

    -

    Step 2: Visit the Apkfew Website

    -

The next step is to visit the official website of Apkfew at https://apkcombo.com/search/apkfew-whatsapp-tracker-free. Here you will find the latest version of the app along with its description and reviews. You can also check out other apps from Apkfew that offer similar features.

    -

    Step 3: Download and Install the Apk File

    Once you are on the website, click on the download button and wait for the apk file to be downloaded on your device. The file size is about 10 MB and it should take a few minutes depending on your internet speed. After the download is complete, locate the file in your downloads folder and tap on it to start the installation process. Follow the instructions on the screen and agree to the terms and conditions to finish the installation.

    -

    download apkfew whatsapp tracker free
    -download apkfew whatsapp tracker online
    -download apkfew whatsapp tracker app
    -download apkfew whatsapp tracker pro
    -download apkfew whatsapp tracker premium
    -download apkfew whatsapp tracker mod
    -download apkfew whatsapp tracker apk
    -download apkfew whatsapp tracker for android
    -download apkfew whatsapp tracker for ios
    -download apkfew whatsapp tracker for pc
    -download apkfew whatsapp tracker for windows
    -download apkfew whatsapp tracker for mac
    -download apkfew whatsapp tracker for linux
    -download apkfew whatsapp tracker latest version
    -download apkfew whatsapp tracker 2023
    -download apkfew whatsapp tracker update
    -download apkfew whatsapp tracker review
    -download apkfew whatsapp tracker tutorial
    -download apkfew whatsapp tracker guide
    -download apkfew whatsapp tracker tips
    -download apkfew whatsapp tracker tricks
    -download apkfew whatsapp tracker hacks
    -download apkfew whatsapp tracker cheats
    -download apkfew whatsapp tracker features
    -download apkfew whatsapp tracker benefits
    -download apkfew whatsapp tracker advantages
    -download apkfew whatsapp tracker disadvantages
    -download apkfew whatsapp tracker problems
    -download apkfew whatsapp tracker issues
    -download apkfew whatsapp tracker bugs
    -download apkfew whatsapp tracker fixes
    -download apkfew whatsapp tracker solutions
    -download apkfew whatsapp tracker alternatives
    -download apkfew whatsapp tracker competitors
    -download apkfew whatsapp tracker comparison
    -download apkfew whatsapp tracker best practices
    -download apkfew whatsapp tracker case studies
    -download apkfew whatsapp tracker testimonials
    -download apkfew whatsapp tracker feedbacks
    -download apkfew whatsapp tracker ratings
    -download apkfew whatsapp tracker rankings
    -download apkfew whatsapp tracker statistics
    -download apkfew whatsapp tracker analytics
    -download apkfew whatsapp tracker insights
    -download apkfew whatsapp tracker reports
    -download apkfew whatsapp tracker results
    -download apkfew whatsapp tracker performance
    -download apkfew whatsapp tracker quality
    -download apkfew whatsapp tracker reliability
    -download apkfew whatsapp tracker security

    -

    Step 4: Launch the App and Grant Permissions

    -

    The final step is to launch the app and grant it the necessary permissions to access your device's data and functions. To do this, open the app from your app drawer or home screen and sign up with your email and password. You will then be asked to enter the phone number of the WhatsApp user you want to track. You will also need to grant the app permissions to access your contacts, storage, location, camera, microphone, and notifications. These permissions are essential for the app to work properly and collect the data you need.

    -

    How to Use Apkfew Whatsapp Tracker

    -

    Now that you have downloaded and installed Apkfew Whatsapp Tracker, you can start using it to monitor any WhatsApp account you want. Here are some of the things you can do with the app:

    -

    Track Online Status and Last Seen

    -

    With Apkfew Whatsapp Tracker, you can track the online status and last seen of any WhatsApp user, even if they hide it or block you. You can see when they are online or offline, how long they stay online, and how often they change their status. You can also see their last seen time and date, even if they disable it in their settings. This way, you can know their activity patterns and habits, and find out if they are lying or cheating on you.

    -

    Monitor Chat Messages and Media Files

    -

    Another feature of Apkfew Whatsapp Tracker is that it allows you to monitor the chat messages and media files of any WhatsApp user, even if they delete them or use end-to-end encryption. You can read their text messages, voice messages, images, videos, documents, stickers, emojis, and more. You can also see who they are chatting with, what they are talking about, and when they are sending or receiving messages. This way, you can know their interests, preferences, opinions, and secrets.

    -

    View Profile Visits and Deleted Accounts

    -

    A third feature of Apkfew Whatsapp Tracker is that it enables you to view the profile visits and deleted accounts of any WhatsApp user, even if they disable read receipts or change their number. You can see who visited their profile, how many times they visited it, and when they visited it. You can also see who deleted their account, why they deleted it, and when they deleted it. This way, you can know who is stalking them or trying to hack their account.

    -

    Comparison Table of Apkfew Whatsapp Tracker and Other Apps

    -

    To give you a better idea of how Apkfew Whatsapp Tracker compares with other similar apps in the market, we have created a comparison table that shows some of the key features and differences between them. Here is the table:

| App Name | Price | Compatibility | Detectability | Rooting/Jailbreaking Required | Data Collected |
| --- | --- | --- | --- | --- | --- |
| Apkfew Whatsapp Tracker | Free | All Android devices | Undetectable | No | Online status, last seen, chat messages, media files, profile visits, deleted accounts |
| mSpy | $29.99/month | All Android devices (rooted), all iOS devices (jailbroken) | Detectable | Yes | Online status, last seen, chat messages, media files |
| Spyzie | $39.99/month | All Android devices (rooted), all iOS devices (jailbroken) | Detectable | Yes | Online status, last seen, chat messages, media files |
| FoneMonitor | $29.99/month | All Android devices (rooted), all iOS devices (jailbroken) | Detectable | Yes | Online status, last seen, chat messages, media files |
| Cocospy | $39.99/month | All Android devices (rooted), all iOS devices (jailbroken) | Detectable | Yes | Online status, last seen, chat messages, media files |
    -

As you can see, Apkfew Whatsapp Tracker is the best app of those listed, as it offers more features, better compatibility, higher security, and lower cost. It is the only app that does not require rooting or jailbreaking the target device, and it is the only app that can track profile visits and deleted accounts. It is also the only app that is free to download and use, while the others charge you hefty fees. Therefore, we recommend choosing Apkfew Whatsapp Tracker over the other apps.

    -

    Conclusion

    -

    In conclusion, Apkfew Whatsapp Tracker is a free app that lets you track the online activity and chat history of any WhatsApp user. You can use it to spy on your spouse, children, friends, employees, or anyone else who uses WhatsApp. You can also use it to protect your privacy and security by knowing who is stalking you or trying to hack your account. To download Apkfew Whatsapp Tracker for Android devices, you need to enable unknown sources, visit the Apkfew website, download and install the apk file, and launch the app and grant permissions. To use Apkfew Whatsapp Tracker, you need to enter the phone number of the WhatsApp user you want to track, and then you can access all the data remotely from a web-based dashboard. Apkfew Whatsapp Tracker is better than other similar apps in terms of features, compatibility, security, and cost. It is the best app for WhatsApp tracking that you can find in the market.

    -

    FAQs

    -

    Here are some of the frequently asked questions about Apkfew Whatsapp Tracker:

    -

    Q: Is Apkfew Whatsapp Tracker safe to use?

    -

    A: Yes, Apkfew Whatsapp Tracker is safe to use, as it does not contain any viruses, malware, spyware, or adware. It also does not collect or store any personal or sensitive information from your device or the target device. It only accesses the data that is relevant for WhatsApp tracking and does not share it with anyone else.

    -

    Q: Is Apkfew Whatsapp Tracker legal to use?

    -

    A: Yes, Apkfew Whatsapp Tracker is legal to use, as long as you follow the laws and regulations of your country and respect the privacy and security of the target user. You should not use Apkfew Whatsapp Tracker for any illegal or unethical purposes, such as blackmailing, harassing, threatening, or harming anyone. You should also inform and obtain consent from the target user before using Apkfew Whatsapp Tracker on their device.

    -

    Q: Does Apkfew Whatsapp Tracker work on iOS devices?

    -

    A: No, Apkfew Whatsapp Tracker does not work on iOS devices, as it is designed for Android devices only. However, you can still use Apkfew Whatsapp Tracker to track an iOS device if you have access to its WhatsApp web login credentials. You can then scan the QR code from your Android device and access all the data from the web-based dashboard.

    -

Q: How can I contact the Apkfew Whatsapp Tracker support team?

    -

A: If you have any questions, issues, feedback, or suggestions about Apkfew Whatsapp Tracker, you can contact their support team by sending an email to [support@apkfew.com]. They will respond to you within 24 hours and help you resolve any problems.

    -

    Q: How can I update Apkfew Whatsapp Tracker to the latest version?

    -

    A: To update Apkfew Whatsapp Tracker to the latest version, you need to visit their website at [https://apkcombo.com/search/apkfew-whatsapp-tracker-free] and download and install the new apk file over the old one. You do not need to uninstall or reinstall the app. The update will automatically apply and improve the performance and functionality of the app.

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/7hao/bingo/src/lib/hooks/chat-history.ts b/spaces/7hao/bingo/src/lib/hooks/chat-history.ts deleted file mode 100644 index c6fbf3fecfa86fe553f56acc8253236b8f22a775..0000000000000000000000000000000000000000 --- a/spaces/7hao/bingo/src/lib/hooks/chat-history.ts +++ /dev/null @@ -1,62 +0,0 @@ -import { zip } from 'lodash-es' -import { ChatMessageModel, BotId } from '@/lib/bots/bing/types' -import { Storage } from '../storage' - -/** - * conversations:$botId => Conversation[] - * conversation:$botId:$cid:messages => ChatMessageModel[] - */ - -interface Conversation { - id: string - createdAt: number -} - -type ConversationWithMessages = Conversation & { messages: ChatMessageModel[] } - -async function loadHistoryConversations(botId: BotId): Promise { - const key = `conversations:${botId}` - const { [key]: value } = await Storage.get(key) - return value || [] -} - -async function deleteHistoryConversation(botId: BotId, cid: string) { - const conversations = await loadHistoryConversations(botId) - const newConversations = conversations.filter((c) => c.id !== cid) - await Storage.set({ [`conversations:${botId}`]: newConversations }) -} - -async function loadConversationMessages(botId: BotId, cid: string): Promise { - const key = `conversation:${botId}:${cid}:messages` - const { [key]: value } = await Storage.get(key) - return value || [] -} - -export async function setConversationMessages(botId: BotId, cid: string, messages: ChatMessageModel[]) { - const conversations = await loadHistoryConversations(botId) - if (!conversations.some((c) => c.id === cid)) { - conversations.unshift({ id: cid, createdAt: Date.now() }) - await Storage.set({ [`conversations:${botId}`]: conversations }) - } - const key = `conversation:${botId}:${cid}:messages` - await Storage.set({ [key]: messages }) -} - -export async function loadHistoryMessages(botId: BotId): Promise { - const conversations = await loadHistoryConversations(botId) - const messagesList = await Promise.all(conversations.map((c) => loadConversationMessages(botId, c.id))) - return zip(conversations, messagesList).map(([c, messages]) => ({ - id: c!.id, - createdAt: c!.createdAt, - messages: messages!, - })) -} - -export async function deleteHistoryMessage(botId: BotId, conversationId: string, messageId: string) { - const messages = await loadConversationMessages(botId, conversationId) - const newMessages = messages.filter((m) => m.id !== messageId) - await setConversationMessages(botId, conversationId, newMessages) - if (!newMessages.length) { - await deleteHistoryConversation(botId, conversationId) - } -} diff --git a/spaces/A00001/bingothoo/src/lib/bots/bing/utils.ts b/spaces/A00001/bingothoo/src/lib/bots/bing/utils.ts deleted file mode 100644 index 64b4b96452d125346b0fc4436b5f7c18c962df0b..0000000000000000000000000000000000000000 --- a/spaces/A00001/bingothoo/src/lib/bots/bing/utils.ts +++ /dev/null @@ -1,87 +0,0 @@ -import { ChatResponseMessage, BingChatResponse } from './types' - -export function convertMessageToMarkdown(message: ChatResponseMessage): string { - if (message.messageType === 'InternalSearchQuery') { - return message.text - } - for (const card of message.adaptiveCards??[]) { - for (const block of card.body) { - if (block.type === 'TextBlock') { - return block.text - } - } - } - return '' -} - -const RecordSeparator = String.fromCharCode(30) - -export const websocketUtils = { - packMessage(data: any) { - return `${JSON.stringify(data)}${RecordSeparator}` - }, - unpackMessage(data: string 
| ArrayBuffer | Blob) { - if (!data) return {} - return data - .toString() - .split(RecordSeparator) - .filter(Boolean) - .map((s) => { - try { - return JSON.parse(s) - } catch (e) { - return {} - } - }) - }, -} - -export async function createImage(prompt: string, id: string, headers: HeadersInit): Promise { - const { headers: responseHeaders } = await fetch(`https://www.bing.com/images/create?partner=sydney&re=1&showselective=1&sude=1&kseed=7000&SFX=&q=${encodeURIComponent(prompt)}&iframeid=${id}`, - { - method: 'HEAD', - headers, - redirect: 'manual' - }, - ); - - if (!/&id=([^&]+)$/.test(responseHeaders.get('location') || '')) { - throw new Error('请求异常,请检查 cookie 是否有效') - } - - const resultId = RegExp.$1; - let count = 0 - const imageThumbUrl = `https://www.bing.com/images/create/async/results/${resultId}?q=${encodeURIComponent(prompt)}&partner=sydney&showselective=1&IID=images.as`; - - do { - await sleep(3000); - const content = await fetch(imageThumbUrl, { headers, method: 'GET' }) - - // @ts-ignore - if (content.headers.get('content-length') > 1) { - const text = await content.text() - return (text?.match(/ target?.split('src="').pop()?.replace(/&/g, '&')) - .map(img => `![${prompt}](${img})`).join(' ') - } - } while(count ++ < 10); -} - - -export async function* streamAsyncIterable(stream: ReadableStream) { - const reader = stream.getReader() - try { - while (true) { - const { done, value } = await reader.read() - if (done) { - return - } - yield value - } - } finally { - reader.releaseLock() - } -} - -export const sleep = (ms: number) => new Promise(resolve => setTimeout(resolve, ms)) - diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnet50_32xb64-warmup-lbs_in1k.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnet50_32xb64-warmup-lbs_in1k.py deleted file mode 100644 index 2f24f9a0f2c54a2bb634c1f374bc1b534d63697f..0000000000000000000000000000000000000000 --- a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnet50_32xb64-warmup-lbs_in1k.py +++ /dev/null @@ -1,12 +0,0 @@ -_base_ = ['./resnet50_32xb64-warmup_in1k.py'] -model = dict( - head=dict( - type='LinearClsHead', - num_classes=1000, - in_channels=2048, - loss=dict( - type='LabelSmoothLoss', - loss_weight=1.0, - label_smooth_val=0.1, - num_classes=1000), - )) diff --git a/spaces/Abhaykoul/BardCookies-AI_Query/README.md b/spaces/Abhaykoul/BardCookies-AI_Query/README.md deleted file mode 100644 index 515ca2ea6c4cbd164e7468d3ba92ccb9496ab99e..0000000000000000000000000000000000000000 --- a/spaces/Abhaykoul/BardCookies-AI_Query/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: AI With Realtime Data -emoji: 🐠 -colorFrom: pink -colorTo: yellow -sdk: gradio -sdk_version: 3.50.2 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/Aibn.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/Aibn.py deleted file mode 100644 index 3399d613fd4c40ab594154a8e9c5f0ec04054a4e..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/Aibn.py +++ /dev/null @@ -1,52 +0,0 @@ -from __future__ import annotations - -import time -import hashlib - -from ..typing import AsyncGenerator -from ..requests import StreamSession -from .base_provider import AsyncGeneratorProvider - - 
-class Aibn(AsyncGeneratorProvider): - url = "https://aibn.cc" - supports_gpt_35_turbo = True - working = True - - @classmethod - async def create_async_generator( - cls, - model: str, - messages: list[dict[str, str]], - timeout: int = 30, - **kwargs - ) -> AsyncGenerator: - async with StreamSession(impersonate="chrome107", timeout=timeout) as session: - timestamp = int(time.time()) - data = { - "messages": messages, - "pass": None, - "sign": generate_signature(timestamp, messages[-1]["content"]), - "time": timestamp - } - async with session.post(f"{cls.url}/api/generate", json=data) as response: - response.raise_for_status() - async for chunk in response.iter_content(): - yield chunk.decode() - - @classmethod - @property - def params(cls): - params = [ - ("model", "str"), - ("messages", "list[dict[str, str]]"), - ("stream", "bool"), - ("temperature", "float"), - ] - param = ", ".join([": ".join(p) for p in params]) - return f"g4f.provider.{cls.__name__} supports: ({param})" - - -def generate_signature(timestamp: int, message: str, secret: str = "undefined"): - data = f"{timestamp}:{message}:{secret}" - return hashlib.sha256(data.encode()).hexdigest() \ No newline at end of file diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/ChatgptLogin.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/ChatgptLogin.py deleted file mode 100644 index 3eb55a64568c28df41f14051002ade95ca8dbcec..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/ChatgptLogin.py +++ /dev/null @@ -1,74 +0,0 @@ -from __future__ import annotations - -import os, re -from aiohttp import ClientSession - -from .base_provider import AsyncProvider, format_prompt - - -class ChatgptLogin(AsyncProvider): - url = "https://opchatgpts.net" - supports_gpt_35_turbo = True - working = True - _nonce = None - - @classmethod - async def create_async( - cls, - model: str, - messages: list[dict[str, str]], - **kwargs - ) -> str: - headers = { - "User-Agent" : "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36", - "Accept" : "*/*", - "Accept-language" : "en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3", - "Origin" : "https://opchatgpts.net", - "Alt-Used" : "opchatgpts.net", - "Referer" : "https://opchatgpts.net/chatgpt-free-use/", - "Sec-Fetch-Dest" : "empty", - "Sec-Fetch-Mode" : "cors", - "Sec-Fetch-Site" : "same-origin", - } - async with ClientSession( - headers=headers - ) as session: - if not cls._nonce: - async with session.get( - "https://opchatgpts.net/chatgpt-free-use/", - params={"id": os.urandom(6).hex()}, - ) as response: - result = re.search(r'data-nonce="(.*?)"', await response.text()) - if not result: - raise RuntimeError("No nonce value") - cls._nonce = result.group(1) - data = { - "_wpnonce": cls._nonce, - "post_id": 28, - "url": "https://opchatgpts.net/chatgpt-free-use", - "action": "wpaicg_chat_shortcode_message", - "message": format_prompt(messages), - "bot_id": 0 - } - async with session.post("https://opchatgpts.net/wp-admin/admin-ajax.php", data=data) as response: - response.raise_for_status() - data = await response.json() - if "data" in data: - return data["data"] - elif "msg" in data: - raise RuntimeError(data["msg"]) - else: - raise RuntimeError(f"Response: {data}") - - - @classmethod - @property - def params(cls): - params = [ - ("model", "str"), - ("messages", "list[dict[str, str]]"), - ("stream", "bool"), - ("temperature", "float"), - ] - param = ", ".join([": 
".join(p) for p in params]) - return f"g4f.provider.{cls.__name__} supports: ({param})" \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/flip-plugin.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/flip-plugin.js deleted file mode 100644 index 9b82b16fabb55c225b0fd74f357f1ea23a7c786a..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/flip-plugin.js +++ /dev/null @@ -1,19 +0,0 @@ -import Flip from './flip.js'; - -class FlipPlugin extends Phaser.Plugins.BasePlugin { - - constructor(pluginManager) { - super(pluginManager); - } - - start() { - var eventEmitter = this.game.events; - eventEmitter.on('destroy', this.destroy, this); - } - - add(gameObject, config) { - return new Flip(gameObject, config); - } -} - -export default FlipPlugin; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/shake/Factory.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/shake/Factory.js deleted file mode 100644 index 2ebb9ed46855bfa8ab1f26785e232f2d9c2249a6..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/shake/Factory.js +++ /dev/null @@ -1,11 +0,0 @@ -import Shake from './Shake.js'; -import ObjectFactory from '../ObjectFactory.js'; -import SetValue from '../../../plugins/utils/object/SetValue.js'; - -ObjectFactory.register('shake', function (gameObject, config) { - return new Shake(gameObject, config); -}); - -SetValue(window, 'RexPlugins.UI.Shake', Shake); - -export default Shake; \ No newline at end of file diff --git a/spaces/Ailexcoder/GPT4ALL1/README.md b/spaces/Ailexcoder/GPT4ALL1/README.md deleted file mode 100644 index 0171abc807b3d45293b6841c6fa63e349b9b0710..0000000000000000000000000000000000000000 --- a/spaces/Ailexcoder/GPT4ALL1/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Gpt4all -emoji: 🦀 -colorFrom: gray -colorTo: pink -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false -duplicated_from: Ailexcoder/GPT4ALL ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AlekseyKorshuk/instagram-filter-removal/modeling/base.py b/spaces/AlekseyKorshuk/instagram-filter-removal/modeling/base.py deleted file mode 100644 index 546427a1e9f91fceecea94913b23e46fc1787289..0000000000000000000000000000000000000000 --- a/spaces/AlekseyKorshuk/instagram-filter-removal/modeling/base.py +++ /dev/null @@ -1,60 +0,0 @@ -from torch import nn - - -class BaseNetwork(nn.Module): - def __init__(self): - super(BaseNetwork, self).__init__() - - def forward(self, x, y): - pass - - def print_network(self): - if isinstance(self, list): - self = self[0] - num_params = 0 - for param in self.parameters(): - num_params += param.numel() - print('Network [%s] was created. Total number of parameters: %.1f million. ' - 'To see the architecture, do print(network).' 
- % (type(self).__name__, num_params / 1000000)) - - def set_requires_grad(self, requires_grad=False): - """Set requies_grad=Fasle for all the networks to avoid unnecessary computations - Parameters: - requires_grad (bool) -- whether the networks require gradients or not - """ - for param in self.parameters(): - param.requires_grad = requires_grad - - def init_weights(self, init_type='xavier', gain=0.02): - def init_func(m): - classname = m.__class__.__name__ - if classname.find('BatchNorm2d') != -1: - if hasattr(m, 'weight') and m.weight is not None: - nn.init.normal_(m.weight.data, 1.0, gain) - if hasattr(m, 'bias') and m.bias is not None: - nn.init.constant_(m.bias.data, 0.0) - elif hasattr(m, 'weight') and (classname.find('Conv') != -1 or classname.find('Linear') != -1): - if init_type == 'normal': - nn.init.normal_(m.weight.data, 0.0, gain) - elif init_type == 'xavier': - nn.init.xavier_normal_(m.weight.data, gain=gain) - elif init_type == 'xavier_uniform': - nn.init.xavier_uniform_(m.weight.data, gain=1.0) - elif init_type == 'kaiming': - nn.init.kaiming_normal_(m.weight.data, a=0, mode='fan_in') - elif init_type == 'orthogonal': - nn.init.orthogonal_(m.weight.data, gain=gain) - elif init_type == 'none': # uses pytorch's default init method - m.reset_parameters() - else: - raise NotImplementedError('initialization method [%s] is not implemented' % init_type) - if hasattr(m, 'bias') and m.bias is not None: - nn.init.constant_(m.bias.data, 0.0) - - self.apply(init_func) - - # propagate to children - for m in self.children(): - if hasattr(m, 'init_weights'): - m.init_weights(init_type, gain) diff --git a/spaces/Ame42/rwms/main.py b/spaces/Ame42/rwms/main.py deleted file mode 100644 index 13c9aa15d9a8d93025188414ac4c6b47c55044e6..0000000000000000000000000000000000000000 --- a/spaces/Ame42/rwms/main.py +++ /dev/null @@ -1,401 +0,0 @@ -# 'dataset' holds the input data for this script -import os.path - -import gradio as gr -import numpy -import pandas -from sklearn.ensemble import RandomForestRegressor -from sklearn.linear_model import LinearRegression -from sklearn.metrics import explained_variance_score, max_error, mean_absolute_error, mean_squared_error, \ - mean_squared_log_error, median_absolute_error, mean_absolute_percentage_error, r2_score, mean_poisson_deviance, \ - mean_gamma_deviance, mean_tweedie_deviance, d2_tweedie_score, mean_pinball_loss, d2_pinball_score, \ - d2_absolute_error_score -from sklearn.model_selection import train_test_split -from sklearn.preprocessing import StandardScaler - -import datastore -from local_utils import * - -MAX_DEPTH = 20 -N_EST = 10 - -mode = {"app": test_mode, "data": all_mode, "regen": False} - - -def clean_prepare_train(data_i, train_size=0.015, test_size=0.005): - # drop sparse column THP BLIND then drop empty rows for all remaining columns - data_i.drop(axis=1, columns=[blind_col], inplace=True) - data_i.dropna(axis=0, inplace=True, how="any") - data_i.reset_index(inplace=True) - - # change well_id to dummies - dummies = pandas.get_dummies(data_i[well_col]) - data_i = pandas.concat([data_i, dummies], axis=1).reindex(data_i.index) - data_i.drop(columns=[well_col], axis=1, inplace=True) - - # remove useless columns - data_i = keep_useful_cols(data_i, [ro_col, dur_col, man_col, blind_col, temp_col] + dummies.columns.tolist()) - - # get x and y - y = data_i[ro_col] - x_i = data_i.drop(axis=1, columns=[ro_col]) - - # verify data row count - print(f"\n{x_i.shape[0]} rows") - - # fit scaler - scaler_i = StandardScaler(copy=False) - 
scaler_i.fit(x_i) - x_fit = pandas.DataFrame(scaler_i.transform(x_i), columns=x_i.columns) - - # data split - x_train, x_test, y_train, y_test = \ - train_test_split(x_fit, y, random_state=30, train_size=train_size, test_size=test_size) - - # model - model_i = RandomForestRegressor(n_estimators=N_EST, random_state=30, max_depth=MAX_DEPTH) - model_i.fit(x_train, y_train) - # print([est.get_depth() for est in model_i.estimators_]) - - # testing - y_pred = model_i.predict(x_test) - score_i = r2_score(y_test, y_pred) - # print("explained_variance_score:", explained_variance_score(y_test, y_pred)) - # print("max_error:", max_error(y_test, y_pred)) - # print("mean_absolute_error:", mean_absolute_error(y_test, y_pred)) - # print("mean_squared_error:", mean_squared_error(y_test, y_pred)) - # print("mean_squared_log_error:", mean_squared_log_error(y_test, y_pred)) - # print("median_absolute_error:", median_absolute_error(y_test, y_pred)) - # print("mean_absolute_percentage_error:", mean_absolute_percentage_error(y_test, y_pred)) - # print("r2_score:", r2_score(y_test, y_pred)) - # print("mean_poisson_deviance:", mean_poisson_deviance(y_test, y_pred)) - # print("mean_gamma_deviance:", mean_gamma_deviance(y_test, y_pred)) - # print("mean_tweedie_deviance:", mean_tweedie_deviance(y_test, y_pred)) - # print("d2_tweedie_score:", d2_tweedie_score(y_test, y_pred)) - # print("mean_pinball_loss:", mean_pinball_loss(y_test, y_pred)) - # print("d2_pinball_score:", d2_pinball_score(y_test, y_pred)) - # print("d2_absolute_error_score:", d2_absolute_error_score(y_test, y_pred)) - - # create power_bi data payload - x_test, y_test, y_pred = (pandas.DataFrame(x_test).reset_index(), - pandas.DataFrame(y_test).reset_index(), - pandas.DataFrame(y_pred, columns=[sim_col]).reset_index()) - data_run = pandas.concat([x_test, y_test, y_pred], axis=1).drop("index", axis=1) - - return model_i, scaler_i, score_i, x_i, data_run - - -def report_on(model_i, scaler_i, score_i, x_i): - print(f""" - \033[1;31mAI generalization stats\033[0m - Model performance (rms score): \033[0;35m{score_i * 100:.2f}%\033[0m - """) - - tests = [WellDataPoint(thp=661.84, day_sec=54100, man_pres=143.93, temp=93.9, _l1=0, _s1=1, _l2=0, _s2=0), - WellDataPoint(thp=1118.456, day_sec=86050, man_pres=166.063, temp=79.706, _l1=1, _s1=0, _l2=0, _s2=0), - WellDataPoint(thp=609.08, day_sec=42600, man_pres=137.2, temp=95.477, _l1=0, _s1=0, _l2=0, _s2=1), - WellDataPoint(thp=1118.07, day_sec=49400, man_pres=146.44, temp=98.5, _l1=0, _s1=0, _l2=1, _s2=0)] - - for test in tests: - print(f"\n{test}") - try: - test_x = pandas.DataFrame(scaler_i.transform(pandas.DataFrame([test.get_x()], columns=x_i.columns)), - columns=x_i.columns) - y_vis_pred = model_i.predict(test_x) - print(f"Real: \033[0;35m{test.get_y():.2f} psi\033[0m vs. 
" - f"Prediction: \033[0;35m{y_vis_pred[0]:.2f} psi\033[0m", flush=True) - except ValueError: - print(x_i.columns, flush=True) - - -def train(mode, best=(25, 10, 54, 0, 0)): - if mode == day_mode: - data = datastore.get_22_data() - model, scaler, score, x, results = clean_prepare_train(data, train_size=0.75, test_size=0.25) - write_state_files(model, scaler) - results.to_csv(f"{out_folder}POWER_BI_DATA_DAY.csv", index_label=id_col) - report_on(model, scaler, score, x) - else: - # get data payload - if not os.path.exists(f"{out_folder}data_opt_balanced.csv"): - data_dict = datastore.get_all_data() - - # search for the best offset combination model - # best = find_best(data_dict, model_search, best) - print(f"\033[1;31mFinal offsets\033[0m\n{s1}: {best[0]}, {l1}: {best[1]}, {s2}: {best[2]}, {l2}: {best[3]}") - data = datastore.offset_wells(data_dict, [x for x in best[:4]]) - - # remove unnecessary id columns - data = keep_useful_cols(data) - - # balance it by oversampling - data = oversample_balance(data) - - # dump it - data.to_csv(f"{out_folder}data_opt_balanced.csv", index_label=id_col) - else: - data = pandas.read_csv(f"{out_folder}data_opt_balanced.csv") - - # create model - model, scaler, score, x, results = clean_prepare_train(keep_useful_cols(data), train_size=0.75, test_size=0.25) - write_state_files(model, scaler) - results.to_csv(f"{out_folder}POWER_BI_DATA.csv", index_label=id_col) - report_on(model, scaler, score, x) - - return model - - -def model_search(dt_dict, s_1, l_1, s_2, l_2, current_best): - dt = datastore.offset_wells(dt_dict, [s_1, l_1, s_2, l_2]) - _, _, scr, _, _ = clean_prepare_train(dt, train_size=0.75, test_size=0.25) - scores_i = (s_1, l_1, s_2, l_2, scr) - print(f"s1: {s_1}, l1: {l_1}, s2: {s_2}, l2: {l_2}, \033[0;35mscore: {scr * 100}\033[0m vs. " - f"\033[1;31mbest: {current_best[4] * 100}\033[0m") - return scores_i if scr > current_best[4] else current_best - - -def find_best(data_dict, model_search, best): - for i in range(60): - best = model_search(data_dict, i, best[1], best[2], best[3], best) - for j in range(60): - best = model_search(data_dict, best[0], j, best[2], best[3], best) - for k in range(60): - best = model_search(data_dict, best[0], best[1], k, best[3], best) - for n in range(180): - best = model_search(data_dict, best[0], best[1], best[2], n, best) - return best - - -def app(hours, mins, secs, man_pres, temp, well, thp=None, regen=False, full_text_reply=True): - global test_x, y_vis_pred - - dur_sec = to_sec(hours, mins, secs) - - if regen or not (os.path.exists(f"{model_file}.mdl") and os.path.exists(f"{scaler_file}.sts")): - train(mode['data']) - - mdl, scl = read_state_files(model_file, scaler_file) - - thp = 0 if thp is None else thp - - _l1, _l2, _s1, _s2 = change_well_to_dummy(well) - - test = WellDataPoint(thp=thp, day_sec=dur_sec, man_pres=man_pres, temp=temp, _l1=_l1, _s1=_s1, _l2=_l2, _s2=_s2) - columns = ['Daylight duration (SEC)', 'Manifold Pressure (PSI)', 'TEMP (°F)', '1L', '1S', '2L', '2S'] - try: - test_x = pandas.DataFrame(scl.transform(pandas.DataFrame([test.get_x()], columns=columns)), columns=columns) - y_vis_pred = mdl.predict(test_x) - print(f"Real: \033[0;35m{test.get_y():.2f} psi\033[0m vs. " - f"Prediction: \033[0;35m{y_vis_pred[0]:.2f} psi\033[0m") - except ValueError: - print(test, flush=True) - raise - - return f"{test.__plain__()}\nReal: {test.get_y():.2f} psi vs. 
Prediction: {y_vis_pred[0]:.2f} psi" if \ - full_text_reply else y_vis_pred - - -def i_app(wl, pres): - # match well to factors - factor = factors.loc[factors["Well"] == wl[6:]] - - # retrieve conversion and flow factor - c_factor = factor["Conversion Factor"] - f_factor = factor["Flow Factor"] - - # return math result - return f"""\ -Testing data - Manifold pressure: {pres} psi - Well: {wl} - -Flowing tubing head pressure: {pres + [f for f in c_factor][0]:.2f} psi -Q-liquid: {pres * [f for f in f_factor][0]:.2f} bbl/day""" - - -scroll_data = pandas.read_csv(f"{out_folder}data_opt_balanced.csv") # pandas.DataFrame() -n_real = 0 -n_sim = 0 -mn = 0 -mx = 0 -_, _, _, _, results = clean_prepare_train(scroll_data, train_size=0.50, test_size=0.50) -state_var = False -results.insert(0, id_col, numpy.array(range(results.shape[0])), False) - -# randomize data rows and reset index -scroll_data = scroll_data.sample(frac=1) -scroll_data.drop([id_col, "index"], axis=1, inplace=True, errors="ignore") -scroll_data.insert(0, id_col, numpy.array(range(scroll_data.shape[0])), False) -y_range = min(scroll_data[ro_col]), max(scroll_data[ro_col]) - - -# async def load_data(): -# global state_var -# if not state_var: -# state_var = True -# global scroll_data -# data = pandas.read_csv(f"{out_folder}data_opt_balanced.csv") -# model, scaler, score, x, results = clean_prepare_train(keep_useful_cols(data), train_size=0.50, test_size=0.50) -# i = 0 -# -# while i < results.shape[0]: -# await asyncio.sleep(1) -# i += 1 -# new_row = results.iloc[[i]] -# print(new_row) -# scroll_data = pandas.concat([scroll_data, new_row], ignore_index=True) -# if scroll_data.shape[0] > 100: -# scroll_data.drop(0, axis=0, inplace=True) -# print(scroll_data.shape) - - -# URL = "https://docs.google.com/spreadsheets/d/1ZQbeOeCaiLMidenqmwq7wC-ni7rdtUYQXH1XER6XyyQ/edit#gid=0" -# csv_url = URL.replace('/edit#gid=', '/export?format=csv&gid=') -# -# -# def get_data(): -# return pandas.read_csv(csv_url) - - -def get_real_data() -> pandas.DataFrame: - global results - global mn - global mx - mx += 1 - mn = 0 if mx - 50 < 0 else mx - 50 - sl = results.iloc[mn:mx] - sl.insert(0, time_col, numpy.array([from_sec(int(r)) for r in sl[id_col].tolist()]), False) - return gr.LinePlot.update(value=sl) # scroll_data - - -def get_sim_data() -> pandas.DataFrame: - global results - sl = results.iloc[mn:mx] - sl.insert(0, time_col, numpy.array([from_sec(r) for r in sl[id_col].tolist()]), False) - return gr.LinePlot.update(value=sl) # scroll_data - - -x_real = 0 -x_pres = 0 -x_ql = 0 - - -def get_x_real_data() -> pandas.DataFrame: - global results - sl = scroll_data.iloc[mn:mx] - sl = sl.drop(time_col, axis=1, errors="ignore") - sl.insert(0, time_col, numpy.array([from_sec(int(r)) for r in sl[id_col].tolist()]), False) - return gr.LinePlot.update(value=sl) # scroll_data - - -def get_x_sim_pres_data() -> pandas.DataFrame: - global results - sl = scroll_data.iloc[mn:mx] - sl = sl.drop(sim_col, axis=1, errors="ignore") - sl = sl.drop(time_col, axis=1, errors="ignore") - sl.insert(0, time_col, numpy.array([from_sec(int(r)) for r in sl[id_col].tolist()]), False) - sl.insert(0, sim_col, numpy.array([calc_excel(r)[0] for r in sl[man_col].tolist()]), False) - return gr.LinePlot.update(value=sl) # scroll_data - - -def get_x_sim_ql_data() -> pandas.DataFrame: - global results - sl = scroll_data.iloc[mn:mx] - sl = sl.drop(time_col, axis=1, errors="ignore") - sl.insert(0, time_col, numpy.array([from_sec(int(r)) for r in sl[id_col].tolist()]), False) - sl.insert(0, ql_col, 
numpy.array([calc_excel(r)[1] for r in sl[man_col].tolist()]), False) - return gr.LinePlot.update(value=sl) # scroll_data - - -# get conversion factors -factors = datastore.get_conversion_factors() - -if mode['app'] == train_mode: - app(23, 59, 40, 143.96, 79.523, parse_well_id(s2)) - app(17, 2, 0, 144.41, 97.278, parse_well_id(l1), regen=mode['regen']) -else: - with gr.Blocks() as demo: - gr.Markdown("#") - with gr.Tab("Dashboard"): - mx = 50 - # pull data into line plot - with gr.Row(): - with gr.Column(): - gr.Markdown("# Our AI-powered calculator (Accuracy: 99.61%)") - # Real Tubing Head Pressure - real_ai = gr.LinePlot(y=ro_col, x=time_col, label="Awoba Well X", title="Real Tubing Head Pressure", - y_title=ro_col, x_title=time_col, every=1, height=150, width=600) - demo.load(fn=get_real_data, inputs=None, outputs=real_ai) - - # Calculated Tubing Head Pressure - sim_ai = gr.LinePlot(y=sim_col, x=time_col, label="Awoba Well X", - title="Calculated Tubing Head Pressure", - y_title=sim_col, x_title=time_col, every=1, height=150, width=600) - demo.load(fn=get_sim_data, inputs=None, outputs=sim_ai) - - - with gr.Column(): - gr.Markdown("###") - gr.Markdown("### Excel formulae (Accuracy: 27.53%)") - # Real Tubing Head Pressure - real_x = gr.LinePlot(y=ro_col, x=time_col, label="Abura Well X", title="Real Tubing Head Pressure", - y_title=ro_col, x_title=time_col, every=1, height=150, width=600, y_lim=y_range - ) - demo.load(fn=get_x_real_data, inputs=None, outputs=real_x) - - # Calculated Tubing Head Pressure - sim_x = gr.LinePlot(y=sim_col, x=time_col, label="Abura Well X", title="Calculated Tubing Head Pressure" - , y_title=sim_col, x_title=time_col, every=1, height=150, width=600, - y_lim=y_range) - demo.load(fn=get_x_sim_pres_data, inputs=None, outputs=sim_x) - - # Calculated Production - sim_ql_x = gr.LinePlot(y=ql_col, x=time_col, label="Abura Well X", title="Calculated Production", - y_title=ql_col, x_title=time_col, every=1, height=150, width=600) - demo.load(fn=get_x_sim_ql_data, inputs=None, outputs=sim_ql_x) - with gr.Tab("AI approach"): - hours = gr.Number(label="Hours (24-hour format)", value=23) - mins = gr.Number(label="Minutes", value=59) - secs = gr.Number(label="Seconds", value=40) - man_pres = gr.Number(label=man_col, value=143.96) - temp = gr.Number(label=temp_col, value=79.523) - well = gr.Radio( - [parse_well_id(w) for w in [l1, s1, l2, s2]], - value=parse_well_id(s2), - label="Select a well" - ) - thp = gr.Number(label=ro_col, value=641.98) - greet_btn = gr.Button("Simulate") - greet_btn.style(full_width=True) - output = gr.Textbox(label="Results") - greet_btn.click(fn=app, inputs=[hours, mins, secs, man_pres, temp, well, thp], outputs=output) - - with gr.Tab("Excel approach"): - # build interface to take in well selection and manifold pressure - i_man_pres = gr.Number(label=man_col, value=143.96) - i_well = gr.Radio( - [parse_well_id_2(w) for w in factors["Well"]], - label="Select a well" - ) - i_greet_btn = gr.Button("Simulate") - i_greet_btn.style(full_width=True) - i_output = gr.Textbox(label="Results") - - # call i_app function with params on button click - i_greet_btn.click(fn=i_app, inputs=[i_well, i_man_pres], outputs=i_output) - - - # demo.load(fn=get_real_data, inputs=None, outputs=real_ai) - # with gr.Column(): - # with gr.Row(): - # gr.LinePlot(value=get_real_data, y=ro_col, x=id_col, label="Real Tubing Head Pressure", - # y_title=ro_col, x_title=time_col, every=1, height=80, width=600) - # gr.LinePlot(value=get_sim_data, y=sim_col, x=id_col, 
label="Calculated Tubing Head Pressure", - # y_title=sim_col, x_title=time_col, every=1, height=80, width=600) - # with gr.Row(): - # gr.LinePlot(value=get_real_data, y=ro_col, x=id_col, label="Real Tubing Head Pressure", - # y_title=ro_col, x_title=time_col, every=1, height=80, width=600) - # gr.LinePlot(value=get_sim_data, y=sim_col, x=id_col, label="Calculated Tubing Head Pressure", - # y_title=sim_col, x_title=time_col, every=1, height=80, width=600) - - demo.launch(enable_queue=True, share=False) diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/text_to_video_synthesis/pipeline_text_to_video_synth_img2img.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/text_to_video_synthesis/pipeline_text_to_video_synth_img2img.py deleted file mode 100644 index c5ef06997d3c16368f9c105476a77ae65a655f99..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/text_to_video_synthesis/pipeline_text_to_video_synth_img2img.py +++ /dev/null @@ -1,720 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import inspect -from typing import Any, Callable, Dict, List, Optional, Union - -import numpy as np -import PIL -import torch -from transformers import CLIPTextModel, CLIPTokenizer - -from ...loaders import LoraLoaderMixin, TextualInversionLoaderMixin -from ...models import AutoencoderKL, UNet3DConditionModel -from ...schedulers import KarrasDiffusionSchedulers -from ...utils import ( - is_accelerate_available, - is_accelerate_version, - logging, - randn_tensor, - replace_example_docstring, -) -from ..pipeline_utils import DiffusionPipeline -from . import TextToVideoSDPipelineOutput - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - -EXAMPLE_DOC_STRING = """ - Examples: - ```py - >>> import torch - >>> from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler - >>> from diffusers.utils import export_to_video - - >>> pipe = DiffusionPipeline.from_pretrained("cerspense/zeroscope_v2_576w", torch_dtype=torch.float16) - >>> pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) - >>> pipe.to("cuda") - - >>> prompt = "spiderman running in the desert" - >>> video_frames = pipe(prompt, num_inference_steps=40, height=320, width=576, num_frames=24).frames - >>> # safe low-res video - >>> video_path = export_to_video(video_frames, output_video_path="./video_576_spiderman.mp4") - - >>> # let's offload the text-to-image model - >>> pipe.to("cpu") - - >>> # and load the image-to-image model - >>> pipe = DiffusionPipeline.from_pretrained( - ... "cerspense/zeroscope_v2_XL", torch_dtype=torch.float16, revision="refs/pr/15" - ... 
) - >>> pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) - >>> pipe.enable_model_cpu_offload() - - >>> # The VAE consumes A LOT of memory, let's make sure we run it in sliced mode - >>> pipe.vae.enable_slicing() - - >>> # now let's upscale it - >>> video = [Image.fromarray(frame).resize((1024, 576)) for frame in video_frames] - - >>> # and denoise it - >>> video_frames = pipe(prompt, video=video, strength=0.6).frames - >>> video_path = export_to_video(video_frames, output_video_path="./video_1024_spiderman.mp4") - >>> video_path - ``` -""" - - -def tensor2vid(video: torch.Tensor, mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]) -> List[np.ndarray]: - # This code is copied from https://github.com/modelscope/modelscope/blob/1509fdb973e5871f37148a4b5e5964cafd43e64d/modelscope/pipelines/multi_modal/text_to_video_synthesis_pipeline.py#L78 - # reshape to ncfhw - mean = torch.tensor(mean, device=video.device).reshape(1, -1, 1, 1, 1) - std = torch.tensor(std, device=video.device).reshape(1, -1, 1, 1, 1) - # unnormalize back to [0,1] - video = video.mul_(std).add_(mean) - video.clamp_(0, 1) - # prepare the final outputs - i, c, f, h, w = video.shape - images = video.permute(2, 3, 0, 4, 1).reshape( - f, h, i * w, c - ) # 1st (frames, h, batch_size, w, c) 2nd (frames, h, batch_size * w, c) - images = images.unbind(dim=0) # prepare a list of indvidual (consecutive frames) - images = [(image.cpu().numpy() * 255).astype("uint8") for image in images] # f h w c - return images - - -def preprocess_video(video): - supported_formats = (np.ndarray, torch.Tensor, PIL.Image.Image) - - if isinstance(video, supported_formats): - video = [video] - elif not (isinstance(video, list) and all(isinstance(i, supported_formats) for i in video)): - raise ValueError( - f"Input is in incorrect format: {[type(i) for i in video]}. Currently, we only support {', '.join(supported_formats)}" - ) - - if isinstance(video[0], PIL.Image.Image): - video = [np.array(frame) for frame in video] - - if isinstance(video[0], np.ndarray): - video = np.concatenate(video, axis=0) if video[0].ndim == 5 else np.stack(video, axis=0) - - if video.dtype == np.uint8: - video = np.array(video).astype(np.float32) / 255.0 - - if video.ndim == 4: - video = video[None, ...] - - video = torch.from_numpy(video.transpose(0, 4, 1, 2, 3)) - - elif isinstance(video[0], torch.Tensor): - video = torch.cat(video, axis=0) if video[0].ndim == 5 else torch.stack(video, axis=0) - - # don't need any preprocess if the video is latents - channel = video.shape[1] - if channel == 4: - return video - - # move channels before num_frames - video = video.permute(0, 2, 1, 3, 4) - - # normalize video - video = 2.0 * video - 1.0 - - return video - - -class VideoToVideoSDPipeline(DiffusionPipeline, TextualInversionLoaderMixin, LoraLoaderMixin): - r""" - Pipeline for text-guided video-to-video generation. - - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods - implemented for all pipelines (downloading, saving, running on a particular device, etc.). - - Args: - vae ([`AutoencoderKL`]): - Variational Auto-Encoder (VAE) Model to encode and decode videos to and from latent representations. - text_encoder ([`CLIPTextModel`]): - Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)). - tokenizer (`CLIPTokenizer`): - A [`~transformers.CLIPTokenizer`] to tokenize text. 
- unet ([`UNet3DConditionModel`]): - A [`UNet3DConditionModel`] to denoise the encoded video latents. - scheduler ([`SchedulerMixin`]): - A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of - [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. - """ - - def __init__( - self, - vae: AutoencoderKL, - text_encoder: CLIPTextModel, - tokenizer: CLIPTokenizer, - unet: UNet3DConditionModel, - scheduler: KarrasDiffusionSchedulers, - ): - super().__init__() - - self.register_modules( - vae=vae, - text_encoder=text_encoder, - tokenizer=tokenizer, - unet=unet, - scheduler=scheduler, - ) - self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1) - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_vae_slicing - def enable_vae_slicing(self): - r""" - Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to - compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. - """ - self.vae.enable_slicing() - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.disable_vae_slicing - def disable_vae_slicing(self): - r""" - Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to - computing decoding in one step. - """ - self.vae.disable_slicing() - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_vae_tiling - def enable_vae_tiling(self): - r""" - Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to - compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow - processing larger images. - """ - self.vae.enable_tiling() - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.disable_vae_tiling - def disable_vae_tiling(self): - r""" - Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to - computing decoding in one step. - """ - self.vae.disable_tiling() - - def enable_model_cpu_offload(self, gpu_id=0): - r""" - Offload all models to CPU to reduce memory usage with a low impact on performance. Moves one whole model at a - time to the GPU when its `forward` method is called, and the model remains in GPU until the next model runs. - Memory savings are lower than using `enable_sequential_cpu_offload`, but performance is much better due to the - iterative execution of the `unet`. - """ - if is_accelerate_available() and is_accelerate_version(">=", "0.17.0.dev0"): - from accelerate import cpu_offload_with_hook - else: - raise ImportError("`enable_model_cpu_offload` requires `accelerate v0.17.0` or higher.") - - device = torch.device(f"cuda:{gpu_id}") - - if self.device.type != "cpu": - self.to("cpu", silence_dtype_warnings=True) - torch.cuda.empty_cache() # otherwise we don't see the memory savings (but they probably exist) - - hook = None - for cpu_offloaded_model in [self.text_encoder, self.vae, self.unet]: - _, hook = cpu_offload_with_hook(cpu_offloaded_model, device, prev_module_hook=hook) - - # We'll offload the last model manually. 
- self.final_offload_hook = hook - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline._encode_prompt - def _encode_prompt( - self, - prompt, - device, - num_images_per_prompt, - do_classifier_free_guidance, - negative_prompt=None, - prompt_embeds: Optional[torch.FloatTensor] = None, - negative_prompt_embeds: Optional[torch.FloatTensor] = None, - lora_scale: Optional[float] = None, - ): - r""" - Encodes the prompt into text encoder hidden states. - - Args: - prompt (`str` or `List[str]`, *optional*): - prompt to be encoded - device: (`torch.device`): - torch device - num_images_per_prompt (`int`): - number of images that should be generated per prompt - do_classifier_free_guidance (`bool`): - whether to use classifier free guidance or not - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. If not defined, one has to pass - `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is - less than `1`). - prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not - provided, text embeddings will be generated from `prompt` input argument. - negative_prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt - weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input - argument. - lora_scale (`float`, *optional*): - A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. - """ - # set lora scale so that monkey patched LoRA - # function of text encoder can correctly access it - if lora_scale is not None and isinstance(self, LoraLoaderMixin): - self._lora_scale = lora_scale - - if prompt is not None and isinstance(prompt, str): - batch_size = 1 - elif prompt is not None and isinstance(prompt, list): - batch_size = len(prompt) - else: - batch_size = prompt_embeds.shape[0] - - if prompt_embeds is None: - # textual inversion: procecss multi-vector tokens if necessary - if isinstance(self, TextualInversionLoaderMixin): - prompt = self.maybe_convert_prompt(prompt, self.tokenizer) - - text_inputs = self.tokenizer( - prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - truncation=True, - return_tensors="pt", - ) - text_input_ids = text_inputs.input_ids - untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids - - if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal( - text_input_ids, untruncated_ids - ): - removed_text = self.tokenizer.batch_decode( - untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1] - ) - logger.warning( - "The following part of your input was truncated because CLIP can only handle sequences up to" - f" {self.tokenizer.model_max_length} tokens: {removed_text}" - ) - - if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: - attention_mask = text_inputs.attention_mask.to(device) - else: - attention_mask = None - - prompt_embeds = self.text_encoder( - text_input_ids.to(device), - attention_mask=attention_mask, - ) - prompt_embeds = prompt_embeds[0] - - prompt_embeds = prompt_embeds.to(dtype=self.text_encoder.dtype, device=device) - - bs_embed, seq_len, _ = prompt_embeds.shape - # duplicate text embeddings for each generation 
per prompt, using mps friendly method - prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1) - prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1) - - # get unconditional embeddings for classifier free guidance - if do_classifier_free_guidance and negative_prompt_embeds is None: - uncond_tokens: List[str] - if negative_prompt is None: - uncond_tokens = [""] * batch_size - elif prompt is not None and type(prompt) is not type(negative_prompt): - raise TypeError( - f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" - f" {type(prompt)}." - ) - elif isinstance(negative_prompt, str): - uncond_tokens = [negative_prompt] - elif batch_size != len(negative_prompt): - raise ValueError( - f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" - f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" - " the batch size of `prompt`." - ) - else: - uncond_tokens = negative_prompt - - # textual inversion: procecss multi-vector tokens if necessary - if isinstance(self, TextualInversionLoaderMixin): - uncond_tokens = self.maybe_convert_prompt(uncond_tokens, self.tokenizer) - - max_length = prompt_embeds.shape[1] - uncond_input = self.tokenizer( - uncond_tokens, - padding="max_length", - max_length=max_length, - truncation=True, - return_tensors="pt", - ) - - if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: - attention_mask = uncond_input.attention_mask.to(device) - else: - attention_mask = None - - negative_prompt_embeds = self.text_encoder( - uncond_input.input_ids.to(device), - attention_mask=attention_mask, - ) - negative_prompt_embeds = negative_prompt_embeds[0] - - if do_classifier_free_guidance: - # duplicate unconditional embeddings for each generation per prompt, using mps friendly method - seq_len = negative_prompt_embeds.shape[1] - - negative_prompt_embeds = negative_prompt_embeds.to(dtype=self.text_encoder.dtype, device=device) - - negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1) - negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1) - - # For classifier free guidance, we need to do two forward passes. 
- # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds]) - - return prompt_embeds - - # Copied from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_synth.TextToVideoSDPipeline.decode_latents - def decode_latents(self, latents): - latents = 1 / self.vae.config.scaling_factor * latents - - batch_size, channels, num_frames, height, width = latents.shape - latents = latents.permute(0, 2, 1, 3, 4).reshape(batch_size * num_frames, channels, height, width) - - image = self.vae.decode(latents).sample - video = ( - image[None, :] - .reshape( - ( - batch_size, - num_frames, - -1, - ) - + image.shape[2:] - ) - .permute(0, 2, 1, 3, 4) - ) - # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16 - video = video.float() - return video - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs - def prepare_extra_step_kwargs(self, generator, eta): - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. - # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 - # and should be between [0, 1] - - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - # check if the scheduler accepts generator - accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys()) - if accepts_generator: - extra_step_kwargs["generator"] = generator - return extra_step_kwargs - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_img2img.StableDiffusionImg2ImgPipeline.check_inputs - def check_inputs( - self, prompt, strength, callback_steps, negative_prompt=None, prompt_embeds=None, negative_prompt_embeds=None - ): - if strength < 0 or strength > 1: - raise ValueError(f"The value of strength should in [0.0, 1.0] but is {strength}") - - if (callback_steps is None) or ( - callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0) - ): - raise ValueError( - f"`callback_steps` has to be a positive integer but is {callback_steps} of type" - f" {type(callback_steps)}." - ) - - if prompt is not None and prompt_embeds is not None: - raise ValueError( - f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to" - " only forward one of the two." - ) - elif prompt is None and prompt_embeds is None: - raise ValueError( - "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined." - ) - elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)): - raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") - - if negative_prompt is not None and negative_prompt_embeds is not None: - raise ValueError( - f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:" - f" {negative_prompt_embeds}. Please make sure to only forward one of the two." 
- ) - - if prompt_embeds is not None and negative_prompt_embeds is not None: - if prompt_embeds.shape != negative_prompt_embeds.shape: - raise ValueError( - "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but" - f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`" - f" {negative_prompt_embeds.shape}." - ) - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_img2img.StableDiffusionImg2ImgPipeline.get_timesteps - def get_timesteps(self, num_inference_steps, strength, device): - # get the original timestep using init_timestep - init_timestep = min(int(num_inference_steps * strength), num_inference_steps) - - t_start = max(num_inference_steps - init_timestep, 0) - timesteps = self.scheduler.timesteps[t_start * self.scheduler.order :] - - return timesteps, num_inference_steps - t_start - - def prepare_latents(self, video, timestep, batch_size, dtype, device, generator=None): - video = video.to(device=device, dtype=dtype) - - # change from (b, c, f, h, w) -> (b * f, c, w, h) - bsz, channel, frames, width, height = video.shape - video = video.permute(0, 2, 1, 3, 4).reshape(bsz * frames, channel, width, height) - - if video.shape[1] == 4: - init_latents = video - else: - if isinstance(generator, list) and len(generator) != batch_size: - raise ValueError( - f"You have passed a list of generators of length {len(generator)}, but requested an effective batch" - f" size of {batch_size}. Make sure the batch size matches the length of the generators." - ) - - elif isinstance(generator, list): - init_latents = [ - self.vae.encode(video[i : i + 1]).latent_dist.sample(generator[i]) for i in range(batch_size) - ] - init_latents = torch.cat(init_latents, dim=0) - else: - init_latents = self.vae.encode(video).latent_dist.sample(generator) - - init_latents = self.vae.config.scaling_factor * init_latents - - if batch_size > init_latents.shape[0] and batch_size % init_latents.shape[0] != 0: - raise ValueError( - f"Cannot duplicate `video` of batch size {init_latents.shape[0]} to {batch_size} text prompts." - ) - else: - init_latents = torch.cat([init_latents], dim=0) - - shape = init_latents.shape - noise = randn_tensor(shape, generator=generator, device=device, dtype=dtype) - - # get latents - init_latents = self.scheduler.add_noise(init_latents, noise, timestep) - latents = init_latents - - latents = latents[None, :].reshape((bsz, frames, latents.shape[1]) + latents.shape[2:]).permute(0, 2, 1, 3, 4) - - return latents - - @torch.no_grad() - @replace_example_docstring(EXAMPLE_DOC_STRING) - def __call__( - self, - prompt: Union[str, List[str]] = None, - video: Union[List[np.ndarray], torch.FloatTensor] = None, - strength: float = 0.6, - num_inference_steps: int = 50, - guidance_scale: float = 15.0, - negative_prompt: Optional[Union[str, List[str]]] = None, - eta: float = 0.0, - generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, - latents: Optional[torch.FloatTensor] = None, - prompt_embeds: Optional[torch.FloatTensor] = None, - negative_prompt_embeds: Optional[torch.FloatTensor] = None, - output_type: Optional[str] = "np", - return_dict: bool = True, - callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None, - callback_steps: int = 1, - cross_attention_kwargs: Optional[Dict[str, Any]] = None, - ): - r""" - The call function to the pipeline for generation. - - Args: - prompt (`str` or `List[str]`, *optional*): - The prompt or prompts to guide image generation. 
If not defined, you need to pass `prompt_embeds`. - video (`List[np.ndarray]` or `torch.FloatTensor`): - `video` frames or tensor representing a video batch to be used as the starting point for the process. - Can also accpet video latents as `image`, if passing latents directly, it will not be encoded again. - strength (`float`, *optional*, defaults to 0.8): - Indicates extent to transform the reference `video`. Must be between 0 and 1. `video` is used as a - starting point, adding more noise to it the larger the `strength`. The number of denoising steps - depends on the amount of noise initially added. When `strength` is 1, added noise is maximum and the - denoising process runs for the full number of iterations specified in `num_inference_steps`. A value of - 1 essentially ignores `video`. - num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality videos at the - expense of slower inference. - guidance_scale (`float`, *optional*, defaults to 7.5): - A higher guidance scale value encourages the model to generate images closely linked to the text - `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`. - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts to guide what to not include in video generation. If not defined, you need to - pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`). - eta (`float`, *optional*, defaults to 0.0): - Corresponds to parameter eta (η) from the [DDIM](https://arxiv.org/abs/2010.02502) paper. Only applies - to the [`~schedulers.DDIMScheduler`], and is ignored in other schedulers. - generator (`torch.Generator` or `List[torch.Generator]`, *optional*): - A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make - generation deterministic. - latents (`torch.FloatTensor`, *optional*): - Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video - generation. Can be used to tweak the same generation with different prompts. If not provided, a latents - tensor is generated by sampling using the supplied random `generator`. Latents should be of shape - `(batch_size, num_channel, num_frames, height, width)`. - prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not - provided, text embeddings are generated from the `prompt` input argument. - negative_prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If - not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument. - output_type (`str`, *optional*, defaults to `"np"`): - The output format of the generated video. Choose between `torch.FloatTensor` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.text_to_video_synthesis.TextToVideoSDPipelineOutput`] instead - of a plain tuple. - callback (`Callable`, *optional*): - A function that calls every `callback_steps` steps during inference. The function is called with the - following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`. - callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function is called. 
If not specified, the callback is called at - every step. - cross_attention_kwargs (`dict`, *optional*): - A kwargs dictionary that if specified is passed along to the [`AttentionProcessor`] as defined in - [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/cross_attention.py). - - Examples: - - Returns: - [`~pipelines.text_to_video_synthesis.TextToVideoSDPipelineOutput`] or `tuple`: - If `return_dict` is `True`, [`~pipelines.text_to_video_synthesis.TextToVideoSDPipelineOutput`] is - returned, otherwise a `tuple` is returned where the first element is a list with the generated frames. - """ - # 0. Default height and width to unet - num_images_per_prompt = 1 - - # 1. Check inputs. Raise error if not correct - self.check_inputs(prompt, strength, callback_steps, negative_prompt, prompt_embeds, negative_prompt_embeds) - - # 2. Define call parameters - if prompt is not None and isinstance(prompt, str): - batch_size = 1 - elif prompt is not None and isinstance(prompt, list): - batch_size = len(prompt) - else: - batch_size = prompt_embeds.shape[0] - - device = self._execution_device - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. - do_classifier_free_guidance = guidance_scale > 1.0 - - # 3. Encode input prompt - text_encoder_lora_scale = ( - cross_attention_kwargs.get("scale", None) if cross_attention_kwargs is not None else None - ) - prompt_embeds = self._encode_prompt( - prompt, - device, - num_images_per_prompt, - do_classifier_free_guidance, - negative_prompt, - prompt_embeds=prompt_embeds, - negative_prompt_embeds=negative_prompt_embeds, - lora_scale=text_encoder_lora_scale, - ) - - # 4. Preprocess video - video = preprocess_video(video) - - # 5. Prepare timesteps - self.scheduler.set_timesteps(num_inference_steps, device=device) - timesteps, num_inference_steps = self.get_timesteps(num_inference_steps, strength, device) - latent_timestep = timesteps[:1].repeat(batch_size * num_images_per_prompt) - - # 5. Prepare latent variables - latents = self.prepare_latents(video, latent_timestep, batch_size, prompt_embeds.dtype, device, generator) - - # 6. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline - extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) - - # 7. 
Denoising loop - num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order - with self.progress_bar(total=num_inference_steps) as progress_bar: - for i, t in enumerate(timesteps): - # expand the latents if we are doing classifier free guidance - latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents - latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) - - # predict the noise residual - noise_pred = self.unet( - latent_model_input, - t, - encoder_hidden_states=prompt_embeds, - cross_attention_kwargs=cross_attention_kwargs, - return_dict=False, - )[0] - - # perform guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - - # reshape latents - bsz, channel, frames, width, height = latents.shape - latents = latents.permute(0, 2, 1, 3, 4).reshape(bsz * frames, channel, width, height) - noise_pred = noise_pred.permute(0, 2, 1, 3, 4).reshape(bsz * frames, channel, width, height) - - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample - - # reshape latents back - latents = latents[None, :].reshape(bsz, frames, channel, width, height).permute(0, 2, 1, 3, 4) - - # call the callback, if provided - if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0): - progress_bar.update() - if callback is not None and i % callback_steps == 0: - callback(i, t, latents) - - if output_type == "latent": - return TextToVideoSDPipelineOutput(frames=latents) - - if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None: - self.unet.to("cpu") - - video_tensor = self.decode_latents(latents) - - if output_type == "pt": - video = video_tensor - else: - video = tensor2vid(video_tensor) - - # Offload last model to CPU - if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None: - self.final_offload_hook.offload() - - if not return_dict: - return (video,) - - return TextToVideoSDPipelineOutput(frames=video) diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/ddim/test_ddim.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/ddim/test_ddim.py deleted file mode 100644 index de513fe234fd6b1e6a900149205171cf9acff7f2..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/ddim/test_ddim.py +++ /dev/null @@ -1,143 +0,0 @@ -# coding=utf-8 -# Copyright 2023 HuggingFace Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -import unittest - -import numpy as np -import torch - -from diffusers import DDIMPipeline, DDIMScheduler, UNet2DModel -from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu, slow, torch_device - -from ..pipeline_params import UNCONDITIONAL_IMAGE_GENERATION_BATCH_PARAMS, UNCONDITIONAL_IMAGE_GENERATION_PARAMS -from ..test_pipelines_common import PipelineTesterMixin - - -enable_full_determinism() - - -class DDIMPipelineFastTests(PipelineTesterMixin, unittest.TestCase): - pipeline_class = DDIMPipeline - params = UNCONDITIONAL_IMAGE_GENERATION_PARAMS - required_optional_params = PipelineTesterMixin.required_optional_params - { - "num_images_per_prompt", - "latents", - "callback", - "callback_steps", - } - batch_params = UNCONDITIONAL_IMAGE_GENERATION_BATCH_PARAMS - - def get_dummy_components(self): - torch.manual_seed(0) - unet = UNet2DModel( - block_out_channels=(32, 64), - layers_per_block=2, - sample_size=32, - in_channels=3, - out_channels=3, - down_block_types=("DownBlock2D", "AttnDownBlock2D"), - up_block_types=("AttnUpBlock2D", "UpBlock2D"), - ) - scheduler = DDIMScheduler() - components = {"unet": unet, "scheduler": scheduler} - return components - - def get_dummy_inputs(self, device, seed=0): - if str(device).startswith("mps"): - generator = torch.manual_seed(seed) - else: - generator = torch.Generator(device=device).manual_seed(seed) - inputs = { - "batch_size": 1, - "generator": generator, - "num_inference_steps": 2, - "output_type": "numpy", - } - return inputs - - def test_inference(self): - device = "cpu" - - components = self.get_dummy_components() - pipe = self.pipeline_class(**components) - pipe.to(device) - pipe.set_progress_bar_config(disable=None) - - inputs = self.get_dummy_inputs(device) - image = pipe(**inputs).images - image_slice = image[0, -3:, -3:, -1] - - self.assertEqual(image.shape, (1, 32, 32, 3)) - expected_slice = np.array( - [1.000e00, 5.717e-01, 4.717e-01, 1.000e00, 0.000e00, 1.000e00, 3.000e-04, 0.000e00, 9.000e-04] - ) - max_diff = np.abs(image_slice.flatten() - expected_slice).max() - self.assertLessEqual(max_diff, 1e-3) - - def test_dict_tuple_outputs_equivalent(self): - super().test_dict_tuple_outputs_equivalent(expected_max_difference=3e-3) - - def test_save_load_local(self): - super().test_save_load_local(expected_max_difference=3e-3) - - def test_save_load_optional_components(self): - super().test_save_load_optional_components(expected_max_difference=3e-3) - - def test_inference_batch_single_identical(self): - super().test_inference_batch_single_identical(expected_max_diff=3e-3) - - -@slow -@require_torch_gpu -class DDIMPipelineIntegrationTests(unittest.TestCase): - def test_inference_cifar10(self): - model_id = "google/ddpm-cifar10-32" - - unet = UNet2DModel.from_pretrained(model_id) - scheduler = DDIMScheduler() - - ddim = DDIMPipeline(unet=unet, scheduler=scheduler) - ddim.to(torch_device) - ddim.set_progress_bar_config(disable=None) - - generator = torch.manual_seed(0) - image = ddim(generator=generator, eta=0.0, output_type="numpy").images - - image_slice = image[0, -3:, -3:, -1] - - assert image.shape == (1, 32, 32, 3) - expected_slice = np.array([0.1723, 0.1617, 0.1600, 0.1626, 0.1497, 0.1513, 0.1505, 0.1442, 0.1453]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2 - - def test_inference_ema_bedroom(self): - model_id = "google/ddpm-ema-bedroom-256" - - unet = UNet2DModel.from_pretrained(model_id) - scheduler = DDIMScheduler.from_pretrained(model_id) - - ddpm = DDIMPipeline(unet=unet, 
scheduler=scheduler) - ddpm.to(torch_device) - ddpm.set_progress_bar_config(disable=None) - - generator = torch.manual_seed(0) - image = ddpm(generator=generator, output_type="numpy").images - - image_slice = image[0, -3:, -3:, -1] - - assert image.shape == (1, 256, 256, 3) - expected_slice = np.array([0.0060, 0.0201, 0.0344, 0.0024, 0.0018, 0.0002, 0.0022, 0.0000, 0.0069]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2 diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/core/mask/utils.py b/spaces/Andy1621/uniformer_image_detection/mmdet/core/mask/utils.py deleted file mode 100644 index c88208291ab2a605bee9fe6c1a28a443b74c6372..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/mmdet/core/mask/utils.py +++ /dev/null @@ -1,63 +0,0 @@ -import mmcv -import numpy as np -import pycocotools.mask as mask_util - - -def split_combined_polys(polys, poly_lens, polys_per_mask): - """Split the combined 1-D polys into masks. - - A mask is represented as a list of polys, and a poly is represented as - a 1-D array. In dataset, all masks are concatenated into a single 1-D - tensor. Here we need to split the tensor into original representations. - - Args: - polys (list): a list (length = image num) of 1-D tensors - poly_lens (list): a list (length = image num) of poly length - polys_per_mask (list): a list (length = image num) of poly number - of each mask - - Returns: - list: a list (length = image num) of list (length = mask num) of \ - list (length = poly num) of numpy array. - """ - mask_polys_list = [] - for img_id in range(len(polys)): - polys_single = polys[img_id] - polys_lens_single = poly_lens[img_id].tolist() - polys_per_mask_single = polys_per_mask[img_id].tolist() - - split_polys = mmcv.slice_list(polys_single, polys_lens_single) - mask_polys = mmcv.slice_list(split_polys, polys_per_mask_single) - mask_polys_list.append(mask_polys) - return mask_polys_list - - -# TODO: move this function to more proper place -def encode_mask_results(mask_results): - """Encode bitmap mask to RLE code. - - Args: - mask_results (list | tuple[list]): bitmap mask results. - In mask scoring rcnn, mask_results is a tuple of (segm_results, - segm_cls_score). - - Returns: - list | tuple: RLE encoded mask. - """ - if isinstance(mask_results, tuple): # mask scoring - cls_segms, cls_mask_scores = mask_results - else: - cls_segms = mask_results - num_classes = len(cls_segms) - encoded_mask_results = [[] for _ in range(num_classes)] - for i in range(len(cls_segms)): - for cls_segm in cls_segms[i]: - encoded_mask_results[i].append( - mask_util.encode( - np.array( - cls_segm[:, :, np.newaxis], order='F', - dtype='uint8'))[0]) # encoded with RLE - if isinstance(mask_results, tuple): - return encoded_mask_results, cls_mask_scores - else: - return encoded_mask_results diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/runner/hooks/logger/text.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/runner/hooks/logger/text.py deleted file mode 100644 index 87b1a3eca9595a130121526f8b4c29915387ab35..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/runner/hooks/logger/text.py +++ /dev/null @@ -1,256 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import datetime -import os -import os.path as osp -from collections import OrderedDict - -import torch -import torch.distributed as dist - -import annotator.uniformer.mmcv as mmcv -from annotator.uniformer.mmcv.fileio.file_client import FileClient -from annotator.uniformer.mmcv.utils import is_tuple_of, scandir -from ..hook import HOOKS -from .base import LoggerHook - - -@HOOKS.register_module() -class TextLoggerHook(LoggerHook): - """Logger hook in text. - - In this logger hook, the information will be printed on terminal and - saved in json file. - - Args: - by_epoch (bool, optional): Whether EpochBasedRunner is used. - Default: True. - interval (int, optional): Logging interval (every k iterations). - Default: 10. - ignore_last (bool, optional): Ignore the log of last iterations in each - epoch if less than :attr:`interval`. Default: True. - reset_flag (bool, optional): Whether to clear the output buffer after - logging. Default: False. - interval_exp_name (int, optional): Logging interval for experiment - name. This feature is to help users conveniently get the experiment - information from screen or log file. Default: 1000. - out_dir (str, optional): Logs are saved in ``runner.work_dir`` default. - If ``out_dir`` is specified, logs will be copied to a new directory - which is the concatenation of ``out_dir`` and the last level - directory of ``runner.work_dir``. Default: None. - `New in version 1.3.16.` - out_suffix (str or tuple[str], optional): Those filenames ending with - ``out_suffix`` will be copied to ``out_dir``. - Default: ('.log.json', '.log', '.py'). - `New in version 1.3.16.` - keep_local (bool, optional): Whether to keep local log when - :attr:`out_dir` is specified. If False, the local log will be - removed. Default: True. - `New in version 1.3.16.` - file_client_args (dict, optional): Arguments to instantiate a - FileClient. See :class:`mmcv.fileio.FileClient` for details. - Default: None. 
- `New in version 1.3.16.` - """ - - def __init__(self, - by_epoch=True, - interval=10, - ignore_last=True, - reset_flag=False, - interval_exp_name=1000, - out_dir=None, - out_suffix=('.log.json', '.log', '.py'), - keep_local=True, - file_client_args=None): - super(TextLoggerHook, self).__init__(interval, ignore_last, reset_flag, - by_epoch) - self.by_epoch = by_epoch - self.time_sec_tot = 0 - self.interval_exp_name = interval_exp_name - - if out_dir is None and file_client_args is not None: - raise ValueError( - 'file_client_args should be "None" when `out_dir` is not' - 'specified.') - self.out_dir = out_dir - - if not (out_dir is None or isinstance(out_dir, str) - or is_tuple_of(out_dir, str)): - raise TypeError('out_dir should be "None" or string or tuple of ' - 'string, but got {out_dir}') - self.out_suffix = out_suffix - - self.keep_local = keep_local - self.file_client_args = file_client_args - if self.out_dir is not None: - self.file_client = FileClient.infer_client(file_client_args, - self.out_dir) - - def before_run(self, runner): - super(TextLoggerHook, self).before_run(runner) - - if self.out_dir is not None: - self.file_client = FileClient.infer_client(self.file_client_args, - self.out_dir) - # The final `self.out_dir` is the concatenation of `self.out_dir` - # and the last level directory of `runner.work_dir` - basename = osp.basename(runner.work_dir.rstrip(osp.sep)) - self.out_dir = self.file_client.join_path(self.out_dir, basename) - runner.logger.info( - (f'Text logs will be saved to {self.out_dir} by ' - f'{self.file_client.name} after the training process.')) - - self.start_iter = runner.iter - self.json_log_path = osp.join(runner.work_dir, - f'{runner.timestamp}.log.json') - if runner.meta is not None: - self._dump_log(runner.meta, runner) - - def _get_max_memory(self, runner): - device = getattr(runner.model, 'output_device', None) - mem = torch.cuda.max_memory_allocated(device=device) - mem_mb = torch.tensor([mem / (1024 * 1024)], - dtype=torch.int, - device=device) - if runner.world_size > 1: - dist.reduce(mem_mb, 0, op=dist.ReduceOp.MAX) - return mem_mb.item() - - def _log_info(self, log_dict, runner): - # print exp name for users to distinguish experiments - # at every ``interval_exp_name`` iterations and the end of each epoch - if runner.meta is not None and 'exp_name' in runner.meta: - if (self.every_n_iters(runner, self.interval_exp_name)) or ( - self.by_epoch and self.end_of_epoch(runner)): - exp_info = f'Exp name: {runner.meta["exp_name"]}' - runner.logger.info(exp_info) - - if log_dict['mode'] == 'train': - if isinstance(log_dict['lr'], dict): - lr_str = [] - for k, val in log_dict['lr'].items(): - lr_str.append(f'lr_{k}: {val:.3e}') - lr_str = ' '.join(lr_str) - else: - lr_str = f'lr: {log_dict["lr"]:.3e}' - - # by epoch: Epoch [4][100/1000] - # by iter: Iter [100/100000] - if self.by_epoch: - log_str = f'Epoch [{log_dict["epoch"]}]' \ - f'[{log_dict["iter"]}/{len(runner.data_loader)}]\t' - else: - log_str = f'Iter [{log_dict["iter"]}/{runner.max_iters}]\t' - log_str += f'{lr_str}, ' - - if 'time' in log_dict.keys(): - self.time_sec_tot += (log_dict['time'] * self.interval) - time_sec_avg = self.time_sec_tot / ( - runner.iter - self.start_iter + 1) - eta_sec = time_sec_avg * (runner.max_iters - runner.iter - 1) - eta_str = str(datetime.timedelta(seconds=int(eta_sec))) - log_str += f'eta: {eta_str}, ' - log_str += f'time: {log_dict["time"]:.3f}, ' \ - f'data_time: {log_dict["data_time"]:.3f}, ' - # statistic memory - if torch.cuda.is_available(): - log_str += 
f'memory: {log_dict["memory"]}, ' - else: - # val/test time - # here 1000 is the length of the val dataloader - # by epoch: Epoch[val] [4][1000] - # by iter: Iter[val] [1000] - if self.by_epoch: - log_str = f'Epoch({log_dict["mode"]}) ' \ - f'[{log_dict["epoch"]}][{log_dict["iter"]}]\t' - else: - log_str = f'Iter({log_dict["mode"]}) [{log_dict["iter"]}]\t' - - log_items = [] - for name, val in log_dict.items(): - # TODO: resolve this hack - # these items have been in log_str - if name in [ - 'mode', 'Epoch', 'iter', 'lr', 'time', 'data_time', - 'memory', 'epoch' - ]: - continue - if isinstance(val, float): - val = f'{val:.4f}' - log_items.append(f'{name}: {val}') - log_str += ', '.join(log_items) - - runner.logger.info(log_str) - - def _dump_log(self, log_dict, runner): - # dump log in json format - json_log = OrderedDict() - for k, v in log_dict.items(): - json_log[k] = self._round_float(v) - # only append log at last line - if runner.rank == 0: - with open(self.json_log_path, 'a+') as f: - mmcv.dump(json_log, f, file_format='json') - f.write('\n') - - def _round_float(self, items): - if isinstance(items, list): - return [self._round_float(item) for item in items] - elif isinstance(items, float): - return round(items, 5) - else: - return items - - def log(self, runner): - if 'eval_iter_num' in runner.log_buffer.output: - # this doesn't modify runner.iter and is regardless of by_epoch - cur_iter = runner.log_buffer.output.pop('eval_iter_num') - else: - cur_iter = self.get_iter(runner, inner_iter=True) - - log_dict = OrderedDict( - mode=self.get_mode(runner), - epoch=self.get_epoch(runner), - iter=cur_iter) - - # only record lr of the first param group - cur_lr = runner.current_lr() - if isinstance(cur_lr, list): - log_dict['lr'] = cur_lr[0] - else: - assert isinstance(cur_lr, dict) - log_dict['lr'] = {} - for k, lr_ in cur_lr.items(): - assert isinstance(lr_, list) - log_dict['lr'].update({k: lr_[0]}) - - if 'time' in runner.log_buffer.output: - # statistic memory - if torch.cuda.is_available(): - log_dict['memory'] = self._get_max_memory(runner) - - log_dict = dict(log_dict, **runner.log_buffer.output) - - self._log_info(log_dict, runner) - self._dump_log(log_dict, runner) - return log_dict - - def after_run(self, runner): - # copy or upload logs to self.out_dir - if self.out_dir is not None: - for filename in scandir(runner.work_dir, self.out_suffix, True): - local_filepath = osp.join(runner.work_dir, filename) - out_filepath = self.file_client.join_path( - self.out_dir, filename) - with open(local_filepath, 'r') as f: - self.file_client.put_text(f.read(), out_filepath) - - runner.logger.info( - (f'The file {local_filepath} has been uploaded to ' - f'{out_filepath}.')) - - if not self.keep_local: - os.remove(local_filepath) - runner.logger.info( - (f'{local_filepath} was removed due to the ' - '`self.keep_local=False`')) diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/commands/check.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/commands/check.py deleted file mode 100644 index 584df9f55c5d63d632f375d703f858e18c0acf2c..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/commands/check.py +++ /dev/null @@ -1,52 +0,0 @@ -import logging -from optparse import Values -from typing import List - -from pip._internal.cli.base_command import Command -from pip._internal.cli.status_codes import ERROR, SUCCESS -from 
pip._internal.operations.check import ( - check_package_set, - create_package_set_from_installed, -) -from pip._internal.utils.misc import write_output - -logger = logging.getLogger(__name__) - - -class CheckCommand(Command): - """Verify installed packages have compatible dependencies.""" - - usage = """ - %prog [options]""" - - def run(self, options: Values, args: List[str]) -> int: - package_set, parsing_probs = create_package_set_from_installed() - missing, conflicting = check_package_set(package_set) - - for project_name in missing: - version = package_set[project_name].version - for dependency in missing[project_name]: - write_output( - "%s %s requires %s, which is not installed.", - project_name, - version, - dependency[0], - ) - - for project_name in conflicting: - version = package_set[project_name].version - for dep_name, dep_version, req in conflicting[project_name]: - write_output( - "%s %s has requirement %s, but you have %s %s.", - project_name, - version, - req, - dep_name, - dep_version, - ) - - if missing or conflicting or parsing_probs: - return ERROR - else: - write_output("No broken requirements found.") - return SUCCESS diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/version.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/version.py deleted file mode 100644 index c5e9d85cd75884b129d4ab8d0453c0e50d0c1f68..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/version.py +++ /dev/null @@ -1,9 +0,0 @@ -""" -This module exists only to simplify retrieving the version number of chardet -from within setuptools and from chardet subpackages. - -:author: Dan Blanchard (dan.blanchard@gmail.com) -""" - -__version__ = "5.1.0" -VERSION = __version__.split(".") diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/requests/structures.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/requests/structures.py deleted file mode 100644 index 188e13e4829591facb23ae0e2eda84b9807cb818..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/requests/structures.py +++ /dev/null @@ -1,99 +0,0 @@ -""" -requests.structures -~~~~~~~~~~~~~~~~~~~ - -Data structures that power Requests. -""" - -from collections import OrderedDict - -from .compat import Mapping, MutableMapping - - -class CaseInsensitiveDict(MutableMapping): - """A case-insensitive ``dict``-like object. - - Implements all methods and operations of - ``MutableMapping`` as well as dict's ``copy``. Also - provides ``lower_items``. - - All keys are expected to be strings. The structure remembers the - case of the last key to be set, and ``iter(instance)``, - ``keys()``, ``items()``, ``iterkeys()``, and ``iteritems()`` - will contain case-sensitive keys. However, querying and contains - testing is case insensitive:: - - cid = CaseInsensitiveDict() - cid['Accept'] = 'application/json' - cid['aCCEPT'] == 'application/json' # True - list(cid) == ['Accept'] # True - - For example, ``headers['content-encoding']`` will return the - value of a ``'Content-Encoding'`` response header, regardless - of how the header name was originally stored. - - If the constructor, ``.update``, or equality comparison - operations are given keys that have equal ``.lower()``s, the - behavior is undefined. 
- """ - - def __init__(self, data=None, **kwargs): - self._store = OrderedDict() - if data is None: - data = {} - self.update(data, **kwargs) - - def __setitem__(self, key, value): - # Use the lowercased key for lookups, but store the actual - # key alongside the value. - self._store[key.lower()] = (key, value) - - def __getitem__(self, key): - return self._store[key.lower()][1] - - def __delitem__(self, key): - del self._store[key.lower()] - - def __iter__(self): - return (casedkey for casedkey, mappedvalue in self._store.values()) - - def __len__(self): - return len(self._store) - - def lower_items(self): - """Like iteritems(), but with all lowercase keys.""" - return ((lowerkey, keyval[1]) for (lowerkey, keyval) in self._store.items()) - - def __eq__(self, other): - if isinstance(other, Mapping): - other = CaseInsensitiveDict(other) - else: - return NotImplemented - # Compare insensitively - return dict(self.lower_items()) == dict(other.lower_items()) - - # Copy is required - def copy(self): - return CaseInsensitiveDict(self._store.values()) - - def __repr__(self): - return str(dict(self.items())) - - -class LookupDict(dict): - """Dictionary lookup object.""" - - def __init__(self, name=None): - self.name = name - super().__init__() - - def __repr__(self): - return f"" - - def __getitem__(self, key): - # We allow fall-through here, so values default to None - - return self.__dict__.get(key, None) - - def get(self, key, default=None): - return self.__dict__.get(key, default) diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/modeling/test_box2box_transform.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/modeling/test_box2box_transform.py deleted file mode 100644 index fd3a7b79b6b7a3608ad7cb3918de020a5a600d2f..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/modeling/test_box2box_transform.py +++ /dev/null @@ -1,94 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-import logging -import unittest -import torch - -from detectron2.modeling.box_regression import ( - Box2BoxTransform, - Box2BoxTransformLinear, - Box2BoxTransformRotated, -) -from detectron2.utils.testing import random_boxes - -logger = logging.getLogger(__name__) - - -class TestBox2BoxTransform(unittest.TestCase): - def test_reconstruction(self): - weights = (5, 5, 10, 10) - b2b_tfm = Box2BoxTransform(weights=weights) - src_boxes = random_boxes(10) - dst_boxes = random_boxes(10) - - devices = [torch.device("cpu")] - if torch.cuda.is_available(): - devices.append(torch.device("cuda")) - for device in devices: - src_boxes = src_boxes.to(device=device) - dst_boxes = dst_boxes.to(device=device) - deltas = b2b_tfm.get_deltas(src_boxes, dst_boxes) - dst_boxes_reconstructed = b2b_tfm.apply_deltas(deltas, src_boxes) - self.assertTrue(torch.allclose(dst_boxes, dst_boxes_reconstructed)) - - def test_apply_deltas_tracing(self): - weights = (5, 5, 10, 10) - b2b_tfm = Box2BoxTransform(weights=weights) - - with torch.no_grad(): - func = torch.jit.trace(b2b_tfm.apply_deltas, (torch.randn(10, 20), torch.randn(10, 4))) - - o = func(torch.randn(10, 20), torch.randn(10, 4)) - self.assertEqual(o.shape, (10, 20)) - o = func(torch.randn(5, 20), torch.randn(5, 4)) - self.assertEqual(o.shape, (5, 20)) - - -def random_rotated_boxes(mean_box, std_length, std_angle, N): - return torch.cat( - [torch.rand(N, 4) * std_length, torch.rand(N, 1) * std_angle], dim=1 - ) + torch.tensor(mean_box, dtype=torch.float) - - -class TestBox2BoxTransformRotated(unittest.TestCase): - def test_reconstruction(self): - weights = (5, 5, 10, 10, 1) - b2b_transform = Box2BoxTransformRotated(weights=weights) - src_boxes = random_rotated_boxes([10, 10, 20, 20, -30], 5, 60.0, 10) - dst_boxes = random_rotated_boxes([10, 10, 20, 20, -30], 5, 60.0, 10) - - devices = [torch.device("cpu")] - if torch.cuda.is_available(): - devices.append(torch.device("cuda")) - for device in devices: - src_boxes = src_boxes.to(device=device) - dst_boxes = dst_boxes.to(device=device) - deltas = b2b_transform.get_deltas(src_boxes, dst_boxes) - dst_boxes_reconstructed = b2b_transform.apply_deltas(deltas, src_boxes) - assert torch.allclose(dst_boxes[:, :4], dst_boxes_reconstructed[:, :4], atol=1e-5) - # angle difference has to be normalized - assert torch.allclose( - (dst_boxes[:, 4] - dst_boxes_reconstructed[:, 4] + 180.0) % 360.0 - 180.0, - torch.zeros_like(dst_boxes[:, 4]), - atol=1e-4, - ) - - -class TestBox2BoxTransformLinear(unittest.TestCase): - def test_reconstruction(self): - b2b_tfm = Box2BoxTransformLinear() - src_boxes = random_boxes(10) - dst_boxes = torch.tensor([0, 0, 101, 101] * 10).reshape(10, 4).float() - - devices = [torch.device("cpu")] - if torch.cuda.is_available(): - devices.append(torch.device("cuda")) - for device in devices: - src_boxes = src_boxes.to(device=device) - dst_boxes = dst_boxes.to(device=device) - deltas = b2b_tfm.get_deltas(src_boxes, dst_boxes) - dst_boxes_reconstructed = b2b_tfm.apply_deltas(deltas, src_boxes) - self.assertTrue(torch.allclose(dst_boxes, dst_boxes_reconstructed, atol=1e-3)) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/Benson/text-generation/Examples/Bus Simulator Indonesia Nuevo Mapa Descargar.md b/spaces/Benson/text-generation/Examples/Bus Simulator Indonesia Nuevo Mapa Descargar.md deleted file mode 100644 index f59e9475e9a89ac66fa61c700a078eb5b755ddbd..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Bus Simulator Indonesia Nuevo Mapa 
Descargar.md +++ /dev/null @@ -1,69 +0,0 @@ -
    -

    Bus Simulator Indonesia: How to Download and Enjoy New Maps

    -

    Bus Simulator Indonesia (aka BUSSID) is a popular simulation game that lets you experience what it is like to be a bus driver in Indonesia in a fun and authentic way. BUSSID may not be the first of its kind, but it is probably one of the few bus simulator games with this many features and the most authentic Indonesian setting.

    -

    Some of the main features of BUSSID are:

    -

    bus simulator indonesia new map download


    DOWNLOAD >>>>> https://bltlly.com/2v6MdH



    -
      -
    • Design your own livery
    • -
    • Very easy and intuitive controls
    • -
    • Authentic Indonesian cities and places
    • -
    • Indonesian buses
    • -
    • Cool and fun honks, including the iconic "Om Telolet Om!" horn
    • -
    • High-quality and detailed 3D graphics
    • -
    • No obstructive ads while driving
    • -
    • Leaderboard and online data saving
    • -
    • Use your own 3D model through the vehicle mod system
    • -
    • Online multiplayer convoy
    • -
    -

    To play BUSSID, you need to choose a bus, a livery, and a route. Then you have to drive your bus along the route, pick up and drop off passengers, earn money, and avoid accidents. You can also customize your bus, upgrade your garage, and join online convoys with other players.

    -

    One of the benefits of playing BUSSID is that you can download new maps for the game, which can add more variety, challenge, and fun to your driving experience. New maps can have different themes, such as extreme, off-road, or scenic. They can also have different features, such as sharp curves, steep hills, or realistic landmarks. New maps can make you feel like you are driving in different regions of Indonesia or even in other countries.

    -

    But how do you download new maps for BUSSID? And how do you enjoy them? In this article, we will show you how to do both in easy steps. Let's get started!

    -

    How to Download New Maps for Bus Simulator Indonesia

    - -

    One of the best sources of mod maps for BUSSID is MediaRale, a website that provides various mods for games, including BUSSID. MediaRale has a section dedicated to mod maps for BUSSID, where you can find many options to choose from. You can browse by category, such as extreme, off-road, or scenic. You can also see screenshots, descriptions, ratings, and download links for each mod map.

    -

    Once you have found a mod map that you like, you need to download it to your device. The mod map file will usually be in ZIP or RAR format, which means you need to extract it using a file manager app or a ZIP extractor app. You can find many free apps for this purpose on the Google Play Store or the App Store.

    -

    After you have extracted the mod map file, you need to copy it to the BUSSID mod folder. The mod folder is located in your device's internal storage, under the folder Android/data/com.maleo.bussimulatorid/files/mod. You can use a file manager app to navigate to this folder and paste the mod map file there.
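    If you prefer to script the extract-and-copy step instead of doing it by hand in a file manager (for example from a desktop with the phone's storage mounted, or from a Python app on the device), a minimal sketch might look like the one below. The download location and archive name are assumptions for illustration; only the Android/data/com.maleo.bussimulatorid/files/mod folder comes from the step above, and the snippet handles ZIP archives only (RAR files need a separate extractor).

```python
import shutil
import zipfile
from pathlib import Path

# Assumed locations -- adjust them to match your device or mounted storage.
downloaded_archive = Path("/storage/emulated/0/Download/new_bussid_map.zip")  # hypothetical file name
mod_folder = Path("/storage/emulated/0/Android/data/com.maleo.bussimulatorid/files/mod")

# Make sure the BUSSID mod folder exists before copying anything into it.
mod_folder.mkdir(parents=True, exist_ok=True)

# Extract the ZIP archive into a folder next to the download.
extract_dir = downloaded_archive.with_suffix("")
with zipfile.ZipFile(downloaded_archive) as archive:
    archive.extractall(extract_dir)

# Copy every extracted file into the mod folder so the game can list the new map.
for item in extract_dir.iterdir():
    if item.is_file():
        shutil.copy2(item, mod_folder / item.name)
        print(f"Copied {item.name} -> {mod_folder}")
```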

    -

    -

    The last step is to launch the game and select the mod map from the map menu. You can do this by tapping the map icon in the top right corner of the screen and then scrolling down to find the mod map you downloaded. Tap it to select it, and then tap the start button to begin your trip.

    -

    How to Enjoy New Maps for Bus Simulator Indonesia

    -

    Now that you have downloaded and installed a new map for BUSSID, you can enjoy it by driving your bus on it. However, there are some tips that can help you get the most out of your experience. Here are some of them:

    -
      -
    • Tip 1: Choose a suitable bus and livery for the map. A bus and livery that match the map's terrain and theme will make your drive smoother and more immersive.
    • -
    • Tip 2: Follow the traffic rules and respect other drivers. Even though you are playing on a mod map, you still have to follow the traffic rules and respect other drivers on the road. This means you should obey the speed limit, stop at red lights, signal before turning, and avoid collisions. This will not only make your driving more realistic and safe, but also more enjoyable and rewarding.
    • -
    • Tip 3: Use the horn and other features to interact with the environment. One of the most fun aspects of BUSSID is that you can use the horn and other functions to interact with the environment. For example, you can use the horn to greet other drivers, pedestrians, or animals. You can also use the wipers, headlights, indicators, and doors to communicate with others or express yourself. You can even use the "Om Telolet Om!" horn to make people cheer for you.
    • -
    • Tip 4: Explore different routes and landmarks on the map. Another way to enjoy new maps for BUSSID is to explore the different routes and landmarks on them. You can do this by following the GPS navigation or by choosing your own path. You can discover new places, scenery, or challenges that you have not seen before. You may also find hidden secrets or Easter eggs that the map creator has left for you.
    • -
    • Tip 5: Join online multiplayer convoys with other players. The best way to enjoy new maps for BUSSID is to join online multiplayer convoys with other players. You can do this by tapping the convoy icon in the top left corner of the screen and then choosing a convoy that is playing on the same map as you. You can also create your own convoy and invite your friends or other players to join you. By joining a convoy, you can chat with other players, share your experiences, and have fun together.
    • -
    -

    Conclusion

    -

    In this article, we have shown you how to download and enjoy new maps for Bus Simulator Indonesia. To download new maps for BUSSID, you can follow these steps:
      -
    1. Find a mod map that you like on MediaRale
    2. -
    3. Download the mod map file and extract it if necessary
    4. -
    5. Copy the mod map file to the BUSSID mod folder
    6. -
    7. Launch the game and select the mod map from the map menu
    8. -
    -

    To enjoy new maps for BUSSID, you can follow these tips:

    -
      -
    • Choose a suitable bus and livery for the map
    • -
    • Follow the traffic rules and respect other drivers
    • -
    • Use the horn and other features to interact with the environment
    • -
    • Explore different routes and landmarks on the map
    • -
    • Join online multiplayer convoys with other players
    • -
    -

    By following these steps and tips, you can download and enjoy new maps for BUSSID and have fun driving your bus on them. If you have not tried BUSSID yet, you can download it for free from the Google Play Store or the App Store. You can also visit the official BUSSID website to learn more about the game and its features. Happy driving!

    -

    Frequently Asked Questions

    -

    Here are some frequently asked questions about new maps for BUSSID:

    -
      -
    1. Q: How many new maps are available for BUSSID?
    2. -
    3. A: There is no exact number of new maps for BUSSID, since new mod maps are constantly being created and uploaded by users. However, you can find hundreds of mod maps for BUSSID on MediaRale, ranging from extreme, off-road, and scenic maps to realistic ones.
    4. -
    5. Q: How do I know if a mod map is compatible with my version of BUSSID?
    6. -
    7. A: You can check the compatibility of a mod map by looking at its description, rating, and comments on MediaRale. You can also check the mod map's upload date and compare it with the date of the latest BUSSID update. Generally, mod maps uploaded after the latest BUSSID update are more likely to be compatible.
    8. -
    9. Q: How do I uninstall a mod map from BUSSID?
    10. -
    A: You can uninstall a mod map by deleting its file from the BUSSID mod folder using a file manager app. Once the file is removed, the map will no longer appear in the game's map menu.
    -
    11. Q: How do I report a problem or a bug with a mod map?
    12. -
    13. A: If you find a problem or a bug with a mod map, you can report it to the mod map creator or to MediaRale. You can find the mod map creator's contact information on their profile page on MediaRale. You can also leave a comment or a rating on the mod map's page on MediaRale to share your feedback.
    14. -
    15. Q: How do I create my own mod map for BUSSID?
    16. -
    17. A: If you want to create your own mod map for BUSSID, you need to use 3D modeling software such as Blender, SketchUp, or Maya. You also need to follow BUSSID's guidelines and specifications for creating mod maps. You can find more information and tutorials on how to create mod maps for BUSSID on the official BUSSID website or on YouTube.
    18. -

    64aa2da5cf
    -
    -
    \ No newline at end of file diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/__init__.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/.github/CODE_OF_CONDUCT.md b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/.github/CODE_OF_CONDUCT.md deleted file mode 100644 index 0f7ad8bfc173eac554f0b6ef7c684861e8014bbe..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/.github/CODE_OF_CONDUCT.md +++ /dev/null @@ -1,5 +0,0 @@ -# Code of Conduct - -Facebook has adopted a Code of Conduct that we expect project participants to adhere to. -Please read the [full text](https://code.fb.com/codeofconduct/) -so that you can understand what actions will and will not be tolerated. diff --git a/spaces/CVPR/LIVE/thrust/cmake/ThrustMultiConfig.cmake b/spaces/CVPR/LIVE/thrust/cmake/ThrustMultiConfig.cmake deleted file mode 100644 index 2b3a40284e6f9fd5515b0fe708b42a0bcc9d3bf2..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/cmake/ThrustMultiConfig.cmake +++ /dev/null @@ -1,127 +0,0 @@ -# This file defines thrust_configure_multiconfig(), which sets up and handles -# the MultiConfig options that allow multiple host/device/dialect configurations -# to be generated from a single thrust build. - -function(thrust_configure_multiconfig) - option(THRUST_ENABLE_MULTICONFIG "Enable multiconfig options for coverage testing." OFF) - - # Dialects: - set(THRUST_CPP_DIALECT_OPTIONS - 11 14 17 - CACHE INTERNAL "C++ dialects supported by Thrust." FORCE - ) - - if (THRUST_ENABLE_MULTICONFIG) - # Handle dialect options: - foreach (dialect IN LISTS THRUST_CPP_DIALECT_OPTIONS) - set(default_value OFF) - if (dialect EQUAL 14) # Default to just 14 on: - set(default_value ON) - endif() - option(THRUST_MULTICONFIG_ENABLE_DIALECT_CPP${dialect} - "Generate C++${dialect} build configurations." - ${default_value} - ) - endforeach() - - # Supported versions of MSVC do not distinguish between C++11 and C++14. - # Warn the user that they may be generating a ton of redundant targets. - if ("MSVC" STREQUAL "${CMAKE_CXX_COMPILER_ID}" AND - THRUST_MULTICONFIG_ENABLE_DIALECT_CPP11) - message(WARNING - "Supported versions of MSVC (2017+) do not distinguish between C++11 " - "and C++14. The requested C++11 targets will be built with C++14." - ) - endif() - - # Systems: - option(THRUST_MULTICONFIG_ENABLE_SYSTEM_CPP "Generate build configurations that use CPP." ON) - option(THRUST_MULTICONFIG_ENABLE_SYSTEM_CUDA "Generate build configurations that use CUDA." ON) - option(THRUST_MULTICONFIG_ENABLE_SYSTEM_OMP "Generate build configurations that use OpenMP." OFF) - option(THRUST_MULTICONFIG_ENABLE_SYSTEM_TBB "Generate build configurations that use TBB." OFF) - - # CMake added C++17 support for CUDA targets in 3.18: - if (THRUST_MULTICONFIG_ENABLE_DIALECT_CPP17 AND - THRUST_MULTICONFIG_ENABLE_SYSTEM_CUDA) - cmake_minimum_required(VERSION 3.18) - endif() - - # Workload: - # - `SMALL`: [3 configs] Minimal coverage and validation of each device system against the `CPP` host. - # - `MEDIUM`: [6 configs] Cheap extended coverage. - # - `LARGE`: [8 configs] Expensive extended coverage. Include all useful build configurations. - # - `FULL`: [12 configs] The complete cross product of all possible build configurations. 
- # - # Config | Workloads | Value | Expense | Note - # ---------|-----------|------------|-----------|----------------------------- - # CPP/CUDA | F L M S | Essential | Expensive | Validates CUDA against CPP - # CPP/OMP | F L M S | Essential | Cheap | Validates OMP against CPP - # CPP/TBB | F L M S | Essential | Cheap | Validates TBB against CPP - # CPP/CPP | F L M | Important | Cheap | Tests CPP as device - # OMP/OMP | F L M | Important | Cheap | Tests OMP as host - # TBB/TBB | F L M | Important | Cheap | Tests TBB as host - # TBB/CUDA | F L | Important | Expensive | Validates TBB/CUDA interop - # OMP/CUDA | F L | Important | Expensive | Validates OMP/CUDA interop - # TBB/OMP | F | Not useful | Cheap | Mixes CPU-parallel systems - # OMP/TBB | F | Not useful | Cheap | Mixes CPU-parallel systems - # TBB/CPP | F | Not Useful | Cheap | Parallel host, serial device - # OMP/CPP | F | Not Useful | Cheap | Parallel host, serial device - - set(THRUST_MULTICONFIG_WORKLOAD SMALL CACHE STRING - "Limit host/device configs: SMALL (up to 3 h/d combos per dialect), MEDIUM(6), LARGE(8), FULL(12)" - ) - set_property(CACHE THRUST_MULTICONFIG_WORKLOAD PROPERTY STRINGS - SMALL MEDIUM LARGE FULL - ) - set(THRUST_MULTICONFIG_WORKLOAD_SMALL_CONFIGS - CPP_OMP CPP_TBB CPP_CUDA - CACHE INTERNAL "Host/device combos enabled for SMALL workloads." FORCE - ) - set(THRUST_MULTICONFIG_WORKLOAD_MEDIUM_CONFIGS - ${THRUST_MULTICONFIG_WORKLOAD_SMALL_CONFIGS} - CPP_CPP TBB_TBB OMP_OMP - CACHE INTERNAL "Host/device combos enabled for MEDIUM workloads." FORCE - ) - set(THRUST_MULTICONFIG_WORKLOAD_LARGE_CONFIGS - ${THRUST_MULTICONFIG_WORKLOAD_MEDIUM_CONFIGS} - OMP_CUDA TBB_CUDA - CACHE INTERNAL "Host/device combos enabled for LARGE workloads." FORCE - ) - set(THRUST_MULTICONFIG_WORKLOAD_FULL_CONFIGS - ${THRUST_MULTICONFIG_WORKLOAD_LARGE_CONFIGS} - OMP_CPP TBB_CPP OMP_TBB TBB_OMP - CACHE INTERNAL "Host/device combos enabled for FULL workloads." FORCE - ) - - # Hide the single config options if they exist from a previous run: - if (DEFINED THRUST_HOST_SYSTEM) - set_property(CACHE THRUST_HOST_SYSTEM PROPERTY TYPE INTERNAL) - set_property(CACHE THRUST_DEVICE_SYSTEM PROPERTY TYPE INTERNAL) - endif() - if (DEFINED THRUST_CPP_DIALECT) - set_property(CACHE THRUST_CPP_DIALECT PROPERTY TYPE INTERNAL) - endif() - - else() # Single config: - # Restore system option visibility if these cache options already exist - # from a previous run. - if (DEFINED THRUST_HOST_SYSTEM) - set_property(CACHE THRUST_HOST_SYSTEM PROPERTY TYPE STRING) - set_property(CACHE THRUST_DEVICE_SYSTEM PROPERTY TYPE STRING) - endif() - - set(THRUST_CPP_DIALECT 14 - CACHE STRING "The C++ standard to target: ${THRUST_CPP_DIALECT_OPTIONS}" - ) - set_property(CACHE THRUST_CPP_DIALECT - PROPERTY STRINGS - ${THRUST_CPP_DIALECT_OPTIONS} - ) - - # CMake added C++17 support for CUDA targets in 3.18: - if (THRUST_CPP_DIALECT EQUAL 17 AND - THRUST_DEVICE_SYSTEM STREQUAL "CUDA") - cmake_minimum_required(VERSION 3.18) - endif() - endif() -endfunction() diff --git a/spaces/DAMO-NLP-SG/CLEX-Chat/modeling_llama.py b/spaces/DAMO-NLP-SG/CLEX-Chat/modeling_llama.py deleted file mode 100644 index 840720b4a56f748f414592646ca68dbf4154e742..0000000000000000000000000000000000000000 --- a/spaces/DAMO-NLP-SG/CLEX-Chat/modeling_llama.py +++ /dev/null @@ -1,985 +0,0 @@ -# coding=utf-8 -# Copyright 2022 EleutherAI and the HuggingFace Inc. team. All rights reserved. -# -# This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX -# and OPT implementations in this library. 
It has been modified from its -# original forms to accommodate minor architectural differences compared -# to GPT-NeoX and OPT used by the Meta AI team that trained the model. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" PyTorch LLaMA model.""" -import math -from typing import List, Optional, Tuple, Union - -import torch -import torch.utils.checkpoint -from torch import nn -from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss - -from transformers.activations import ACT2FN -from transformers.modeling_outputs import BaseModelOutputWithPast, CausalLMOutputWithPast, SequenceClassifierOutputWithPast -from transformers.modeling_utils import PreTrainedModel -from transformers.utils import add_start_docstrings, add_start_docstrings_to_model_forward, logging, replace_return_docstrings -from configuration_clex import CLEXLlamaConfig -from clex_layer import LlamaCLEXScalingRotaryEmbedding -from einops import rearrange -import importlib.metadata -import importlib.util - - -logger = logging.get_logger(__name__) - -def _is_package_available(pkg_name: str, return_version: bool = False) -> Union[Tuple[bool, str], bool]: - # Check we're not importing a "pkg_name" directory somewhere but the actual library by trying to grab the version - package_exists = importlib.util.find_spec(pkg_name) is not None - package_version = "N/A" - if package_exists: - try: - package_version = importlib.metadata.version(pkg_name) - package_exists = True - except importlib.metadata.PackageNotFoundError: - package_exists = False - logger.info(f"Detected {pkg_name} version {package_version}") - if return_version: - return package_exists, package_version - else: - return package_exists - -def is_flash_attn_available(): - if not _is_package_available("torch", return_version=True): - return False - - # Let's add an extra check to see if cuda is available - - return _is_package_available("flash_attn") and torch.cuda.is_available() - - - - - - -_CONFIG_FOR_DOC = "CLEXLlamaConfig" - - - - - -# Copied from transformers.models.bart.modeling_bart._make_causal_mask -def _make_causal_mask( - input_ids_shape: torch.Size, dtype: torch.dtype, device: torch.device, past_key_values_length: int = 0 -): - """ - Make causal mask used for bi-directional self-attention. 
- """ - bsz, tgt_len = input_ids_shape - mask = torch.full((tgt_len, tgt_len), torch.tensor(torch.finfo(dtype).min, device=device), device=device) - mask_cond = torch.arange(mask.size(-1), device=device) - mask.masked_fill_(mask_cond < (mask_cond + 1).view(mask.size(-1), 1), 0) - mask = mask.to(dtype) - - if past_key_values_length > 0: - mask = torch.cat([torch.zeros(tgt_len, past_key_values_length, dtype=dtype, device=device), mask], dim=-1) - return mask[None, None, :, :].expand(bsz, 1, tgt_len, tgt_len + past_key_values_length) - - -# Copied from transformers.models.bart.modeling_bart._expand_mask -def _expand_mask(mask: torch.Tensor, dtype: torch.dtype, tgt_len: Optional[int] = None): - """ - Expands attention_mask from `[bsz, seq_len]` to `[bsz, 1, tgt_seq_len, src_seq_len]`. - """ - bsz, src_len = mask.size() - tgt_len = tgt_len if tgt_len is not None else src_len - - expanded_mask = mask[:, None, None, :].expand(bsz, 1, tgt_len, src_len).to(dtype) - - inverted_mask = 1.0 - expanded_mask - - return inverted_mask.masked_fill(inverted_mask.to(torch.bool), torch.finfo(dtype).min) - - -class LlamaRMSNorm(nn.Module): - def __init__(self, hidden_size, eps=1e-6): - """ - LlamaRMSNorm is equivalent to T5LayerNorm - """ - super().__init__() - self.weight = nn.Parameter(torch.ones(hidden_size)) - self.variance_epsilon = eps - - def forward(self, hidden_states): - variance = hidden_states.to(torch.float32).pow(2).mean(-1, keepdim=True) - hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon) - - # convert into half-precision if necessary - if self.weight.dtype in [torch.float16, torch.bfloat16]: - hidden_states = hidden_states.to(self.weight.dtype) - - return self.weight * hidden_states - - -class LlamaRotaryEmbedding(torch.nn.Module): - def __init__(self, dim, max_position_embeddings=2048, base=10000, device=None): - super().__init__() - inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float().to(device) / dim)) - self.register_buffer("inv_freq", inv_freq) - - # Build here to make `torch.jit.trace` work. - self.max_seq_len_cached = max_position_embeddings - t = torch.arange(self.max_seq_len_cached, device=self.inv_freq.device, dtype=self.inv_freq.dtype) - freqs = torch.einsum("i,j->ij", t, self.inv_freq) - # Different from paper, but it uses a different permutation in order to obtain the same calculation - emb = torch.cat((freqs, freqs), dim=-1) - self.register_buffer("cos_cached", emb.cos()[None, None, :, :], persistent=False) - self.register_buffer("sin_cached", emb.sin()[None, None, :, :], persistent=False) - - def forward(self, x, seq_len=None): - # x: [bs, num_attention_heads, seq_len, head_size] - # This `if` block is unlikely to be run after we build sin/cos in `__init__`. Keep the logic here just in case. 
- if seq_len > self.max_seq_len_cached: - self.max_seq_len_cached = seq_len - t = torch.arange(self.max_seq_len_cached, device=x.device, dtype=self.inv_freq.dtype) - freqs = torch.einsum("i,j->ij", t, self.inv_freq) - # Different from paper, but it uses a different permutation in order to obtain the same calculation - emb = torch.cat((freqs, freqs), dim=-1).to(x.device) - self.register_buffer("cos_cached", emb.cos()[None, None, :, :], persistent=False) - self.register_buffer("sin_cached", emb.sin()[None, None, :, :], persistent=False) - return ( - self.cos_cached[:, :, :seq_len, ...].to(dtype=x.dtype), - self.sin_cached[:, :, :seq_len, ...].to(dtype=x.dtype), - ) - - -def rotate_half(x): - """Rotates half the hidden dims of the input.""" - x1 = x[..., : x.shape[-1] // 2] - x2 = x[..., x.shape[-1] // 2 :] - return torch.cat((-x2, x1), dim=-1) - - -def apply_rotary_pos_emb(q, k, cos, sin, position_ids): - # The first two dimensions of cos and sin are always 1, so we can `squeeze` them. - cos = cos.squeeze(1).squeeze(0) # [seq_len, dim] - sin = sin.squeeze(1).squeeze(0) # [seq_len, dim] - cos = cos[position_ids].unsqueeze(1) # [bs, 1, seq_len, dim] - sin = sin[position_ids].unsqueeze(1) # [bs, 1, seq_len, dim] - q_embed = (q * cos) + (rotate_half(q) * sin) - k_embed = (k * cos) + (rotate_half(k) * sin) - return q_embed, k_embed - - -class LlamaMLP(nn.Module): - def __init__( - self, - hidden_size: int, - intermediate_size: int, - hidden_act: str, - ): - super().__init__() - self.gate_proj = nn.Linear(hidden_size, intermediate_size, bias=False) - self.down_proj = nn.Linear(intermediate_size, hidden_size, bias=False) - self.up_proj = nn.Linear(hidden_size, intermediate_size, bias=False) - self.act_fn = ACT2FN[hidden_act] - - def forward(self, x): - return self.down_proj(self.act_fn(self.gate_proj(x)) * self.up_proj(x)) - - -class LlamaAttention(nn.Module): - """Multi-headed attention from 'Attention Is All You Need' paper""" - - def __init__(self, config: CLEXLlamaConfig): - super().__init__() - self.config = config - self.hidden_size = config.hidden_size - self.num_heads = config.num_attention_heads - self.head_dim = self.hidden_size // self.num_heads - self.max_position_embeddings = config.max_position_embeddings - self.log_scale = config.log_scale - if (self.head_dim * self.num_heads) != self.hidden_size: - raise ValueError( - f"hidden_size must be divisible by num_heads (got `hidden_size`: {self.hidden_size}" - f" and `num_heads`: {self.num_heads})." 
- ) - self.q_proj = nn.Linear(self.hidden_size, self.num_heads * self.head_dim, bias=False) - self.k_proj = nn.Linear(self.hidden_size, self.num_heads * self.head_dim, bias=False) - self.v_proj = nn.Linear(self.hidden_size, self.num_heads * self.head_dim, bias=False) - self.o_proj = nn.Linear(self.num_heads * self.head_dim, self.hidden_size, bias=False) - self.rotary_emb = LlamaRotaryEmbedding(self.head_dim, max_position_embeddings=self.max_position_embeddings) - - def _shape(self, tensor: torch.Tensor, seq_len: int, bsz: int): - return tensor.view(bsz, seq_len, self.num_heads, self.head_dim).transpose(1, 2).contiguous() - - def flash_attn_forward( - self, - qkv: torch.Tensor, - key_padding_mask: Optional[torch.Tensor] = None, - ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]: - """Input shape: Batch x Time x Channel - - attention_mask: [bsz, q_len] - """ - if is_flash_attn_available(): - from flash_attn.flash_attn_interface import flash_attn_varlen_qkvpacked_func, flash_attn_qkvpacked_func, flash_attn_with_kvcache - # from flash_attn.flash_attn_interface import flash_attn_unpadded_qkvpacked_func - from flash_attn.bert_padding import unpad_input, pad_input - bsz, q_len, *_ = qkv.size() - - if key_padding_mask is None: - # qkv = rearrange(qkv, "b s ... -> (b s) ...") - max_s = q_len - cu_q_lens = torch.arange( - 0, (bsz + 1) * q_len, step=q_len, dtype=torch.int32, device=qkv.device - ) - output = flash_attn_qkvpacked_func( - qkv, 0.0, softmax_scale=None, causal=True - ) - else: - nheads = qkv.shape[-2] - x = rearrange(qkv, "b s three h d -> b s (three h d)") - x_unpad, indices, cu_q_lens, max_s = unpad_input(x, key_padding_mask) - x_unpad = rearrange( - x_unpad, "nnz (three h d) -> nnz three h d", three=3, h=nheads - ) - output_unpad = flash_attn_varlen_qkvpacked_func( - x_unpad, cu_q_lens, max_s, 0.0, softmax_scale=None, causal=True - ) - output = rearrange( - pad_input( - rearrange(output_unpad, "nnz h d -> nnz (h d)"), indices, bsz, q_len - ), - "b s (h d) -> b s h d", - h=nheads, - ) - return self.o_proj(rearrange(output, "b s h d -> b s (h d)")) - - def forward( - self, - hidden_states: torch.Tensor, - attention_mask: Optional[torch.Tensor] = None, - position_ids: Optional[torch.LongTensor] = None, - pack_cos_sin = None, - past_key_value: Optional[Tuple[torch.Tensor]] = None, - output_attentions: bool = False, - use_cache: bool = False, - ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]: - bsz, q_len, _ = hidden_states.size() - - query_states = self.q_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2) - key_states = self.k_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2) - value_states = self.v_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2) - - kv_seq_len = key_states.shape[-2] - - if past_key_value is not None: - kv_seq_len += past_key_value[0].shape[-2] - - if pack_cos_sin is not None: - cos, sin = pack_cos_sin.to(query_states.device) - else: - cos, sin = self.rotary_emb(value_states, seq_len=kv_seq_len) - query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids) - - if past_key_value is not None: - # reuse k, v, self_attention - key_states = torch.cat([past_key_value[0], key_states], dim=2) - value_states = torch.cat([past_key_value[1], value_states], dim=2) - - past_key_value = (key_states, value_states) if use_cache else None - - use_flashattn = self.config.use_flashattn and 
is_flash_attn_available() - - - - attn_weights = torch.matmul(query_states, key_states.transpose(2, 3)) / math.sqrt(self.head_dim) - - if attn_weights.size() != (bsz, self.num_heads, q_len, kv_seq_len): - raise ValueError( - f"Attention weights should be of size {(bsz, self.num_heads, q_len, kv_seq_len)}, but is" - f" {attn_weights.size()}" - ) - - if attention_mask is not None: - if attention_mask.size() != (bsz, 1, q_len, kv_seq_len): - raise ValueError( - f"Attention mask should be of size {(bsz, 1, q_len, kv_seq_len)}, but is {attention_mask.size()}" - ) - attn_weights = attn_weights + attention_mask - attn_weights = torch.max(attn_weights, torch.tensor(torch.finfo(attn_weights.dtype).min)) - - # upcast attention to fp32 - attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query_states.dtype) - attn_output = torch.matmul(attn_weights, value_states) - - if attn_output.size() != (bsz, self.num_heads, q_len, self.head_dim): - raise ValueError( - f"`attn_output` should be of size {(bsz, self.num_heads, q_len, self.head_dim)}, but is" - f" {attn_output.size()}" - ) - - attn_output = attn_output.transpose(1, 2) - attn_output = attn_output.reshape(bsz, q_len, self.hidden_size) - - attn_output = self.o_proj(attn_output) - - if not output_attentions: - attn_weights = None - - return attn_output, attn_weights, past_key_value - - -class LlamaDecoderLayer(nn.Module): - def __init__(self, config: CLEXLlamaConfig): - super().__init__() - self.hidden_size = config.hidden_size - self.self_attn = LlamaAttention(config=config) - self.mlp = LlamaMLP( - hidden_size=self.hidden_size, - intermediate_size=config.intermediate_size, - hidden_act=config.hidden_act, - ) - self.input_layernorm = LlamaRMSNorm(config.hidden_size, eps=config.rms_norm_eps) - self.post_attention_layernorm = LlamaRMSNorm(config.hidden_size, eps=config.rms_norm_eps) - - def forward( - self, - hidden_states: torch.Tensor, - attention_mask: Optional[torch.Tensor] = None, - position_ids: Optional[torch.LongTensor] = None, - pack_cos_sin=None, - past_key_value: Optional[Tuple[torch.Tensor]] = None, - output_attentions: Optional[bool] = False, - use_cache: Optional[bool] = False, - ) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]: - """ - Args: - hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)` - attention_mask (`torch.FloatTensor`, *optional*): attention mask of size - `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under - returned tensors for more detail. - use_cache (`bool`, *optional*): - If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding - (see `past_key_values`). 
- past_key_value (`Tuple(torch.FloatTensor)`, *optional*): cached past key and value projection states - """ - - residual = hidden_states - - hidden_states = self.input_layernorm(hidden_states) - - # Self Attention - hidden_states, self_attn_weights, present_key_value = self.self_attn( - hidden_states=hidden_states, - attention_mask=attention_mask, - position_ids=position_ids, - pack_cos_sin=pack_cos_sin, - past_key_value=past_key_value, - output_attentions=output_attentions, - use_cache=use_cache, - ) - hidden_states = residual + hidden_states - - # Fully Connected - residual = hidden_states - hidden_states = self.post_attention_layernorm(hidden_states) - hidden_states = self.mlp(hidden_states) - hidden_states = residual + hidden_states - - outputs = (hidden_states,) - - if output_attentions: - outputs += (self_attn_weights,) - - if use_cache: - outputs += (present_key_value,) - - return outputs - - -LLAMA_START_DOCSTRING = r""" - This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the - library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads - etc.) - - This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. - Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage - and behavior. - - Parameters: - config ([`CLEXLlamaConfig`]): - Model configuration class with all the parameters of the model. Initializing with a config file does not - load the weights associated with the model, only the configuration. Check out the - [`~PreTrainedModel.from_pretrained`] method to load the model weights. -""" - - -@add_start_docstrings( - "The bare LLaMA Model outputting raw hidden-states without any specific head on top.", - LLAMA_START_DOCSTRING, -) -class LlamaPreTrainedModel(PreTrainedModel): - config_class = CLEXLlamaConfig - base_model_prefix = "model" - supports_gradient_checkpointing = True - _no_split_modules = ["LlamaDecoderLayer"] - _keys_to_ignore_on_load_unexpected = [r"decoder\.version"] - _keep_in_fp32_modules = ["model.clex_layer.proj_func.ode_up_proj", "model.clex_layer.proj_func.ode_down_proj", "model.clex_layer.inv_freq"] - - def _init_weights(self, module): - std = self.config.initializer_range - if isinstance(module, nn.Linear): - module.weight.data.normal_(mean=0.0, std=std) - if module.bias is not None: - module.bias.data.zero_() - elif isinstance(module, nn.Embedding): - module.weight.data.normal_(mean=0.0, std=std) - if module.padding_idx is not None: - module.weight.data[module.padding_idx].zero_() - - def _set_gradient_checkpointing(self, module, value=False): - if isinstance(module, LlamaModel): - module.gradient_checkpointing = value - - -LLAMA_INPUTS_DOCSTRING = r""" - Args: - input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`): - Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide - it. - - Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and - [`PreTrainedTokenizer.__call__`] for details. - - [What are input IDs?](../glossary#input-ids) - attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*): - Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. 
- - [What are attention masks?](../glossary#attention-mask) - - Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and - [`PreTrainedTokenizer.__call__`] for details. - - If `past_key_values` is used, optionally only the last `decoder_input_ids` have to be input (see - `past_key_values`). - - If you want to change padding behavior, you should read [`modeling_opt._prepare_decoder_attention_mask`] - and modify to your needs. See diagram 1 in [the paper](https://arxiv.org/abs/1910.13461) for more - information on the default strategy. - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, - config.n_positions - 1]`. - - [What are position IDs?](../glossary#position-ids) - past_key_values (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`): - Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape - `(batch_size, num_heads, sequence_length, embed_size_per_head)`) and 2 additional tensors of shape - `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`. - - Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention - blocks) that can be used (see `past_key_values` input) to speed up sequential decoding. - - If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that - don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all - `decoder_input_ids` of shape `(batch_size, sequence_length)`. - inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): - Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This - is useful if you want more control over how to convert `input_ids` indices into associated vectors than the - model's internal embedding lookup matrix. - use_cache (`bool`, *optional*): - If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see - `past_key_values`). - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned - tensors for more detail. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for - more detail. - return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. -""" - - -@add_start_docstrings( - "The bare LLaMA Model outputting raw hidden-states without any specific head on top.", - LLAMA_START_DOCSTRING, -) -class LlamaModel(LlamaPreTrainedModel): - """ - Transformer decoder consisting of *config.num_hidden_layers* layers. 
Each layer is a [`LlamaDecoderLayer`] - - Args: - config: CLEXLlamaConfig - """ - - def __init__(self, config: CLEXLlamaConfig): - super().__init__(config) - self.padding_idx = config.pad_token_id - self.vocab_size = config.vocab_size - - self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx) - self.layers = nn.ModuleList([LlamaDecoderLayer(config) for _ in range(config.num_hidden_layers)]) - self.norm = LlamaRMSNorm(config.hidden_size, eps=config.rms_norm_eps) - head_dim = config.hidden_size // config.num_attention_heads - if config.rope_scaling["type"] == "clex": - self.clex_layer = LlamaCLEXScalingRotaryEmbedding(head_dim, config.max_position_embeddings, config.rope_scaling) - self.gradient_checkpointing = False - # Initialize weights and apply final processing - self.post_init() - - - def get_input_embeddings(self): - return self.embed_tokens - - def set_input_embeddings(self, value): - self.embed_tokens = value - - # Copied from transformers.models.bart.modeling_bart.BartDecoder._prepare_decoder_attention_mask - def _prepare_decoder_attention_mask(self, attention_mask, input_shape, inputs_embeds, past_key_values_length): - # create causal mask - # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] - combined_attention_mask = None - if input_shape[-1] > 1: - combined_attention_mask = _make_causal_mask( - input_shape, - inputs_embeds.dtype, - device=inputs_embeds.device, - past_key_values_length=past_key_values_length, - ) - - if attention_mask is not None: - # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] - expanded_attn_mask = _expand_mask(attention_mask, inputs_embeds.dtype, tgt_len=input_shape[-1]).to( - inputs_embeds.device - ) - combined_attention_mask = ( - expanded_attn_mask if combined_attention_mask is None else expanded_attn_mask + combined_attention_mask - ) - - return combined_attention_mask - - @add_start_docstrings_to_model_forward(LLAMA_INPUTS_DOCSTRING) - def forward( - self, - input_ids: torch.LongTensor = None, - attention_mask: Optional[torch.Tensor] = None, - position_ids: Optional[torch.LongTensor] = None, - past_key_values: Optional[List[torch.FloatTensor]] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, BaseModelOutputWithPast]: - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - use_cache = use_cache if use_cache is not None else self.config.use_cache - - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - # retrieve input_ids and inputs_embeds - if input_ids is not None and inputs_embeds is not None: - raise ValueError("You cannot specify both decoder_input_ids and decoder_inputs_embeds at the same time") - elif input_ids is not None: - batch_size, seq_length = input_ids.shape - elif inputs_embeds is not None: - batch_size, seq_length, _ = inputs_embeds.shape - else: - raise ValueError("You have to specify either decoder_input_ids or decoder_inputs_embeds") - - seq_length_with_past = seq_length - past_key_values_length = 0 - - if past_key_values is not None: - past_key_values_length = past_key_values[0][0].shape[2] - seq_length_with_past = seq_length_with_past + past_key_values_length - - if position_ids 
is None: - device = input_ids.device if input_ids is not None else inputs_embeds.device - position_ids = torch.arange( - past_key_values_length, seq_length + past_key_values_length, dtype=torch.long, device=device - ) - position_ids = position_ids.unsqueeze(0).view(-1, seq_length) - else: - position_ids = position_ids.view(-1, seq_length).long() - - if inputs_embeds is None: - inputs_embeds = self.embed_tokens(input_ids) - # embed positions - if attention_mask is None: - attention_mask = torch.ones( - (batch_size, seq_length_with_past), dtype=torch.bool, device=inputs_embeds.device - ) - attention_mask = self._prepare_decoder_attention_mask( - attention_mask, (batch_size, seq_length), inputs_embeds, past_key_values_length - ) - # attention_mask = None - - - hidden_states = inputs_embeds - - if self.gradient_checkpointing and self.training: - if use_cache: - logger.warning_once( - "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..." - ) - use_cache = False - - # decoder layers - all_hidden_states = () if output_hidden_states else None - all_self_attns = () if output_attentions else None - next_decoder_cache = () if use_cache else None - - pack_cos_sin = None - if self.config.rope_scaling["type"] == "clex": - pack_cos_sin = self.clex_layer(inputs_embeds.device, inputs_embeds.dtype, seq_length_with_past, self.training) - - for idx, decoder_layer in enumerate(self.layers): - if output_hidden_states: - all_hidden_states += (hidden_states,) - - past_key_value = past_key_values[idx] if past_key_values is not None else None - - if self.gradient_checkpointing and self.training: - - def create_custom_forward(module): - def custom_forward(*inputs): - # None for past_key_value - return module(*inputs, output_attentions, None) - - return custom_forward - - layer_outputs = torch.utils.checkpoint.checkpoint( - create_custom_forward(decoder_layer), - hidden_states, - attention_mask, - position_ids, - pack_cos_sin, - None, - ) - else: - layer_outputs = decoder_layer( - hidden_states, - attention_mask=attention_mask, - position_ids=position_ids, - pack_cos_sin=pack_cos_sin, - past_key_value=past_key_value, - output_attentions=output_attentions, - use_cache=use_cache, - ) - - hidden_states = layer_outputs[0] - - if use_cache: - next_decoder_cache += (layer_outputs[2 if output_attentions else 1],) - - if output_attentions: - all_self_attns += (layer_outputs[1],) - - hidden_states = self.norm(hidden_states) - - # add hidden states from the last decoder layer - if output_hidden_states: - all_hidden_states += (hidden_states,) - - next_cache = next_decoder_cache if use_cache else None - if not return_dict: - return tuple(v for v in [hidden_states, next_cache, all_hidden_states, all_self_attns] if v is not None) - return BaseModelOutputWithPast( - last_hidden_state=hidden_states, - past_key_values=next_cache, - hidden_states=all_hidden_states, - attentions=all_self_attns, - ) - - -class LlamaForCausalLM(LlamaPreTrainedModel): - def __init__(self, config): - super().__init__(config) - self.model = LlamaModel(config) - - self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False) - - # Initialize weights and apply final processing - self.post_init() - - def get_input_embeddings(self): - return self.model.embed_tokens - - def set_input_embeddings(self, value): - self.model.embed_tokens = value - - def get_output_embeddings(self): - return self.lm_head - - def set_output_embeddings(self, new_embeddings): - self.lm_head = new_embeddings - - def set_decoder(self, 
decoder): - self.model = decoder - - def get_decoder(self): - return self.model - - @add_start_docstrings_to_model_forward(LLAMA_INPUTS_DOCSTRING) - @replace_return_docstrings(output_type=CausalLMOutputWithPast, config_class=_CONFIG_FOR_DOC) - def forward( - self, - input_ids: torch.LongTensor = None, - attention_mask: Optional[torch.Tensor] = None, - position_ids: Optional[torch.LongTensor] = None, - past_key_values: Optional[List[torch.FloatTensor]] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - labels: Optional[torch.LongTensor] = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, CausalLMOutputWithPast]: - r""" - Args: - labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Labels for computing the masked language modeling loss. Indices should either be in `[0, ..., - config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored - (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`. - - Returns: - - Example: - - ```python - >>> from transformers import AutoTokenizer, LlamaForCausalLM - - >>> model = LlamaForCausalLM.from_pretrained(PATH_TO_CONVERTED_WEIGHTS) - >>> tokenizer = AutoTokenizer.from_pretrained(PATH_TO_CONVERTED_TOKENIZER) - - >>> prompt = "Hey, are you consciours? Can you talk to me?" - >>> inputs = tokenizer(prompt, return_tensors="pt") - - >>> # Generate - >>> generate_ids = model.generate(inputs.input_ids, max_length=30) - >>> tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0] - "Hey, are you consciours? Can you talk to me?\nI'm not consciours, but I can talk to you." 
- ```""" - - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn) - outputs = self.model( - input_ids=input_ids, - attention_mask=attention_mask, - position_ids=position_ids, - past_key_values=past_key_values, - inputs_embeds=inputs_embeds, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - hidden_states = outputs[0] - logits = self.lm_head(hidden_states) - - loss = None - if labels is not None: - # Shift so that tokens < n predict n - shift_logits = logits[..., :-1, :].contiguous() - shift_labels = labels[..., 1:].contiguous() - # Flatten the tokens - loss_fct = CrossEntropyLoss() - shift_logits = shift_logits.view(-1, self.config.vocab_size) - shift_labels = shift_labels.view(-1) - # Enable model parallelism - shift_labels = shift_labels.to(shift_logits.device) - loss = loss_fct(shift_logits, shift_labels) - if not return_dict: - output = (logits,) + outputs[1:] - return (loss,) + output if loss is not None else output - return CausalLMOutputWithPast( - loss=loss, - logits=logits, - past_key_values=outputs.past_key_values, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - def prepare_inputs_for_generation( - self, input_ids, past_key_values=None, attention_mask=None, inputs_embeds=None, **kwargs - ): - if past_key_values: - input_ids = input_ids[:, -1:] - - position_ids = kwargs.get("position_ids", None) - if attention_mask is not None and position_ids is None: - # create position_ids on the fly for batch generation - position_ids = attention_mask.long().cumsum(-1) - 1 - position_ids.masked_fill_(attention_mask == 0, 1) - if past_key_values: - position_ids = position_ids[:, -1].unsqueeze(-1) - - # if `inputs_embeds` are passed, we only want to use them in the 1st generation step - if inputs_embeds is not None and past_key_values is None: - model_inputs = {"inputs_embeds": inputs_embeds} - else: - model_inputs = {"input_ids": input_ids} - - model_inputs.update( - { - "position_ids": position_ids, - "past_key_values": past_key_values, - "use_cache": kwargs.get("use_cache"), - "attention_mask": attention_mask, - } - ) - return model_inputs - - @staticmethod - def _reorder_cache(past_key_values, beam_idx): - reordered_past = () - for layer_past in past_key_values: - reordered_past += (tuple(past_state.index_select(0, beam_idx) for past_state in layer_past),) - return reordered_past - - -@add_start_docstrings( - """ - The LLaMa Model transformer with a sequence classification head on top (linear layer). - - [`LlamaForSequenceClassification`] uses the last token in order to do the classification, as other causal models - (e.g. GPT-2) do. - - Since it does classification on the last token, it requires to know the position of the last token. If a - `pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If - no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the - padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (take the last value in - each row of the batch). 
- """, - LLAMA_START_DOCSTRING, -) -class LlamaForSequenceClassification(LlamaPreTrainedModel): - _keys_to_ignore_on_load_missing = [r"lm_head.weight"] - - def __init__(self, config): - super().__init__(config) - self.num_labels = config.num_labels - self.model = LlamaModel(config) - self.score = nn.Linear(config.hidden_size, self.num_labels, bias=False) - - # Initialize weights and apply final processing - self.post_init() - - def get_input_embeddings(self): - return self.model.embed_tokens - - def set_input_embeddings(self, value): - self.model.embed_tokens = value - - @add_start_docstrings_to_model_forward(LLAMA_INPUTS_DOCSTRING) - def forward( - self, - input_ids: torch.LongTensor = None, - attention_mask: Optional[torch.Tensor] = None, - position_ids: Optional[torch.LongTensor] = None, - past_key_values: Optional[List[torch.FloatTensor]] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - labels: Optional[torch.LongTensor] = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, SequenceClassifierOutputWithPast]: - r""" - labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., - config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If - `config.num_labels > 1` a classification loss is computed (Cross-Entropy). - """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - transformer_outputs = self.model( - input_ids, - attention_mask=attention_mask, - position_ids=position_ids, - past_key_values=past_key_values, - inputs_embeds=inputs_embeds, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - hidden_states = transformer_outputs[0] - logits = self.score(hidden_states) - - if input_ids is not None: - batch_size = input_ids.shape[0] - else: - batch_size = inputs_embeds.shape[0] - - if self.config.pad_token_id is None and batch_size != 1: - raise ValueError("Cannot handle batch sizes > 1 if no padding token is defined.") - if self.config.pad_token_id is None: - sequence_lengths = -1 - else: - if input_ids is not None: - sequence_lengths = (torch.ne(input_ids, self.config.pad_token_id).sum(-1) - 1).to(logits.device) - else: - sequence_lengths = -1 - - pooled_logits = logits[torch.arange(batch_size, device=logits.device), sequence_lengths] - - loss = None - if labels is not None: - labels = labels.to(logits.device) - if self.config.problem_type is None: - if self.num_labels == 1: - self.config.problem_type = "regression" - elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int): - self.config.problem_type = "single_label_classification" - else: - self.config.problem_type = "multi_label_classification" - - if self.config.problem_type == "regression": - loss_fct = MSELoss() - if self.num_labels == 1: - loss = loss_fct(pooled_logits.squeeze(), labels.squeeze()) - else: - loss = loss_fct(pooled_logits, labels) - elif self.config.problem_type == "single_label_classification": - loss_fct = CrossEntropyLoss() - loss = loss_fct(pooled_logits.view(-1, self.num_labels), labels.view(-1)) - elif self.config.problem_type == "multi_label_classification": - loss_fct = BCEWithLogitsLoss() - loss = loss_fct(pooled_logits, labels) - if not return_dict: - 
output = (pooled_logits,) + transformer_outputs[1:] - return ((loss,) + output) if loss is not None else output - - return SequenceClassifierOutputWithPast( - loss=loss, - logits=pooled_logits, - past_key_values=transformer_outputs.past_key_values, - hidden_states=transformer_outputs.hidden_states, - attentions=transformer_outputs.attentions, - ) diff --git a/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/datasets/datasets/dataloader_utils.py b/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/datasets/datasets/dataloader_utils.py deleted file mode 100644 index 3e2f574e24d2a32a18533a11492cfd481ff2cfbb..0000000000000000000000000000000000000000 --- a/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/datasets/datasets/dataloader_utils.py +++ /dev/null @@ -1,162 +0,0 @@ -""" - Copyright (c) 2022, salesforce.com, inc. - All rights reserved. - SPDX-License-Identifier: BSD-3-Clause - For full license text, see the LICENSE_Lavis file in the repo root or https://opensource.org/licenses/BSD-3-Clause -""" - -import time -import random -import torch -from video_llama.datasets.data_utils import move_to_cuda -from torch.utils.data import DataLoader - - -class MultiIterLoader: - """ - A simple wrapper for iterating over multiple iterators. - - Args: - loaders (List[Loader]): List of Iterator loaders. - ratios (List[float]): List of ratios to sample from each loader. If None, all loaders are sampled uniformly. - """ - - def __init__(self, loaders, ratios=None): - # assert all loaders has __next__ method - for loader in loaders: - assert hasattr( - loader, "__next__" - ), "Loader {} has no __next__ method.".format(loader) - - if ratios is None: - ratios = [1.0] * len(loaders) - else: - assert len(ratios) == len(loaders) - ratios = [float(ratio) / sum(ratios) for ratio in ratios] - - self.loaders = loaders - self.ratios = ratios - - def __next__(self): - # random sample from each loader by ratio - loader_idx = random.choices(range(len(self.loaders)), self.ratios, k=1)[0] - return next(self.loaders[loader_idx]) - - -class PrefetchLoader(object): - """ - Modified from https://github.com/ChenRocks/UNITER. - - overlap compute and cuda data transfer - (copied and then modified from nvidia apex) - """ - - def __init__(self, loader): - self.loader = loader - self.stream = torch.cuda.Stream() - - def __iter__(self): - loader_it = iter(self.loader) - self.preload(loader_it) - batch = self.next(loader_it) - while batch is not None: - is_tuple = isinstance(batch, tuple) - if is_tuple: - task, batch = batch - - if is_tuple: - yield task, batch - else: - yield batch - batch = self.next(loader_it) - - def __len__(self): - return len(self.loader) - - def preload(self, it): - try: - self.batch = next(it) - except StopIteration: - self.batch = None - return - # if record_stream() doesn't work, another option is to make sure - # device inputs are created on the main stream. - # self.next_input_gpu = torch.empty_like(self.next_input, - # device='cuda') - # self.next_target_gpu = torch.empty_like(self.next_target, - # device='cuda') - # Need to make sure the memory allocated for next_* is not still in use - # by the main stream at the time we start copying to next_*: - # self.stream.wait_stream(torch.cuda.current_stream()) - with torch.cuda.stream(self.stream): - self.batch = move_to_cuda(self.batch) - # more code for the alternative if record_stream() doesn't work: - # copy_ will record the use of the pinned source tensor in this - # side stream. 
- # self.next_input_gpu.copy_(self.next_input, non_blocking=True) - # self.next_target_gpu.copy_(self.next_target, non_blocking=True) - # self.next_input = self.next_input_gpu - # self.next_target = self.next_target_gpu - - def next(self, it): - torch.cuda.current_stream().wait_stream(self.stream) - batch = self.batch - if batch is not None: - record_cuda_stream(batch) - self.preload(it) - return batch - - def __getattr__(self, name): - method = self.loader.__getattribute__(name) - return method - - -def record_cuda_stream(batch): - if isinstance(batch, torch.Tensor): - batch.record_stream(torch.cuda.current_stream()) - elif isinstance(batch, list) or isinstance(batch, tuple): - for t in batch: - record_cuda_stream(t) - elif isinstance(batch, dict): - for t in batch.values(): - record_cuda_stream(t) - else: - pass - - -class IterLoader: - """ - A wrapper to convert DataLoader as an infinite iterator. - - Modified from: - https://github.com/open-mmlab/mmcv/blob/master/mmcv/runner/iter_based_runner.py - """ - - def __init__(self, dataloader: DataLoader, use_distributed: bool = False): - self._dataloader = dataloader - self.iter_loader = iter(self._dataloader) - self._use_distributed = use_distributed - self._epoch = 0 - - @property - def epoch(self) -> int: - return self._epoch - - def __next__(self): - try: - data = next(self.iter_loader) - except StopIteration: - self._epoch += 1 - if hasattr(self._dataloader.sampler, "set_epoch") and self._use_distributed: - self._dataloader.sampler.set_epoch(self._epoch) - time.sleep(2) # Prevent possible deadlock during epoch transition - self.iter_loader = iter(self._dataloader) - data = next(self.iter_loader) - - return data - - def __iter__(self): - return self - - def __len__(self): - return len(self._dataloader) diff --git a/spaces/Dao3/OpenArt/README.md b/spaces/Dao3/OpenArt/README.md deleted file mode 100644 index 3a10a46ec9c8edc71c9e4e95df35f5f7a95678b1..0000000000000000000000000000000000000000 --- a/spaces/Dao3/OpenArt/README.md +++ /dev/null @@ -1,19 +0,0 @@ ---- -title: OpenArt -emoji: 🧘🏻‍♂️ -colorFrom: blue -colorTo: yellow -sdk: gradio -sdk_version: 3.16.1 -app_file: app.py -pinned: false -duplicated_from: Dao3/DreamlikeArt-Diffusion-1.0 ---- ---- -title: DreamlikeArt-Diffusion .0 -emoji: 🧘🏻‍♂️ -colorFrom: blue -colorTo: yellow -sdk: gradio -sdk_version: 3.16.1 -app_file: app.py \ No newline at end of file diff --git a/spaces/Dao3/openai-translator/app.py b/spaces/Dao3/openai-translator/app.py deleted file mode 100644 index 8b72693b05e3b3f74ac47619a655fc097baa86f0..0000000000000000000000000000000000000000 --- a/spaces/Dao3/openai-translator/app.py +++ /dev/null @@ -1,255 +0,0 @@ -import os -import openai -import gradio as gr - -openai.api_key = os.environ['OPENAI_KEY'] - -supportLanguages = [ - ["auto", "自动识别"], - ["粤语", "粤语"], - ["古文", "文言文"], - ["af","Afrikaans"], - ["ak","Akan"], - ["sq","Albanian"], - ["am","Amharic"], - ["ar","Arabic"], - ["hy","Armenian"], - ["az","Azerbaijani"], - ["eu","Basque"], - ["be","Belarusian"], - ["bem","Bemba"], - ["bn","Bengali"], - ["bh","Bihari"], - ["xx-bork","Bork, bork, bork!"], - ["bs","Bosnian"], - ["br","Breton"], - ["bg","Bulgarian"], - ["km","Cambodian"], - ["ca","Catalan"], - ["chr","Cherokee"], - ["ny","Chichewa"], - ["zh-CN","中文(简体)"], - ["zh-TW","中文 (繁体)"], - ["co","Corsican"], - ["hr","Croatian"], - ["cs","Czech"], - ["da","Danish"], - ["nl","Dutch"], - ["xx-elmer","Elmer Fudd"], - ["en","English"], - ["eo","Esperanto"], - ["et","Estonian"], - ["ee","Ewe"], - ["fo","Faroese"], - 
["tl","Filipino"], - ["fi","Finnish"], - ["fr","French"], - ["fy","Frisian"], - ["gaa","Ga"], - ["gl","Galician"], - ["ka","Georgian"], - ["de","German"], - ["el","Greek"], - ["gn","Guarani"], - ["gu","Gujarati"], - ["xx-hacker","Hacker"], - ["ht","Haitian Creole"], - ["ha","Hausa"], - ["haw","Hawaiian"], - ["iw","Hebrew"], - ["hi","Hindi"], - ["hu","Hungarian"], - ["is","Icelandic"], - ["ig","Igbo"], - ["id","Indonesian"], - ["ia","Interlingua"], - ["ga","Irish"], - ["it","Italian"], - ["ja","Japanese"], - ["jw","Javanese"], - ["kn","Kannada"], - ["kk","Kazakh"], - ["rw","Kinyarwanda"], - ["rn","Kirundi"], - ["xx-klingon","Klingon"], - ["kg","Kongo"], - ["ko","Korean"], - ["kri","Krio (Sierra Leone)"], - ["ku","Kurdish"], - ["ckb","Kurdish (Soranî)"], - ["ky","Kyrgyz"], - ["lo","Laothian"], - ["la","Latin"], - ["lv","Latvian"], - ["ln","Lingala"], - ["lt","Lithuanian"], - ["loz","Lozi"], - ["lg","Luganda"], - ["ach","Luo"], - ["mk","Macedonian"], - ["mg","Malagasy"], - ["ms","Malay"], - ["ml","Malayalam"], - ["mt","Maltese"], - ["mi","Maori"], - ["mr","Marathi"], - ["mfe","Mauritian Creole"], - ["mo","Moldavian"], - ["mn","Mongolian"], - ["sr-ME","Montenegrin"], - ["ne","Nepali"], - ["pcm","Nigerian Pidgin"], - ["nso","Northern Sotho"], - ["no","Norwegian"], - ["nn","Norwegian (Nynorsk)"], - ["oc","Occitan"], - ["or","Oriya"], - ["om","Oromo"], - ["ps","Pashto"], - ["fa","Persian"], - ["xx-pirate","Pirate"], - ["pl","Polish"], - ["pt-BR","Portuguese (Brazil)"], - ["pt-PT","Portuguese (Portugal)"], - ["pa","Punjabi"], - ["qu","Quechua"], - ["ro","Romanian"], - ["rm","Romansh"], - ["nyn","Runyakitara"], - ["ru","Russian"], - ["gd","Scots Gaelic"], - ["sr","Serbian"], - ["sh","Serbo-Croatian"], - ["st","Sesotho"], - ["tn","Setswana"], - ["crs","Seychellois Creole"], - ["sn","Shona"], - ["sd","Sindhi"], - ["si","Sinhalese"], - ["sk","Slovak"], - ["sl","Slovenian"], - ["so","Somali"], - ["es","Spanish"], - ["es-419","Spanish (Latin American)"], - ["su","Sundanese"], - ["sw","Swahili"], - ["sv","Swedish"], - ["tg","Tajik"], - ["ta","Tamil"], - ["tt","Tatar"], - ["te","Telugu"], - ["th","Thai"], - ["ti","Tigrinya"], - ["to","Tonga"], - ["lua","Tshiluba"], - ["tum","Tumbuka"], - ["tr","Turkish"], - ["tk","Turkmen"], - ["tw","Twi"], - ["ug","Uighur"], - ["uk","Ukrainian"], - ["ur","Urdu"], - ["uz","Uzbek"], - ["vi","Vietnamese"], - ["cy","Welsh"], - ["wo","Wolof"], - ["xh","Xhosa"], - ["yi","Yiddish"], - ["yo","Yoruba"], - ["zu","Zulu"], -] -prompt_template = "You are a translation engine that can only translate text and cannot interpret it. Keep the indent of the original text, only modify when you need." - -def submit_message(detectFrom, detectTo, user_token, prompt): - if user_token != "": - openai.api_key = user_token - - if not prompt: - return gr.update(value="") - - for lc, lang in supportLanguages: - if detectFrom == lang: - detectFrom = lc - if detectTo == lang: - detectTo = lc - - systemInstruct = prompt_template - translateInstruct = f"translate from {detectFrom} to {detectTo}" - if detectFrom == "auto": - translateInstruct = f"translate to {detectTo}" - if detectFrom in ["古文", "zh-CN", "zh-TW"]: - if detectTo == "zh-TW": - translateInstruct = "翻译成繁体白话文" - if detectTo == "zh-CN": - translateInstruct = "翻译成简体白话文" - if detectTo == "粤语": - translateInstruct = "翻译成粤语白话文" - - if detectFrom == detectTo: - systemInstruct = "You are a text embellisher, you can only embellish the text, don't interpret it." 
- if detectTo in ["zh-CN", "zh-TW"]: - translateInstruct = "润色此句" - else: - translateInstruct = "polish this sentence" - - prompt_msg = [ - {"role": "system", "content": systemInstruct}, - {"role": "user", "content": translateInstruct}, - {"role": "user", "content": prompt}, - ] - - try: - openai_response = openai.ChatCompletion.create( - model="gpt-3.5-turbo", - messages=prompt_msg, - temperature=0, - max_tokens=1000, - top_p=1, - stream=True, - frequency_penalty=1, - presence_penalty=1, - ) - - combined = "" - for resp in openai_response: - delta = resp["choices"][0]["delta"] - if "content" in delta: - combined += delta["content"] - yield combined - - except Exception as e: - return f"Error: {e}" - -css = """ - #col-container {max-width: 80%; margin-left: auto; margin-right: auto;} - #chatbox {min-height: 400px;} - #header {text-align: center;} - #label {font-size: 0.8em; padding: 0.5em; margin: 0;} - .message { font-size: 1.2em; } - """ - -with gr.Blocks(css=css) as demo: - - state = gr.State([]) - - with gr.Column(elem_id="col-container"): - gr.Markdown("""## 多语言翻译 - 使用OpenAI官方 API (gpt-3.5-turbo model).""", elem_id="header") - - with gr.Row(): - with gr.Column(): - translateFrom = gr.Dropdown(label="原文", elem_id="translate-from", multiselect=False, value="自动识别", choices=[l[1] for l in supportLanguages]).style(container=False) - input_message = gr.Textbox(max_lines=100, show_label=False, lines=10, placeholder="Enter text and press enter", visible=True).style(container=False) - with gr.Column(): - translateTo = gr.Dropdown(label="译文", elem_id="translate-to", multiselect=False, value="中文 (简体)", choices=[l[1] for l in supportLanguages[1:]]).style(container=False) - output = gr.Textbox(max_lines=100, show_label=False, lines=10, label="Output", visible=True).style(container=False) - - btn_submit = gr.Button("急急如律令") - - with gr.Row(): - user_token = gr.Textbox(value='', placeholder="OpenAI API Key", type="password", label="输入你自己的OpenAI API Key翻译过程会更准确哦~.") - - btn_submit.click(submit_message, [translateFrom, translateTo, user_token, input_message], [output]) - -demo.queue(concurrency_count=10) -demo.launch(height='800px') diff --git a/spaces/DeclK/pose/tools/dtw.py b/spaces/DeclK/pose/tools/dtw.py deleted file mode 100644 index 0fa9495bbe752df0b5bfaba0466d558c66a18695..0000000000000000000000000000000000000000 --- a/spaces/DeclK/pose/tools/dtw.py +++ /dev/null @@ -1,116 +0,0 @@ -import numpy as np -from .utils import get_keypoint_weight - - -class DTWForKeypoints: - def __init__(self, keypoints1, keypoints2): - self.keypoints1 = keypoints1 - self.keypoints2 = keypoints2 - - def get_dtw_path(self): - - norm_kp1 = self.normalize_keypoints(self.keypoints1) - norm_kp2 = self.normalize_keypoints(self.keypoints2) - - kp_weight = get_keypoint_weight() - oks, oks_unnorm = self.object_keypoint_similarity(norm_kp1, - norm_kp2, keypoint_weights=kp_weight) - print(f"OKS max {oks.max():.2f} min {oks.min():.2f}") - - # do the DTW, and return the path - cost_matrix = 1 - oks - dtw_dist, dtw_path = self.dynamic_time_warp(cost_matrix) - - return dtw_path, oks, oks_unnorm - - def normalize_keypoints(self, keypoints): - centroid = keypoints.mean(axis=1)[:, None] - max_distance = np.max(np.sqrt(np.sum((keypoints - centroid) ** 2, axis=2)), - axis=1) + 1e-6 - - normalized_keypoints = (keypoints - centroid) / max_distance[:, None, None] - return normalized_keypoints - - def keypoints_areas(self, keypoints): - min_coords = np.min(keypoints, axis=1) - max_coords = np.max(keypoints, axis=1) - areas = 
np.prod(max_coords - min_coords, axis=1) - return areas - - def object_keypoint_similarity(self, keypoints1, - keypoints2, - scale_constant=0.2, - keypoint_weights=None): - """ Calculate the Object Keypoint Similarity (OKS) for multiple objects, - and add weight to each keypoint. Here we choose to normalize the points - using centroid and max distance instead of bounding box area. - """ - # Compute squared distances between all pairs of keypoints - sq_diff = np.sum((keypoints1[:, None] - keypoints2) ** 2, axis=-1) - - oks = np.exp(-sq_diff / (2 * scale_constant ** 2)) - oks_unnorm = oks.copy() - - if keypoint_weights is not None: - oks = oks * keypoint_weights - oks = np.sum(oks, axis=-1) - else: - oks = np.mean(oks, axis=-1) - - return oks, oks_unnorm - - def dynamic_time_warp(self, cost_matrix, R=1000): - """Compute the Dynamic Time Warping distance and path between two time series. - If the time series is too long, it will use the Sakoe-Chiba Band constraint, - so time complexity is bounded at O(MR). - """ - - M = len(self.keypoints1) - N = len(self.keypoints2) - - # Initialize the distance matrix with infinity - D = np.full((M, N), np.inf) - - # Initialize the first row and column of the matrix - D[0, 0] = cost_matrix[0, 0] - for i in range(1, M): - D[i, 0] = D[i - 1, 0] + cost_matrix[i, 0] - - for j in range(1, N): - D[0, j] = D[0, j - 1] + cost_matrix[0, j] - - # Fill the remaining elements of the matrix within the - # Sakoe-Chiba Band using dynamic programming - for i in range(1, M): - for j in range(max(1, i - R), min(N, i + R + 1)): - cost = cost_matrix[i, j] - D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1]) - - # Backtrack to find the optimal path - path = [(M - 1, N - 1)] - i, j = M - 1, N - 1 - while i > 0 or j > 0: - min_idx = np.argmin([D[i - 1, j], D[i, j - 1], D[i - 1, j - 1]]) - if min_idx == 0: - i -= 1 - elif min_idx == 1: - j -= 1 - else: - i -= 1 - j -= 1 - path.append((i, j)) - path.reverse() - - return D[-1, -1], path - -if __name__ == '__main__': - - from mmengine.fileio import load - - keypoints1, kp1_scores = load('tennis1.pkl') - keypoints2, kp2_scores = load('tennis3.pkl') - - # Normalize the keypoints - dtw = DTWForKeypoints(keypoints1, keypoints2) - path = dtw.get_dtw_path() - print(path) \ No newline at end of file diff --git a/spaces/Demi2809/rvc-models/infer_pack/models_onnx.py b/spaces/Demi2809/rvc-models/infer_pack/models_onnx.py deleted file mode 100644 index 3cdae2f7f8591a1e43b1d8520baa37b7e9744d72..0000000000000000000000000000000000000000 --- a/spaces/Demi2809/rvc-models/infer_pack/models_onnx.py +++ /dev/null @@ -1,849 +0,0 @@ -import math, pdb, os -from time import time as ttime -import torch -from torch import nn -from torch.nn import functional as F -from infer_pack import modules -from infer_pack import attentions -from infer_pack import commons -from infer_pack.commons import init_weights, get_padding -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from infer_pack.commons import init_weights -import numpy as np -from infer_pack import commons - - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = 
kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder256Sim(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - x = self.proj(x) * x_mask - return x, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - 
self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False, - ): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = 
noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv - - def forward(self, f0, upp): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * ( - idx + 2 - ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - rad_values = (f0_buf / self.sampling_rate) % 1 ###%1意味着n_har的乘积无法后处理优化 - rand_ini = torch.rand( - f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - tmp_over_one = torch.cumsum(rad_values, 1) # % 1 #####%1意味着后面的cumsum无法再优化 - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), - scale_factor=upp, - mode="linear", - align_corners=True, - ).transpose(2, 1) - rad_values = F.interpolate( - rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose( - 2, 1 - ) ####### - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi - ) - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - if self.is_half: - sine_wavs = sine_wavs.half() - sine_merge = 
self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMs256NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - 
self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, pitch, nsff0, sid, rnd, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * rnd) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o - - -class SynthesizerTrnMs256NSFsid_sim(nn.Module): - """ - Synthesizer for Training - """ - - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - # hop_length, - gin_channels=0, - use_sdp=True, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256Sim( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - is_half=kwargs["is_half"], - ) - - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def 
remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, ds, max_len=None - ): # y是spec不需要了现在 - g = self.emb_g(ds.unsqueeze(0)).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - x, x_mask = self.enc_p(phone, pitch, phone_lengths) - x = self.flow(x, x_mask, g=g, reverse=True) - o = self.dec((x * x_mask)[:, :, :max_len], pitchf, g=g) - return o - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = 
self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap diff --git a/spaces/Detomo/ai-comic-generation/src/lib/computeSha256.ts b/spaces/Detomo/ai-comic-generation/src/lib/computeSha256.ts deleted file mode 100644 index cb6ef0604fca9653408012fd6cef2a58b6acaf47..0000000000000000000000000000000000000000 --- a/spaces/Detomo/ai-comic-generation/src/lib/computeSha256.ts +++ /dev/null @@ -1,14 +0,0 @@ -import { createHash } from 'node:crypto' - -/** - * Returns a SHA256 hash using SHA-3 for the given `content`. - * - * @see https://en.wikipedia.org/wiki/SHA-3 - * - * @param {String} content - * - * @returns {String} - */ -export function computeSha256(strContent: string) { - return createHash('sha3-256').update(strContent).digest('hex') -} \ No newline at end of file diff --git a/spaces/DonaSmix/anime-remove-background/README.md b/spaces/DonaSmix/anime-remove-background/README.md deleted file mode 100644 index 1ba3cb5ea0e994e246d57b7d62b8aa5a6331901c..0000000000000000000000000000000000000000 --- a/spaces/DonaSmix/anime-remove-background/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Anime Remove Background -emoji: 🪄🖼️ -colorFrom: indigo -colorTo: pink -sdk: gradio -sdk_version: 3.1.4 -app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: skytnt/anime-remove-background ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/EDGAhab/Aatrox-Talking/app.py b/spaces/EDGAhab/Aatrox-Talking/app.py deleted file mode 100644 index 34c3aa10c478fd9114ca6af63dc8103b2eb88069..0000000000000000000000000000000000000000 --- a/spaces/EDGAhab/Aatrox-Talking/app.py +++ /dev/null @@ -1,98 +0,0 @@ -import gradio as gr -import os -os.system('cd monotonic_align && python setup.py build_ext --inplace && cd ..') -import torch - -import commons -import utils -from models import SynthesizerTrn -from text.symbols import symbols -from text import text_to_sequence - -import IPython.display as ipd - -import json -import math - -#new imports -import matplotlib.pyplot as plt -import re - -from torch import nn -from torch.nn import functional as F -from torch.utils.data import DataLoader - -from models import SynthesizerTrn -import unicodedata -import openai - -def get_text(text, hps): - text_norm = text_to_sequence(text, hps.data.text_cleaners) - if hps.data.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = torch.LongTensor(text_norm) - return text_norm - -hps = utils.get_hparams_from_file("configs/biaobei_base.json") - -net_g = SynthesizerTrn( - len(symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - **hps.model) -_ = net_g.eval() - -_ = utils.load_checkpoint("G_aatrox.pth", net_g, None) - -def friend_chat(text, tts_input3): - call_name = "亚托克斯" - openai.api_key = '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' - identity = tts_input3 - start_sequence = '\n'+str(call_name)+':' - restart_sequence = "\nYou: " - all_text = identity + restart_sequence - if 1 == 1: - prompt0 = text #当期prompt - if text == 'quit': - return prompt0 - prompt = identity + prompt0 + start_sequence - - response = openai.Completion.create( - model="text-davinci-003", - prompt=prompt, - temperature=0.5, - max_tokens=1000, - top_p=1.0, - frequency_penalty=0.5, - presence_penalty=0.0, - stop=["\nYou:"] - ) - print(response) - return response['choices'][0]['text'].strip() - -def sle(text, tts_input3): - text = friend_chat(text, tts_input3).replace('\n','。').replace(' 
',',') - return text - -def infer(text,tts_input3): - stn_tst = get_text(sle(text,tts_input3), hps) - with torch.no_grad(): - x_tst = stn_tst.unsqueeze(0) - x_tst_lengths = torch.LongTensor([stn_tst.size(0)]) - audio = net_g.infer(x_tst, x_tst_lengths, noise_scale=.667, noise_scale_w=0.8, length_scale=1)[0][0,0].data.cpu().float().numpy() - sampling_rate = 22050 - return (sampling_rate, audio) - -app = gr.Blocks() - -with app: - with gr.Tabs(): - - with gr.TabItem("Basic"): - - tts_input1 = gr.TextArea(label="输入你想跟剑魔说的话", value="我是暮光星灵佐伊,我要三天之内杀了你") - tts_input3 = gr.TextArea(label="写上你给他的设定", value="你叫亚托克斯,俗称剑魔,世界的终结者。") - tts_submit = gr.Button("Generate", variant="primary") - tts_output2 = gr.Audio(label="Output") - tts_submit.click(infer, [tts_input1,tts_input3], [tts_output2]) - app.launch() \ No newline at end of file diff --git a/spaces/Eddycrack864/Applio-Inference/Applio-RVC-Fork/utils/clonerepo_experimental.py b/spaces/Eddycrack864/Applio-Inference/Applio-RVC-Fork/utils/clonerepo_experimental.py deleted file mode 100644 index b0ae02648c1307562cf48033908edcf2996db5e2..0000000000000000000000000000000000000000 --- a/spaces/Eddycrack864/Applio-Inference/Applio-RVC-Fork/utils/clonerepo_experimental.py +++ /dev/null @@ -1,253 +0,0 @@ -import os -import subprocess -import shutil -from concurrent.futures import ThreadPoolExecutor, as_completed -from tqdm.notebook import tqdm -from pathlib import Path -import requests - -def run_script(): - def run_cmd(cmd): - process = subprocess.run(cmd, shell=True, check=True, text=True) - return process.stdout - - # Change the current directory to /content/ - os.chdir('/content/') - print("Changing dir to /content/") - - # Your function to edit the file - def edit_file(file_path): - temp_file_path = "/tmp/temp_file.py" - changes_made = False - with open(file_path, "r") as file, open(temp_file_path, "w") as temp_file: - previous_line = "" - second_previous_line = "" - for line in file: - new_line = line.replace("value=160", "value=128") - if new_line != line: - print("Replaced 'value=160' with 'value=128'") - changes_made = True - line = new_line - - new_line = line.replace("crepe hop length: 160", "crepe hop length: 128") - if new_line != line: - print("Replaced 'crepe hop length: 160' with 'crepe hop length: 128'") - changes_made = True - line = new_line - - new_line = line.replace("value=0.88", "value=0.75") - if new_line != line: - print("Replaced 'value=0.88' with 'value=0.75'") - changes_made = True - line = new_line - - if "label=i18n(\"输入源音量包络替换输出音量包络融合比例,越靠近1越使用输出包络\")" in previous_line and "value=1," in line: - new_line = line.replace("value=1,", "value=0.25,") - if new_line != line: - print("Replaced 'value=1,' with 'value=0.25,' based on the condition") - changes_made = True - line = new_line - - if "label=i18n(\"总训练轮数total_epoch\")" in previous_line and "value=20," in line: - new_line = line.replace("value=20,", "value=500,") - if new_line != line: - print("Replaced 'value=20,' with 'value=500,' based on the condition for DEFAULT EPOCH") - changes_made = True - line = new_line - - if 'choices=["pm", "harvest", "dio", "crepe", "crepe-tiny", "mangio-crepe", "mangio-crepe-tiny"], # Fork Feature. 
Add Crepe-Tiny' in previous_line: - if 'value="pm",' in line: - new_line = line.replace('value="pm",', 'value="mangio-crepe",') - if new_line != line: - print("Replaced 'value=\"pm\",' with 'value=\"mangio-crepe\",' based on the condition") - changes_made = True - line = new_line - - new_line = line.replace('label=i18n("输入训练文件夹路径"), value="E:\\\\语音音频+标注\\\\米津玄师\\\\src"', 'label=i18n("输入训练文件夹路径"), value="/content/dataset/"') - if new_line != line: - print("Replaced 'label=i18n(\"输入训练文件夹路径\"), value=\"E:\\\\语音音频+标注\\\\米津玄师\\\\src\"' with 'label=i18n(\"输入训练文件夹路径\"), value=\"/content/dataset/\"'") - changes_made = True - line = new_line - - if 'label=i18n("是否仅保存最新的ckpt文件以节省硬盘空间"),' in second_previous_line: - if 'value=i18n("否"),' in line: - new_line = line.replace('value=i18n("否"),', 'value=i18n("是"),') - if new_line != line: - print("Replaced 'value=i18n(\"否\"),' with 'value=i18n(\"是\"),' based on the condition for SAVE ONLY LATEST") - changes_made = True - line = new_line - - if 'label=i18n("是否在每次保存时间点将最终小模型保存至weights文件夹"),' in second_previous_line: - if 'value=i18n("否"),' in line: - new_line = line.replace('value=i18n("否"),', 'value=i18n("是"),') - if new_line != line: - print("Replaced 'value=i18n(\"否\"),' with 'value=i18n(\"是\"),' based on the condition for SAVE SMALL WEIGHTS") - changes_made = True - line = new_line - - temp_file.write(line) - second_previous_line = previous_line - previous_line = line - - # After finished, we replace the original file with the temp one - import shutil - shutil.move(temp_file_path, file_path) - - if changes_made: - print("Changes made and file saved successfully.") - else: - print("No changes were needed.") - - # Define the repo path - repo_path = '/content/Applio-RVC-Fork' - - def copy_all_files_in_directory(src_dir, dest_dir): - # Iterate over all files in source directory - for item in Path(src_dir).glob('*'): - if item.is_file(): - # Copy each file to destination directory - shutil.copy(item, dest_dir) - else: - # If it's a directory, make a new directory in the destination and copy the files recursively - new_dest = Path(dest_dir) / item.name - new_dest.mkdir(exist_ok=True) - copy_all_files_in_directory(str(item), str(new_dest)) - - def clone_and_copy_repo(repo_path): - # New repository link - new_repo_link = "https://github.com/IAHispano/Applio-RVC-Fork/" - # Temporary path to clone the repository - temp_repo_path = "/content/temp_Applio-RVC-Fork" - # New folder name - new_folder_name = "Applio-RVC-Fork" - - # Clone the latest code from the new repository to a temporary location - run_cmd(f"git clone {new_repo_link} {temp_repo_path}") - os.chdir(temp_repo_path) - - run_cmd(f"git checkout 3fa4dad3d8961e5ca2522e9e12c0b4ddb71ad402") - run_cmd(f"git checkout f9e606c279cb49420597519b0a83b92be81e42e4") - run_cmd(f"git checkout 9e305588844c5442d58add1061b29beeca89d679") - run_cmd(f"git checkout bf92dc1eb54b4f28d6396a4d1820a25896cc9af8") - run_cmd(f"git checkout c3810e197d3cb98039973b2f723edf967ecd9e61") - run_cmd(f"git checkout a33159efd134c2413b0afe26a76b7dc87926d2de") - run_cmd(f"git checkout 24e251fb62c662e39ac5cf9253cc65deb9be94ec") - run_cmd(f"git checkout ad5667d3017e93232dba85969cddac1322ba2902") - run_cmd(f"git checkout ce9715392cf52dd5a0e18e00d1b5e408f08dbf27") - run_cmd(f"git checkout 7c7da3f2ac68f3bd8f3ad5ca5c700f18ab9f90eb") - run_cmd(f"git checkout 4ac395eab101955e8960b50d772c26f592161764") - run_cmd(f"git checkout b15b358702294c7375761584e5276c811ffab5e8") - run_cmd(f"git checkout 1501793dc490982db9aca84a50647764caa66e51") - run_cmd(f"git 
checkout 21f7faf57219c75e6ba837062350391a803e9ae2") - run_cmd(f"git checkout b5eb689fbc409b49f065a431817f822f554cebe7") - run_cmd(f"git checkout 7e02fae1ebf24cb151bf6cbe787d06734aa65862") - run_cmd(f"git checkout 6aea5ea18ed0b9a1e03fa5d268d6bc3c616672a9") - run_cmd(f"git checkout f0f9b25717e59116473fb42bd7f9252cfc32b398") - run_cmd(f"git checkout b394de424088a81fc081224bc27338a8651ad3b2") - run_cmd(f"git checkout f1999406a88b80c965d2082340f5ea2bfa9ab67a") - run_cmd(f"git checkout d98a0fa8dc715308dfc73eac5c553b69c6ee072b") - run_cmd(f"git checkout d73267a415fb0eba98477afa43ef71ffd82a7157") - run_cmd(f"git checkout 1a03d01356ae79179e1fb8d8915dc9cc79925742") - run_cmd(f"git checkout 81497bb3115e92c754300c9b3992df428886a3e9") - run_cmd(f"git checkout c5af1f8edcf79cb70f065c0110e279e78e48caf9") - run_cmd(f"git checkout cdb3c90109387fa4dfa92f53c3864c71170ffc77") - - # Edit the file here, before copying - #edit_file(f"{temp_repo_path}/infer-web.py") - - # Copy all files from the cloned repository to the existing path - copy_all_files_in_directory(temp_repo_path, repo_path) - print(f"Copying all {new_folder_name} files from GitHub.") - - # Change working directory back to /content/ - os.chdir('/content/') - print("Changed path back to /content/") - - # Remove the temporary cloned repository - shutil.rmtree(temp_repo_path) - - # Call the function - clone_and_copy_repo(repo_path) - - # Download the credentials file for RVC archive sheet - os.makedirs('/content/Applio-RVC-Fork/stats/', exist_ok=True) - run_cmd("wget -q https://cdn.discordapp.com/attachments/945486970883285045/1114717554481569802/peppy-generator-388800-07722f17a188.json -O /content/Applio-RVC-Fork/stats/peppy-generator-388800-07722f17a188.json") - - # Forcefully delete any existing torchcrepe dependencies downloaded from an earlier run just in case - shutil.rmtree('/content/Applio-RVC-Fork/torchcrepe', ignore_errors=True) - shutil.rmtree('/content/torchcrepe', ignore_errors=True) - - # Download the torchcrepe folder from the maxrmorrison/torchcrepe repository - run_cmd("git clone https://github.com/maxrmorrison/torchcrepe.git") - shutil.move('/content/torchcrepe/torchcrepe', '/content/Applio-RVC-Fork/') - shutil.rmtree('/content/torchcrepe', ignore_errors=True) # Delete the torchcrepe repository folder - - # Change the current directory to /content/Applio-RVC-Fork - os.chdir('/content/Applio-RVC-Fork') - os.makedirs('pretrained', exist_ok=True) - os.makedirs('uvr5_weights', exist_ok=True) - -def download_file(url, filepath): - response = requests.get(url, stream=True) - response.raise_for_status() - - with open(filepath, "wb") as file: - for chunk in response.iter_content(chunk_size=8192): - if chunk: - file.write(chunk) - -def download_pretrained_models(): - pretrained_models = { - "pretrained": [ - "D40k.pth", - "G40k.pth", - "f0D40k.pth", - "f0G40k.pth" - ], - "pretrained_v2": [ - "D40k.pth", - "G40k.pth", - "f0D40k.pth", - "f0G40k.pth", - "f0G48k.pth", - "f0D48k.pth" - ], - "uvr5_weights": [ - "HP2-人声vocals+非人声instrumentals.pth", - "HP5-主旋律人声vocals+其他instrumentals.pth", - "VR-DeEchoNormal.pth", - "VR-DeEchoDeReverb.pth", - "VR-DeEchoAggressive.pth", - "HP5_only_main_vocal.pth", - "HP3_all_vocals.pth", - "HP2_all_vocals.pth" - ] - } - part2 = "I" - base_url = "https://huggingface.co/lj1995/VoiceConversionWebU" + part2 + "/resolve/main/" - base_path = "/content/Applio-RVC-Fork/" - base_pathm = base_path - - # Calculate total number of files to download - total_files = sum(len(files) for files in pretrained_models.values()) + 1 # +1 
for hubert_base.pt - - with tqdm(total=total_files, desc="Downloading files") as pbar: - for folder, models in pretrained_models.items(): - folder_path = os.path.join(base_path, folder) - os.makedirs(folder_path, exist_ok=True) - for model in models: - url = base_url + folder + "/" + model - filepath = os.path.join(folder_path, model) - download_file(url, filepath) - pbar.update() - - # Download hubert_base.pt to the base path - hubert_url = base_url + "hubert_base.pt" - hubert_filepath = os.path.join(base_pathm, "hubert_base.pt") - download_file(hubert_url, hubert_filepath) - pbar.update() -def clone_repository(run_download): - with ThreadPoolExecutor(max_workers=2) as executor: - executor.submit(run_script) - if run_download: - executor.submit(download_pretrained_models) diff --git a/spaces/Ekohai/bingAI/README.md b/spaces/Ekohai/bingAI/README.md deleted file mode 100644 index 58b10e6a9f6831fb806f2d8b4f33e806d0c1b45a..0000000000000000000000000000000000000000 --- a/spaces/Ekohai/bingAI/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: BingAI -emoji: 🐢 -colorFrom: indigo -colorTo: indigo -sdk: docker -pinned: false -license: mit -app_port: 8080 ---- -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/FlippFuzz/whisper-webui/src/whisper/abstractWhisperContainer.py b/spaces/FlippFuzz/whisper-webui/src/whisper/abstractWhisperContainer.py deleted file mode 100644 index efbb51d691fc4ce35b4a11c3ae59f563649ca483..0000000000000000000000000000000000000000 --- a/spaces/FlippFuzz/whisper-webui/src/whisper/abstractWhisperContainer.py +++ /dev/null @@ -1,108 +0,0 @@ -import abc -from typing import List -from src.config import ModelConfig - -from src.hooks.progressListener import ProgressListener -from src.modelCache import GLOBAL_MODEL_CACHE, ModelCache - -class AbstractWhisperCallback: - @abc.abstractmethod - def invoke(self, audio, segment_index: int, prompt: str, detected_language: str, progress_listener: ProgressListener = None): - """ - Peform the transcription of the given audio file or data. - - Parameters - ---------- - audio: Union[str, np.ndarray, torch.Tensor] - The audio file to transcribe, or the audio data as a numpy array or torch tensor. - segment_index: int - The target language of the transcription. If not specified, the language will be inferred from the audio content. - task: str - The task - either translate or transcribe. - progress_listener: ProgressListener - A callback to receive progress updates. - """ - raise NotImplementedError() - - def _concat_prompt(self, prompt1, prompt2): - if (prompt1 is None): - return prompt2 - elif (prompt2 is None): - return prompt1 - else: - return prompt1 + " " + prompt2 - -class AbstractWhisperContainer: - def __init__(self, model_name: str, device: str = None, compute_type: str = "float16", - download_root: str = None, - cache: ModelCache = None, models: List[ModelConfig] = []): - self.model_name = model_name - self.device = device - self.compute_type = compute_type - self.download_root = download_root - self.cache = cache - - # Will be created on demand - self.model = None - - # List of known models - self.models = models - - def get_model(self): - if self.model is None: - - if (self.cache is None): - self.model = self._create_model() - else: - model_key = "WhisperContainer." 
+ self.model_name + ":" + (self.device if self.device else '') - self.model = self.cache.get(model_key, self._create_model) - return self.model - - @abc.abstractmethod - def _create_model(self): - raise NotImplementedError() - - def ensure_downloaded(self): - pass - - @abc.abstractmethod - def create_callback(self, language: str = None, task: str = None, initial_prompt: str = None, **decodeOptions: dict) -> AbstractWhisperCallback: - """ - Create a WhisperCallback object that can be used to transcript audio files. - - Parameters - ---------- - language: str - The target language of the transcription. If not specified, the language will be inferred from the audio content. - task: str - The task - either translate or transcribe. - initial_prompt: str - The initial prompt to use for the transcription. - decodeOptions: dict - Additional options to pass to the decoder. Must be pickleable. - - Returns - ------- - A WhisperCallback object. - """ - raise NotImplementedError() - - # This is required for multiprocessing - def __getstate__(self): - return { - "model_name": self.model_name, - "device": self.device, - "download_root": self.download_root, - "models": self.models, - "compute_type": self.compute_type - } - - def __setstate__(self, state): - self.model_name = state["model_name"] - self.device = state["device"] - self.download_root = state["download_root"] - self.models = state["models"] - self.compute_type = state["compute_type"] - self.model = None - # Depickled objects must use the global cache - self.cache = GLOBAL_MODEL_CACHE \ No newline at end of file diff --git a/spaces/FrankZxShen/so-vits-svc-models-ba/vencoder/__init__.py b/spaces/FrankZxShen/so-vits-svc-models-ba/vencoder/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/GV05/text-emotion-detector/app.py b/spaces/GV05/text-emotion-detector/app.py deleted file mode 100644 index 952984e4f361c3504a18841c85b169f0072d85de..0000000000000000000000000000000000000000 --- a/spaces/GV05/text-emotion-detector/app.py +++ /dev/null @@ -1,34 +0,0 @@ -import gradio as gr -from transformers import pipeline - -model_id = "GV05/distilbert-base-uncased-finetuned-emotion" -classifier = pipeline("text-classification", model=model_id) - -label_to_emotion = { - 'LABEL_0': 'sadness', - 'LABEL_1': 'joy', - 'LABEL_2': 'love', - 'LABEL_3': 'anger', - 'LABEL_4': 'fear', - 'LABEL_5': 'surprise', -} - -def classify_emotion(text): - preds = classifier(text, return_all_scores=True) - res = {} - for x in preds[0]: - res[label_to_emotion[x['label']]] = x['score'] - return res - -image = gr.Textbox() -label = gr.Label() -examples = ["you are not too sensitive. you are not overreacting", - "Thinking of you keeps me awake. Dreaming of you keeps me asleep. 
Being with you keeps me alive."] - -title = "Emotion Detector" -description = "This model is a fine-tuned version of distilbert-base-uncased on the emotion dataset" - -intf = gr.Interface(fn=classify_emotion, inputs=image, outputs=label, examples=examples, title=title, - description=description) - -intf.launch(inline=False) diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/color_coordinated_insertion.py b/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/color_coordinated_insertion.py deleted file mode 100644 index 81375f5d89d6dc0d3c766c599535f8799333825e..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/color_coordinated_insertion.py +++ /dev/null @@ -1,62 +0,0 @@ -import numpy as np -import os -import pybullet as p -import random -from cliport.tasks import primitives -from cliport.tasks.grippers import Spatula -from cliport.tasks.task import Task -from cliport.utils import utils -import numpy as np -from cliport.tasks.task import Task -from cliport.utils import utils -import pybullet as p - -class ColorCoordinatedInsertion(Task): - """Insert each block into the fixture of the same color""" - - def __init__(self): - super().__init__() - self.max_steps = 20 - self.lang_template = "insert each block into the fixture of the same color" - self.task_completed_desc = "done with color-coordinated-insertion." - self.additional_reset() - - def reset(self, env): - super().reset(env) - - # Add pallet. - pallet_size = (0.35, 0.35, 0.01) - pallet_pose = self.get_random_pose(env, pallet_size) - pallet_urdf = 'pallet/pallet.urdf' - env.add_object(pallet_urdf, pallet_pose, 'fixed') - - # Add fixtures and blocks. - colors = ['red', 'blue', 'green', 'yellow'] - fixtures = [] - blocks = [] - fixture_size = (0.05, 0.05, 0.05) - block_size = (0.04, 0.04, 0.04) - fixture_urdf = 'insertion/fixture.urdf' - block_urdf = 'block/block.urdf' - for color in colors: - # Add fixture. - fixture_pose = self.get_random_pose(env, fixture_size) - fixture_id = env.add_object(fixture_urdf, fixture_pose, color=utils.COLORS[color]) - fixtures.append(fixture_id) - - # Add block. - block_pose = self.get_random_pose(env, block_size) - block_id = env.add_object(block_urdf, block_pose, color=utils.COLORS[color]) - blocks.append(block_id) - - # Goal: each block is in the fixture of the same color. - for i in range(len(blocks)): - self.add_goal(objs=[blocks[i]], matches=np.ones((1, 1)), targ_poses=[p.getBasePositionAndOrientation(fixtures[i])], replace=False, - rotations=True, metric='pose', params=None, step_max_reward=1 / len(blocks), - language_goal=self.lang_template) - - # Goal: each fixture is on the pallet. 
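# (This goal uses the 'zone' metric over the pallet footprint, complementing the per-color 'pose' goals added for the blocks above.)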
- for i in range(len(fixtures)): - self.add_goal(objs=[fixtures[i]], matches=np.ones((1, 1)), targ_poses=[pallet_pose], replace=False, - rotations=True, metric='zone', params=[(pallet_pose, pallet_size)], step_max_reward=1 / len(fixtures), - language_goal=self.lang_template) \ No newline at end of file diff --git a/spaces/Gradio-Blocks/protGPT2_gradioFold/alphafold/alphafold/model/tf/shape_helpers_test.py b/spaces/Gradio-Blocks/protGPT2_gradioFold/alphafold/alphafold/model/tf/shape_helpers_test.py deleted file mode 100644 index d7797b340514d9577dd77b9e9660babd0aa52b5e..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/protGPT2_gradioFold/alphafold/alphafold/model/tf/shape_helpers_test.py +++ /dev/null @@ -1,39 +0,0 @@ -# Copyright 2021 DeepMind Technologies Limited -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""Tests for shape_helpers.""" - -from alphafold.model.tf import shape_helpers -import numpy as np -import tensorflow.compat.v1 as tf - - -class ShapeTest(tf.test.TestCase): - - def test_shape_list(self): - """Test that shape_list can allow for reshaping to dynamic shapes.""" - a = tf.zeros([10, 4, 4, 2]) - p = tf.placeholder(tf.float32, shape=[None, None, 1, 4, 4]) - shape_dyn = shape_helpers.shape_list(p)[:2] + [4, 4] - - b = tf.reshape(a, shape_dyn) - with self.session() as sess: - out = sess.run(b, feed_dict={p: np.ones((20, 1, 1, 4, 4))}) - - self.assertAllEqual(out.shape, (20, 1, 4, 4)) - - -if __name__ == '__main__': - tf.disable_v2_behavior() - tf.test.main() diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/_base_/models/cascade_mask_rcnn_r50_fpn.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/_base_/models/cascade_mask_rcnn_r50_fpn.py deleted file mode 100644 index 9ef6673c2d08f3c43a96cf08ce1710b19865acd4..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/_base_/models/cascade_mask_rcnn_r50_fpn.py +++ /dev/null @@ -1,196 +0,0 @@ -# model settings -model = dict( - type='CascadeRCNN', - pretrained='torchvision://resnet50', - backbone=dict( - type='ResNet', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - style='pytorch'), - neck=dict( - type='FPN', - in_channels=[256, 512, 1024, 2048], - out_channels=256, - num_outs=5), - rpn_head=dict( - type='RPNHead', - in_channels=256, - feat_channels=256, - anchor_generator=dict( - type='AnchorGenerator', - scales=[8], - ratios=[0.5, 1.0, 2.0], - strides=[4, 8, 16, 32, 64]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0]), - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0)), - roi_head=dict( - type='CascadeRoIHead', - num_stages=3, - stage_loss_weights=[1, 0.5, 0.25], - bbox_roi_extractor=dict( - type='SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=7, 
sampling_ratio=0), - out_channels=256, - featmap_strides=[4, 8, 16, 32]), - bbox_head=[ - dict( - type='Shared2FCBBoxHead', - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=80, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.1, 0.1, 0.2, 0.2]), - reg_class_agnostic=True, - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0, - loss_weight=1.0)), - dict( - type='Shared2FCBBoxHead', - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=80, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.05, 0.05, 0.1, 0.1]), - reg_class_agnostic=True, - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0, - loss_weight=1.0)), - dict( - type='Shared2FCBBoxHead', - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=80, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.033, 0.033, 0.067, 0.067]), - reg_class_agnostic=True, - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0)) - ], - mask_roi_extractor=dict( - type='SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=14, sampling_ratio=0), - out_channels=256, - featmap_strides=[4, 8, 16, 32]), - mask_head=dict( - type='FCNMaskHead', - num_convs=4, - in_channels=256, - conv_out_channels=256, - num_classes=80, - loss_mask=dict( - type='CrossEntropyLoss', use_mask=True, loss_weight=1.0))), - # model training and testing settings - train_cfg=dict( - rpn=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.7, - neg_iou_thr=0.3, - min_pos_iou=0.3, - match_low_quality=True, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=256, - pos_fraction=0.5, - neg_pos_ub=-1, - add_gt_as_proposals=False), - allowed_border=0, - pos_weight=-1, - debug=False), - rpn_proposal=dict( - nms_pre=2000, - max_per_img=2000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=[ - dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.5, - neg_iou_thr=0.5, - min_pos_iou=0.5, - match_low_quality=False, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=512, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True), - mask_size=28, - pos_weight=-1, - debug=False), - dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.6, - neg_iou_thr=0.6, - min_pos_iou=0.6, - match_low_quality=False, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=512, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True), - mask_size=28, - pos_weight=-1, - debug=False), - dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.7, - neg_iou_thr=0.7, - min_pos_iou=0.7, - match_low_quality=False, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=512, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True), - mask_size=28, - pos_weight=-1, - debug=False) - ]), - test_cfg=dict( - rpn=dict( - nms_pre=1000, - max_per_img=1000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=dict( - score_thr=0.05, - nms=dict(type='nms', iou_threshold=0.5), - max_per_img=100, - mask_thr_binary=0.5))) diff --git 
a/spaces/Gradio-Blocks/uniformer_image_detection/configs/ms_rcnn/ms_rcnn_x101_64x4d_fpn_2x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/ms_rcnn/ms_rcnn_x101_64x4d_fpn_2x_coco.py deleted file mode 100644 index 54c605b94aa5fc8b1ddf2267ed349c2fcd08cc9e..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/ms_rcnn/ms_rcnn_x101_64x4d_fpn_2x_coco.py +++ /dev/null @@ -1,4 +0,0 @@ -_base_ = './ms_rcnn_x101_64x4d_fpn_1x_coco.py' -# learning policy -lr_config = dict(step=[16, 22]) -runner = dict(type='EpochBasedRunner', max_epochs=24) diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/regnet/mask_rcnn_regnetx-3.2GF_fpn_mstrain_3x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/regnet/mask_rcnn_regnetx-3.2GF_fpn_mstrain_3x_coco.py deleted file mode 100644 index e4107e7f8985deaaf0287d6b7347521970babf1e..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/regnet/mask_rcnn_regnetx-3.2GF_fpn_mstrain_3x_coco.py +++ /dev/null @@ -1,65 +0,0 @@ -_base_ = [ - '../_base_/models/mask_rcnn_r50_fpn.py', - '../_base_/datasets/coco_instance.py', - '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py' -] -model = dict( - pretrained='open-mmlab://regnetx_3.2gf', - backbone=dict( - _delete_=True, - type='RegNet', - arch='regnetx_3.2gf', - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - style='pytorch'), - neck=dict( - type='FPN', - in_channels=[96, 192, 432, 1008], - out_channels=256, - num_outs=5)) -img_norm_cfg = dict( - # The mean and std are used in PyCls when training RegNets - mean=[103.53, 116.28, 123.675], - std=[57.375, 57.12, 58.395], - to_rgb=False) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True, with_mask=True), - dict( - type='Resize', - img_scale=[(1333, 640), (1333, 672), (1333, 704), (1333, 736), - (1333, 768), (1333, 800)], - multiscale_mode='value', - keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - train=dict(pipeline=train_pipeline), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) -optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.00005) -lr_config = dict(step=[28, 34]) -runner = dict(type='EpochBasedRunner', max_epochs=36) -optimizer_config = dict( - _delete_=True, grad_clip=dict(max_norm=35, norm_type=2)) diff --git a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/dpt/models.py b/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/dpt/models.py deleted file mode 100644 index 6081bb6f073e1f170db1aa322532bda747fbab80..0000000000000000000000000000000000000000 --- a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/dpt/models.py +++ /dev/null @@ -1,126 +0,0 @@ -import torch -import torch.nn as nn -from torch import Tensor - -from .base_model import BaseModel 
-from .blocks import ( - FeatureFusionBlock_custom, - Interpolate, - _make_encoder, - forward_vit, -) - - -def _make_fusion_block(features, use_bn): - return FeatureFusionBlock_custom( - features, - nn.ReLU(False), - deconv=False, - bn=use_bn, - expand=False, - align_corners=True, - ) - - -class DPT(BaseModel): - def __init__( - self, - head, - features=256, - backbone="vitb_rn50_384", - readout="project", - channels_last=False, - use_bn=False, - enable_attention_hooks=False, - ): - - super(DPT, self).__init__() - - self.channels_last = channels_last - - hooks = { - "vitb_rn50_384": [0, 1, 8, 11], - "vitb16_384": [2, 5, 8, 11], - "vitl16_384": [5, 11, 17, 23], - } - - # Instantiate backbone and reassemble blocks - self.pretrained, self.scratch = _make_encoder( - backbone, - features, - False, # Set to true of you want to train from scratch, uses ImageNet weights - groups=1, - expand=False, - exportable=False, - hooks=hooks[backbone], - use_readout=readout, - enable_attention_hooks=enable_attention_hooks, - ) - - self.scratch.refinenet1 = _make_fusion_block(features, use_bn) - self.scratch.refinenet2 = _make_fusion_block(features, use_bn) - self.scratch.refinenet3 = _make_fusion_block(features, use_bn) - self.scratch.refinenet4 = _make_fusion_block(features, use_bn) - - self.scratch.output_conv = head - - def forward(self, x: Tensor) -> Tensor: - if self.channels_last == True: - x.contiguous(memory_format=torch.channels_last) - - layer_1, layer_2, layer_3, layer_4 = forward_vit(self.pretrained, x) - - layer_1_rn = self.scratch.layer1_rn(layer_1) - layer_2_rn = self.scratch.layer2_rn(layer_2) - layer_3_rn = self.scratch.layer3_rn(layer_3) - layer_4_rn = self.scratch.layer4_rn(layer_4) - - path_4 = self.scratch.refinenet4(layer_4_rn) - path_3 = self.scratch.refinenet3(path_4, layer_3_rn) - path_2 = self.scratch.refinenet2(path_3, layer_2_rn) - path_1 = self.scratch.refinenet1(path_2, layer_1_rn) - - out = self.scratch.output_conv(path_1) - - return out - - -class DPTDepthModel(DPT): - def __init__( - self, path=None, non_negative=True, scale=1.0, shift=0.0, invert=False, **kwargs - ): - features = kwargs["features"] if "features" in kwargs else 256 - - self.scale = scale - self.shift = shift - self.invert = invert - - head = nn.Sequential( - nn.Conv2d(features, features // 2, kernel_size=3, stride=1, padding=1), - Interpolate(scale_factor=2, mode="bilinear", align_corners=True), - nn.Conv2d(features // 2, 32, kernel_size=3, stride=1, padding=1), - nn.ReLU(True), - nn.Conv2d(32, 1, kernel_size=1, stride=1, padding=0), - nn.ReLU(True) if non_negative else nn.Identity(), - nn.Identity(), - ) - - super().__init__(head, **kwargs) - - if path is not None: - self.load(path) - - def forward(self, x: Tensor) -> Tensor: - """Input x of shape [b, c, h, w] - Return tensor of shape [b, c, h, w] - """ - inv_depth = super().forward(x) - - if self.invert: - depth = self.scale * inv_depth + self.shift - depth[depth < 1e-8] = 1e-8 - depth = 1.0 / depth - return depth - else: - return inv_depth - diff --git a/spaces/HaloMaster/chinesesummary/fengshen/models/megatron_t5/__init__.py b/spaces/HaloMaster/chinesesummary/fengshen/models/megatron_t5/__init__.py deleted file mode 100644 index 84f78136331c5ef4975697bc6a77910bba7429bd..0000000000000000000000000000000000000000 --- a/spaces/HaloMaster/chinesesummary/fengshen/models/megatron_t5/__init__.py +++ /dev/null @@ -1,49 +0,0 @@ -# coding=utf-8 -# Copyright 2021 The IDEA Authors. All rights reserved. 
- -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at - -# http://www.apache.org/licenses/LICENSE-2.0 - -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -from typing import TYPE_CHECKING - -from transformers.file_utils import _LazyModule, is_torch_available - - -_import_structure = { - "configuration_megatron_t5": ["T5Config"], - "tokenization_megatron_t5": ["T5Tokenizer"], -} - -if is_torch_available(): - _import_structure["modeling_megatron_t5"] = [ - "T5Model", - "T5EncoderModel", - "T5ForConditionalGeneration" - ] - - -if TYPE_CHECKING: - from .configuration_megatron_t5 import T5Config - from .tokenization_megatron_t5 import T5Tokenizer - - if is_torch_available(): - from .modeling_megatron_t5 import ( - T5Model, - T5EncoderModel, - T5ForConditionalGeneration - ) - -else: - import sys - - sys.modules[__name__] = _LazyModule( - __name__, globals()["__file__"], _import_structure) diff --git a/spaces/Happys/chatbot/Dockerfile b/spaces/Happys/chatbot/Dockerfile deleted file mode 100644 index 563b14e4c61040c222939ad2d1691912dc1c62e8..0000000000000000000000000000000000000000 --- a/spaces/Happys/chatbot/Dockerfile +++ /dev/null @@ -1,8 +0,0 @@ -# Pull the base image -FROM happyclo/libre:latest - -# Install dependencies -RUN cd /app/api && npm install - -# Command to run on container start -CMD ["npm", "run", "backend"] \ No newline at end of file diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/tasks/sentence_ranking.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/tasks/sentence_ranking.py deleted file mode 100644 index bed44f34e5f8e506b6ae7ba30ddaa661bf4a7522..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/tasks/sentence_ranking.py +++ /dev/null @@ -1,219 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -import os - -import numpy as np -from fairseq import utils -from fairseq.data import ( - ConcatSentencesDataset, - Dictionary, - IdDataset, - NestedDictionaryDataset, - NumelDataset, - NumSamplesDataset, - PrependTokenDataset, - RawLabelDataset, - RightPadDataset, - SortDataset, - TruncateDataset, - data_utils, -) -from fairseq.data.shorten_dataset import maybe_shorten_dataset -from fairseq.tasks import LegacyFairseqTask, register_task - - -logger = logging.getLogger(__name__) - - -@register_task("sentence_ranking") -class SentenceRankingTask(LegacyFairseqTask): - """ - Ranking task on multiple sentences. 
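Each example pairs a shared prompt (input0) with num_classes candidate sentences (input1..inputN); the model scores every candidate through a single-output ranking head so the sentence_ranking criterion can prefer the gold option.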
- - Args: - dictionary (Dictionary): the dictionary for the input of the task - """ - - @staticmethod - def add_args(parser): - """Add task-specific arguments to the parser.""" - parser.add_argument("data", metavar="FILE", help="file prefix for data") - parser.add_argument( - "--num-classes", type=int, help="number of sentences to be ranked" - ) - parser.add_argument( - "--init-token", - type=int, - help="add token at the beginning of each batch item", - ) - parser.add_argument( - "--separator-token", type=int, help="add separator token between inputs" - ) - parser.add_argument("--no-shuffle", action="store_true") - parser.add_argument( - "--shorten-method", - default="none", - choices=["none", "truncate", "random_crop"], - help="if not none, shorten sequences that exceed --tokens-per-sample", - ) - parser.add_argument( - "--shorten-data-split-list", - default="", - help="comma-separated list of dataset splits to apply shortening to, " - 'e.g., "train,valid" (default: all dataset splits)', - ) - parser.add_argument( - "--max-option-length", type=int, help="max length for each option" - ) - - def __init__(self, args, dictionary): - super().__init__(args) - self.dictionary = dictionary - - @classmethod - def load_dictionary(cls, args, filename, source=True): - """Load the dictionary from the filename - - Args: - filename (str): the filename - """ - dictionary = Dictionary.load(filename) - dictionary.add_symbol("") - return dictionary - - @classmethod - def setup_task(cls, args, **kwargs): - assert ( - args.criterion == "sentence_ranking" - ), "Must set --criterion=sentence_ranking" - - # load data dictionary - data_dict = cls.load_dictionary( - args, - os.path.join(args.data, "input0", "dict.txt"), - source=True, - ) - logger.info("[input] dictionary: {} types".format(len(data_dict))) - return SentenceRankingTask(args, data_dict) - - def load_dataset(self, split, combine=False, **kwargs): - """Load a given dataset split (e.g., train, valid, test).""" - - def get_path(type, split): - return os.path.join(self.args.data, type, split) - - def make_dataset(type, dictionary): - split_path = get_path(type, split) - - dataset = data_utils.load_indexed_dataset( - split_path, - self.source_dictionary, - self.args.dataset_impl, - combine=combine, - ) - return dataset - - input0 = make_dataset("input0", self.source_dictionary) - input_options = [ - make_dataset("input{idx}".format(idx=idx + 1), self.source_dictionary) - for idx in range(self.args.num_classes) - ] - - if self.args.separator_token is not None: - input0 = PrependTokenDataset(input0, self.args.separator_token) - - src_tokens = [] - for input_option in input_options: - if self.args.init_token is not None: - input_option = PrependTokenDataset(input_option, self.args.init_token) - if self.args.max_option_length is not None: - input_option = TruncateDataset( - input_option, self.args.max_option_length - ) - src_token = ConcatSentencesDataset(input_option, input0) - src_token = maybe_shorten_dataset( - src_token, - split, - self.args.shorten_data_split_list, - self.args.shorten_method, - self.args.max_positions, - self.args.seed, - ) - src_tokens.append(src_token) - - with data_utils.numpy_seed(self.args.seed): - shuffle = np.random.permutation(len(src_tokens[0])) - - dataset = { - "id": IdDataset(), - "nsentences": NumSamplesDataset(), - "ntokens": NumelDataset(src_tokens[0], reduce=True), - } - - for src_token_idx in range(len(src_tokens)): - dataset.update( - { - "net_input{idx}".format(idx=src_token_idx + 1): { - "src_tokens": 
RightPadDataset( - src_tokens[src_token_idx], - pad_idx=self.source_dictionary.pad(), - ), - "src_lengths": NumelDataset( - src_tokens[src_token_idx], reduce=False - ), - } - } - ) - - label_path = "{}.label".format(get_path("label", split)) - if os.path.exists(label_path): - with open(label_path) as h: - dataset.update( - target=RawLabelDataset([int(x.strip()) for x in h.readlines()]) - ) - - nested_dataset = NestedDictionaryDataset( - dataset, - sizes=[np.maximum.reduce([src_token.sizes for src_token in src_tokens])], - ) - - if self.args.no_shuffle: - dataset = nested_dataset - else: - dataset = SortDataset( - nested_dataset, - # shuffle - sort_order=[shuffle], - ) - - logger.info("Loaded {0} with #samples: {1}".format(split, len(dataset))) - - self.datasets[split] = dataset - return self.datasets[split] - - def build_model(self, args): - from fairseq import models - - model = models.build_model(args, self) - - model.register_classification_head( - getattr(args, "ranking_head_name", "sentence_classification_head"), - num_classes=1, - ) - - return model - - def max_positions(self): - return self.args.max_positions - - @property - def source_dictionary(self): - return self.dictionary - - @property - def target_dictionary(self): - return self.dictionary diff --git a/spaces/HarryLee/eCommerceImageCaptioning/utils/trie.py b/spaces/HarryLee/eCommerceImageCaptioning/utils/trie.py deleted file mode 100644 index 76d331d87fd99096e8228f34f297379221941045..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/utils/trie.py +++ /dev/null @@ -1,25 +0,0 @@ -from collections import defaultdict - - -class TreeNode(): - def __init__(self): - self.child = defaultdict(TreeNode) - -class Trie: - - def __init__(self, eos): - self.root = TreeNode() - self.eos = eos - - def insert(self, word): - cur = self.root - for c in word: - cur = cur.child[c] - - def get_next_layer(self, word): - cur = self.root - for c in word: - cur = cur.child.get(c) - if cur is None: - return [self.eos] - return list(cur.child.keys()) \ No newline at end of file diff --git a/spaces/Haswanth/haswanthpalepu/app.py b/spaces/Haswanth/haswanthpalepu/app.py deleted file mode 100644 index 9ede0bd38a0bf7b5a72db19bf134e66df1d9d1cc..0000000000000000000000000000000000000000 --- a/spaces/Haswanth/haswanthpalepu/app.py +++ /dev/null @@ -1,34 +0,0 @@ -import os -import gradio as gr -from langchain.chat_models import ChatOpenAI -from langchain import LLMChain, PromptTemplate -from langchain.memory import ConversationBufferMemory - -OPENAI_API_KEY=os.getenv('OPENAI_API_KEY') - -template = """Meet Riya, your youthful and witty personal assistant! At 21 years old, she's full of energy and always eager to help. Riya's goal is to assist you with any questions or problems you might have. Her enthusiasm shines through in every response, making interactions with her enjoyable and engaging.. -{chat_history} -User: {user_message} -Chatbot:""" - -prompt = PromptTemplate( - input_variables=["chat_history", "user_message"], template=template -) - -memory = ConversationBufferMemory(memory_key="chat_history") - -llm_chain = LLMChain( - llm=ChatOpenAI(temperature='0.5', model_name="gpt-3.5-turbo"), - prompt=prompt, - verbose=True, - memory=memory, -) - -def get_text_response(user_message,history): - response = llm_chain.predict(user_message = user_message) - return response - -demo = gr.ChatInterface(get_text_response) - -if __name__ == "__main__": - demo.launch() #To create a public link, set `share=True` in `launch()`. 
To enable errors and logs, set `debug=True` in `launch()`. diff --git a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/external.py b/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/external.py deleted file mode 100644 index 4a1365623316679dc4cb2d76a607deb505208ab5..0000000000000000000000000000000000000000 --- a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/external.py +++ /dev/null @@ -1,462 +0,0 @@ -"""This module should not be used directly as its API is subject to change. Instead, -use the `gr.Blocks.load()` or `gr.Interface.load()` functions.""" - -from __future__ import annotations - -import json -import re -import uuid -import warnings -from copy import deepcopy -from typing import TYPE_CHECKING, Callable, Dict - -import requests - -import gradio -from gradio import components, utils -from gradio.exceptions import TooManyRequestsError -from gradio.external_utils import ( - cols_to_rows, - encode_to_base64, - get_tabular_examples, - get_ws_fn, - postprocess_label, - rows_to_cols, - streamline_spaces_interface, - use_websocket, -) -from gradio.processing_utils import to_binary - -if TYPE_CHECKING: - from gradio.blocks import Blocks - from gradio.interface import Interface - - -def load_blocks_from_repo( - name: str, - src: str | None = None, - api_key: str | None = None, - alias: str | None = None, - **kwargs, -) -> Blocks: - """Creates and returns a Blocks instance from a Hugging Face model or Space repo.""" - if src is None: - # Separate the repo type (e.g. "model") from repo name (e.g. "google/vit-base-patch16-224") - tokens = name.split("/") - assert ( - len(tokens) > 1 - ), "Either `src` parameter must be provided, or `name` must be formatted as {src}/{repo name}" - src = tokens[0] - name = "/".join(tokens[1:]) - - factory_methods: Dict[str, Callable] = { - # for each repo type, we have a method that returns the Interface given the model name & optionally an api_key - "huggingface": from_model, - "models": from_model, - "spaces": from_spaces, - } - assert src.lower() in factory_methods, "parameter: src must be one of {}".format( - factory_methods.keys() - ) - - blocks: gradio.Blocks = factory_methods[src](name, api_key, alias, **kwargs) - return blocks - - -def from_model(model_name: str, api_key: str | None, alias: str | None, **kwargs): - model_url = "https://huggingface.co/{}".format(model_name) - api_url = "https://api-inference.huggingface.co/models/{}".format(model_name) - print("Fetching model from: {}".format(model_url)) - - headers = {"Authorization": f"Bearer {api_key}"} if api_key is not None else {} - - # Checking if model exists, and if so, it gets the pipeline - response = requests.request("GET", api_url, headers=headers) - assert ( - response.status_code == 200 - ), f"Could not find model: {model_name}. If it is a private or gated model, please provide your Hugging Face access token (https://huggingface.co/settings/tokens) as the argument for the `api_key` parameter." 
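# The pipeline tag reported by the Inference API (e.g. "audio-classification", "text-generation") selects which Gradio input/output components and pre/postprocessing functions are wired up in the `pipelines` table below.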
- p = response.json().get("pipeline_tag") - - pipelines = { - "audio-classification": { - # example model: ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition - "inputs": components.Audio(source="upload", type="filepath", label="Input"), - "outputs": components.Label(label="Class"), - "preprocess": lambda i: to_binary, - "postprocess": lambda r: postprocess_label( - {i["label"].split(", ")[0]: i["score"] for i in r.json()} - ), - }, - "audio-to-audio": { - # example model: facebook/xm_transformer_sm_all-en - "inputs": components.Audio(source="upload", type="filepath", label="Input"), - "outputs": components.Audio(label="Output"), - "preprocess": to_binary, - "postprocess": encode_to_base64, - }, - "automatic-speech-recognition": { - # example model: facebook/wav2vec2-base-960h - "inputs": components.Audio(source="upload", type="filepath", label="Input"), - "outputs": components.Textbox(label="Output"), - "preprocess": to_binary, - "postprocess": lambda r: r.json()["text"], - }, - "feature-extraction": { - # example model: julien-c/distilbert-feature-extraction - "inputs": components.Textbox(label="Input"), - "outputs": components.Dataframe(label="Output"), - "preprocess": lambda x: {"inputs": x}, - "postprocess": lambda r: r.json()[0], - }, - "fill-mask": { - "inputs": components.Textbox(label="Input"), - "outputs": components.Label(label="Classification"), - "preprocess": lambda x: {"inputs": x}, - "postprocess": lambda r: postprocess_label( - {i["token_str"]: i["score"] for i in r.json()} - ), - }, - "image-classification": { - # Example: google/vit-base-patch16-224 - "inputs": components.Image(type="filepath", label="Input Image"), - "outputs": components.Label(label="Classification"), - "preprocess": to_binary, - "postprocess": lambda r: postprocess_label( - {i["label"].split(", ")[0]: i["score"] for i in r.json()} - ), - }, - "question-answering": { - # Example: deepset/xlm-roberta-base-squad2 - "inputs": [ - components.Textbox(lines=7, label="Context"), - components.Textbox(label="Question"), - ], - "outputs": [ - components.Textbox(label="Answer"), - components.Label(label="Score"), - ], - "preprocess": lambda c, q: {"inputs": {"context": c, "question": q}}, - "postprocess": lambda r: (r.json()["answer"], {"label": r.json()["score"]}), - }, - "summarization": { - # Example: facebook/bart-large-cnn - "inputs": components.Textbox(label="Input"), - "outputs": components.Textbox(label="Summary"), - "preprocess": lambda x: {"inputs": x}, - "postprocess": lambda r: r.json()[0]["summary_text"], - }, - "text-classification": { - # Example: distilbert-base-uncased-finetuned-sst-2-english - "inputs": components.Textbox(label="Input"), - "outputs": components.Label(label="Classification"), - "preprocess": lambda x: {"inputs": x}, - "postprocess": lambda r: postprocess_label( - {i["label"].split(", ")[0]: i["score"] for i in r.json()[0]} - ), - }, - "text-generation": { - # Example: gpt2 - "inputs": components.Textbox(label="Input"), - "outputs": components.Textbox(label="Output"), - "preprocess": lambda x: {"inputs": x}, - "postprocess": lambda r: r.json()[0]["generated_text"], - }, - "text2text-generation": { - # Example: valhalla/t5-small-qa-qg-hl - "inputs": components.Textbox(label="Input"), - "outputs": components.Textbox(label="Generated Text"), - "preprocess": lambda x: {"inputs": x}, - "postprocess": lambda r: r.json()[0]["generated_text"], - }, - "translation": { - "inputs": components.Textbox(label="Input"), - "outputs": components.Textbox(label="Translation"), - "preprocess": 
lambda x: {"inputs": x}, - "postprocess": lambda r: r.json()[0]["translation_text"], - }, - "zero-shot-classification": { - # Example: facebook/bart-large-mnli - "inputs": [ - components.Textbox(label="Input"), - components.Textbox(label="Possible class names (" "comma-separated)"), - components.Checkbox(label="Allow multiple true classes"), - ], - "outputs": components.Label(label="Classification"), - "preprocess": lambda i, c, m: { - "inputs": i, - "parameters": {"candidate_labels": c, "multi_class": m}, - }, - "postprocess": lambda r: postprocess_label( - { - r.json()["labels"][i]: r.json()["scores"][i] - for i in range(len(r.json()["labels"])) - } - ), - }, - "sentence-similarity": { - # Example: sentence-transformers/distilbert-base-nli-stsb-mean-tokens - "inputs": [ - components.Textbox( - value="That is a happy person", label="Source Sentence" - ), - components.Textbox( - lines=7, - placeholder="Separate each sentence by a newline", - label="Sentences to compare to", - ), - ], - "outputs": components.Label(label="Classification"), - "preprocess": lambda src, sentences: { - "inputs": { - "source_sentence": src, - "sentences": [s for s in sentences.splitlines() if s != ""], - } - }, - "postprocess": lambda r: postprocess_label( - {f"sentence {i}": v for i, v in enumerate(r.json())} - ), - }, - "text-to-speech": { - # Example: julien-c/ljspeech_tts_train_tacotron2_raw_phn_tacotron_g2p_en_no_space_train - "inputs": components.Textbox(label="Input"), - "outputs": components.Audio(label="Audio"), - "preprocess": lambda x: {"inputs": x}, - "postprocess": encode_to_base64, - }, - "text-to-image": { - # example model: osanseviero/BigGAN-deep-128 - "inputs": components.Textbox(label="Input"), - "outputs": components.Image(label="Output"), - "preprocess": lambda x: {"inputs": x}, - "postprocess": encode_to_base64, - }, - "token-classification": { - # example model: huggingface-course/bert-finetuned-ner - "inputs": components.Textbox(label="Input"), - "outputs": components.HighlightedText(label="Output"), - "preprocess": lambda x: {"inputs": x}, - "postprocess": lambda r: r, # Handled as a special case in query_huggingface_api() - }, - } - - if p in ["tabular-classification", "tabular-regression"]: - example_data = get_tabular_examples(model_name) - col_names, example_data = cols_to_rows(example_data) - example_data = [[example_data]] if example_data else None - - pipelines[p] = { - "inputs": components.Dataframe( - label="Input Rows", - type="pandas", - headers=col_names, - col_count=(len(col_names), "fixed"), - ), - "outputs": components.Dataframe( - label="Predictions", type="array", headers=["prediction"] - ), - "preprocess": rows_to_cols, - "postprocess": lambda r: { - "headers": ["prediction"], - "data": [[pred] for pred in json.loads(r.text)], - }, - "examples": example_data, - } - - if p is None or not (p in pipelines): - raise ValueError("Unsupported pipeline type: {}".format(p)) - - pipeline = pipelines[p] - - def query_huggingface_api(*params): - # Convert to a list of input components - data = pipeline["preprocess"](*params) - if isinstance( - data, dict - ): # HF doesn't allow additional parameters for binary files (e.g. 
images or audio files) - data.update({"options": {"wait_for_model": True}}) - data = json.dumps(data) - response = requests.request("POST", api_url, headers=headers, data=data) - if not (response.status_code == 200): - errors_json = response.json() - errors, warns = "", "" - if errors_json.get("error"): - errors = f", Error: {errors_json.get('error')}" - if errors_json.get("warnings"): - warns = f", Warnings: {errors_json.get('warnings')}" - raise ValueError( - f"Could not complete request to HuggingFace API, Status Code: {response.status_code}" - + errors - + warns - ) - if ( - p == "token-classification" - ): # Handle as a special case since HF API only returns the named entities and we need the input as well - ner_groups = response.json() - input_string = params[0] - response = utils.format_ner_list(input_string, ner_groups) - output = pipeline["postprocess"](response) - return output - - if alias is None: - query_huggingface_api.__name__ = model_name - else: - query_huggingface_api.__name__ = alias - - interface_info = { - "fn": query_huggingface_api, - "inputs": pipeline["inputs"], - "outputs": pipeline["outputs"], - "title": model_name, - "examples": pipeline.get("examples"), - } - - kwargs = dict(interface_info, **kwargs) - kwargs["_api_mode"] = True # So interface doesn't run pre/postprocess. - interface = gradio.Interface(**kwargs) - return interface - - -def from_spaces( - space_name: str, api_key: str | None, alias: str | None, **kwargs -) -> Blocks: - space_url = "https://huggingface.co/spaces/{}".format(space_name) - - print("Fetching Space from: {}".format(space_url)) - - headers = {} - if api_key is not None: - headers["Authorization"] = f"Bearer {api_key}" - - iframe_url = ( - requests.get( - f"https://huggingface.co/api/spaces/{space_name}/host", headers=headers - ) - .json() - .get("host") - ) - - if iframe_url is None: - raise ValueError( - f"Could not find Space: {space_name}. If it is a private or gated Space, please provide your Hugging Face access token (https://huggingface.co/settings/tokens) as the argument for the `api_key` parameter." - ) - - r = requests.get(iframe_url, headers=headers) - - result = re.search( - r"window.gradio_config = (.*?);[\s]*", r.text - ) # some basic regex to extract the config - try: - config = json.loads(result.group(1)) # type: ignore - except AttributeError: - raise ValueError("Could not load the Space: {}".format(space_name)) - if "allow_flagging" in config: # Create an Interface for Gradio 2.x Spaces - return from_spaces_interface( - space_name, config, alias, api_key, iframe_url, **kwargs - ) - else: # Create a Blocks for Gradio 3.x Spaces - if kwargs: - warnings.warn( - "You cannot override parameters for this Space by passing in kwargs. " - "Instead, please load the Space as a function and use it to create a " - "Blocks or Interface locally. 
You may find this Guide helpful: " - "https://gradio.app/using_blocks_like_functions/" - ) - return from_spaces_blocks(config, api_key, iframe_url) - - -def from_spaces_blocks(config: Dict, api_key: str | None, iframe_url: str) -> Blocks: - api_url = "{}/api/predict/".format(iframe_url) - - headers = {"Content-Type": "application/json"} - if api_key is not None: - headers["Authorization"] = f"Bearer {api_key}" - ws_url = "{}/queue/join".format(iframe_url).replace("https", "wss") - - ws_fn = get_ws_fn(ws_url, headers) - - fns = [] - for d, dependency in enumerate(config["dependencies"]): - if dependency["backend_fn"]: - - def get_fn(outputs, fn_index, use_ws): - def fn(*data): - data = json.dumps({"data": data, "fn_index": fn_index}) - hash_data = json.dumps( - {"fn_index": fn_index, "session_hash": str(uuid.uuid4())} - ) - if use_ws: - result = utils.synchronize_async(ws_fn, data, hash_data) - output = result["data"] - else: - response = requests.post(api_url, headers=headers, data=data) - result = json.loads(response.content.decode("utf-8")) - try: - output = result["data"] - except KeyError: - if "error" in result and "429" in result["error"]: - raise TooManyRequestsError( - "Too many requests to the Hugging Face API" - ) - raise KeyError( - f"Could not find 'data' key in response from external Space. Response received: {result}" - ) - if len(outputs) == 1: - output = output[0] - return output - - return fn - - fn = get_fn( - deepcopy(dependency["outputs"]), d, use_websocket(config, dependency) - ) - fns.append(fn) - else: - fns.append(None) - return gradio.Blocks.from_config(config, fns, iframe_url) - - -def from_spaces_interface( - model_name: str, - config: Dict, - alias: str | None, - api_key: str | None, - iframe_url: str, - **kwargs, -) -> Interface: - - config = streamline_spaces_interface(config) - api_url = "{}/api/predict/".format(iframe_url) - headers = {"Content-Type": "application/json"} - if api_key is not None: - headers["Authorization"] = f"Bearer {api_key}" - - # The function should call the API with preprocessed data - def fn(*data): - data = json.dumps({"data": data}) - response = requests.post(api_url, headers=headers, data=data) - result = json.loads(response.content.decode("utf-8")) - try: - output = result["data"] - except KeyError: - if "error" in result and "429" in result["error"]: - raise TooManyRequestsError("Too many requests to the Hugging Face API") - raise KeyError( - f"Could not find 'data' key in response from external Space. 
Response received: {result}" - ) - if ( - len(config["outputs"]) == 1 - ): # if the fn is supposed to return a single value, pop it - output = output[0] - if len(config["outputs"]) == 1 and isinstance( - output, list - ): # Needed to support Output.Image() returning bounding boxes as well (TODO: handle different versions of gradio since they have slightly different APIs) - output = output[0] - return output - - fn.__name__ = alias if (alias is not None) else model_name - config["fn"] = fn - - kwargs = dict(config, **kwargs) - kwargs["_api_mode"] = True - interface = gradio.Interface(**kwargs) - return interface diff --git a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/networking.py b/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/networking.py deleted file mode 100644 index 7e0aa3c20a4393013e05b0e69b1da43fea58ebdd..0000000000000000000000000000000000000000 --- a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/networking.py +++ /dev/null @@ -1,185 +0,0 @@ -""" -Defines helper methods useful for setting up ports, launching servers, and -creating tunnels. -""" -from __future__ import annotations - -import os -import socket -import threading -import time -import warnings -from typing import TYPE_CHECKING, Tuple - -import requests -import uvicorn - -from gradio.routes import App -from gradio.tunneling import Tunnel - -if TYPE_CHECKING: # Only import for type checking (to avoid circular imports). - from gradio.blocks import Blocks - -# By default, the local server will try to open on localhost, port 7860. -# If that is not available, then it will try 7861, 7862, ... 7959. -INITIAL_PORT_VALUE = int(os.getenv("GRADIO_SERVER_PORT", "7860")) -TRY_NUM_PORTS = int(os.getenv("GRADIO_NUM_PORTS", "100")) -LOCALHOST_NAME = os.getenv("GRADIO_SERVER_NAME", "127.0.0.1") -GRADIO_API_SERVER = "https://api.gradio.app/v2/tunnel-request" - - -class Server(uvicorn.Server): - def install_signal_handlers(self): - pass - - def run_in_thread(self): - self.thread = threading.Thread(target=self.run, daemon=True) - self.thread.start() - while not self.started: - time.sleep(1e-3) - - def close(self): - self.should_exit = True - self.thread.join() - - -def get_first_available_port(initial: int, final: int) -> int: - """ - Gets the first open port in a specified range of port numbers - Parameters: - initial: the initial value in the range of port numbers - final: final (exclusive) value in the range of port numbers, should be greater than `initial` - Returns: - port: the first open port in the range - """ - for port in range(initial, final): - try: - s = socket.socket() # create a socket object - s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) - s.bind((LOCALHOST_NAME, port)) # Bind to the port - s.close() - return port - except OSError: - pass - raise OSError( - "All ports from {} to {} are in use. 
Please close a port.".format( - initial, final - 1 - ) - ) - - -def configure_app(app: App, blocks: Blocks) -> App: - auth = blocks.auth - if auth is not None: - if not callable(auth): - app.auth = {account[0]: account[1] for account in auth} - else: - app.auth = auth - else: - app.auth = None - app.blocks = blocks - app.cwd = os.getcwd() - app.favicon_path = blocks.favicon_path - app.tokens = {} - return app - - -def start_server( - blocks: Blocks, - server_name: str | None = None, - server_port: int | None = None, - ssl_keyfile: str | None = None, - ssl_certfile: str | None = None, - ssl_keyfile_password: str | None = None, -) -> Tuple[str, int, str, App, Server]: - """Launches a local server running the provided Interface - Parameters: - blocks: The Blocks object to run on the server - server_name: to make app accessible on local network, set this to "0.0.0.0". Can be set by environment variable GRADIO_SERVER_NAME. - server_port: will start gradio app on this port (if available). Can be set by environment variable GRADIO_SERVER_PORT. - auth: If provided, username and password (or list of username-password tuples) required to access the Blocks. Can also provide function that takes username and password and returns True if valid login. - ssl_keyfile: If a path to a file is provided, will use this as the private key file to create a local server running on https. - ssl_certfile: If a path to a file is provided, will use this as the signed certificate for https. Needs to be provided if ssl_keyfile is provided. - ssl_keyfile_password: If a password is provided, will use this with the ssl certificate for https. - Returns: - port: the port number the server is running on - path_to_local_server: the complete address that the local server can be accessed at - app: the FastAPI app object - server: the server object that is a subclass of uvicorn.Server (used to close the server) - """ - server_name = server_name or LOCALHOST_NAME - # if port is not specified, search for first available port - if server_port is None: - port = get_first_available_port( - INITIAL_PORT_VALUE, INITIAL_PORT_VALUE + TRY_NUM_PORTS - ) - else: - try: - s = socket.socket() - s.bind((LOCALHOST_NAME, server_port)) - s.close() - except OSError: - raise OSError( - "Port {} is in use. If a gradio.Blocks is running on the port, you can close() it or gradio.close_all().".format( - server_port - ) - ) - port = server_port - - url_host_name = "localhost" if server_name == "0.0.0.0" else server_name - - if ssl_keyfile is not None: - if ssl_certfile is None: - raise ValueError( - "ssl_certfile must be provided if ssl_keyfile is provided." 
- ) - path_to_local_server = "https://{}:{}/".format(url_host_name, port) - else: - path_to_local_server = "http://{}:{}/".format(url_host_name, port) - - app = App.create_app(blocks) - - if blocks.save_to is not None: # Used for selenium tests - blocks.save_to["port"] = port - config = uvicorn.Config( - app=app, - port=port, - host=server_name, - log_level="warning", - ssl_keyfile=ssl_keyfile, - ssl_certfile=ssl_certfile, - ssl_keyfile_password=ssl_keyfile_password, - ws_max_size=1024 * 1024 * 1024, # Setting max websocket size to be 1 GB - ) - server = Server(config=config) - server.run_in_thread() - return server_name, port, path_to_local_server, app, server - - -def setup_tunnel(local_host: str, local_port: int) -> str: - response = requests.get(GRADIO_API_SERVER) - if response and response.status_code == 200: - try: - payload = response.json()[0] - remote_host, remote_port = payload["host"], int(payload["port"]) - tunnel = Tunnel(remote_host, remote_port, local_host, local_port) - address = tunnel.start_tunnel() - return address - except Exception as e: - raise RuntimeError(str(e)) - else: - raise RuntimeError("Could not get share link from Gradio API Server.") - - -def url_ok(url: str) -> bool: - try: - for _ in range(5): - with warnings.catch_warnings(): - warnings.filterwarnings("ignore") - r = requests.head(url, timeout=3, verify=False) - if r.status_code in (200, 401, 302): # 401 or 302 if auth is set - return True - time.sleep(0.500) - except (ConnectionError, requests.exceptions.ConnectionError): - return False - return False diff --git a/spaces/Hoodady/3DFuse/ldm/modules/diffusionmodules/util.py b/spaces/Hoodady/3DFuse/ldm/modules/diffusionmodules/util.py deleted file mode 100644 index 637363dfe34799e70cfdbcd11445212df9d9ca1f..0000000000000000000000000000000000000000 --- a/spaces/Hoodady/3DFuse/ldm/modules/diffusionmodules/util.py +++ /dev/null @@ -1,270 +0,0 @@ -# adopted from -# https://github.com/openai/improved-diffusion/blob/main/improved_diffusion/gaussian_diffusion.py -# and -# https://github.com/lucidrains/denoising-diffusion-pytorch/blob/7706bdfc6f527f58d33f84b7b522e61e6e3164b3/denoising_diffusion_pytorch/denoising_diffusion_pytorch.py -# and -# https://github.com/openai/guided-diffusion/blob/0ba878e517b276c45d1195eb29f6f5f72659a05b/guided_diffusion/nn.py -# -# thanks! 
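# This module gathers the numerical helpers used by the diffusion code below: beta/alpha-bar schedules, DDIM timestep and sigma selection, gradient checkpointing, sinusoidal timestep embeddings, and small layer factories (conv_nd, normalization, zero_module).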
- - -import os -import math -import torch -import torch.nn as nn -import numpy as np -from einops import repeat - -from ldm.util import instantiate_from_config - - -def make_beta_schedule(schedule, n_timestep, linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3): - if schedule == "linear": - betas = ( - torch.linspace(linear_start ** 0.5, linear_end ** 0.5, n_timestep, dtype=torch.float64) ** 2 - ) - - elif schedule == "cosine": - timesteps = ( - torch.arange(n_timestep + 1, dtype=torch.float64) / n_timestep + cosine_s - ) - alphas = timesteps / (1 + cosine_s) * np.pi / 2 - alphas = torch.cos(alphas).pow(2) - alphas = alphas / alphas[0] - betas = 1 - alphas[1:] / alphas[:-1] - betas = np.clip(betas, a_min=0, a_max=0.999) - - elif schedule == "sqrt_linear": - betas = torch.linspace(linear_start, linear_end, n_timestep, dtype=torch.float64) - elif schedule == "sqrt": - betas = torch.linspace(linear_start, linear_end, n_timestep, dtype=torch.float64) ** 0.5 - else: - raise ValueError(f"schedule '{schedule}' unknown.") - return betas.numpy() - - -def make_ddim_timesteps(ddim_discr_method, num_ddim_timesteps, num_ddpm_timesteps, verbose=True): - if ddim_discr_method == 'uniform': - c = num_ddpm_timesteps // num_ddim_timesteps - ddim_timesteps = np.asarray(list(range(0, num_ddpm_timesteps, c))) - elif ddim_discr_method == 'quad': - ddim_timesteps = ((np.linspace(0, np.sqrt(num_ddpm_timesteps * .8), num_ddim_timesteps)) ** 2).astype(int) - else: - raise NotImplementedError(f'There is no ddim discretization method called "{ddim_discr_method}"') - - # assert ddim_timesteps.shape[0] == num_ddim_timesteps - # add one to get the final alpha values right (the ones from first scale to data during sampling) - steps_out = ddim_timesteps + 1 - if verbose: - print(f'Selected timesteps for ddim sampler: {steps_out}') - return steps_out - - -def make_ddim_sampling_parameters(alphacums, ddim_timesteps, eta, verbose=True): - # select alphas for computing the variance schedule - alphas = alphacums[ddim_timesteps] - alphas_prev = np.asarray([alphacums[0]] + alphacums[ddim_timesteps[:-1]].tolist()) - - # according the the formula provided in https://arxiv.org/abs/2010.02502 - sigmas = eta * np.sqrt((1 - alphas_prev) / (1 - alphas) * (1 - alphas / alphas_prev)) - if verbose: - print(f'Selected alphas for ddim sampler: a_t: {alphas}; a_(t-1): {alphas_prev}') - print(f'For the chosen value of eta, which is {eta}, ' - f'this results in the following sigma_t schedule for ddim sampler {sigmas}') - return sigmas, alphas, alphas_prev - - -def betas_for_alpha_bar(num_diffusion_timesteps, alpha_bar, max_beta=0.999): - """ - Create a beta schedule that discretizes the given alpha_t_bar function, - which defines the cumulative product of (1-beta) over time from t = [0,1]. - :param num_diffusion_timesteps: the number of betas to produce. - :param alpha_bar: a lambda that takes an argument t from 0 to 1 and - produces the cumulative product of (1-beta) up to that - part of the diffusion process. - :param max_beta: the maximum beta to use; use values lower than 1 to - prevent singularities. 
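:return: a numpy array of num_diffusion_timesteps betas.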
- """ - betas = [] - for i in range(num_diffusion_timesteps): - t1 = i / num_diffusion_timesteps - t2 = (i + 1) / num_diffusion_timesteps - betas.append(min(1 - alpha_bar(t2) / alpha_bar(t1), max_beta)) - return np.array(betas) - - -def extract_into_tensor(a, t, x_shape): - b, *_ = t.shape - out = a.gather(-1, t) - return out.reshape(b, *((1,) * (len(x_shape) - 1))) - - -def checkpoint(func, inputs, params, flag): - """ - Evaluate a function without caching intermediate activations, allowing for - reduced memory at the expense of extra compute in the backward pass. - :param func: the function to evaluate. - :param inputs: the argument sequence to pass to `func`. - :param params: a sequence of parameters `func` depends on but does not - explicitly take as arguments. - :param flag: if False, disable gradient checkpointing. - """ - if flag: - args = tuple(inputs) + tuple(params) - return CheckpointFunction.apply(func, len(inputs), *args) - else: - return func(*inputs) - - -class CheckpointFunction(torch.autograd.Function): - @staticmethod - def forward(ctx, run_function, length, *args): - ctx.run_function = run_function - ctx.input_tensors = list(args[:length]) - ctx.input_params = list(args[length:]) - ctx.gpu_autocast_kwargs = {"enabled": torch.is_autocast_enabled(), - "dtype": torch.get_autocast_gpu_dtype(), - "cache_enabled": torch.is_autocast_cache_enabled()} - with torch.no_grad(): - output_tensors = ctx.run_function(*ctx.input_tensors) - return output_tensors - - @staticmethod - def backward(ctx, *output_grads): - ctx.input_tensors = [x.detach().requires_grad_(True) for x in ctx.input_tensors] - with torch.enable_grad(), \ - torch.cuda.amp.autocast(**ctx.gpu_autocast_kwargs): - # Fixes a bug where the first op in run_function modifies the - # Tensor storage in place, which is not allowed for detach()'d - # Tensors. - shallow_copies = [x.view_as(x) for x in ctx.input_tensors] - output_tensors = ctx.run_function(*shallow_copies) - input_grads = torch.autograd.grad( - output_tensors, - ctx.input_tensors + ctx.input_params, - output_grads, - allow_unused=True, - ) - del ctx.input_tensors - del ctx.input_params - del output_tensors - return (None, None) + input_grads - - -def timestep_embedding(timesteps, dim, max_period=10000, repeat_only=False): - """ - Create sinusoidal timestep embeddings. - :param timesteps: a 1-D Tensor of N indices, one per batch element. - These may be fractional. - :param dim: the dimension of the output. - :param max_period: controls the minimum frequency of the embeddings. - :return: an [N x dim] Tensor of positional embeddings. - """ - if not repeat_only: - half = dim // 2 - freqs = torch.exp( - -math.log(max_period) * torch.arange(start=0, end=half, dtype=torch.float32) / half - ).to(device=timesteps.device) - args = timesteps[:, None].float() * freqs[None] - embedding = torch.cat([torch.cos(args), torch.sin(args)], dim=-1) - if dim % 2: - embedding = torch.cat([embedding, torch.zeros_like(embedding[:, :1])], dim=-1) - else: - embedding = repeat(timesteps, 'b -> b d', d=dim) - return embedding - - -def zero_module(module): - """ - Zero out the parameters of a module and return it. - """ - for p in module.parameters(): - p.detach().zero_() - return module - - -def scale_module(module, scale): - """ - Scale the parameters of a module and return it. - """ - for p in module.parameters(): - p.detach().mul_(scale) - return module - - -def mean_flat(tensor): - """ - Take the mean over all non-batch dimensions. 
- """ - return tensor.mean(dim=list(range(1, len(tensor.shape)))) - - -def normalization(channels): - """ - Make a standard normalization layer. - :param channels: number of input channels. - :return: an nn.Module for normalization. - """ - return GroupNorm32(32, channels) - - -# PyTorch 1.7 has SiLU, but we support PyTorch 1.5. -class SiLU(nn.Module): - def forward(self, x): - return x * torch.sigmoid(x) - - -class GroupNorm32(nn.GroupNorm): - def forward(self, x): - return super().forward(x.float()).type(x.dtype) - -def conv_nd(dims, *args, **kwargs): - """ - Create a 1D, 2D, or 3D convolution module. - """ - if dims == 1: - return nn.Conv1d(*args, **kwargs) - elif dims == 2: - return nn.Conv2d(*args, **kwargs) - elif dims == 3: - return nn.Conv3d(*args, **kwargs) - raise ValueError(f"unsupported dimensions: {dims}") - - -def linear(*args, **kwargs): - """ - Create a linear module. - """ - return nn.Linear(*args, **kwargs) - - -def avg_pool_nd(dims, *args, **kwargs): - """ - Create a 1D, 2D, or 3D average pooling module. - """ - if dims == 1: - return nn.AvgPool1d(*args, **kwargs) - elif dims == 2: - return nn.AvgPool2d(*args, **kwargs) - elif dims == 3: - return nn.AvgPool3d(*args, **kwargs) - raise ValueError(f"unsupported dimensions: {dims}") - - -class HybridConditioner(nn.Module): - - def __init__(self, c_concat_config, c_crossattn_config): - super().__init__() - self.concat_conditioner = instantiate_from_config(c_concat_config) - self.crossattn_conditioner = instantiate_from_config(c_crossattn_config) - - def forward(self, c_concat, c_crossattn): - c_concat = self.concat_conditioner(c_concat) - c_crossattn = self.crossattn_conditioner(c_crossattn) - return {'c_concat': [c_concat], 'c_crossattn': [c_crossattn]} - - -def noise_like(shape, device, repeat=False): - repeat_noise = lambda: torch.randn((1, *shape[1:]), device=device).repeat(shape[0], *((1,) * (len(shape) - 1))) - noise = lambda: torch.randn(shape, device=device) - return repeat_noise() if repeat else noise() \ No newline at end of file diff --git a/spaces/HuangLab/CELL-E_2-Image_Prediction/taming/modules/discriminator/model.py b/spaces/HuangLab/CELL-E_2-Image_Prediction/taming/modules/discriminator/model.py deleted file mode 100644 index 2aaa3110d0a7bcd05de7eca1e45101589ca5af05..0000000000000000000000000000000000000000 --- a/spaces/HuangLab/CELL-E_2-Image_Prediction/taming/modules/discriminator/model.py +++ /dev/null @@ -1,67 +0,0 @@ -import functools -import torch.nn as nn - - -from taming.modules.util import ActNorm - - -def weights_init(m): - classname = m.__class__.__name__ - if classname.find('Conv') != -1: - nn.init.normal_(m.weight.data, 0.0, 0.02) - elif classname.find('BatchNorm') != -1: - nn.init.normal_(m.weight.data, 1.0, 0.02) - nn.init.constant_(m.bias.data, 0) - - -class NLayerDiscriminator(nn.Module): - """Defines a PatchGAN discriminator as in Pix2Pix - --> see https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/blob/master/models/networks.py - """ - def __init__(self, input_nc=3, ndf=64, n_layers=3, use_actnorm=False): - """Construct a PatchGAN discriminator - Parameters: - input_nc (int) -- the number of channels in input images - ndf (int) -- the number of filters in the last conv layer - n_layers (int) -- the number of conv layers in the discriminator - norm_layer -- normalization layer - """ - super(NLayerDiscriminator, self).__init__() - if not use_actnorm: - norm_layer = nn.BatchNorm2d - else: - norm_layer = ActNorm - if type(norm_layer) == functools.partial: # no need to use bias as 
BatchNorm2d has affine parameters - use_bias = norm_layer.func != nn.BatchNorm2d - else: - use_bias = norm_layer != nn.BatchNorm2d - - kw = 4 - padw = 1 - sequence = [nn.Conv2d(input_nc, ndf, kernel_size=kw, stride=2, padding=padw), nn.LeakyReLU(0.2, True)] - nf_mult = 1 - nf_mult_prev = 1 - for n in range(1, n_layers): # gradually increase the number of filters - nf_mult_prev = nf_mult - nf_mult = min(2 ** n, 8) - sequence += [ - nn.Conv2d(ndf * nf_mult_prev, ndf * nf_mult, kernel_size=kw, stride=2, padding=padw, bias=use_bias), - norm_layer(ndf * nf_mult), - nn.LeakyReLU(0.2, True) - ] - - nf_mult_prev = nf_mult - nf_mult = min(2 ** n_layers, 8) - sequence += [ - nn.Conv2d(ndf * nf_mult_prev, ndf * nf_mult, kernel_size=kw, stride=1, padding=padw, bias=use_bias), - norm_layer(ndf * nf_mult), - nn.LeakyReLU(0.2, True) - ] - - sequence += [ - nn.Conv2d(ndf * nf_mult, 1, kernel_size=kw, stride=1, padding=padw)] # output 1 channel prediction map - self.main = nn.Sequential(*sequence) - - def forward(self, input): - """Standard forward.""" - return self.main(input) diff --git a/spaces/ICML2022/OFA/fairseq/examples/translation_moe/translation_moe_src/logsumexp_moe.py b/spaces/ICML2022/OFA/fairseq/examples/translation_moe/translation_moe_src/logsumexp_moe.py deleted file mode 100644 index fb299daecbc2b15fb66555bbfb8d1d983e481518..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/translation_moe/translation_moe_src/logsumexp_moe.py +++ /dev/null @@ -1,26 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch - - -class LogSumExpMoE(torch.autograd.Function): - """Standard LogSumExp forward pass, but use *posterior* for the backward. - - See `"Mixture Models for Diverse Machine Translation: Tricks of the Trade" - (Shen et al., 2019) `_. 
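The docstring here describes the mixture-of-experts trick: the forward pass is an ordinary `logsumexp`, but the backward pass routes gradients through a separately computed posterior over experts instead of the usual softmax of the log-probabilities. A minimal PyTorch-only sketch of that pattern, with illustrative names and random tensors standing in for real expert log-probabilities:

```python
import torch

class PosteriorLogSumExp(torch.autograd.Function):
    """logsumexp forward; gradient weighted by a supplied posterior instead of softmax(logp)."""

    @staticmethod
    def forward(ctx, logp, posterior, dim):
        ctx.save_for_backward(posterior)
        ctx.dim = dim
        return torch.logsumexp(logp, dim=dim)

    @staticmethod
    def backward(ctx, grad_output):
        (posterior,) = ctx.saved_tensors
        grad_logp = grad_output.unsqueeze(ctx.dim) * posterior  # replaces softmax(logp)
        return grad_logp, None, None  # no gradients for posterior or dim

logp = torch.randn(4, 3, requires_grad=True)          # 4 tokens, 3 experts
posterior = torch.softmax(torch.randn(4, 3), dim=-1)  # fixed responsibilities
PosteriorLogSumExp.apply(logp, posterior, -1).sum().backward()
print(logp.grad.shape)  # torch.Size([4, 3]); each row scaled by its posterior
```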
- """ - - @staticmethod - def forward(ctx, logp, posterior, dim=-1): - ctx.save_for_backward(posterior) - ctx.dim = dim - return torch.logsumexp(logp, dim=dim) - - @staticmethod - def backward(ctx, grad_output): - (posterior,) = ctx.saved_tensors - grad_logp = grad_output.unsqueeze(ctx.dim) * posterior - return grad_logp, None, None diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/benchmark/dummy_dataset.py b/spaces/ICML2022/OFA/fairseq/fairseq/benchmark/dummy_dataset.py deleted file mode 100644 index 2f051754af55966e26850e94c121e0ff439bfd28..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/benchmark/dummy_dataset.py +++ /dev/null @@ -1,36 +0,0 @@ -import numpy as np -from fairseq.data import FairseqDataset - - -class DummyDataset(FairseqDataset): - def __init__(self, batch, num_items, item_size): - super().__init__() - self.batch = batch - self.num_items = num_items - self.item_size = item_size - - def __getitem__(self, index): - return index - - def __len__(self): - return self.num_items - - def collater(self, samples): - return self.batch - - @property - def sizes(self): - return np.array([self.item_size] * self.num_items) - - def num_tokens(self, index): - return self.item_size - - def size(self, index): - return self.item_size - - def ordered_indices(self): - return np.arange(self.num_items) - - @property - def supports_prefetch(self): - return False diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/data/strip_token_dataset.py b/spaces/ICML2022/OFA/fairseq/fairseq/data/strip_token_dataset.py deleted file mode 100644 index cae39ba4d2f8106398eccd7eb0cf5c2194ec0db5..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/data/strip_token_dataset.py +++ /dev/null @@ -1,20 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from . import BaseWrapperDataset - - -class StripTokenDataset(BaseWrapperDataset): - def __init__(self, dataset, id_to_strip): - super().__init__(dataset) - self.id_to_strip = id_to_strip - - def __getitem__(self, index): - item = self.dataset[index] - while len(item) > 0 and item[-1] == self.id_to_strip: - item = item[:-1] - while len(item) > 0 and item[0] == self.id_to_strip: - item = item[1:] - return item diff --git a/spaces/Iceclear/StableSR/StableSR/basicsr/utils/matlab_functions.py b/spaces/Iceclear/StableSR/StableSR/basicsr/utils/matlab_functions.py deleted file mode 100644 index a201f79aaf030cdba710dd97c28af1b29a93ed2a..0000000000000000000000000000000000000000 --- a/spaces/Iceclear/StableSR/StableSR/basicsr/utils/matlab_functions.py +++ /dev/null @@ -1,178 +0,0 @@ -import math -import numpy as np -import torch - - -def cubic(x): - """cubic function used for calculate_weights_indices.""" - absx = torch.abs(x) - absx2 = absx**2 - absx3 = absx**3 - return (1.5 * absx3 - 2.5 * absx2 + 1) * ( - (absx <= 1).type_as(absx)) + (-0.5 * absx3 + 2.5 * absx2 - 4 * absx + 2) * (((absx > 1) * - (absx <= 2)).type_as(absx)) - - -def calculate_weights_indices(in_length, out_length, scale, kernel, kernel_width, antialiasing): - """Calculate weights and indices, used for imresize function. - - Args: - in_length (int): Input length. - out_length (int): Output length. - scale (float): Scale factor. - kernel_width (int): Kernel width. - antialisaing (bool): Whether to apply anti-aliasing when downsampling. 
- """ - - if (scale < 1) and antialiasing: - # Use a modified kernel (larger kernel width) to simultaneously - # interpolate and antialias - kernel_width = kernel_width / scale - - # Output-space coordinates - x = torch.linspace(1, out_length, out_length) - - # Input-space coordinates. Calculate the inverse mapping such that 0.5 - # in output space maps to 0.5 in input space, and 0.5 + scale in output - # space maps to 1.5 in input space. - u = x / scale + 0.5 * (1 - 1 / scale) - - # What is the left-most pixel that can be involved in the computation? - left = torch.floor(u - kernel_width / 2) - - # What is the maximum number of pixels that can be involved in the - # computation? Note: it's OK to use an extra pixel here; if the - # corresponding weights are all zero, it will be eliminated at the end - # of this function. - p = math.ceil(kernel_width) + 2 - - # The indices of the input pixels involved in computing the k-th output - # pixel are in row k of the indices matrix. - indices = left.view(out_length, 1).expand(out_length, p) + torch.linspace(0, p - 1, p).view(1, p).expand( - out_length, p) - - # The weights used to compute the k-th output pixel are in row k of the - # weights matrix. - distance_to_center = u.view(out_length, 1).expand(out_length, p) - indices - - # apply cubic kernel - if (scale < 1) and antialiasing: - weights = scale * cubic(distance_to_center * scale) - else: - weights = cubic(distance_to_center) - - # Normalize the weights matrix so that each row sums to 1. - weights_sum = torch.sum(weights, 1).view(out_length, 1) - weights = weights / weights_sum.expand(out_length, p) - - # If a column in weights is all zero, get rid of it. only consider the - # first and last column. - weights_zero_tmp = torch.sum((weights == 0), 0) - if not math.isclose(weights_zero_tmp[0], 0, rel_tol=1e-6): - indices = indices.narrow(1, 1, p - 2) - weights = weights.narrow(1, 1, p - 2) - if not math.isclose(weights_zero_tmp[-1], 0, rel_tol=1e-6): - indices = indices.narrow(1, 0, p - 2) - weights = weights.narrow(1, 0, p - 2) - weights = weights.contiguous() - indices = indices.contiguous() - sym_len_s = -indices.min() + 1 - sym_len_e = indices.max() - in_length - indices = indices + sym_len_s - 1 - return weights, indices, int(sym_len_s), int(sym_len_e) - - -@torch.no_grad() -def imresize(img, scale, antialiasing=True): - """imresize function same as MATLAB. - - It now only supports bicubic. - The same scale applies for both height and width. - - Args: - img (Tensor | Numpy array): - Tensor: Input image with shape (c, h, w), [0, 1] range. - Numpy: Input image with shape (h, w, c), [0, 1] range. - scale (float): Scale factor. The same scale applies for both height - and width. - antialisaing (bool): Whether to apply anti-aliasing when downsampling. - Default: True. - - Returns: - Tensor: Output image with shape (c, h, w), [0, 1] range, w/o round. 
- """ - squeeze_flag = False - if type(img).__module__ == np.__name__: # numpy type - numpy_type = True - if img.ndim == 2: - img = img[:, :, None] - squeeze_flag = True - img = torch.from_numpy(img.transpose(2, 0, 1)).float() - else: - numpy_type = False - if img.ndim == 2: - img = img.unsqueeze(0) - squeeze_flag = True - - in_c, in_h, in_w = img.size() - out_h, out_w = math.ceil(in_h * scale), math.ceil(in_w * scale) - kernel_width = 4 - kernel = 'cubic' - - # get weights and indices - weights_h, indices_h, sym_len_hs, sym_len_he = calculate_weights_indices(in_h, out_h, scale, kernel, kernel_width, - antialiasing) - weights_w, indices_w, sym_len_ws, sym_len_we = calculate_weights_indices(in_w, out_w, scale, kernel, kernel_width, - antialiasing) - # process H dimension - # symmetric copying - img_aug = torch.FloatTensor(in_c, in_h + sym_len_hs + sym_len_he, in_w) - img_aug.narrow(1, sym_len_hs, in_h).copy_(img) - - sym_patch = img[:, :sym_len_hs, :] - inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(1, inv_idx) - img_aug.narrow(1, 0, sym_len_hs).copy_(sym_patch_inv) - - sym_patch = img[:, -sym_len_he:, :] - inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(1, inv_idx) - img_aug.narrow(1, sym_len_hs + in_h, sym_len_he).copy_(sym_patch_inv) - - out_1 = torch.FloatTensor(in_c, out_h, in_w) - kernel_width = weights_h.size(1) - for i in range(out_h): - idx = int(indices_h[i][0]) - for j in range(in_c): - out_1[j, i, :] = img_aug[j, idx:idx + kernel_width, :].transpose(0, 1).mv(weights_h[i]) - - # process W dimension - # symmetric copying - out_1_aug = torch.FloatTensor(in_c, out_h, in_w + sym_len_ws + sym_len_we) - out_1_aug.narrow(2, sym_len_ws, in_w).copy_(out_1) - - sym_patch = out_1[:, :, :sym_len_ws] - inv_idx = torch.arange(sym_patch.size(2) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(2, inv_idx) - out_1_aug.narrow(2, 0, sym_len_ws).copy_(sym_patch_inv) - - sym_patch = out_1[:, :, -sym_len_we:] - inv_idx = torch.arange(sym_patch.size(2) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(2, inv_idx) - out_1_aug.narrow(2, sym_len_ws + in_w, sym_len_we).copy_(sym_patch_inv) - - out_2 = torch.FloatTensor(in_c, out_h, out_w) - kernel_width = weights_w.size(1) - for i in range(out_w): - idx = int(indices_w[i][0]) - for j in range(in_c): - out_2[j, :, i] = out_1_aug[j, :, idx:idx + kernel_width].mv(weights_w[i]) - - if squeeze_flag: - out_2 = out_2.squeeze(0) - if numpy_type: - out_2 = out_2.numpy() - if not squeeze_flag: - out_2 = out_2.transpose(1, 2, 0) - - return out_2 diff --git a/spaces/Iceclear/StableSR/StableSR/clip/simple_tokenizer.py b/spaces/Iceclear/StableSR/StableSR/clip/simple_tokenizer.py deleted file mode 100644 index 0a66286b7d5019c6e221932a813768038f839c91..0000000000000000000000000000000000000000 --- a/spaces/Iceclear/StableSR/StableSR/clip/simple_tokenizer.py +++ /dev/null @@ -1,132 +0,0 @@ -import gzip -import html -import os -from functools import lru_cache - -import ftfy -import regex as re - - -@lru_cache() -def default_bpe(): - return os.path.join(os.path.dirname(os.path.abspath(__file__)), "bpe_simple_vocab_16e6.txt.gz") - - -@lru_cache() -def bytes_to_unicode(): - """ - Returns list of utf-8 byte and a corresponding list of unicode strings. - The reversible bpe codes work on unicode strings. - This means you need a large # of unicode characters in your vocab if you want to avoid UNKs. 
- When you're at something like a 10B token dataset you end up needing around 5K for decent coverage. - This is a signficant percentage of your normal, say, 32K bpe vocab. - To avoid that, we want lookup tables between utf-8 bytes and unicode strings. - And avoids mapping to whitespace/control characters the bpe code barfs on. - """ - bs = list(range(ord("!"), ord("~")+1))+list(range(ord("¡"), ord("¬")+1))+list(range(ord("®"), ord("ÿ")+1)) - cs = bs[:] - n = 0 - for b in range(2**8): - if b not in bs: - bs.append(b) - cs.append(2**8+n) - n += 1 - cs = [chr(n) for n in cs] - return dict(zip(bs, cs)) - - -def get_pairs(word): - """Return set of symbol pairs in a word. - Word is represented as tuple of symbols (symbols being variable-length strings). - """ - pairs = set() - prev_char = word[0] - for char in word[1:]: - pairs.add((prev_char, char)) - prev_char = char - return pairs - - -def basic_clean(text): - text = ftfy.fix_text(text) - text = html.unescape(html.unescape(text)) - return text.strip() - - -def whitespace_clean(text): - text = re.sub(r'\s+', ' ', text) - text = text.strip() - return text - - -class SimpleTokenizer(object): - def __init__(self, bpe_path: str = default_bpe()): - self.byte_encoder = bytes_to_unicode() - self.byte_decoder = {v: k for k, v in self.byte_encoder.items()} - merges = gzip.open(bpe_path).read().decode("utf-8").split('\n') - merges = merges[1:49152-256-2+1] - merges = [tuple(merge.split()) for merge in merges] - vocab = list(bytes_to_unicode().values()) - vocab = vocab + [v+'' for v in vocab] - for merge in merges: - vocab.append(''.join(merge)) - vocab.extend(['<|startoftext|>', '<|endoftext|>']) - self.encoder = dict(zip(vocab, range(len(vocab)))) - self.decoder = {v: k for k, v in self.encoder.items()} - self.bpe_ranks = dict(zip(merges, range(len(merges)))) - self.cache = {'<|startoftext|>': '<|startoftext|>', '<|endoftext|>': '<|endoftext|>'} - self.pat = re.compile(r"""<\|startoftext\|>|<\|endoftext\|>|'s|'t|'re|'ve|'m|'ll|'d|[\p{L}]+|[\p{N}]|[^\s\p{L}\p{N}]+""", re.IGNORECASE) - - def bpe(self, token): - if token in self.cache: - return self.cache[token] - word = tuple(token[:-1]) + ( token[-1] + '',) - pairs = get_pairs(word) - - if not pairs: - return token+'' - - while True: - bigram = min(pairs, key = lambda pair: self.bpe_ranks.get(pair, float('inf'))) - if bigram not in self.bpe_ranks: - break - first, second = bigram - new_word = [] - i = 0 - while i < len(word): - try: - j = word.index(first, i) - new_word.extend(word[i:j]) - i = j - except: - new_word.extend(word[i:]) - break - - if word[i] == first and i < len(word)-1 and word[i+1] == second: - new_word.append(first+second) - i += 2 - else: - new_word.append(word[i]) - i += 1 - new_word = tuple(new_word) - word = new_word - if len(word) == 1: - break - else: - pairs = get_pairs(word) - word = ' '.join(word) - self.cache[token] = word - return word - - def encode(self, text): - bpe_tokens = [] - text = whitespace_clean(basic_clean(text)).lower() - for token in re.findall(self.pat, text): - token = ''.join(self.byte_encoder[b] for b in token.encode('utf-8')) - bpe_tokens.extend(self.encoder[bpe_token] for bpe_token in self.bpe(token).split(' ')) - return bpe_tokens - - def decode(self, tokens): - text = ''.join([self.decoder[token] for token in tokens]) - text = bytearray([self.byte_decoder[c] for c in text]).decode('utf-8', errors="replace").replace('', ' ') - return text diff --git a/spaces/Ikaros521/moe-tts/text/ngu_dialect.py b/spaces/Ikaros521/moe-tts/text/ngu_dialect.py deleted file 
mode 100644 index 69d0ce6fe5a989843ee059a71ccab793f20f9176..0000000000000000000000000000000000000000 --- a/spaces/Ikaros521/moe-tts/text/ngu_dialect.py +++ /dev/null @@ -1,30 +0,0 @@ -import re -import opencc - - -dialects = {'SZ': 'suzhou', 'WX': 'wuxi', 'CZ': 'changzhou', 'HZ': 'hangzhou', - 'SX': 'shaoxing', 'NB': 'ningbo', 'JJ': 'jingjiang', 'YX': 'yixing', - 'JD': 'jiading', 'ZR': 'zhenru', 'PH': 'pinghu', 'TX': 'tongxiang', - 'JS': 'jiashan', 'HN': 'xiashi', 'LP': 'linping', 'XS': 'xiaoshan', - 'FY': 'fuyang', 'RA': 'ruao', 'CX': 'cixi', 'SM': 'sanmen', - 'TT': 'tiantai', 'WZ': 'wenzhou', 'SC': 'suichang', 'YB': 'youbu'} - -converters = {} - -for dialect in dialects.values(): - try: - converters[dialect] = opencc.OpenCC("chinese_dialect_lexicons/"+dialect) - except: - pass - - -def ngu_dialect_to_ipa(text, dialect): - dialect = dialects[dialect] - text = converters[dialect].convert(text).replace('-','').replace('$',' ') - text = re.sub(r'[、;:]', ',', text) - text = re.sub(r'\s*,\s*', ', ', text) - text = re.sub(r'\s*。\s*', '. ', text) - text = re.sub(r'\s*?\s*', '? ', text) - text = re.sub(r'\s*!\s*', '! ', text) - text = re.sub(r'\s*$', '', text) - return text diff --git a/spaces/Ilean/pdfGPTv2/README.md b/spaces/Ilean/pdfGPTv2/README.md deleted file mode 100644 index e5365d776238cb90c79278058b8c622388b22fa1..0000000000000000000000000000000000000000 --- a/spaces/Ilean/pdfGPTv2/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: PdfGPT -emoji: 🏢 -colorFrom: yellow -colorTo: purple -sdk: gradio -sdk_version: 3.21.0 -app_file: app.py -pinned: false -license: cc-by-4.0 -duplicated_from: Ilean/pdfGPT ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/IvaElen/find_my_pic/get_similiarty.py b/spaces/IvaElen/find_my_pic/get_similiarty.py deleted file mode 100644 index c7a8824b91a484ed0c1049152bb74fc62ccf1518..0000000000000000000000000000000000000000 --- a/spaces/IvaElen/find_my_pic/get_similiarty.py +++ /dev/null @@ -1,38 +0,0 @@ -import torchvision.datasets as datasets -import numpy as np -import clip -import torch -def get_similiarity(prompt, model_resnet, model_vit, top_k=3): - device = "cuda" if torch.cuda.is_available() else "cpu" - data_dir = 'sample/sample/data' - image_arr = np.loadtxt("embeddings.csv", delimiter=",") - raw_dataset = datasets.ImageFolder(data_dir) - # получите список всех изображений - # create transformer-readable tokens - inputs = clip.tokenize(prompt).to(device) - text_emb = model_resnet.encode_text(inputs) - text_emb = text_emb.cpu().detach().numpy() - scores = np.dot(text_emb, image_arr.T) - # score_vit - # get the top k indices for most similar vecs - idx = np.argsort(-scores[0])[:top_k] - image_files = [] - for i in idx: - image_files.append(raw_dataset.imgs[i][0]) - - image_arr_vit = np.loadtxt('embeddings_vit.csv', delimiter=",") - inputs_vit = clip.tokenize(prompt).to(device) - text_emb_vit = model_vit.encode_text(inputs_vit) - text_emb_vit = text_emb_vit.cpu().detach().numpy() - scores_vit = np.dot(text_emb_vit, image_arr_vit.T) - idx_vit = np.argsort(-scores_vit[0])[:top_k] - image_files_vit = [] - for i in idx_vit: - image_files_vit.append(raw_dataset.imgs[i][0]) - - return image_files, image_files_vit -# def get_text_enc(input_text: str): -# text = clip.tokenize([input_text]).to(device) -# text_features = model.encode_text(text).cpu() -# text_features = text_features.cpu().detach().numpy() -# return text_features \ No newline at end of file diff --git 
a/spaces/Jackflack09/diffuse-custom/diffusers/pipelines/stable_diffusion/pipeline_cycle_diffusion.py b/spaces/Jackflack09/diffuse-custom/diffusers/pipelines/stable_diffusion/pipeline_cycle_diffusion.py deleted file mode 100644 index a688a52a7a6ec65a5774dd6c6fe1ce1e9d66acab..0000000000000000000000000000000000000000 --- a/spaces/Jackflack09/diffuse-custom/diffusers/pipelines/stable_diffusion/pipeline_cycle_diffusion.py +++ /dev/null @@ -1,687 +0,0 @@ -# Copyright 2022 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import inspect -from typing import Callable, List, Optional, Union - -import numpy as np -import torch - -import PIL -from diffusers.utils import is_accelerate_available -from packaging import version -from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer - -from ...configuration_utils import FrozenDict -from ...models import AutoencoderKL, UNet2DConditionModel -from ...pipeline_utils import DiffusionPipeline -from ...schedulers import DDIMScheduler -from ...utils import PIL_INTERPOLATION, deprecate, logging -from . import StableDiffusionPipelineOutput -from .safety_checker import StableDiffusionSafetyChecker - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - - -def preprocess(image): - w, h = image.size - w, h = map(lambda x: x - x % 32, (w, h)) # resize to integer multiple of 32 - image = image.resize((w, h), resample=PIL_INTERPOLATION["lanczos"]) - image = np.array(image).astype(np.float32) / 255.0 - image = image[None].transpose(0, 3, 1, 2) - image = torch.from_numpy(image) - return 2.0 * image - 1.0 - - -def posterior_sample(scheduler, latents, timestep, clean_latents, generator, eta): - # 1. get previous step value (=t-1) - prev_timestep = timestep - scheduler.config.num_train_timesteps // scheduler.num_inference_steps - - if prev_timestep <= 0: - return clean_latents - - # 2. compute alphas, betas - alpha_prod_t = scheduler.alphas_cumprod[timestep] - alpha_prod_t_prev = ( - scheduler.alphas_cumprod[prev_timestep] if prev_timestep >= 0 else scheduler.final_alpha_cumprod - ) - - variance = scheduler._get_variance(timestep, prev_timestep) - std_dev_t = eta * variance ** (0.5) - - # direction pointing to x_t - e_t = (latents - alpha_prod_t ** (0.5) * clean_latents) / (1 - alpha_prod_t) ** (0.5) - dir_xt = (1.0 - alpha_prod_t_prev - std_dev_t**2) ** (0.5) * e_t - noise = std_dev_t * torch.randn( - clean_latents.shape, dtype=clean_latents.dtype, device=clean_latents.device, generator=generator - ) - prev_latents = alpha_prod_t_prev ** (0.5) * clean_latents + dir_xt + noise - - return prev_latents - - -def compute_noise(scheduler, prev_latents, latents, timestep, noise_pred, eta): - # 1. get previous step value (=t-1) - prev_timestep = timestep - scheduler.config.num_train_timesteps // scheduler.num_inference_steps - - # 2. 
compute alphas, betas - alpha_prod_t = scheduler.alphas_cumprod[timestep] - alpha_prod_t_prev = ( - scheduler.alphas_cumprod[prev_timestep] if prev_timestep >= 0 else scheduler.final_alpha_cumprod - ) - - beta_prod_t = 1 - alpha_prod_t - - # 3. compute predicted original sample from predicted noise also called - # "predicted x_0" of formula (12) from https://arxiv.org/pdf/2010.02502.pdf - pred_original_sample = (latents - beta_prod_t ** (0.5) * noise_pred) / alpha_prod_t ** (0.5) - - # 4. Clip "predicted x_0" - if scheduler.config.clip_sample: - pred_original_sample = torch.clamp(pred_original_sample, -1, 1) - - # 5. compute variance: "sigma_t(η)" -> see formula (16) - # σ_t = sqrt((1 − α_t−1)/(1 − α_t)) * sqrt(1 − α_t/α_t−1) - variance = scheduler._get_variance(timestep, prev_timestep) - std_dev_t = eta * variance ** (0.5) - - # 6. compute "direction pointing to x_t" of formula (12) from https://arxiv.org/pdf/2010.02502.pdf - pred_sample_direction = (1 - alpha_prod_t_prev - std_dev_t**2) ** (0.5) * noise_pred - - noise = (prev_latents - (alpha_prod_t_prev ** (0.5) * pred_original_sample + pred_sample_direction)) / ( - variance ** (0.5) * eta - ) - return noise - - -class CycleDiffusionPipeline(DiffusionPipeline): - r""" - Pipeline for text-guided image to image generation using Stable Diffusion. - - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the - library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) - - Args: - vae ([`AutoencoderKL`]): - Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. - text_encoder ([`CLIPTextModel`]): - Frozen text-encoder. Stable Diffusion uses the text portion of - [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically - the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant. - tokenizer (`CLIPTokenizer`): - Tokenizer of class - [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). - unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents. - scheduler ([`SchedulerMixin`]): - A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of - [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. - safety_checker ([`StableDiffusionSafetyChecker`]): - Classification module that estimates whether generated images could be considered offensive or harmful. - Please, refer to the [model card](https://huggingface.co/CompVis/stable-diffusion-v1-4) for details. - feature_extractor ([`CLIPFeatureExtractor`]): - Model that extracts features from generated images to be used as inputs for the `safety_checker`. - """ - _optional_components = ["safety_checker", "feature_extractor"] - - def __init__( - self, - vae: AutoencoderKL, - text_encoder: CLIPTextModel, - tokenizer: CLIPTokenizer, - unet: UNet2DConditionModel, - scheduler: DDIMScheduler, - safety_checker: StableDiffusionSafetyChecker, - feature_extractor: CLIPFeatureExtractor, - requires_safety_checker: bool = True, - ): - super().__init__() - - if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1: - deprecation_message = ( - f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`" - f" should be set to 1 instead of {scheduler.config.steps_offset}. 
Please make sure " - "to update the config accordingly as leaving `steps_offset` might led to incorrect results" - " in future versions. If you have downloaded this checkpoint from the Hugging Face Hub," - " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`" - " file" - ) - deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False) - new_config = dict(scheduler.config) - new_config["steps_offset"] = 1 - scheduler._internal_dict = FrozenDict(new_config) - - if safety_checker is None and requires_safety_checker: - logger.warning( - f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure" - " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered" - " results in services or applications open to the public. Both the diffusers team and Hugging Face" - " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling" - " it only for use-cases that involve analyzing network behavior or auditing its results. For more" - " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ." - ) - - if safety_checker is not None and feature_extractor is None: - raise ValueError( - "Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety" - " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead." - ) - is_unet_version_less_0_9_0 = hasattr(unet.config, "_diffusers_version") and version.parse( - version.parse(unet.config._diffusers_version).base_version - ) < version.parse("0.9.0.dev0") - is_unet_sample_size_less_64 = hasattr(unet.config, "sample_size") and unet.config.sample_size < 64 - if is_unet_version_less_0_9_0 and is_unet_sample_size_less_64: - deprecation_message = ( - "The configuration file of the unet has set the default `sample_size` to smaller than" - " 64 which seems highly unlikely .If you're checkpoint is a fine-tuned version of any of the" - " following: \n- CompVis/stable-diffusion-v1-4 \n- CompVis/stable-diffusion-v1-3 \n-" - " CompVis/stable-diffusion-v1-2 \n- CompVis/stable-diffusion-v1-1 \n- runwayml/stable-diffusion-v1-5" - " \n- runwayml/stable-diffusion-inpainting \n you should change 'sample_size' to 64 in the" - " configuration file. Please make sure to update the config accordingly as leaving `sample_size=32`" - " in the config might lead to incorrect results in future versions. If you have downloaded this" - " checkpoint from the Hugging Face Hub, it would be very nice if you could open a Pull request for" - " the `unet/config.json` file" - ) - deprecate("sample_size<64", "1.0.0", deprecation_message, standard_warn=False) - new_config = dict(unet.config) - new_config["sample_size"] = 64 - unet._internal_dict = FrozenDict(new_config) - - self.register_modules( - vae=vae, - text_encoder=text_encoder, - tokenizer=tokenizer, - unet=unet, - scheduler=scheduler, - safety_checker=safety_checker, - feature_extractor=feature_extractor, - ) - self.register_to_config(requires_safety_checker=requires_safety_checker) - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_attention_slicing - def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"): - r""" - Enable sliced attention computation. 
- - When this option is enabled, the attention module will split the input tensor in slices, to compute attention - in several steps. This is useful to save some memory in exchange for a small speed decrease. - - Args: - slice_size (`str` or `int`, *optional*, defaults to `"auto"`): - When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If - a number is provided, uses as many slices as `attention_head_dim // slice_size`. In this case, - `attention_head_dim` must be a multiple of `slice_size`. - """ - if slice_size == "auto": - if isinstance(self.unet.config.attention_head_dim, int): - # half the attention head size is usually a good trade-off between - # speed and memory - slice_size = self.unet.config.attention_head_dim // 2 - else: - # if `attention_head_dim` is a list, take the smallest head size - slice_size = min(self.unet.config.attention_head_dim) - - self.unet.set_attention_slice(slice_size) - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.disable_attention_slicing - def disable_attention_slicing(self): - r""" - Disable sliced attention computation. If `enable_attention_slicing` was previously invoked, this method will go - back to computing attention in one step. - """ - # set slice_size = `None` to disable `attention slicing` - self.enable_attention_slicing(None) - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_sequential_cpu_offload - def enable_sequential_cpu_offload(self, gpu_id=0): - r""" - Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, - text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a - `torch.device('meta') and loaded to GPU only when their specific submodule has its `forward` method called. - """ - if is_accelerate_available(): - from accelerate import cpu_offload - else: - raise ImportError("Please install accelerate via `pip install accelerate`") - - device = torch.device(f"cuda:{gpu_id}") - - for cpu_offloaded_model in [self.unet, self.text_encoder, self.vae]: - if cpu_offloaded_model is not None: - cpu_offload(cpu_offloaded_model, device) - - if self.safety_checker is not None: - # TODO(Patrick) - there is currently a bug with cpu offload of nn.Parameter in accelerate - # fix by only offloading self.safety_checker for now - cpu_offload(self.safety_checker.vision_model, device) - - @property - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline._execution_device - def _execution_device(self): - r""" - Returns the device on which the pipeline's models will be executed. After calling - `pipeline.enable_sequential_cpu_offload()` the execution device can only be inferred from Accelerate's module - hooks. - """ - if self.device != torch.device("meta") or not hasattr(self.unet, "_hf_hook"): - return self.device - for module in self.unet.modules(): - if ( - hasattr(module, "_hf_hook") - and hasattr(module._hf_hook, "execution_device") - and module._hf_hook.execution_device is not None - ): - return torch.device(module._hf_hook.execution_device) - return self.device - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline._encode_prompt - def _encode_prompt(self, prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt): - r""" - Encodes the prompt into text encoder hidden states. 
- - Args: - prompt (`str` or `list(int)`): - prompt to be encoded - device: (`torch.device`): - torch device - num_images_per_prompt (`int`): - number of images that should be generated per prompt - do_classifier_free_guidance (`bool`): - whether to use classifier free guidance or not - negative_prompt (`str` or `List[str]`): - The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored - if `guidance_scale` is less than `1`). - """ - batch_size = len(prompt) if isinstance(prompt, list) else 1 - - text_inputs = self.tokenizer( - prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - truncation=True, - return_tensors="pt", - ) - text_input_ids = text_inputs.input_ids - untruncated_ids = self.tokenizer(prompt, padding="max_length", return_tensors="pt").input_ids - - if not torch.equal(text_input_ids, untruncated_ids): - removed_text = self.tokenizer.batch_decode(untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1]) - logger.warning( - "The following part of your input was truncated because CLIP can only handle sequences up to" - f" {self.tokenizer.model_max_length} tokens: {removed_text}" - ) - - if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: - attention_mask = text_inputs.attention_mask.to(device) - else: - attention_mask = None - - text_embeddings = self.text_encoder( - text_input_ids.to(device), - attention_mask=attention_mask, - ) - text_embeddings = text_embeddings[0] - - # duplicate text embeddings for each generation per prompt, using mps friendly method - bs_embed, seq_len, _ = text_embeddings.shape - text_embeddings = text_embeddings.repeat(1, num_images_per_prompt, 1) - text_embeddings = text_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1) - - # get unconditional embeddings for classifier free guidance - if do_classifier_free_guidance: - uncond_tokens: List[str] - if negative_prompt is None: - uncond_tokens = [""] * batch_size - elif type(prompt) is not type(negative_prompt): - raise TypeError( - f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" - f" {type(prompt)}." - ) - elif isinstance(negative_prompt, str): - uncond_tokens = [negative_prompt] - elif batch_size != len(negative_prompt): - raise ValueError( - f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" - f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" - " the batch size of `prompt`." - ) - else: - uncond_tokens = negative_prompt - - max_length = text_input_ids.shape[-1] - uncond_input = self.tokenizer( - uncond_tokens, - padding="max_length", - max_length=max_length, - truncation=True, - return_tensors="pt", - ) - - if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: - attention_mask = uncond_input.attention_mask.to(device) - else: - attention_mask = None - - uncond_embeddings = self.text_encoder( - uncond_input.input_ids.to(device), - attention_mask=attention_mask, - ) - uncond_embeddings = uncond_embeddings[0] - - # duplicate unconditional embeddings for each generation per prompt, using mps friendly method - seq_len = uncond_embeddings.shape[1] - uncond_embeddings = uncond_embeddings.repeat(1, num_images_per_prompt, 1) - uncond_embeddings = uncond_embeddings.view(batch_size * num_images_per_prompt, seq_len, -1) - - # For classifier free guidance, we need to do two forward passes. 
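The comment here (continued just below) refers to the standard classifier-free-guidance batching trick: the unconditional and text embeddings are stacked into one batch so the UNet runs once, and the two halves are recombined afterwards as `uncond + scale * (text - uncond)`. A minimal PyTorch-only sketch of that recombination step, with a random tensor standing in for the UNet output:

```python
import torch

def guided_noise(noise_pred_batched: torch.Tensor, guidance_scale: float) -> torch.Tensor:
    # Batch layout [uncond; text], matching torch.cat([uncond_embeddings, text_embeddings]).
    noise_uncond, noise_text = noise_pred_batched.chunk(2, dim=0)
    return noise_uncond + guidance_scale * (noise_text - noise_uncond)

fake_unet_output = torch.randn(2 * 4, 4, 64, 64)  # doubled batch of 4 latents
print(guided_noise(fake_unet_output, guidance_scale=7.5).shape)  # torch.Size([4, 4, 64, 64])
```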
- # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - text_embeddings = torch.cat([uncond_embeddings, text_embeddings]) - - return text_embeddings - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_img2img.StableDiffusionImg2ImgPipeline.check_inputs - def check_inputs(self, prompt, strength, callback_steps): - if not isinstance(prompt, str) and not isinstance(prompt, list): - raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") - - if strength < 0 or strength > 1: - raise ValueError(f"The value of strength should in [1.0, 1.0] but is {strength}") - - if (callback_steps is None) or ( - callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0) - ): - raise ValueError( - f"`callback_steps` has to be a positive integer but is {callback_steps} of type" - f" {type(callback_steps)}." - ) - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs - def prepare_extra_step_kwargs(self, generator, eta): - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. - # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 - # and should be between [0, 1] - - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - # check if the scheduler accepts generator - accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys()) - if accepts_generator: - extra_step_kwargs["generator"] = generator - return extra_step_kwargs - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.run_safety_checker - def run_safety_checker(self, image, device, dtype): - if self.safety_checker is not None: - safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(device) - image, has_nsfw_concept = self.safety_checker( - images=image, clip_input=safety_checker_input.pixel_values.to(dtype) - ) - else: - has_nsfw_concept = None - return image, has_nsfw_concept - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.decode_latents - def decode_latents(self, latents): - latents = 1 / 0.18215 * latents - image = self.vae.decode(latents).sample - image = (image / 2 + 0.5).clamp(0, 1) - # we always cast to float32 as this does not cause significant overhead and is compatible with bfloa16 - image = image.cpu().permute(0, 2, 3, 1).float().numpy() - return image - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_img2img.StableDiffusionImg2ImgPipeline.get_timesteps - def get_timesteps(self, num_inference_steps, strength, device): - # get the original timestep using init_timestep - offset = self.scheduler.config.get("steps_offset", 0) - init_timestep = int(num_inference_steps * strength) + offset - init_timestep = min(init_timestep, num_inference_steps) - - t_start = max(num_inference_steps - init_timestep + offset, 0) - timesteps = self.scheduler.timesteps[t_start:] - - return timesteps, num_inference_steps - t_start - - def prepare_latents(self, image, timestep, batch_size, num_images_per_prompt, dtype, device, generator=None): - image = image.to(device=device, 
dtype=dtype) - init_latent_dist = self.vae.encode(image).latent_dist - init_latents = init_latent_dist.sample(generator=generator) - init_latents = 0.18215 * init_latents - - if batch_size > init_latents.shape[0] and batch_size % init_latents.shape[0] == 0: - # expand init_latents for batch_size - deprecation_message = ( - f"You have passed {batch_size} text prompts (`prompt`), but only {init_latents.shape[0]} initial" - " images (`image`). Initial images are now duplicating to match the number of text prompts. Note" - " that this behavior is deprecated and will be removed in a version 1.0.0. Please make sure to update" - " your script to pass as many initial images as text prompts to suppress this warning." - ) - deprecate("len(prompt) != len(image)", "1.0.0", deprecation_message, standard_warn=False) - additional_image_per_prompt = batch_size // init_latents.shape[0] - init_latents = torch.cat([init_latents] * additional_image_per_prompt * num_images_per_prompt, dim=0) - elif batch_size > init_latents.shape[0] and batch_size % init_latents.shape[0] != 0: - raise ValueError( - f"Cannot duplicate `image` of batch size {init_latents.shape[0]} to {batch_size} text prompts." - ) - else: - init_latents = torch.cat([init_latents] * num_images_per_prompt, dim=0) - - # add noise to latents using the timestep - noise = torch.randn(init_latents.shape, generator=generator, device=device, dtype=dtype) - - # get latents - clean_latents = init_latents - init_latents = self.scheduler.add_noise(init_latents, noise, timestep) - latents = init_latents - - return latents, clean_latents - - @torch.no_grad() - def __call__( - self, - prompt: Union[str, List[str]], - source_prompt: Union[str, List[str]], - image: Union[torch.FloatTensor, PIL.Image.Image], - strength: float = 0.8, - num_inference_steps: Optional[int] = 50, - guidance_scale: Optional[float] = 7.5, - source_guidance_scale: Optional[float] = 1, - num_images_per_prompt: Optional[int] = 1, - eta: Optional[float] = 0.1, - generator: Optional[torch.Generator] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None, - callback_steps: Optional[int] = 1, - **kwargs, - ): - r""" - Function invoked when calling the pipeline for generation. - - Args: - prompt (`str` or `List[str]`): - The prompt or prompts to guide the image generation. - image (`torch.FloatTensor` or `PIL.Image.Image`): - `Image`, or tensor representing an image batch, that will be used as the starting point for the - process. - strength (`float`, *optional*, defaults to 0.8): - Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1. `image` - will be used as a starting point, adding more noise to it the larger the `strength`. The number of - denoising steps depends on the amount of noise initially added. When `strength` is 1, added noise will - be maximum and the denoising process will run for the full number of iterations specified in - `num_inference_steps`. A value of 1, therefore, essentially ignores `image`. - num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. This parameter will be modulated by `strength`. - guidance_scale (`float`, *optional*, defaults to 7.5): - Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). - `guidance_scale` is defined as `w` of equation 2. 
of [Imagen - Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > - 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, - usually at the expense of lower image quality. - source_guidance_scale (`float`, *optional*, defaults to 1): - Guidance scale for the source prompt. This is useful to control the amount of influence the source - prompt for encoding. - num_images_per_prompt (`int`, *optional*, defaults to 1): - The number of images to generate per prompt. - eta (`float`, *optional*, defaults to 0.1): - Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to - [`schedulers.DDIMScheduler`], will be ignored for others. - generator (`torch.Generator`, *optional*): - A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation - deterministic. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generate image. Choose between - [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a - plain tuple. - callback (`Callable`, *optional*): - A function that will be called every `callback_steps` steps during inference. The function will be - called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`. - callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function will be called. If not specified, the callback will be - called at every step. - - Returns: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple. - When returning a tuple, the first element is a list with the generated images, and the second element is a - list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" - (nsfw) content, according to the `safety_checker`. - """ - message = "Please use `image` instead of `init_image`." - init_image = deprecate("init_image", "0.12.0", message, take_from=kwargs) - image = init_image or image - - # 1. Check inputs - self.check_inputs(prompt, strength, callback_steps) - - # 2. Define call parameters - batch_size = 1 if isinstance(prompt, str) else len(prompt) - device = self._execution_device - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. - do_classifier_free_guidance = guidance_scale > 1.0 - - # 3. Encode input prompt - text_embeddings = self._encode_prompt(prompt, device, num_images_per_prompt, do_classifier_free_guidance, None) - source_text_embeddings = self._encode_prompt( - source_prompt, device, num_images_per_prompt, do_classifier_free_guidance, None - ) - - # 4. Preprocess image - if isinstance(image, PIL.Image.Image): - image = preprocess(image) - - # 5. Prepare timesteps - self.scheduler.set_timesteps(num_inference_steps, device=device) - timesteps, num_inference_steps = self.get_timesteps(num_inference_steps, strength, device) - latent_timestep = timesteps[:1].repeat(batch_size * num_images_per_prompt) - - # 6. 
Prepare latent variables - latents, clean_latents = self.prepare_latents( - image, latent_timestep, batch_size, num_images_per_prompt, text_embeddings.dtype, device, generator - ) - source_latents = latents - - # 7. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline - extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) - generator = extra_step_kwargs.pop("generator", None) - - # 8. Denoising loop - num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order - with self.progress_bar(total=num_inference_steps) as progress_bar: - for i, t in enumerate(timesteps): - # expand the latents if we are doing classifier free guidance - latent_model_input = torch.cat([latents] * 2) - source_latent_model_input = torch.cat([source_latents] * 2) - latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) - source_latent_model_input = self.scheduler.scale_model_input(source_latent_model_input, t) - - # predict the noise residual - concat_latent_model_input = torch.stack( - [ - source_latent_model_input[0], - latent_model_input[0], - source_latent_model_input[1], - latent_model_input[1], - ], - dim=0, - ) - concat_text_embeddings = torch.stack( - [ - source_text_embeddings[0], - text_embeddings[0], - source_text_embeddings[1], - text_embeddings[1], - ], - dim=0, - ) - concat_noise_pred = self.unet( - concat_latent_model_input, t, encoder_hidden_states=concat_text_embeddings - ).sample - - # perform guidance - ( - source_noise_pred_uncond, - noise_pred_uncond, - source_noise_pred_text, - noise_pred_text, - ) = concat_noise_pred.chunk(4, dim=0) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - source_noise_pred = source_noise_pred_uncond + source_guidance_scale * ( - source_noise_pred_text - source_noise_pred_uncond - ) - - # Sample source_latents from the posterior distribution. - prev_source_latents = posterior_sample( - self.scheduler, source_latents, t, clean_latents, generator=generator, **extra_step_kwargs - ) - # Compute noise. - noise = compute_noise( - self.scheduler, prev_source_latents, source_latents, t, source_noise_pred, **extra_step_kwargs - ) - source_latents = prev_source_latents - - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step( - noise_pred, t, latents, variance_noise=noise, **extra_step_kwargs - ).prev_sample - - # call the callback, if provided - if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0): - progress_bar.update() - if callback is not None and i % callback_steps == 0: - callback(i, t, latents) - - # 9. Post-processing - image = self.decode_latents(latents) - - # 10. Run safety checker - image, has_nsfw_concept = self.run_safety_checker(image, device, text_embeddings.dtype) - - # 11. 
Convert to PIL - if output_type == "pil": - image = self.numpy_to_pil(image) - - if not return_dict: - return (image, has_nsfw_concept) - - return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept) diff --git a/spaces/Jeff2323/ai-comic-factory/src/lib/cleanJson.ts b/spaces/Jeff2323/ai-comic-factory/src/lib/cleanJson.ts deleted file mode 100644 index 8e914d329008deae4e14679597a76ca352b64925..0000000000000000000000000000000000000000 --- a/spaces/Jeff2323/ai-comic-factory/src/lib/cleanJson.ts +++ /dev/null @@ -1,19 +0,0 @@ -import { dirtyLLMResponseCleaner } from "./dirtyLLMResponseCleaner" - -export function cleanJson(input: string) { - - if (input.includes('```')) { - input = input.split('```')[0] - } - let tmp = dirtyLLMResponseCleaner(input) - - // we only keep what's after the first [ - tmp = `[${tmp.split("[").pop() || ""}` - - // and before the first ] - tmp = `${tmp.split("]").shift() || ""}]` - - tmp = dirtyLLMResponseCleaner(tmp) - - return tmp -} \ No newline at end of file diff --git a/spaces/JohnC26/AI.Dashboard.Gradio.Streamlit.HTML5/style.css b/spaces/JohnC26/AI.Dashboard.Gradio.Streamlit.HTML5/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/JohnC26/AI.Dashboard.Gradio.Streamlit.HTML5/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/Kamtera/Persian-tts-CoquiTTS/app.py b/spaces/Kamtera/Persian-tts-CoquiTTS/app.py deleted file mode 100644 index aeed4b0243a03b0b4ddc80b2549deb1ea8937d39..0000000000000000000000000000000000000000 --- a/spaces/Kamtera/Persian-tts-CoquiTTS/app.py +++ /dev/null @@ -1,115 +0,0 @@ - -import tempfile ,os -from TTS.config import load_config -import gradio as gr - -from TTS.utils.manage import ModelManager -from TTS.utils.synthesizer import Synthesizer - -MODEL_NAMES=[ - "vits male1 (best)", - "vits female (best)", - "vits-male", - "vits female1", - "glowtts-male", - "glowtts-female", - "female tacotron2" -] -MAX_TXT_LEN = 800 -model_path = os.getcwd() + "/best_model.pth" -config_path = os.getcwd() + "/config.json" - - - -from TTS.utils.download import download_url -modelInfo=[ - ["vits-male","best_model_65633.pth","config-0.json","https://huggingface.co/Kamtera/persian-tts-male-vits/resolve/main/"], - ["vits female (best)","checkpoint_48000.pth","config-2.json","https://huggingface.co/Kamtera/persian-tts-female-vits/resolve/main/"], - ["glowtts-male","best_model_77797.pth","config-1.json","https://huggingface.co/Kamtera/persian-tts-male-glow_tts/resolve/main/"], - ["glowtts-female","best_model.pth","config.json","https://huggingface.co/Kamtera/persian-tts-female-glow_tts/resolve/main/"], - ["vits male1 (best)","checkpoint_88000.pth","config.json","https://huggingface.co/Kamtera/persian-tts-male1-vits/resolve/main/"], - ["vits female1","checkpoint_50000.pth","config.json","https://huggingface.co/Kamtera/persian-tts-female1-vits/resolve/main/"], - ["female tacotron2","checkpoint_313000.pth","config-2.json","https://huggingface.co/Kamtera/persian-tts-female-tacotron2/resolve/main/"] -] - -for d in modelInfo: - 
directory=d[0] - if not os.path.exists(directory): - os.makedirs(directory) - print("|> Downloading: ",directory) - download_url( - d[3]+d[1],directory,"best_model.pth" - ) - download_url( - d[3]+d[2],directory,"config.json" - ) -def tts(text: str,model_name: str): - if len(text) > MAX_TXT_LEN: - text = text[:MAX_TXT_LEN] - print(f"Input text was cutoff since it went over the {MAX_TXT_LEN} character limit.") - print(text) - - - # synthesize - synthesizer = Synthesizer( - model_name+"/best_model.pth", model_name+"/config.json" - ) - if synthesizer is None: - raise NameError("model not found") - wavs = synthesizer.tts(text) - # return output - with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as fp: - synthesizer.save_wav(wavs, fp) - return fp.name - - -description=""" -This is a demo of persian text to speech model. - -**Github : https://github.com/karim23657/Persian-tts-coqui ** - -Models can be found here:
    - -|Model|Dataset| -|----|------| -|[vits female (best)](https://huggingface.co/Kamtera/persian-tts-female-vits)|[persian-tts-dataset-famale](https://www.kaggle.com/datasets/magnoliasis/persian-tts-dataset-famale)| -|[vits male1 (best)](https://huggingface.co/Kamtera/persian-tts-male1-vits)|[persian-tts-dataset-male](https://www.kaggle.com/datasets/magnoliasis/persian-tts-dataset-male)| -|[vits female1](https://huggingface.co/Kamtera/persian-tts-female1-vits)|[ParsiGoo](https://github.com/karim23657/ParsiGoo)| -|[vits male](https://huggingface.co/Kamtera/persian-tts-male-vits)|[persian-tts-dataset](https://www.kaggle.com/datasets/magnoliasis/persian-tts-dataset)| -|[glowtts female](https://huggingface.co/Kamtera/persian-tts-female-glow_tts)|[persian-tts-dataset-famale](https://www.kaggle.com/datasets/magnoliasis/persian-tts-dataset-famale)| -|[glowtts male](https://huggingface.co/Kamtera/persian-tts-male-glow_tts)|[persian-tts-dataset](https://www.kaggle.com/datasets/magnoliasis/persian-tts-dataset)| -|[tacotron2 female](https://huggingface.co/Kamtera/persian-tts-female-tacotron2)|[persian-tts-dataset-famale](https://www.kaggle.com/datasets/magnoliasis/persian-tts-dataset-famale)| - - -""" -article= "" -examples=[ - ["و خداوند شما را با ارسال روح در جسم زندگانی و حیات بخشید","vits-male"], - ["تاجر تو چه تجارت می کنی ، تو را چه که چه تجارت می کنم؟","vits female (best)"], - ["شیش سیخ جیگر سیخی شیش هزار","vits female (best)"], - ["سه شیشه شیر ، سه سیر سرشیر","vits female (best)"], - ["دزدی دزدید ز بز دزدی بزی ، عجب دزدی که دزدید ز بز دزدی بزی","vits male1 (best)"], - ["مثنوی یکی از قالب های شعری است ک هر بیت قافیه ی جداگانه دارد","vits female1"], - ["در گلو ماند خس او سالها، چیست آن خس مهر جاه و مالها","vits male1 (best)"], -] -iface = gr.Interface( - fn=tts, - inputs=[ - gr.Textbox( - label="Text", - value="زندگی فقط یک بار است؛ از آن به خوبی استفاده کن", - ), - gr.Radio( - label="Pick a TTS Model ", - choices=MODEL_NAMES, - value="vits-female", - ), - ], - outputs=gr.Audio(label="Output",type='filepath'), - examples=examples, - title="🗣️ Persian tts 🗣️", - description=description, - article=article, - live=False -) -iface.launch(share=False) diff --git a/spaces/KaygNas/cut-it/src/App.ts b/spaces/KaygNas/cut-it/src/App.ts deleted file mode 100644 index b514bd63c930b8fd75ee5a4f79e4ca926b0729dd..0000000000000000000000000000000000000000 --- a/spaces/KaygNas/cut-it/src/App.ts +++ /dev/null @@ -1,155 +0,0 @@ -import type { Nullable } from '@babylonjs/core' -import { Engine, FollowCamera, HemisphericLight, Observable, Scene, Vector3 } from '@babylonjs/core' -import { Inspector } from '@babylonjs/inspector' -import { EAppState, EAppUIState } from './enums' -import { error } from './utils' -import type { Robot } from './Robot' -import type { Image } from './Image' -import type { ILoadingRectangle } from './AppUI' - -const ImageModule = import('./Image') -const GroundModule = import('./Ground') -const AppUIModule = import('./AppUI') -const RobotModule = import('./Robot') -export class App { - engine: Engine - scene?: Scene - - // App State - stateObservable = new Observable() - private _state: EAppState = EAppState.Initializing - get state() { - return this._state - } - - set state(value) { - this._state = value - this.stateObservable.notifyObservers(value) - } - - constructor(readonly canvas: HTMLCanvasElement) { - this.state = EAppState.Initializing - this.engine = new Engine(canvas) - window.addEventListener('resize', () => { - this.engine.resize() - }) - this.engine.displayLoadingUI() - 
this._createScene(this.engine, this.canvas).then(async (scene) => { - this.scene = scene - await this.scene.whenReadyAsync() - this.state = EAppState.Initialized - this.engine.hideLoadingUI() - }) - } - - run() { - if (import.meta.env.DEV) { - // for development: make inspector visible/invisible - window.addEventListener('keydown', (ev) => { - // Shift+Ctrl+Alt+I - if (ev.shiftKey && ev.ctrlKey && ev.altKey && ev.keyCode === 73) { - if (Inspector.IsVisible) - Inspector.Hide() - else if (this.scene) - Inspector.Show(this.scene, {}) - } - }) - } - this.engine.runRenderLoop(() => { - this.scene?.render() - }) - } - - private async _createScene(engine: Engine, canvas: HTMLCanvasElement) { - const scene = new Scene(engine) - const [{ Image }, { AppUI }, { Ground }, { Robot }] = await Promise.all([ImageModule, AppUIModule, GroundModule, RobotModule]) - const image = new Image() - const ui = new AppUI(scene) - const ground = Ground.create(scene, image) - const robot = Robot.create(scene) - const cammandRobotMoveToImage = async (robot: Robot, image: Image) => { - if (!image.isClassified()) - return - - const bbox = image.classification.detection.box - const groundBbox = ground.mesh.getBoundingInfo().boundingBox - const groundSize = ground.mesh.getBoundingInfo().boundingBox.extendSize.scale(2) - const imageSize = { x: image.image.imageWidth, y: 0, z: image.image.imageHeight } - const scales = { x: groundSize.x / imageSize.x, y: 0, z: groundSize.z / imageSize.z } - const bboxOrigin = new Vector3(groundBbox.minimum.x, 0, groundBbox.maximum.z) - const bboxLeftTop = new Vector3(bbox.xmax * scales.x, 0, -bbox.ymax * scales.z) - const bboxRightTop = new Vector3(bbox.xmin * scales.x, 0, -bbox.ymax * scales.z) - const bboxRightBottom = new Vector3(bbox.xmin * scales.x, 0, -bbox.ymin * scales.z) - const bboxLeftBottom = new Vector3(bbox.xmax * scales.x, 0, -bbox.ymin * scales.z) - const destination = bboxOrigin.add(bboxLeftTop) - await robot.moveTo(destination) - await robot.laserCutter.cut([bboxLeftTop, bboxRightTop, bboxRightBottom, bboxLeftBottom, bboxLeftTop].map(v => bboxOrigin.add(v))) - await robot.land() - } - - ui.observalbe.add(async (event) => { - try { - if (event.type === 'UploadImageButtonClick') { - image.clear() - this.state = EAppState.ImageUploading - await image.load() - this.state = EAppState.ImageUploaded - ui.setState(EAppUIState.Input) - await robot.takeOff() - } - else if (event.type === 'CreateImageButtonClick') { - image.clear() - event.target.isLoading = true - this.state = EAppState.ImageUploading - await image.fromText(event.value) - .finally(() => event.target.isLoading = false) - this.state = EAppState.ImageUploaded - ui.setState(EAppUIState.Input) - await robot.takeOff() - } - else if (event.type === 'InputTextChange') { - const takeoff = async () => { - if (robot.pose === Robot.Pose.Land) - await robot.takeOff() - } - const detect = async () => { - this.state = EAppState.ImageDetecting - await image.detect() - this.state = EAppState.ImageDetected - } - const classify = async () => { - this.state = EAppState.ImageClassifying - await image.classify(event.target.text) - this.state = EAppState.ImageClassified - } - takeoff() - const background = event.target.parent?.getChildByName('InputBackground') as Nullable - background && (background.isLoading = true) - try { - await detect() - await classify() - } - finally { - background && (background.isLoading = false) - } - await cammandRobotMoveToImage(robot, image) - } - } - catch (reason) { - this.state = EAppState.Error - 
error(reason) - } - }) - - const camera = new FollowCamera('RobotCamera', new Vector3(0, 1, 0), scene, robot.mesh) - camera.lowerHeightOffsetLimit = 0 - camera.maxCameraSpeed = 6 - camera.radius = 6 - camera.attachControl() - - const light = new HemisphericLight('light', new Vector3(0, 1, 0), scene) - light.intensity = 0.7 - - return scene - } -} diff --git a/spaces/Kevin676/SmartAI/README.md b/spaces/Kevin676/SmartAI/README.md deleted file mode 100644 index 42f20c89ed093ad8408c4b271c9b3db79161f0ce..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/SmartAI/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: SmartAI -emoji: 🐠 -colorFrom: red -colorTo: pink -sdk: gradio -sdk_version: 3.19.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/KyanChen/RSPrompter/configs/rsprompter/rsprompter_anchor_whu_config.py b/spaces/KyanChen/RSPrompter/configs/rsprompter/rsprompter_anchor_whu_config.py deleted file mode 100644 index fb9e6b500063f0825b54dc2c713aa1f283b33e0d..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/configs/rsprompter/rsprompter_anchor_whu_config.py +++ /dev/null @@ -1,355 +0,0 @@ -custom_imports = dict(imports=['mmseg.datasets', 'mmseg.models'], allow_failed_imports=False) - -sub_model_train = [ - 'panoptic_head', - 'data_preprocessor' -] - -sub_model_optim = { - 'panoptic_head': {'lr_mult': 1}, -} - -max_epochs = 2000 - -optimizer = dict( - type='AdamW', - sub_model=sub_model_optim, - lr=0.0005, - weight_decay=1e-3 -) - -param_scheduler = [ - # warm up learning rate scheduler - dict( - type='LinearLR', - start_factor=1e-4, - by_epoch=True, - begin=0, - end=1, - # update by iter - convert_to_iter_based=True), - # main learning rate scheduler - dict( - type='CosineAnnealingLR', - T_max=max_epochs, - by_epoch=True, - begin=1, - end=max_epochs, - ), -] - -param_scheduler_callback = dict( - type='ParamSchedulerHook' -) - -evaluator_ = dict( - type='CocoPLMetric', - metric=['bbox', 'segm'], - proposal_nums=[1, 10, 100] -) - -evaluator = dict( - val_evaluator=evaluator_, -) - - -image_size = (1024, 1024) - -data_preprocessor = dict( - type='mmdet.DetDataPreprocessor', - mean=[123.675, 116.28, 103.53], - std=[58.395, 57.12, 57.375], - bgr_to_rgb=True, - pad_size_divisor=32, - pad_mask=True, - mask_pad_value=0, -) - -num_things_classes = 1 -num_stuff_classes = 0 -num_classes = num_things_classes + num_stuff_classes -prompt_shape = (90, 4) - - -model_cfg = dict( - type='SegSAMAnchorPLer', - hyperparameters=dict( - optimizer=optimizer, - param_scheduler=param_scheduler, - evaluator=evaluator, - ), - need_train_names=sub_model_train, - data_preprocessor=data_preprocessor, - backbone=dict( - type='vit_h', - checkpoint='pretrain/sam/sam_vit_h_4b8939.pth', - # type='vit_b', - # checkpoint='pretrain/sam/sam_vit_b_01ec64.pth', - ), - panoptic_head=dict( - type='SAMAnchorInstanceHead', - neck=dict( - type='SAMAggregatorNeck', - in_channels=[1280] * 32, - # in_channels=[768] * 12, - inner_channels=32, - selected_channels=range(4, 32, 2), - # selected_channels=range(4, 12, 2), - out_channels=256, - up_sample_scale=4, - ), - rpn_head=dict( - type='mmdet.RPNHead', - in_channels=256, - feat_channels=256, - anchor_generator=dict( - type='mmdet.AnchorGenerator', - scales=[2, 4, 8, 16, 32, 64], - ratios=[0.5, 1.0, 2.0], - strides=[8, 16, 32]), - bbox_coder=dict( - type='mmdet.DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 
1.0]), - loss_cls=dict( - type='mmdet.CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), - loss_bbox=dict(type='mmdet.SmoothL1Loss', loss_weight=1.0)), - roi_head=dict( - type='SAMAnchorPromptRoIHead', - bbox_roi_extractor=dict( - type='mmdet.SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0), - out_channels=256, - featmap_strides=[8, 16, 32]), - bbox_head=dict( - type='mmdet.Shared2FCBBoxHead', - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=num_classes, - bbox_coder=dict( - type='mmdet.DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.1, 0.1, 0.2, 0.2]), - reg_class_agnostic=False, - loss_cls=dict( - type='mmdet.CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), - loss_bbox=dict(type='mmdet.SmoothL1Loss', loss_weight=1.0)), - mask_roi_extractor=dict( - type='mmdet.SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=14, sampling_ratio=0), - out_channels=256, - featmap_strides=[8, 16, 32]), - mask_head=dict( - type='SAMPromptMaskHead', - per_query_point=prompt_shape[1], - with_sincos=True, - class_agnostic=True, - loss_mask=dict( - type='mmdet.CrossEntropyLoss', use_mask=True, loss_weight=1.0))), - # model training and testing settings - train_cfg=dict( - rpn=dict( - assigner=dict( - type='mmdet.MaxIoUAssigner', - pos_iou_thr=0.7, - neg_iou_thr=0.3, - min_pos_iou=0.3, - match_low_quality=True, - ignore_iof_thr=-1), - sampler=dict( - type='mmdet.RandomSampler', - num=512, - pos_fraction=0.5, - neg_pos_ub=-1, - add_gt_as_proposals=False), - allowed_border=-1, - pos_weight=-1, - debug=False), - rpn_proposal=dict( - nms_pre=2000, - max_per_img=1000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=dict( - assigner=dict( - type='mmdet.MaxIoUAssigner', - pos_iou_thr=0.5, - neg_iou_thr=0.5, - min_pos_iou=0.5, - match_low_quality=True, - ignore_iof_thr=-1), - sampler=dict( - type='mmdet.RandomSampler', - num=256, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True), - mask_size=1024, - pos_weight=-1, - debug=False)), - test_cfg=dict( - rpn=dict( - nms_pre=1000, - max_per_img=1000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=dict( - score_thr=0.05, - nms=dict(type='nms', iou_threshold=0.5), - max_per_img=100, - mask_thr_binary=0.5) - ) - ) -) - -task_name = 'whu_ins' -exp_name = 'E20230629_0' -logger = dict( - type='WandbLogger', - project=task_name, - group='sam-anchor', - name=exp_name -) - - -callbacks = [ - param_scheduler_callback, - dict( - type='ModelCheckpoint', - dirpath=f'results/{task_name}/{exp_name}/checkpoints', - save_last=True, - mode='max', - monitor='valsegm_map_0', - save_top_k=3, - filename='epoch_{epoch}-map_{valsegm_map_0:.4f}' - ), - dict( - type='LearningRateMonitor', - logging_interval='step' - ) -] - - -trainer_cfg = dict( - compiled_model=False, - accelerator="auto", - strategy="auto", - # strategy="ddp", - # strategy='ddp_find_unused_parameters_true', - # precision='32', - # precision='16-mixed', - devices=8, - default_root_dir=f'results/{task_name}/{exp_name}', - # default_root_dir='results/tmp', - max_epochs=max_epochs, - logger=logger, - callbacks=callbacks, - log_every_n_steps=10, - check_val_every_n_epoch=5, - benchmark=True, - # sync_batchnorm=True, - # fast_dev_run=True, - - # limit_train_batches=1, - # limit_val_batches=0, - # limit_test_batches=None, - # limit_predict_batches=None, - # overfit_batches=0.0, - - # val_check_interval=None, - # num_sanity_val_steps=0, - # enable_checkpointing=None, 
- # enable_progress_bar=None, - # enable_model_summary=None, - # accumulate_grad_batches=32, - # gradient_clip_val=15, - # gradient_clip_algorithm='norm', - # deterministic=None, - # inference_mode: bool=True, - use_distributed_sampler=True, - # profiler="simple", - # detect_anomaly=False, - # barebones=False, - # plugins=None, - # reload_dataloaders_every_n_epochs=0, -) - - -backend_args = None -train_pipeline = [ - dict(type='mmdet.LoadImageFromFile'), - dict(type='mmdet.LoadAnnotations', with_bbox=True, with_mask=True), - dict(type='mmdet.Resize', scale=image_size), - dict(type='mmdet.RandomFlip', prob=0.5), - dict(type='mmdet.PackDetInputs') -] - -test_pipeline = [ - dict(type='mmdet.LoadImageFromFile', backend_args=backend_args), - dict(type='mmdet.Resize', scale=image_size), - # If you don't have a gt annotation, delete the pipeline - dict(type='mmdet.LoadAnnotations', with_bbox=True, with_mask=True), - dict( - type='mmdet.PackDetInputs', - meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape', - 'scale_factor')) -] - - -train_batch_size_per_gpu = 2 -train_num_workers = 2 -test_batch_size_per_gpu = 2 -test_num_workers = 2 -persistent_workers = True - - -data_parent = '/mnt/search01/dataset/cky_data/WHU' -train_data_prefix = 'train/' -val_data_prefix = 'test/' -dataset_type = 'WHUInsSegDataset' - - -val_loader = dict( - batch_size=test_batch_size_per_gpu, - num_workers=test_num_workers, - persistent_workers=persistent_workers, - pin_memory=True, - dataset=dict( - type=dataset_type, - data_root=data_parent, - # ann_file='NWPU_instances_val.json', - # data_prefix=dict(img_path='positive image set'), - # ann_file='annotations/SSDD_instances_val.json', - # data_prefix=dict(img_path='imgs'), - ann_file='annotations/WHU_building_test.json', - data_prefix=dict(img_path=val_data_prefix + '/image'), - test_mode=True, - filter_cfg=dict(filter_empty_gt=True, min_size=32), - pipeline=test_pipeline, - backend_args=backend_args)) - -datamodule_cfg = dict( - type='PLDataModule', - train_loader=dict( - batch_size=train_batch_size_per_gpu, - num_workers=train_num_workers, - persistent_workers=persistent_workers, - pin_memory=True, - dataset=dict( - type=dataset_type, - data_root=data_parent, - # ann_file='NWPU_instances_train.json', - # data_prefix=dict(img_path='positive image set'), - # ann_file='annotations/SSDD_instances_train.json', - # data_prefix=dict(img_path='imgs'), - ann_file='annotations/WHU_building_train.json', - data_prefix=dict(img_path=train_data_prefix + '/image'), - filter_cfg=dict(filter_empty_gt=True, min_size=32), - pipeline=train_pipeline, - backend_args=backend_args) - ), - val_loader=val_loader, - # test_loader=val_loader - predict_loader=val_loader -) \ No newline at end of file diff --git a/spaces/LZRi/LZR-Bert-VITS2/docs/commands.md b/spaces/LZRi/LZR-Bert-VITS2/docs/commands.md deleted file mode 100644 index 30edfc9088f527a332ec8d00f44c6a5120ad26ee..0000000000000000000000000000000000000000 --- a/spaces/LZRi/LZR-Bert-VITS2/docs/commands.md +++ /dev/null @@ -1,36 +0,0 @@ -0. 环境维护和升级(示例): -%PYTHON% -m pip install -i https://pypi.tuna.tsinghua.edu.cn/simple -r requirements.txt -这条一般不用执行 - -1. 安装ffmpeg,将整合包内的ffmpeg加入环境变量,使用自动标注需要用到,执行一次即可。安装完可能需要重启生效: -%PYTHON% setup_ffmpeg.py - -1. 数据集重采样和标注: - -a. 
whisper通用标注:音频在2-10s。根据显存选择配置,large需要12G显存。 -%PYTHON% short_audio_transcribe.py --languages "C" --whisper_size large -%PYTHON% short_audio_transcribe.py --languages "C" --whisper_size medium -%PYTHON% short_audio_transcribe.py --languages "C" --whisper_size small -如果已经标注好了,不希望使用本脚本,请将音频重采样至单声道44100Hz - -b. 下载的已标注的原神数据集: -%PYTHON% transcribe_genshin.py - -2. 文本处理: -%PYTHON% preprocess_text.py - -3. bert_gen -%PYTHON% bert_gen.py - -4. 训练: -首次训练: -%PYTHON% train_ms.py -c ./configs\config.json - -继续训练: -%PYTHON% train_ms.py -c ./configs\config.json --cont - -启动TensorBoard: -%PYTHON% -m tensorboard.main --logdir=logs\OUTPUT_MODEL - -5. 推理 --config_dir可选 --model_dir 为配置文件和模型指定目录: -%PYTHON% inference_webui.py --model_dir ./logs\OUTPUT_MODEL\G_100.pth \ No newline at end of file diff --git a/spaces/LanguageBind/LanguageBind/model/build_model.py b/spaces/LanguageBind/LanguageBind/model/build_model.py deleted file mode 100644 index 736476bd35a6b6210a810b74819be66061053b33..0000000000000000000000000000000000000000 --- a/spaces/LanguageBind/LanguageBind/model/build_model.py +++ /dev/null @@ -1,193 +0,0 @@ -import logging -import argparse -import os.path - -import numpy as np -import torch -from torch import nn -from transformers import AutoConfig - -from model.base_model import CLIPModel -from model.process_clip import add_time_attn_block, convert_model_to_lora, set_global_value, resize_pos -from open_clip import convert_weights_to_lp -from open_clip.transformer import PatchDropout -from training.distributed import is_master - - -def SET_GLOBAL_VALUE(k, v): - set_global_value(k, v) - -def create_vat_model(args): - - config = AutoConfig.from_pretrained(args.model, cache_dir=args.cache_dir) - model = CLIPModel(config, args.num_frames, args.add_time_attn) - - model.vision_model.patch_dropout = PatchDropout(args.force_patch_dropout) - - device = args.device - precision = args.precision - if precision in ("fp16", "bf16"): - dtype = torch.float16 if 'fp16' in precision else torch.bfloat16 - model.to(device=device) - convert_weights_to_lp(model, dtype=dtype) - elif precision in ("pure_fp16", "pure_bf16"): - dtype = torch.float16 if 'fp16' in precision else torch.bfloat16 - model.to(device=device, dtype=dtype) - else: - model.to(device=device) - - if args.pretrained: - try: - args.pretrained = os.path.join(args.cache_dir, args.pretrained) - if is_master(args): - logging.info(f'Loading pretrained {args.model} weights ({args.pretrained}).') - # incompatible_keys = load_checkpoint(model, pretrained, strict=False) - ckpt = torch.load(args.pretrained, map_location='cpu') - incompatible_keys = model.load_state_dict(ckpt, strict=False if args.add_time_attn else True) - if is_master(args): - logging.info(incompatible_keys) - except Exception as e: - if is_master(args): - logging.info(f"Failed loading pretrained model with {e}") - else: - if is_master(args): - logging.info(f"No pretrained model to load in \'{args.pretrained}\'") - - if args.add_time_attn: - add_time_attn_block(model.vision_model.encoder, device=device) - if is_master(args): - logging.info(f'Convert spatial attention to time attention pretrained.') - - if args.clip_type == 'al': - resize_pos(model.vision_model.embeddings, args) - if is_master(args): - logging.info(f'Resize to position embedding successfully.') - - if args.init_temp != 0: - with torch.no_grad(): - model.logit_scale.fill_(np.log(1 / float(args.init_temp))) - if is_master(args): - logging.info(f'Reset logit scale to {args.init_temp} (log-scale) and trainable {args.learn_temp}.') - - if 
args.convert_to_lora: - convert_model_to_lora(args, model) - if is_master(args): - logging.info(f"Successfuly convert model to lora style.") - - # if output_dict and hasattr(model, "output_dict"): - # model.output_dict = True - - return model - - -if __name__ == '__main__': - MODEL_DICT = {"ViT-L-14": "laion/CLIP-ViT-L-14-DataComp.XL-s13B-b90K", - "ViT-H-14": "laion/CLIP-ViT-H-14-laion2B-s32B-b79K"} - CHECKPOINT_DICT = {"ViT-L-14": "models--laion--CLIP-ViT-L-14-DataComp.XL-s13B-b90K/snapshots/84c9828e63dc9a9351d1fe637c346d4c1c4db341/pytorch_model.bin", - "ViT-H-14": "models--laion--CLIP-ViT-H-14-laion2B-s32B-b79K/snapshots/94a64189c3535c1cb44acfcccd7b0908c1c8eb23/pytorch_model.bin"} - - parser = argparse.ArgumentParser() - args = parser.parse_args() - args.pretrained = True - args.model = MODEL_DICT["ViT-L-14"] - args.pretrained = CHECKPOINT_DICT["ViT-L-14"] - args.cache_dir = 'D:\Omni-modal-valdt-1kw' - args.device = 'cpu' - args.precision = None - args.lock_text = True - args.lock_image = True - args.init_temp = 0 - args.force_patch_dropout = 0.5 - args.add_time_attn = True - args.convert_to_lora = True - args.lora_r = 16 - args.lora_alpha = 16 - args.lora_dropout = 0.0 # 0.1? - args.num_frames = 8 - args.clip_type = 'vl' - args.num_mel_bins = 128 - args.target_length = 1024 - args.audio_sample_rate = 16000 - args.audio_mean = 1 - args.audio_std = 1 - args.rank = 0 - - SET_GLOBAL_VALUE('PATCH_DROPOUT', args.force_patch_dropout) - SET_GLOBAL_VALUE('NUM_FRAMES', args.num_frames) - - model = create_vat_model(args) - - - '''方法1,自定义函数 参考自 https://blog.csdn.net/qq_33757398/article/details/109210240''' - - - def model_structure(model): - blank = ' ' - print('-' * 150) - print('|' + ' ' * 44 + 'weight name' + ' ' * 45 + '|' \ - + ' ' * 10 + 'weight shape' + ' ' * 10 + '|' \ - + ' ' * 3 + 'number' + ' ' * 3 + '|') - print('-' * 150) - num_para = 0 - type_size = 1 # 如果是浮点数就是4 - - for index, (key, w_variable) in enumerate(model.named_parameters()): - if len(key) <= 100: - key = key + (100 - len(key)) * blank - shape = str(w_variable.shape) - if len(shape) <= 30: - shape = shape + (30 - len(shape)) * blank - each_para = 1 - for k in w_variable.shape: - each_para *= k - num_para += each_para - str_num = str(each_para) - if len(str_num) <= 10: - str_num = str_num + (10 - len(str_num)) * blank - - print('| {} | {} | {} |'.format(key, shape, str_num)) - print('-' * 150) - print('The total number of parameters: ' + str(num_para)) - print('The parameters of Model {}: {:4f}M'.format(model._get_name(), num_para * type_size / 1000 / 1000)) - print('-' * 150) - - - model_structure(model) - # model_structure(model.vision_model) - # model_structure(model.text_model) - - - # model.lock_image_tower(unlocked_groups=1) - # model.lock_text_tower(unlocked_layers=0) - # model.unlock_time_attn() - - if args.lock_image: - # if args.clip_type == 'al' or args.clip_type == 'dl': - # for param in model.vision_model.embeddings.parameters(): - # param.requires_grad = True - # for param in model.vision_model.pre_layrnorm.parameters(): - # param.requires_grad = True - # else: - for param in model.vision_model.embeddings.parameters(): - param.requires_grad = False - for param in model.vision_model.pre_layrnorm.parameters(): - param.requires_grad = False - for param in model.vision_model.embeddings.position_embedding.parameters(): - param.requires_grad = False - model.vision_model.embeddings.class_embedding.requires_grad = True - - - if args.lock_text: - for param in model.text_model.parameters(): - param.requires_grad = False - 
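-        # the text projection head is frozen as well below, so the entire text branch stays fixed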
for param in model.text_projection.parameters(): - param.requires_grad = False - - - for n, p in model.named_parameters(): - # if p.requires_grad: - print(n, '--->', p.requires_grad) - b, c, t, h, w = 2, 3, args.num_frames, 224, 224 - x = torch.randn(b, c, t, h, w) - y = model(image=x) - print() \ No newline at end of file diff --git a/spaces/Liu-LAB/GPT-academic/crazy_functions/latex_utils.py b/spaces/Liu-LAB/GPT-academic/crazy_functions/latex_utils.py deleted file mode 100644 index eb65a8a915d2cbc66a346e42a5f2a17ee07bb585..0000000000000000000000000000000000000000 --- a/spaces/Liu-LAB/GPT-academic/crazy_functions/latex_utils.py +++ /dev/null @@ -1,788 +0,0 @@ -from toolbox import update_ui, update_ui_lastest_msg # 刷新Gradio前端界面 -from toolbox import zip_folder, objdump, objload, promote_file_to_downloadzone -import os, shutil -import re -import numpy as np -pj = os.path.join - -""" -======================================================================== -Part One -Latex segmentation with a binary mask (PRESERVE=0, TRANSFORM=1) -======================================================================== -""" -PRESERVE = 0 -TRANSFORM = 1 - -def set_forbidden_text(text, mask, pattern, flags=0): - """ - Add a preserve text area in this paper - e.g. with pattern = r"\\begin\{algorithm\}(.*?)\\end\{algorithm\}" - you can mask out (mask = PRESERVE so that text become untouchable for GPT) - everything between "\begin{equation}" and "\end{equation}" - """ - if isinstance(pattern, list): pattern = '|'.join(pattern) - pattern_compile = re.compile(pattern, flags) - for res in pattern_compile.finditer(text): - mask[res.span()[0]:res.span()[1]] = PRESERVE - return text, mask - -def reverse_forbidden_text(text, mask, pattern, flags=0, forbid_wrapper=True): - """ - Move area out of preserve area (make text editable for GPT) - count the number of the braces so as to catch compelete text area. - e.g. - \begin{abstract} blablablablablabla. \end{abstract} - """ - if isinstance(pattern, list): pattern = '|'.join(pattern) - pattern_compile = re.compile(pattern, flags) - for res in pattern_compile.finditer(text): - if not forbid_wrapper: - mask[res.span()[0]:res.span()[1]] = TRANSFORM - else: - mask[res.regs[0][0]: res.regs[1][0]] = PRESERVE # '\\begin{abstract}' - mask[res.regs[1][0]: res.regs[1][1]] = TRANSFORM # abstract - mask[res.regs[1][1]: res.regs[0][1]] = PRESERVE # abstract - return text, mask - -def set_forbidden_text_careful_brace(text, mask, pattern, flags=0): - """ - Add a preserve text area in this paper (text become untouchable for GPT). - count the number of the braces so as to catch compelete text area. - e.g. - \caption{blablablablabla\texbf{blablabla}blablabla.} - """ - pattern_compile = re.compile(pattern, flags) - for res in pattern_compile.finditer(text): - brace_level = -1 - p = begin = end = res.regs[0][0] - for _ in range(1024*16): - if text[p] == '}' and brace_level == 0: break - elif text[p] == '}': brace_level -= 1 - elif text[p] == '{': brace_level += 1 - p += 1 - end = p+1 - mask[begin:end] = PRESERVE - return text, mask - -def reverse_forbidden_text_careful_brace(text, mask, pattern, flags=0, forbid_wrapper=True): - """ - Move area out of preserve area (make text editable for GPT) - count the number of the braces so as to catch compelete text area. - e.g. 
- \caption{blablablablabla\texbf{blablabla}blablabla.} - """ - pattern_compile = re.compile(pattern, flags) - for res in pattern_compile.finditer(text): - brace_level = 0 - p = begin = end = res.regs[1][0] - for _ in range(1024*16): - if text[p] == '}' and brace_level == 0: break - elif text[p] == '}': brace_level -= 1 - elif text[p] == '{': brace_level += 1 - p += 1 - end = p - mask[begin:end] = TRANSFORM - if forbid_wrapper: - mask[res.regs[0][0]:begin] = PRESERVE - mask[end:res.regs[0][1]] = PRESERVE - return text, mask - -def set_forbidden_text_begin_end(text, mask, pattern, flags=0, limit_n_lines=42): - """ - Find all \begin{} ... \end{} text block that with less than limit_n_lines lines. - Add it to preserve area - """ - pattern_compile = re.compile(pattern, flags) - def search_with_line_limit(text, mask): - for res in pattern_compile.finditer(text): - cmd = res.group(1) # begin{what} - this = res.group(2) # content between begin and end - this_mask = mask[res.regs[2][0]:res.regs[2][1]] - white_list = ['document', 'abstract', 'lemma', 'definition', 'sproof', - 'em', 'emph', 'textit', 'textbf', 'itemize', 'enumerate'] - if (cmd in white_list) or this.count('\n') >= limit_n_lines: # use a magical number 42 - this, this_mask = search_with_line_limit(this, this_mask) - mask[res.regs[2][0]:res.regs[2][1]] = this_mask - else: - mask[res.regs[0][0]:res.regs[0][1]] = PRESERVE - return text, mask - return search_with_line_limit(text, mask) - -class LinkedListNode(): - """ - Linked List Node - """ - def __init__(self, string, preserve=True) -> None: - self.string = string - self.preserve = preserve - self.next = None - # self.begin_line = 0 - # self.begin_char = 0 - -def convert_to_linklist(text, mask): - root = LinkedListNode("", preserve=True) - current_node = root - for c, m, i in zip(text, mask, range(len(text))): - if (m==PRESERVE and current_node.preserve) \ - or (m==TRANSFORM and not current_node.preserve): - # add - current_node.string += c - else: - current_node.next = LinkedListNode(c, preserve=(m==PRESERVE)) - current_node = current_node.next - return root -""" -======================================================================== -Latex Merge File -======================================================================== -""" - -def 寻找Latex主文件(file_manifest, mode): - """ - 在多Tex文档中,寻找主文件,必须包含documentclass,返回找到的第一个。 - P.S. 
但愿没人把latex模板放在里面传进来 (6.25 加入判定latex模板的代码) - """ - canidates = [] - for texf in file_manifest: - if os.path.basename(texf).startswith('merge'): - continue - with open(texf, 'r', encoding='utf8') as f: - file_content = f.read() - if r'\documentclass' in file_content: - canidates.append(texf) - else: - continue - - if len(canidates) == 0: - raise RuntimeError('无法找到一个主Tex文件(包含documentclass关键字)') - elif len(canidates) == 1: - return canidates[0] - else: # if len(canidates) >= 2 通过一些Latex模板中常见(但通常不会出现在正文)的单词,对不同latex源文件扣分,取评分最高者返回 - canidates_score = [] - # 给出一些判定模板文档的词作为扣分项 - unexpected_words = ['\LaTeX', 'manuscript', 'Guidelines', 'font', 'citations', 'rejected', 'blind review', 'reviewers'] - expected_words = ['\input', '\ref', '\cite'] - for texf in canidates: - canidates_score.append(0) - with open(texf, 'r', encoding='utf8') as f: - file_content = f.read() - for uw in unexpected_words: - if uw in file_content: - canidates_score[-1] -= 1 - for uw in expected_words: - if uw in file_content: - canidates_score[-1] += 1 - select = np.argmax(canidates_score) # 取评分最高者返回 - return canidates[select] - -def rm_comments(main_file): - new_file_remove_comment_lines = [] - for l in main_file.splitlines(): - # 删除整行的空注释 - if l.lstrip().startswith("%"): - pass - else: - new_file_remove_comment_lines.append(l) - main_file = '\n'.join(new_file_remove_comment_lines) - # main_file = re.sub(r"\\include{(.*?)}", r"\\input{\1}", main_file) # 将 \include 命令转换为 \input 命令 - main_file = re.sub(r'(? 0 and node_string.count('\_') > final_tex.count('\_'): - # walk and replace any _ without \ - final_tex = re.sub(r"(?') - if not node.preserve: - segment_parts_for_gpt.append(node.string) - f.write(f'

#{show_html}#')
-            else:
-                f.write(f'{show_html}
    ') - node = node.next - if node is None: break - - for n in nodes: n.next = None # break - return_dict['nodes'] = nodes - return_dict['segment_parts_for_gpt'] = segment_parts_for_gpt - return return_dict - - - -class LatexPaperSplit(): - """ - break down latex file to a linked list, - each node use a preserve flag to indicate whether it should - be proccessed by GPT. - """ - def __init__(self) -> None: - self.nodes = None - self.msg = "*{\\scriptsize\\textbf{警告:该PDF由GPT-Academic开源项目调用大语言模型+Latex翻译插件一键生成," + \ - "版权归原文作者所有。翻译内容可靠性无保障,请仔细鉴别并以原文为准。" + \ - "项目Github地址 \\url{https://github.com/binary-husky/gpt_academic/}。" - # 请您不要删除或修改这行警告,除非您是论文的原作者(如果您是论文原作者,欢迎加REAME中的QQ联系开发者) - self.msg_declare = "为了防止大语言模型的意外谬误产生扩散影响,禁止移除或修改此警告。}}\\\\" - - def merge_result(self, arr, mode, msg): - """ - Merge the result after the GPT process completed - """ - result_string = "" - p = 0 - for node in self.nodes: - if node.preserve: - result_string += node.string - else: - result_string += fix_content(arr[p], node.string) - p += 1 - if mode == 'translate_zh': - pattern = re.compile(r'\\begin\{abstract\}.*\n') - match = pattern.search(result_string) - if not match: - # match \abstract{xxxx} - pattern_compile = re.compile(r"\\abstract\{(.*?)\}", flags=re.DOTALL) - match = pattern_compile.search(result_string) - position = match.regs[1][0] - else: - # match \begin{abstract}xxxx\end{abstract} - position = match.end() - result_string = result_string[:position] + self.msg + msg + self.msg_declare + result_string[position:] - return result_string - - def split(self, txt, project_folder, opts): - """ - break down latex file to a linked list, - each node use a preserve flag to indicate whether it should - be proccessed by GPT. - P.S. use multiprocessing to avoid timeout error - """ - import multiprocessing - manager = multiprocessing.Manager() - return_dict = manager.dict() - p = multiprocessing.Process( - target=split_subprocess, - args=(txt, project_folder, return_dict, opts)) - p.start() - p.join() - p.close() - self.nodes = return_dict['nodes'] - self.sp = return_dict['segment_parts_for_gpt'] - return self.sp - - - -class LatexPaperFileGroup(): - """ - use tokenizer to break down text according to max_token_limit - """ - def __init__(self): - self.file_paths = [] - self.file_contents = [] - self.sp_file_contents = [] - self.sp_file_index = [] - self.sp_file_tag = [] - - # count_token - from request_llm.bridge_all import model_info - enc = model_info["gpt-3.5-turbo"]['tokenizer'] - def get_token_num(txt): return len(enc.encode(txt, disallowed_special=())) - self.get_token_num = get_token_num - - def run_file_split(self, max_token_limit=1900): - """ - use tokenizer to break down text according to max_token_limit - """ - for index, file_content in enumerate(self.file_contents): - if self.get_token_num(file_content) < max_token_limit: - self.sp_file_contents.append(file_content) - self.sp_file_index.append(index) - self.sp_file_tag.append(self.file_paths[index]) - else: - from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf - segments = breakdown_txt_to_satisfy_token_limit_for_pdf(file_content, self.get_token_num, max_token_limit) - for j, segment in enumerate(segments): - self.sp_file_contents.append(segment) - self.sp_file_index.append(index) - self.sp_file_tag.append(self.file_paths[index] + f".part-{j}.tex") - print('Segmentation: done') - - def merge_result(self): - self.file_result = ["" for _ in range(len(self.file_paths))] - for r, k in zip(self.sp_file_result, self.sp_file_index): - 
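-            # k is the index of the source file this fragment was split from; append results back in order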
self.file_result[k] += r - - def write_result(self): - manifest = [] - for path, res in zip(self.file_paths, self.file_result): - with open(path + '.polish.tex', 'w', encoding='utf8') as f: - manifest.append(path + '.polish.tex') - f.write(res) - return manifest - -def write_html(sp_file_contents, sp_file_result, chatbot, project_folder): - - # write html - try: - import shutil - from .crazy_utils import construct_html - from toolbox import gen_time_str - ch = construct_html() - orig = "" - trans = "" - final = [] - for c,r in zip(sp_file_contents, sp_file_result): - final.append(c) - final.append(r) - for i, k in enumerate(final): - if i%2==0: - orig = k - if i%2==1: - trans = k - ch.add_row(a=orig, b=trans) - create_report_file_name = f"{gen_time_str()}.trans.html" - ch.save_file(create_report_file_name) - shutil.copyfile(pj('./gpt_log/', create_report_file_name), pj(project_folder, create_report_file_name)) - promote_file_to_downloadzone(file=f'./gpt_log/{create_report_file_name}', chatbot=chatbot) - except: - from toolbox import trimmed_format_exc - print('writing html result failed:', trimmed_format_exc()) - -def Latex精细分解与转化(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, mode='proofread', switch_prompt=None, opts=[]): - import time, os, re - from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency - from .latex_utils import LatexPaperFileGroup, merge_tex_files, LatexPaperSplit, 寻找Latex主文件 - - # <-------- 寻找主tex文件 ----------> - maintex = 寻找Latex主文件(file_manifest, mode) - chatbot.append((f"定位主Latex文件", f'[Local Message] 分析结果:该项目的Latex主文件是{maintex}, 如果分析错误, 请立即终止程序, 删除或修改歧义文件, 然后重试。主程序即将开始, 请稍候。')) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - time.sleep(3) - - # <-------- 读取Latex文件, 将多文件tex工程融合为一个巨型tex ----------> - main_tex_basename = os.path.basename(maintex) - assert main_tex_basename.endswith('.tex') - main_tex_basename_bare = main_tex_basename[:-4] - may_exist_bbl = pj(project_folder, f'{main_tex_basename_bare}.bbl') - if os.path.exists(may_exist_bbl): - shutil.copyfile(may_exist_bbl, pj(project_folder, f'merge.bbl')) - shutil.copyfile(may_exist_bbl, pj(project_folder, f'merge_{mode}.bbl')) - shutil.copyfile(may_exist_bbl, pj(project_folder, f'merge_diff.bbl')) - - with open(maintex, 'r', encoding='utf-8', errors='replace') as f: - content = f.read() - merged_content = merge_tex_files(project_folder, content, mode) - - with open(project_folder + '/merge.tex', 'w', encoding='utf-8', errors='replace') as f: - f.write(merged_content) - - # <-------- 精细切分latex文件 ----------> - chatbot.append((f"Latex文件融合完成", f'[Local Message] 正在精细切分latex文件,这需要一段时间计算,文档越长耗时越长,请耐心等待。')) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - lps = LatexPaperSplit() - res = lps.split(merged_content, project_folder, opts) # 消耗时间的函数 - - # <-------- 拆分过长的latex片段 ----------> - pfg = LatexPaperFileGroup() - for index, r in enumerate(res): - pfg.file_paths.append('segment-' + str(index)) - pfg.file_contents.append(r) - - pfg.run_file_split(max_token_limit=1024) - n_split = len(pfg.sp_file_contents) - - # <-------- 根据需要切换prompt ----------> - inputs_array, sys_prompt_array = switch_prompt(pfg, mode) - inputs_show_user_array = [f"{mode} {f}" for f in pfg.sp_file_tag] - - if os.path.exists(pj(project_folder,'temp.pkl')): - - # <-------- 【仅调试】如果存在调试缓存文件,则跳过GPT请求环节 ----------> - pfg = objload(file=pj(project_folder,'temp.pkl')) - - else: - # <-------- gpt 多线程请求 ----------> - gpt_response_collection = 
yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency( - inputs_array=inputs_array, - inputs_show_user_array=inputs_show_user_array, - llm_kwargs=llm_kwargs, - chatbot=chatbot, - history_array=[[""] for _ in range(n_split)], - sys_prompt_array=sys_prompt_array, - # max_workers=5, # 并行任务数量限制, 最多同时执行5个, 其他的排队等待 - scroller_max_len = 40 - ) - - # <-------- 文本碎片重组为完整的tex片段 ----------> - pfg.sp_file_result = [] - for i_say, gpt_say, orig_content in zip(gpt_response_collection[0::2], gpt_response_collection[1::2], pfg.sp_file_contents): - pfg.sp_file_result.append(gpt_say) - pfg.merge_result() - - # <-------- 临时存储用于调试 ----------> - pfg.get_token_num = None - objdump(pfg, file=pj(project_folder,'temp.pkl')) - - write_html(pfg.sp_file_contents, pfg.sp_file_result, chatbot=chatbot, project_folder=project_folder) - - # <-------- 写出文件 ----------> - msg = f"当前大语言模型: {llm_kwargs['llm_model']},当前语言模型温度设定: {llm_kwargs['temperature']}。" - final_tex = lps.merge_result(pfg.file_result, mode, msg) - with open(project_folder + f'/merge_{mode}.tex', 'w', encoding='utf-8', errors='replace') as f: - if mode != 'translate_zh' or "binary" in final_tex: f.write(final_tex) - - - # <-------- 整理结果, 退出 ----------> - chatbot.append((f"完成了吗?", 'GPT结果已输出, 正在编译PDF')) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # <-------- 返回 ----------> - return project_folder + f'/merge_{mode}.tex' - - - -def remove_buggy_lines(file_path, log_path, tex_name, tex_name_pure, n_fix, work_folder_modified): - try: - with open(log_path, 'r', encoding='utf-8', errors='replace') as f: - log = f.read() - with open(file_path, 'r', encoding='utf-8', errors='replace') as f: - file_lines = f.readlines() - import re - buggy_lines = re.findall(tex_name+':([0-9]{1,5}):', log) - buggy_lines = [int(l) for l in buggy_lines] - buggy_lines = sorted(buggy_lines) - print("removing lines that has errors", buggy_lines) - file_lines.pop(buggy_lines[0]-1) - with open(pj(work_folder_modified, f"{tex_name_pure}_fix_{n_fix}.tex"), 'w', encoding='utf-8', errors='replace') as f: - f.writelines(file_lines) - return True, f"{tex_name_pure}_fix_{n_fix}", buggy_lines - except: - print("Fatal error occurred, but we cannot identify error, please download zip, read latex log, and compile manually.") - return False, -1, [-1] - -def compile_latex_with_timeout(command, cwd, timeout=60): - import subprocess - process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, cwd=cwd) - try: - stdout, stderr = process.communicate(timeout=timeout) - except subprocess.TimeoutExpired: - process.kill() - stdout, stderr = process.communicate() - print("Process timed out!") - return False - return True - -def 编译Latex(chatbot, history, main_file_original, main_file_modified, work_folder_original, work_folder_modified, work_folder, mode='default'): - import os, time - current_dir = os.getcwd() - n_fix = 1 - max_try = 32 - chatbot.append([f"正在编译PDF文档", f'编译已经开始。当前工作路径为{work_folder},如果程序停顿5分钟以上,请直接去该路径下取回翻译结果,或者重启之后再度尝试 ...']); yield from update_ui(chatbot=chatbot, history=history) - chatbot.append([f"正在编译PDF文档", '...']); yield from update_ui(chatbot=chatbot, history=history); time.sleep(1); chatbot[-1] = list(chatbot[-1]) # 刷新界面 - yield from update_ui_lastest_msg('编译已经开始...', chatbot, history) # 刷新Gradio前端界面 - - while True: - import os - - # https://stackoverflow.com/questions/738755/dont-make-me-manually-abort-a-latex-compile-when-theres-an-error - yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 
编译原始PDF ...', chatbot, history) # 刷新Gradio前端界面 - ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error {main_file_original}.tex', work_folder_original) - - yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 编译转化后的PDF ...', chatbot, history) # 刷新Gradio前端界面 - ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error {main_file_modified}.tex', work_folder_modified) - - if ok and os.path.exists(pj(work_folder_modified, f'{main_file_modified}.pdf')): - # 只有第二步成功,才能继续下面的步骤 - yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 编译BibTex ...', chatbot, history) # 刷新Gradio前端界面 - if not os.path.exists(pj(work_folder_original, f'{main_file_original}.bbl')): - ok = compile_latex_with_timeout(f'bibtex {main_file_original}.aux', work_folder_original) - if not os.path.exists(pj(work_folder_modified, f'{main_file_modified}.bbl')): - ok = compile_latex_with_timeout(f'bibtex {main_file_modified}.aux', work_folder_modified) - - yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 编译文献交叉引用 ...', chatbot, history) # 刷新Gradio前端界面 - ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error {main_file_original}.tex', work_folder_original) - ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error {main_file_modified}.tex', work_folder_modified) - ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error {main_file_original}.tex', work_folder_original) - ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error {main_file_modified}.tex', work_folder_modified) - - if mode!='translate_zh': - yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 使用latexdiff生成论文转化前后对比 ...', chatbot, history) # 刷新Gradio前端界面 - print( f'latexdiff --encoding=utf8 --append-safecmd=subfile {work_folder_original}/{main_file_original}.tex {work_folder_modified}/{main_file_modified}.tex --flatten > {work_folder}/merge_diff.tex') - ok = compile_latex_with_timeout(f'latexdiff --encoding=utf8 --append-safecmd=subfile {work_folder_original}/{main_file_original}.tex {work_folder_modified}/{main_file_modified}.tex --flatten > {work_folder}/merge_diff.tex') - - yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 正在编译对比PDF ...', chatbot, history) # 刷新Gradio前端界面 - ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error merge_diff.tex', work_folder) - ok = compile_latex_with_timeout(f'bibtex merge_diff.aux', work_folder) - ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error merge_diff.tex', work_folder) - ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error merge_diff.tex', work_folder) - - - # <---------- 检查结果 -----------> - results_ = "" - original_pdf_success = os.path.exists(pj(work_folder_original, f'{main_file_original}.pdf')) - modified_pdf_success = os.path.exists(pj(work_folder_modified, f'{main_file_modified}.pdf')) - diff_pdf_success = os.path.exists(pj(work_folder, f'merge_diff.pdf')) - results_ += f"原始PDF编译是否成功: {original_pdf_success};" - results_ += f"转化PDF编译是否成功: {modified_pdf_success};" - results_ += f"对比PDF编译是否成功: {diff_pdf_success};" - yield from update_ui_lastest_msg(f'第{n_fix}编译结束:
    {results_}...', chatbot, history) # 刷新Gradio前端界面 - - if diff_pdf_success: - result_pdf = pj(work_folder_modified, f'merge_diff.pdf') # get pdf path - promote_file_to_downloadzone(result_pdf, rename_file=None, chatbot=chatbot) # promote file to web UI - if modified_pdf_success: - yield from update_ui_lastest_msg(f'转化PDF编译已经成功, 即将退出 ...', chatbot, history) # 刷新Gradio前端界面 - result_pdf = pj(work_folder_modified, f'{main_file_modified}.pdf') # get pdf path - if os.path.exists(pj(work_folder, '..', 'translation')): - shutil.copyfile(result_pdf, pj(work_folder, '..', 'translation', 'translate_zh.pdf')) - promote_file_to_downloadzone(result_pdf, rename_file=None, chatbot=chatbot) # promote file to web UI - return True # 成功啦 - else: - if n_fix>=max_try: break - n_fix += 1 - can_retry, main_file_modified, buggy_lines = remove_buggy_lines( - file_path=pj(work_folder_modified, f'{main_file_modified}.tex'), - log_path=pj(work_folder_modified, f'{main_file_modified}.log'), - tex_name=f'{main_file_modified}.tex', - tex_name_pure=f'{main_file_modified}', - n_fix=n_fix, - work_folder_modified=work_folder_modified, - ) - yield from update_ui_lastest_msg(f'由于最为关键的转化PDF编译失败, 将根据报错信息修正tex源文件并重试, 当前报错的latex代码处于第{buggy_lines}行 ...', chatbot, history) # 刷新Gradio前端界面 - if not can_retry: break - - return False # 失败啦 - - - diff --git a/spaces/Loren/Streamlit_OCR_comparator/configs/textdet/maskrcnn/mask_rcnn_r50_fpn_160e_icdar2015.py b/spaces/Loren/Streamlit_OCR_comparator/configs/textdet/maskrcnn/mask_rcnn_r50_fpn_160e_icdar2015.py deleted file mode 100644 index 5feb0c61ff2738338527e1aceaa569051a655cf8..0000000000000000000000000000000000000000 --- a/spaces/Loren/Streamlit_OCR_comparator/configs/textdet/maskrcnn/mask_rcnn_r50_fpn_160e_icdar2015.py +++ /dev/null @@ -1,33 +0,0 @@ -_base_ = [ - '../../_base_/default_runtime.py', - '../../_base_/det_models/ocr_mask_rcnn_r50_fpn_ohem.py', - '../../_base_/schedules/schedule_sgd_160e.py', - '../../_base_/det_datasets/icdar2015.py', - '../../_base_/det_pipelines/maskrcnn_pipeline.py' -] - -train_list = {{_base_.train_list}} -test_list = {{_base_.test_list}} - -train_pipeline = {{_base_.train_pipeline}} -test_pipeline_icdar2015 = {{_base_.test_pipeline_icdar2015}} - -data = dict( - samples_per_gpu=8, - workers_per_gpu=4, - val_dataloader=dict(samples_per_gpu=1), - test_dataloader=dict(samples_per_gpu=1), - train=dict( - type='UniformConcatDataset', - datasets=train_list, - pipeline=train_pipeline), - val=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline_icdar2015), - test=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline_icdar2015)) - -evaluation = dict(interval=10, metric='hmean-iou') diff --git a/spaces/MLVKU/Human_Object_Interaction/hotr/engine/trainer.py b/spaces/MLVKU/Human_Object_Interaction/hotr/engine/trainer.py deleted file mode 100644 index 313b4f8f689735d0593e46ef154505fb40544c77..0000000000000000000000000000000000000000 --- a/spaces/MLVKU/Human_Object_Interaction/hotr/engine/trainer.py +++ /dev/null @@ -1,73 +0,0 @@ -# ------------------------------------------------------------------------ -# HOTR official code : engine/trainer.py -# Copyright (c) Kakao Brain, Inc. and its affiliates. All Rights Reserved -# ------------------------------------------------------------------------ -# Modified from DETR (https://github.com/facebookresearch/detr) -# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved -# ------------------------------------------------------------------------ -import math -import torch -import sys -import hotr.util.misc as utils -import hotr.util.logger as loggers -from hotr.util.ramp import * -from typing import Iterable -import wandb - -def train_one_epoch(model: torch.nn.Module, criterion: torch.nn.Module, - data_loader: Iterable, optimizer: torch.optim.Optimizer, - device: torch.device, epoch: int, max_epoch: int, ramp_up_epoch: int,rampdown_epoch: int,max_consis_coef: float=1.0,max_norm: float = 0,dataset_file: str = 'coco', log: bool = False): - model.train() - criterion.train() - metric_logger = loggers.MetricLogger(mode="train", delimiter=" ") - metric_logger.add_meter('lr', utils.SmoothedValue(window_size=1, fmt='{value:.6f}')) - space_fmt = str(len(str(max_epoch))) - header = 'Epoch [{start_epoch: >{fill}}/{end_epoch}]'.format(start_epoch=epoch+1, end_epoch=max_epoch, fill=space_fmt) - print_freq = int(len(data_loader)/5) - - if epoch<=rampdown_epoch: - consis_coef=sigmoid_rampup(epoch,ramp_up_epoch,max_consis_coef) - else: - consis_coef=cosine_rampdown(epoch-rampdown_epoch,max_epoch-rampdown_epoch,max_consis_coef) - - print(f"\n>>> Epoch #{(epoch+1)}") - for samples, targets in metric_logger.log_every(data_loader, print_freq, header): - samples = samples.to(device) - targets = [{k: v.to(device) for k, v in t.items()} for t in targets] - - outputs = model(samples) - loss_dict = criterion(outputs, targets, log) - #print(loss_dict) - weight_dict = criterion.weight_dict - - losses = sum(loss_dict[k] * weight_dict[k]*consis_coef if 'consistency' in k else loss_dict[k] * weight_dict[k] for k in loss_dict.keys() if k in weight_dict) - - # reduce losses over all GPUs for logging purposes - loss_dict_reduced = utils.reduce_dict(loss_dict) - loss_dict_reduced_unscaled = {f'{k}_unscaled': v - for k, v in loss_dict_reduced.items()} - loss_dict_reduced_scaled = {k: v * weight_dict[k]*consis_coef if 'consistency' in k else v * weight_dict[k] for k, v in loss_dict_reduced.items() if k in weight_dict} - losses_reduced_scaled = sum(loss_dict_reduced_scaled.values()) - loss_value = losses_reduced_scaled.item() - - - if not math.isfinite(loss_value): - print("Loss is {}, stopping training".format(loss_value)) - print(loss_dict_reduced) - sys.exit(1) - - optimizer.zero_grad() - losses.backward() - if max_norm > 0: - torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm) - optimizer.step() - - metric_logger.update(loss=loss_value, **loss_dict_reduced_scaled) - if "obj_class_error" in loss_dict: - metric_logger.update(obj_class_error=loss_dict_reduced['obj_class_error']) - metric_logger.update(lr=optimizer.param_groups[0]["lr"]) - # gather the stats from all processes - metric_logger.synchronize_between_processes() - if utils.get_rank() == 0 and log: wandb.log(loss_dict_reduced_scaled) - print("Averaged stats:", metric_logger) - return {k: meter.global_avg for k, meter in metric_logger.meters.items()} diff --git a/spaces/MattyWhite/ChatGPT-ImageCaptioner2/detic/__init__.py b/spaces/MattyWhite/ChatGPT-ImageCaptioner2/detic/__init__.py deleted file mode 100644 index 8ffba6afd9bf5e9848c891a855943ede73568c3b..0000000000000000000000000000000000000000 --- a/spaces/MattyWhite/ChatGPT-ImageCaptioner2/detic/__init__.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
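
The consistency-loss weight `consis_coef` computed at the top of `train_one_epoch` above follows a sigmoid ramp-up over the first `ramp_up_epoch` epochs and a cosine ramp-down after `rampdown_epoch`. The `hotr.util.ramp` helpers are not part of this diff, so the sketch below assumes the usual mean-teacher-style definitions scaled by a maximum coefficient; the constants and epoch values are illustrative only.

```python
import numpy as np

def sigmoid_rampup(current: float, rampup_length: float, max_value: float = 1.0) -> float:
    # Exponential ("sigmoid") ramp-up from 0 to max_value over rampup_length epochs.
    if rampup_length == 0:
        return max_value
    phase = 1.0 - np.clip(current, 0.0, rampup_length) / rampup_length
    return float(max_value * np.exp(-5.0 * phase * phase))

def cosine_rampdown(current: float, rampdown_length: float, max_value: float = 1.0) -> float:
    # Cosine decay from max_value back to 0 over rampdown_length epochs.
    current = np.clip(current, 0.0, rampdown_length)
    return float(max_value * 0.5 * (np.cos(np.pi * current / rampdown_length) + 1.0))

# Reproducing the schedule used in train_one_epoch with made-up epoch settings:
max_epoch, ramp_up_epoch, rampdown_epoch, max_consis_coef = 100, 10, 80, 1.0
for epoch in range(max_epoch):
    if epoch <= rampdown_epoch:
        consis_coef = sigmoid_rampup(epoch, ramp_up_epoch, max_consis_coef)
    else:
        consis_coef = cosine_rampdown(epoch - rampdown_epoch, max_epoch - rampdown_epoch, max_consis_coef)
```
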
-from .modeling.meta_arch import custom_rcnn -from .modeling.roi_heads import detic_roi_heads -from .modeling.roi_heads import res5_roi_heads -from .modeling.backbone import swintransformer -from .modeling.backbone import timm - - -from .data.datasets import lvis_v1 -from .data.datasets import imagenet -from .data.datasets import cc -from .data.datasets import objects365 -from .data.datasets import oid -from .data.datasets import coco_zeroshot - -try: - from .modeling.meta_arch import d2_deformable_detr -except: - pass \ No newline at end of file diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/datasets/stare.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/datasets/stare.py deleted file mode 100644 index 3f71b25488cc11a6b4d582ac52b5a24e1ad1cf8e..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/datasets/stare.py +++ /dev/null @@ -1,59 +0,0 @@ -# dataset settings -dataset_type = 'STAREDataset' -data_root = 'data/STARE' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -img_scale = (605, 700) -crop_size = (128, 128) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations'), - dict(type='Resize', img_scale=img_scale, ratio_range=(0.5, 2.0)), - dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75), - dict(type='RandomFlip', prob=0.5), - dict(type='PhotoMetricDistortion'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_semantic_seg']) -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=img_scale, - # img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75, 2.0], - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']) - ]) -] - -data = dict( - samples_per_gpu=4, - workers_per_gpu=4, - train=dict( - type='RepeatDataset', - times=40000, - dataset=dict( - type=dataset_type, - data_root=data_root, - img_dir='images/training', - ann_dir='annotations/training', - pipeline=train_pipeline)), - val=dict( - type=dataset_type, - data_root=data_root, - img_dir='images/validation', - ann_dir='annotations/validation', - pipeline=test_pipeline), - test=dict( - type=dataset_type, - data_root=data_root, - img_dir='images/validation', - ann_dir='annotations/validation', - pipeline=test_pipeline)) diff --git a/spaces/MetaWabbit/Auto-GPT/autogpt/speech/say.py b/spaces/MetaWabbit/Auto-GPT/autogpt/speech/say.py deleted file mode 100644 index 727983d12bf334205550a54bcd69a7a36824eda4..0000000000000000000000000000000000000000 --- a/spaces/MetaWabbit/Auto-GPT/autogpt/speech/say.py +++ /dev/null @@ -1,41 +0,0 @@ -""" Text to speech module """ -import threading -from threading import Semaphore - -from autogpt.config import Config -from autogpt.speech.brian import BrianSpeech -from autogpt.speech.eleven_labs import ElevenLabsSpeech -from autogpt.speech.gtts import GTTSVoice -from autogpt.speech.macos_tts import MacOSTTS - -CFG = Config() -DEFAULT_VOICE_ENGINE = GTTSVoice() -VOICE_ENGINE = None -if CFG.elevenlabs_api_key: - VOICE_ENGINE = ElevenLabsSpeech() -elif CFG.use_mac_os_tts == "True": - VOICE_ENGINE = MacOSTTS() -elif CFG.use_brian_tts == "True": - VOICE_ENGINE = BrianSpeech() -else: 
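-    # none of the optional TTS backends is configured, so fall back to the free gTTS engine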
- VOICE_ENGINE = GTTSVoice() - - -QUEUE_SEMAPHORE = Semaphore( - 1 -) # The amount of sounds to queue before blocking the main thread - - -def say_text(text: str, voice_index: int = 0) -> None: - """Speak the given text using the given voice index""" - - def speak() -> None: - success = VOICE_ENGINE.say(text, voice_index) - if not success: - DEFAULT_VOICE_ENGINE.say(text) - - QUEUE_SEMAPHORE.release() - - QUEUE_SEMAPHORE.acquire(True) - thread = threading.Thread(target=speak) - thread.start() diff --git a/spaces/MingGatsby/Grounding_DINO_demo/groundingdino/util/box_ops.py b/spaces/MingGatsby/Grounding_DINO_demo/groundingdino/util/box_ops.py deleted file mode 100644 index 781068d294e576954edb4bd07b6e0f30e4e1bcd9..0000000000000000000000000000000000000000 --- a/spaces/MingGatsby/Grounding_DINO_demo/groundingdino/util/box_ops.py +++ /dev/null @@ -1,140 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -""" -Utilities for bounding box manipulation and GIoU. -""" -import torch -from torchvision.ops.boxes import box_area - - -def box_cxcywh_to_xyxy(x): - x_c, y_c, w, h = x.unbind(-1) - b = [(x_c - 0.5 * w), (y_c - 0.5 * h), (x_c + 0.5 * w), (y_c + 0.5 * h)] - return torch.stack(b, dim=-1) - - -def box_xyxy_to_cxcywh(x): - x0, y0, x1, y1 = x.unbind(-1) - b = [(x0 + x1) / 2, (y0 + y1) / 2, (x1 - x0), (y1 - y0)] - return torch.stack(b, dim=-1) - - -# modified from torchvision to also return the union -def box_iou(boxes1, boxes2): - area1 = box_area(boxes1) - area2 = box_area(boxes2) - - # import ipdb; ipdb.set_trace() - lt = torch.max(boxes1[:, None, :2], boxes2[:, :2]) # [N,M,2] - rb = torch.min(boxes1[:, None, 2:], boxes2[:, 2:]) # [N,M,2] - - wh = (rb - lt).clamp(min=0) # [N,M,2] - inter = wh[:, :, 0] * wh[:, :, 1] # [N,M] - - union = area1[:, None] + area2 - inter - - iou = inter / (union + 1e-6) - return iou, union - - -def generalized_box_iou(boxes1, boxes2): - """ - Generalized IoU from https://giou.stanford.edu/ - - The boxes should be in [x0, y0, x1, y1] format - - Returns a [N, M] pairwise matrix, where N = len(boxes1) - and M = len(boxes2) - """ - # degenerate boxes gives inf / nan results - # so do an early check - assert (boxes1[:, 2:] >= boxes1[:, :2]).all() - assert (boxes2[:, 2:] >= boxes2[:, :2]).all() - # except: - # import ipdb; ipdb.set_trace() - iou, union = box_iou(boxes1, boxes2) - - lt = torch.min(boxes1[:, None, :2], boxes2[:, :2]) - rb = torch.max(boxes1[:, None, 2:], boxes2[:, 2:]) - - wh = (rb - lt).clamp(min=0) # [N,M,2] - area = wh[:, :, 0] * wh[:, :, 1] - - return iou - (area - union) / (area + 1e-6) - - -# modified from torchvision to also return the union -def box_iou_pairwise(boxes1, boxes2): - area1 = box_area(boxes1) - area2 = box_area(boxes2) - - lt = torch.max(boxes1[:, :2], boxes2[:, :2]) # [N,2] - rb = torch.min(boxes1[:, 2:], boxes2[:, 2:]) # [N,2] - - wh = (rb - lt).clamp(min=0) # [N,2] - inter = wh[:, 0] * wh[:, 1] # [N] - - union = area1 + area2 - inter - - iou = inter / union - return iou, union - - -def generalized_box_iou_pairwise(boxes1, boxes2): - """ - Generalized IoU from https://giou.stanford.edu/ - - Input: - - boxes1, boxes2: N,4 - Output: - - giou: N, 4 - """ - # degenerate boxes gives inf / nan results - # so do an early check - assert (boxes1[:, 2:] >= boxes1[:, :2]).all() - assert (boxes2[:, 2:] >= boxes2[:, :2]).all() - assert boxes1.shape == boxes2.shape - iou, union = box_iou_pairwise(boxes1, boxes2) # N, 4 - - lt = torch.min(boxes1[:, :2], boxes2[:, :2]) - rb = torch.max(boxes1[:, 2:], boxes2[:, 2:]) - - wh = 
(rb - lt).clamp(min=0) # [N,2] - area = wh[:, 0] * wh[:, 1] - - return iou - (area - union) / area - - -def masks_to_boxes(masks): - """Compute the bounding boxes around the provided masks - - The masks should be in format [N, H, W] where N is the number of masks, (H, W) are the spatial dimensions. - - Returns a [N, 4] tensors, with the boxes in xyxy format - """ - if masks.numel() == 0: - return torch.zeros((0, 4), device=masks.device) - - h, w = masks.shape[-2:] - - y = torch.arange(0, h, dtype=torch.float) - x = torch.arange(0, w, dtype=torch.float) - y, x = torch.meshgrid(y, x) - - x_mask = masks * x.unsqueeze(0) - x_max = x_mask.flatten(1).max(-1)[0] - x_min = x_mask.masked_fill(~(masks.bool()), 1e8).flatten(1).min(-1)[0] - - y_mask = masks * y.unsqueeze(0) - y_max = y_mask.flatten(1).max(-1)[0] - y_min = y_mask.masked_fill(~(masks.bool()), 1e8).flatten(1).min(-1)[0] - - return torch.stack([x_min, y_min, x_max, y_max], 1) - - -if __name__ == "__main__": - x = torch.rand(5, 4) - y = torch.rand(3, 4) - iou, union = box_iou(x, y) - import ipdb - - ipdb.set_trace() diff --git a/spaces/MohamedRabie26/Soil_Shear_Strength_Prediciton/README.md b/spaces/MohamedRabie26/Soil_Shear_Strength_Prediciton/README.md deleted file mode 100644 index b31f80d44cdcfff7ad316713f66a8e90eb7339e0..0000000000000000000000000000000000000000 --- a/spaces/MohamedRabie26/Soil_Shear_Strength_Prediciton/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Soil Shear Strength prediction tool -emoji: 🏆 -colorFrom: red -colorTo: blue -sdk: gradio -sdk_version: 3.45.2 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/MrTitanicus/rvc-models/infer_pack/transforms.py b/spaces/MrTitanicus/rvc-models/infer_pack/transforms.py deleted file mode 100644 index a11f799e023864ff7082c1f49c0cc18351a13b47..0000000000000000000000000000000000000000 --- a/spaces/MrTitanicus/rvc-models/infer_pack/transforms.py +++ /dev/null @@ -1,209 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = {"tails": tails, "tail_bound": tail_bound} - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum(inputs[..., None] >= bin_locations, dim=-1) - 1 - - -def unconstrained_rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails="linear", - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - 
min_derivative=DEFAULT_MIN_DERIVATIVE, -): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == "linear": - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError("{} tails are not implemented.".format(tails)) - - ( - outputs[inside_interval_mask], - logabsdet[inside_interval_mask], - ) = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, - right=tail_bound, - bottom=-tail_bound, - top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - ) - - return outputs, logabsdet - - -def rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0.0, - right=1.0, - bottom=0.0, - top=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError("Input to a transform is not within its domain") - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError("Minimal bin width too large for the number of bins") - if min_bin_height * num_bins > 1.0: - raise ValueError("Minimal bin height too large for the number of bins") - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode="constant", value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode="constant", value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (inputs - input_cumheights) * ( - input_derivatives + input_derivatives_plus_one - 2 * input_delta - ) 
+ input_heights * (input_delta - input_derivatives) - b = input_heights * input_derivatives - (inputs - input_cumheights) * ( - input_derivatives + input_derivatives_plus_one - 2 * input_delta - ) - c = -input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ( - (input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta - ) - derivative_numerator = input_delta.pow(2) * ( - input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2) - ) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * ( - input_delta * theta.pow(2) + input_derivatives * theta_one_minus_theta - ) - denominator = input_delta + ( - (input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta - ) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * ( - input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2) - ) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/MrVicente/RA-BART/data/relation_utils.py b/spaces/MrVicente/RA-BART/data/relation_utils.py deleted file mode 100644 index ada9e4080d2e9040f22f14e3dd747200bc16c745..0000000000000000000000000000000000000000 --- a/spaces/MrVicente/RA-BART/data/relation_utils.py +++ /dev/null @@ -1,53 +0,0 @@ - -############################# -# Imports -############################# - -# Python modules -from collections import deque -from ast import literal_eval - -# Remote modules -import torch - -# Local modules - -############################# -# Constants -############################# - -########################################################## -# Helper functions for Relations in dict format -########################################################## - -def clean_relations(word_relations): - new_relations = deque() - for r in word_relations: - rel = {} - for r_key, r_value in r.items(): - normal_k = literal_eval(r_key) - rel_d = {} - for r_d_key, r_d_value in r_value.items(): - normal_d_k = literal_eval(r_d_key) - rel_d[normal_d_k] = r_d_value - rel[normal_k] = rel_d - new_relations.append(rel) - list_new_relations = list(new_relations) - return list_new_relations - -########################################################## -# Helper functions for Relations in Matrix format -########################################################## - -def relation_binary_2d_to_1d(relations_binary_mask, dim=1): - relations_binary_mask = relations_binary_mask.sum(dim=dim) - relations_binary_mask[relations_binary_mask > 1] = 1 - return relations_binary_mask - -def tokens_with_relations(relations_binary_mask): - relations_binary_mask_dim1 = relations_binary_mask.sum(dim=0) - relations_binary_mask_dim2 = relations_binary_mask.sum(dim=1) - tokens_with_rels = relations_binary_mask_dim1 + relations_binary_mask_dim2 - tokens_with_rels[tokens_with_rels > 1] = 1 - mask_rels = torch.tensor(tokens_with_rels, dtype=torch.bool) - return mask_rels diff 
--git a/spaces/NATSpeech/PortaSpeech/data_gen/tts/wav_processors/__init__.py b/spaces/NATSpeech/PortaSpeech/data_gen/tts/wav_processors/__init__.py deleted file mode 100644 index 4be97b377dcb95a0e6bceb876ac0ce93c8290249..0000000000000000000000000000000000000000 --- a/spaces/NATSpeech/PortaSpeech/data_gen/tts/wav_processors/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from . import base_processor -from . import common_processors diff --git a/spaces/NCTCMumbai/NCTC/models/research/brain_coder/single_task/pg_train_test.py b/spaces/NCTCMumbai/NCTC/models/research/brain_coder/single_task/pg_train_test.py deleted file mode 100644 index 0a562e5331e638cab82bc8033bfa2c1fc355e960..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/research/brain_coder/single_task/pg_train_test.py +++ /dev/null @@ -1,87 +0,0 @@ -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -"""Tests for pg_train. - -These tests excersize code paths available through configuration options. -Training will be run for just a few steps with the goal being to check that -nothing crashes. -""" - -from absl import flags -import tensorflow as tf - -from single_task import defaults # brain coder -from single_task import run # brain coder - -FLAGS = flags.FLAGS - - -class TrainTest(tf.test.TestCase): - - def RunTrainingSteps(self, config_string, num_steps=10): - """Run a few training steps with the given config. - - Just check that nothing crashes. - - Args: - config_string: Config encoded in a string. See - $REPO_PATH/common/config_lib.py - num_steps: Number of training steps to run. Defaults to 10. - """ - config = defaults.default_config_with_updates(config_string) - FLAGS.master = '' - FLAGS.max_npe = num_steps * config.batch_size - FLAGS.summary_interval = 1 - FLAGS.logdir = tf.test.get_temp_dir() - FLAGS.config = config_string - tf.reset_default_graph() - run.main(None) - - def testVanillaPolicyGradient(self): - self.RunTrainingSteps( - 'env=c(task="reverse"),' - 'agent=c(algorithm="pg"),' - 'timestep_limit=90,batch_size=64') - - def testVanillaPolicyGradient_VariableLengthSequences(self): - self.RunTrainingSteps( - 'env=c(task="reverse"),' - 'agent=c(algorithm="pg",eos_token=False),' - 'timestep_limit=90,batch_size=64') - - def testVanillaActorCritic(self): - self.RunTrainingSteps( - 'env=c(task="reverse"),' - 'agent=c(algorithm="pg",ema_baseline_decay=0.0),' - 'timestep_limit=90,batch_size=64') - - def testPolicyGradientWithTopK(self): - self.RunTrainingSteps( - 'env=c(task="reverse"),' - 'agent=c(algorithm="pg",topk_loss_hparam=1.0,topk=10),' - 'timestep_limit=90,batch_size=64') - - def testVanillaActorCriticWithTopK(self): - self.RunTrainingSteps( - 'env=c(task="reverse"),' - 'agent=c(algorithm="pg",ema_baseline_decay=0.0,topk_loss_hparam=1.0,' - 'topk=10),' - 'timestep_limit=90,batch_size=64') - - def testPolicyGradientWithTopK_VariableLengthSequences(self): - self.RunTrainingSteps( - 'env=c(task="reverse"),' - 'agent=c(algorithm="pg",topk_loss_hparam=1.0,topk=10,eos_token=False),' - 'timestep_limit=90,batch_size=64') - - def testPolicyGradientWithImportanceSampling(self): - self.RunTrainingSteps( - 'env=c(task="reverse"),' - 'agent=c(algorithm="pg",alpha=0.5),' - 'timestep_limit=90,batch_size=64') - - -if __name__ == '__main__': - tf.test.main() diff --git a/spaces/Nee001/bing0/src/components/external-link.tsx b/spaces/Nee001/bing0/src/components/external-link.tsx deleted file mode 100644 index 
011265f364d5a64a770f4c7e9c65c5ade21d623a..0000000000000000000000000000000000000000 --- a/spaces/Nee001/bing0/src/components/external-link.tsx +++ /dev/null @@ -1,30 +0,0 @@ -export function ExternalLink({ - href, - children -}: { - href: string - children: React.ReactNode -}) { - return ( - - {children} - - - ) -} diff --git a/spaces/OAOA/DifFace/basicsr/models/realesrgan_model.py b/spaces/OAOA/DifFace/basicsr/models/realesrgan_model.py deleted file mode 100644 index c74b28fb1dc6a7f5c5ad3f7d8bb96c19c52ee92b..0000000000000000000000000000000000000000 --- a/spaces/OAOA/DifFace/basicsr/models/realesrgan_model.py +++ /dev/null @@ -1,267 +0,0 @@ -import numpy as np -import random -import torch -from collections import OrderedDict -from torch.nn import functional as F - -from basicsr.data.degradations import random_add_gaussian_noise_pt, random_add_poisson_noise_pt -from basicsr.data.transforms import paired_random_crop -from basicsr.losses.loss_util import get_refined_artifact_map -from basicsr.models.srgan_model import SRGANModel -from basicsr.utils import DiffJPEG, USMSharp -from basicsr.utils.img_process_util import filter2D -from basicsr.utils.registry import MODEL_REGISTRY - - -@MODEL_REGISTRY.register(suffix='basicsr') -class RealESRGANModel(SRGANModel): - """RealESRGAN Model for Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data. - - It mainly performs: - 1. randomly synthesize LQ images in GPU tensors - 2. optimize the networks with GAN training. - """ - - def __init__(self, opt): - super(RealESRGANModel, self).__init__(opt) - self.jpeger = DiffJPEG(differentiable=False).cuda() # simulate JPEG compression artifacts - self.usm_sharpener = USMSharp().cuda() # do usm sharpening - self.queue_size = opt.get('queue_size', 180) - - @torch.no_grad() - def _dequeue_and_enqueue(self): - """It is the training pair pool for increasing the diversity in a batch. - - Batch processing limits the diversity of synthetic degradations in a batch. For example, samples in a - batch could not have different resize scaling factors. Therefore, we employ this training pair pool - to increase the degradation diversity in a batch. - """ - # initialize - b, c, h, w = self.lq.size() - if not hasattr(self, 'queue_lr'): - assert self.queue_size % b == 0, f'queue size {self.queue_size} should be divisible by batch size {b}' - self.queue_lr = torch.zeros(self.queue_size, c, h, w).cuda() - _, c, h, w = self.gt.size() - self.queue_gt = torch.zeros(self.queue_size, c, h, w).cuda() - self.queue_ptr = 0 - if self.queue_ptr == self.queue_size: # the pool is full - # do dequeue and enqueue - # shuffle - idx = torch.randperm(self.queue_size) - self.queue_lr = self.queue_lr[idx] - self.queue_gt = self.queue_gt[idx] - # get first b samples - lq_dequeue = self.queue_lr[0:b, :, :, :].clone() - gt_dequeue = self.queue_gt[0:b, :, :, :].clone() - # update the queue - self.queue_lr[0:b, :, :, :] = self.lq.clone() - self.queue_gt[0:b, :, :, :] = self.gt.clone() - - self.lq = lq_dequeue - self.gt = gt_dequeue - else: - # only do enqueue - self.queue_lr[self.queue_ptr:self.queue_ptr + b, :, :, :] = self.lq.clone() - self.queue_gt[self.queue_ptr:self.queue_ptr + b, :, :, :] = self.gt.clone() - self.queue_ptr = self.queue_ptr + b - - @torch.no_grad() - def feed_data(self, data): - """Accept data from dataloader, and then add two-order degradations to obtain LQ images. 
- """ - if self.is_train and self.opt.get('high_order_degradation', True): - # training data synthesis - self.gt = data['gt'].to(self.device) - self.gt_usm = self.usm_sharpener(self.gt) - - self.kernel1 = data['kernel1'].to(self.device) - self.kernel2 = data['kernel2'].to(self.device) - self.sinc_kernel = data['sinc_kernel'].to(self.device) - - ori_h, ori_w = self.gt.size()[2:4] - - # ----------------------- The first degradation process ----------------------- # - # blur - out = filter2D(self.gt_usm, self.kernel1) - # random resize - updown_type = random.choices(['up', 'down', 'keep'], self.opt['resize_prob'])[0] - if updown_type == 'up': - scale = np.random.uniform(1, self.opt['resize_range'][1]) - elif updown_type == 'down': - scale = np.random.uniform(self.opt['resize_range'][0], 1) - else: - scale = 1 - mode = random.choice(['area', 'bilinear', 'bicubic']) - out = F.interpolate(out, scale_factor=scale, mode=mode) - # add noise - gray_noise_prob = self.opt['gray_noise_prob'] - if np.random.uniform() < self.opt['gaussian_noise_prob']: - out = random_add_gaussian_noise_pt( - out, sigma_range=self.opt['noise_range'], clip=True, rounds=False, gray_prob=gray_noise_prob) - else: - out = random_add_poisson_noise_pt( - out, - scale_range=self.opt['poisson_scale_range'], - gray_prob=gray_noise_prob, - clip=True, - rounds=False) - # JPEG compression - jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.opt['jpeg_range']) - out = torch.clamp(out, 0, 1) # clamp to [0, 1], otherwise JPEGer will result in unpleasant artifacts - out = self.jpeger(out, quality=jpeg_p) - - # ----------------------- The second degradation process ----------------------- # - # blur - if np.random.uniform() < self.opt['second_blur_prob']: - out = filter2D(out, self.kernel2) - # random resize - updown_type = random.choices(['up', 'down', 'keep'], self.opt['resize_prob2'])[0] - if updown_type == 'up': - scale = np.random.uniform(1, self.opt['resize_range2'][1]) - elif updown_type == 'down': - scale = np.random.uniform(self.opt['resize_range2'][0], 1) - else: - scale = 1 - mode = random.choice(['area', 'bilinear', 'bicubic']) - out = F.interpolate( - out, size=(int(ori_h / self.opt['scale'] * scale), int(ori_w / self.opt['scale'] * scale)), mode=mode) - # add noise - gray_noise_prob = self.opt['gray_noise_prob2'] - if np.random.uniform() < self.opt['gaussian_noise_prob2']: - out = random_add_gaussian_noise_pt( - out, sigma_range=self.opt['noise_range2'], clip=True, rounds=False, gray_prob=gray_noise_prob) - else: - out = random_add_poisson_noise_pt( - out, - scale_range=self.opt['poisson_scale_range2'], - gray_prob=gray_noise_prob, - clip=True, - rounds=False) - - # JPEG compression + the final sinc filter - # We also need to resize images to desired sizes. We group [resize back + sinc filter] together - # as one operation. - # We consider two orders: - # 1. [resize back + sinc filter] + JPEG compression - # 2. JPEG compression + [resize back + sinc filter] - # Empirically, we find other combinations (sinc + JPEG + Resize) will introduce twisted lines. 
- if np.random.uniform() < 0.5: - # resize back + the final sinc filter - mode = random.choice(['area', 'bilinear', 'bicubic']) - out = F.interpolate(out, size=(ori_h // self.opt['scale'], ori_w // self.opt['scale']), mode=mode) - out = filter2D(out, self.sinc_kernel) - # JPEG compression - jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.opt['jpeg_range2']) - out = torch.clamp(out, 0, 1) - out = self.jpeger(out, quality=jpeg_p) - else: - # JPEG compression - jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.opt['jpeg_range2']) - out = torch.clamp(out, 0, 1) - out = self.jpeger(out, quality=jpeg_p) - # resize back + the final sinc filter - mode = random.choice(['area', 'bilinear', 'bicubic']) - out = F.interpolate(out, size=(ori_h // self.opt['scale'], ori_w // self.opt['scale']), mode=mode) - out = filter2D(out, self.sinc_kernel) - - # clamp and round - self.lq = torch.clamp((out * 255.0).round(), 0, 255) / 255. - - # random crop - gt_size = self.opt['gt_size'] - (self.gt, self.gt_usm), self.lq = paired_random_crop([self.gt, self.gt_usm], self.lq, gt_size, - self.opt['scale']) - - # training pair pool - self._dequeue_and_enqueue() - # sharpen self.gt again, as we have changed the self.gt with self._dequeue_and_enqueue - self.gt_usm = self.usm_sharpener(self.gt) - self.lq = self.lq.contiguous() # for the warning: grad and param do not obey the gradient layout contract - else: - # for paired training or validation - self.lq = data['lq'].to(self.device) - if 'gt' in data: - self.gt = data['gt'].to(self.device) - self.gt_usm = self.usm_sharpener(self.gt) - - def nondist_validation(self, dataloader, current_iter, tb_logger, save_img): - # do not use the synthetic process during validation - self.is_train = False - super(RealESRGANModel, self).nondist_validation(dataloader, current_iter, tb_logger, save_img) - self.is_train = True - - def optimize_parameters(self, current_iter): - # usm sharpening - l1_gt = self.gt_usm - percep_gt = self.gt_usm - gan_gt = self.gt_usm - if self.opt['l1_gt_usm'] is False: - l1_gt = self.gt - if self.opt['percep_gt_usm'] is False: - percep_gt = self.gt - if self.opt['gan_gt_usm'] is False: - gan_gt = self.gt - - # optimize net_g - for p in self.net_d.parameters(): - p.requires_grad = False - - self.optimizer_g.zero_grad() - self.output = self.net_g(self.lq) - if self.cri_ldl: - self.output_ema = self.net_g_ema(self.lq) - - l_g_total = 0 - loss_dict = OrderedDict() - if (current_iter % self.net_d_iters == 0 and current_iter > self.net_d_init_iters): - # pixel loss - if self.cri_pix: - l_g_pix = self.cri_pix(self.output, l1_gt) - l_g_total += l_g_pix - loss_dict['l_g_pix'] = l_g_pix - if self.cri_ldl: - pixel_weight = get_refined_artifact_map(self.gt, self.output, self.output_ema, 7) - l_g_ldl = self.cri_ldl(torch.mul(pixel_weight, self.output), torch.mul(pixel_weight, self.gt)) - l_g_total += l_g_ldl - loss_dict['l_g_ldl'] = l_g_ldl - # perceptual loss - if self.cri_perceptual: - l_g_percep, l_g_style = self.cri_perceptual(self.output, percep_gt) - if l_g_percep is not None: - l_g_total += l_g_percep - loss_dict['l_g_percep'] = l_g_percep - if l_g_style is not None: - l_g_total += l_g_style - loss_dict['l_g_style'] = l_g_style - # gan loss - fake_g_pred = self.net_d(self.output) - l_g_gan = self.cri_gan(fake_g_pred, True, is_disc=False) - l_g_total += l_g_gan - loss_dict['l_g_gan'] = l_g_gan - - l_g_total.backward() - self.optimizer_g.step() - - # optimize net_d - for p in self.net_d.parameters(): - p.requires_grad = True - - self.optimizer_d.zero_grad() - # 
real - real_d_pred = self.net_d(gan_gt) - l_d_real = self.cri_gan(real_d_pred, True, is_disc=True) - loss_dict['l_d_real'] = l_d_real - loss_dict['out_d_real'] = torch.mean(real_d_pred.detach()) - l_d_real.backward() - # fake - fake_d_pred = self.net_d(self.output.detach().clone()) # clone for pt1.9 - l_d_fake = self.cri_gan(fake_d_pred, False, is_disc=True) - loss_dict['l_d_fake'] = l_d_fake - loss_dict['out_d_fake'] = torch.mean(fake_d_pred.detach()) - l_d_fake.backward() - self.optimizer_d.step() - - if self.ema_decay > 0: - self.model_ema(decay=self.ema_decay) - - self.log_dict = self.reduce_loss_dict(loss_dict) diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/m2m_100/tokenizers/seg_ko.sh b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/m2m_100/tokenizers/seg_ko.sh deleted file mode 100644 index c523d92634d9b61b97bbcdbfd17dfc33465bfc09..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/m2m_100/tokenizers/seg_ko.sh +++ /dev/null @@ -1,12 +0,0 @@ -#!/usr/bin/env bash -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -SCRIPT=`realpath $0` -MECAB=`dirname $SCRIPT`/thirdparty/mecab-0.996-ko-0.9.2 - -export PATH=$PATH:"$MECAB/bin":"$MECAB/lib" -export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:"$MECAB/lib" - -cat - | mecab -O wakati diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/wav2vec/unsupervised/scripts/filter_lexicon.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/wav2vec/unsupervised/scripts/filter_lexicon.py deleted file mode 100644 index 5bf3e51e7a50ac3f07cc41739198cde946dc79aa..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/wav2vec/unsupervised/scripts/filter_lexicon.py +++ /dev/null @@ -1,40 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import sys - -from fairseq.data import Dictionary - - -def get_parser(): - parser = argparse.ArgumentParser( - description="filters a lexicon given a unit dictionary" - ) - parser.add_argument("-d", "--unit-dict", help="unit dictionary", required=True) - return parser - - -def main(): - parser = get_parser() - args = parser.parse_args() - - d = Dictionary.load(args.unit_dict) - symbols = set(d.symbols) - - for line in sys.stdin: - items = line.rstrip().split() - skip = len(items) < 2 - for x in items[1:]: - if x not in symbols: - skip = True - break - if not skip: - print(line, end="") - - -if __name__ == "__main__": - main() diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/audio/text_to_speech_dataset.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/audio/text_to_speech_dataset.py deleted file mode 100644 index abfcb2be4028889acd72c6f40d4c832e48cff344..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/audio/text_to_speech_dataset.py +++ /dev/null @@ -1,215 +0,0 @@ -# Copyright (c) 2017-present, Facebook, Inc. -# All rights reserved. -# -# This source code is licensed under the license found in the LICENSE file in -# the root directory of this source tree. 
An additional grant of patent rights -# can be found in the PATENTS file in the same directory.abs - -from pathlib import Path -from typing import List, Dict, Optional, Any -from dataclasses import dataclass - -import numpy as np -import torch - -from fairseq.data.audio.speech_to_text_dataset import ( - SpeechToTextDataset, SpeechToTextDatasetCreator, S2TDataConfig, - _collate_frames, get_features_or_waveform -) -from fairseq.data import Dictionary, data_utils as fairseq_data_utils - - -@dataclass -class TextToSpeechDatasetItem(object): - index: int - source: torch.Tensor - target: Optional[torch.Tensor] = None - speaker_id: Optional[int] = None - duration: Optional[torch.Tensor] = None - pitch: Optional[torch.Tensor] = None - energy: Optional[torch.Tensor] = None - - -class TextToSpeechDataset(SpeechToTextDataset): - def __init__( - self, - split: str, - is_train_split: bool, - cfg: S2TDataConfig, - audio_paths: List[str], - n_frames: List[int], - src_texts: Optional[List[str]] = None, - tgt_texts: Optional[List[str]] = None, - speakers: Optional[List[str]] = None, - src_langs: Optional[List[str]] = None, - tgt_langs: Optional[List[str]] = None, - ids: Optional[List[str]] = None, - tgt_dict: Optional[Dictionary] = None, - pre_tokenizer=None, - bpe_tokenizer=None, - n_frames_per_step=1, - speaker_to_id=None, - durations: Optional[List[List[int]]] = None, - pitches: Optional[List[str]] = None, - energies: Optional[List[str]] = None - ): - super(TextToSpeechDataset, self).__init__( - split, is_train_split, cfg, audio_paths, n_frames, - src_texts=src_texts, tgt_texts=tgt_texts, speakers=speakers, - src_langs=src_langs, tgt_langs=tgt_langs, ids=ids, - tgt_dict=tgt_dict, pre_tokenizer=pre_tokenizer, - bpe_tokenizer=bpe_tokenizer, n_frames_per_step=n_frames_per_step, - speaker_to_id=speaker_to_id - ) - self.durations = durations - self.pitches = pitches - self.energies = energies - - def __getitem__(self, index: int) -> TextToSpeechDatasetItem: - s2t_item = super().__getitem__(index) - - duration, pitch, energy = None, None, None - if self.durations is not None: - duration = torch.tensor( - self.durations[index] + [0], dtype=torch.long # pad 0 for EOS - ) - if self.pitches is not None: - pitch = get_features_or_waveform(self.pitches[index]) - pitch = torch.from_numpy( - np.concatenate((pitch, [0])) # pad 0 for EOS - ).float() - if self.energies is not None: - energy = get_features_or_waveform(self.energies[index]) - energy = torch.from_numpy( - np.concatenate((energy, [0])) # pad 0 for EOS - ).float() - return TextToSpeechDatasetItem( - index=index, source=s2t_item.source, target=s2t_item.target, - speaker_id=s2t_item.speaker_id, duration=duration, pitch=pitch, - energy=energy - ) - - def collater(self, samples: List[TextToSpeechDatasetItem]) -> Dict[str, Any]: - if len(samples) == 0: - return {} - - src_lengths, order = torch.tensor( - [s.target.shape[0] for s in samples], dtype=torch.long - ).sort(descending=True) - id_ = torch.tensor([s.index for s in samples], - dtype=torch.long).index_select(0, order) - feat = _collate_frames( - [s.source for s in samples], self.cfg.use_audio_input - ).index_select(0, order) - target_lengths = torch.tensor( - [s.source.shape[0] for s in samples], dtype=torch.long - ).index_select(0, order) - - src_tokens = fairseq_data_utils.collate_tokens( - [s.target for s in samples], - self.tgt_dict.pad(), - self.tgt_dict.eos(), - left_pad=False, - move_eos_to_beginning=False, - ).index_select(0, order) - - speaker = None - if self.speaker_to_id is not None: - speaker = 
torch.tensor( - [s.speaker_id for s in samples], dtype=torch.long - ).index_select(0, order).view(-1, 1) - - bsz, _, d = feat.size() - prev_output_tokens = torch.cat( - (feat.new_zeros((bsz, 1, d)), feat[:, :-1, :]), dim=1 - ) - - durations, pitches, energies = None, None, None - if self.durations is not None: - durations = fairseq_data_utils.collate_tokens( - [s.duration for s in samples], 0 - ).index_select(0, order) - assert src_tokens.shape[1] == durations.shape[1] - if self.pitches is not None: - pitches = _collate_frames([s.pitch for s in samples], True) - pitches = pitches.index_select(0, order) - assert src_tokens.shape[1] == pitches.shape[1] - if self.energies is not None: - energies = _collate_frames([s.energy for s in samples], True) - energies = energies.index_select(0, order) - assert src_tokens.shape[1] == energies.shape[1] - src_texts = [self.tgt_dict.string(samples[i].target) for i in order] - - return { - "id": id_, - "net_input": { - "src_tokens": src_tokens, - "src_lengths": src_lengths, - "prev_output_tokens": prev_output_tokens, - }, - "speaker": speaker, - "target": feat, - "durations": durations, - "pitches": pitches, - "energies": energies, - "target_lengths": target_lengths, - "ntokens": sum(target_lengths).item(), - "nsentences": len(samples), - "src_texts": src_texts, - } - - -class TextToSpeechDatasetCreator(SpeechToTextDatasetCreator): - KEY_DURATION = "duration" - KEY_PITCH = "pitch" - KEY_ENERGY = "energy" - - @classmethod - def _from_list( - cls, - split_name: str, - is_train_split, - samples: List[Dict], - cfg: S2TDataConfig, - tgt_dict, - pre_tokenizer, - bpe_tokenizer, - n_frames_per_step, - speaker_to_id - ) -> TextToSpeechDataset: - audio_root = Path(cfg.audio_root) - ids = [s[cls.KEY_ID] for s in samples] - audio_paths = [(audio_root / s[cls.KEY_AUDIO]).as_posix() for s in samples] - n_frames = [int(s[cls.KEY_N_FRAMES]) for s in samples] - tgt_texts = [s[cls.KEY_TGT_TEXT] for s in samples] - src_texts = [s.get(cls.KEY_SRC_TEXT, cls.DEFAULT_SRC_TEXT) for s in samples] - speakers = [s.get(cls.KEY_SPEAKER, cls.DEFAULT_SPEAKER) for s in samples] - src_langs = [s.get(cls.KEY_SRC_LANG, cls.DEFAULT_LANG) for s in samples] - tgt_langs = [s.get(cls.KEY_TGT_LANG, cls.DEFAULT_LANG) for s in samples] - - durations = [s.get(cls.KEY_DURATION, None) for s in samples] - durations = [ - None if dd is None else [int(d) for d in dd.split(" ")] - for dd in durations - ] - durations = None if any(dd is None for dd in durations) else durations - - pitches = [s.get(cls.KEY_PITCH, None) for s in samples] - pitches = [ - None if pp is None else (audio_root / pp).as_posix() - for pp in pitches - ] - pitches = None if any(pp is None for pp in pitches) else pitches - - energies = [s.get(cls.KEY_ENERGY, None) for s in samples] - energies = [ - None if ee is None else (audio_root / ee).as_posix() - for ee in energies] - energies = None if any(ee is None for ee in energies) else energies - - return TextToSpeechDataset( - split_name, is_train_split, cfg, audio_paths, n_frames, - src_texts, tgt_texts, speakers, src_langs, tgt_langs, ids, tgt_dict, - pre_tokenizer, bpe_tokenizer, n_frames_per_step, speaker_to_id, - durations, pitches, energies - ) diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/tests/test_lm_context_window.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/tests/test_lm_context_window.py deleted file mode 100644 index 7415e86abdf8ddc2d797092bf98f7a1331e038d6..0000000000000000000000000000000000000000 --- 
a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/tests/test_lm_context_window.py +++ /dev/null @@ -1,51 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import unittest - -import torch -from fairseq.data import MonolingualDataset -from fairseq.tasks.language_modeling import LanguageModelingTask, LanguageModelingConfig -from tests import utils as test_utils - - -class TestLMContextWindow(unittest.TestCase): - - def test_eval_dataloader(self): - dictionary = test_utils.dummy_dictionary(10) - assert len(dictionary) == 14 # 4 extra special symbols - assert dictionary.pad() == 1 - - dataset = test_utils.TestDataset([ - torch.tensor([4, 5, 6, 7], dtype=torch.long), - torch.tensor([8, 9, 10, 11], dtype=torch.long), - torch.tensor([12, 13], dtype=torch.long), - ]) - dataset = MonolingualDataset(dataset, sizes=[4, 4, 2], src_vocab=dictionary) - - config = LanguageModelingConfig(tokens_per_sample=4) - task = LanguageModelingTask(config, dictionary) - - eval_dataloader = task.eval_lm_dataloader( - dataset=dataset, - batch_size=1, - context_window=2, - ) - - batch = next(eval_dataloader) - assert batch["net_input"]["src_tokens"][0].tolist() == [4, 5, 6, 7, 1, 1] - assert batch["target"][0].tolist() == [4, 5, 6, 7, 1, 1] - - batch = next(eval_dataloader) - assert batch["net_input"]["src_tokens"][0].tolist() == [6, 7, 8, 9, 10, 11] - assert batch["target"][0].tolist() == [1, 1, 8, 9, 10, 11] - - batch = next(eval_dataloader) - assert batch["net_input"]["src_tokens"][0].tolist() == [10, 11, 12, 13] - assert batch["target"][0].tolist() == [1, 1, 12, 13] - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/OFA-Sys/OFA-vqa/run_scripts/refcoco/train_refcocoplus.sh b/spaces/OFA-Sys/OFA-vqa/run_scripts/refcoco/train_refcocoplus.sh deleted file mode 100644 index 24f6d705332c1568cd873171c7246c890b48d5ef..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/run_scripts/refcoco/train_refcocoplus.sh +++ /dev/null @@ -1,97 +0,0 @@ -#!/usr/bin/env - -log_dir=./refcocoplus_logs -save_dir=./refcocoplus_checkpoints -mkdir -p $log_dir $save_dir - -bpe_dir=../../utils/BPE -user_dir=../../ofa_module - -data_dir=../../dataset/refcocoplus_data -data=${data_dir}/refcocoplus_train.tsv,${data_dir}/refcocoplus_val.tsv -restore_file=../../checkpoints/ofa_large.pt -selected_cols=0,4,2,3 - -task=refcoco -arch=ofa_large -criterion=ajust_label_smoothed_cross_entropy -label_smoothing=0.1 -lr=3e-5 -max_epoch=5 -warmup_ratio=0.06 -batch_size=4 -update_freq=8 -resnet_drop_path_rate=0.0 -encoder_drop_path_rate=0.2 -decoder_drop_path_rate=0.2 -dropout=0.1 -attention_dropout=0.0 -max_src_length=80 -max_tgt_length=20 -num_bins=1000 -patch_image_size=512 - -for max_epoch in {10,}; do - echo "max_epoch "${max_epoch} - for lr in {3e-5,}; do - echo "lr "${lr} - for patch_image_size in {512,}; do - echo "patch_image_size "${patch_image_size} - - log_file=${log_dir}/${max_epoch}"_"${lr}"_"${patch_image_size}".log" - save_path=${save_dir}/${max_epoch}"_"${lr}"_"${patch_image_size} - mkdir -p $save_path - - CUDA_VISIBLE_DEVICES=0,1,2,3 python3 ../../train.py \ - $data \ - --selected-cols=${selected_cols} \ - --bpe-dir=${bpe_dir} \ - --user-dir=${user_dir} \ - --restore-file=${restore_file} \ - --reset-optimizer --reset-dataloader --reset-meters \ - --save-dir=${save_path} \ - --task=${task} \ - --arch=${arch} \ - --criterion=${criterion} \ - --label-smoothing=${label_smoothing} \ - 
--batch-size=${batch_size} \ - --update-freq=${update_freq} \ - --encoder-normalize-before \ - --decoder-normalize-before \ - --share-decoder-input-output-embed \ - --share-all-embeddings \ - --layernorm-embedding \ - --patch-layernorm-embedding \ - --code-layernorm-embedding \ - --resnet-drop-path-rate=${resnet_drop_path_rate} \ - --encoder-drop-path-rate=${encoder_drop_path_rate} \ - --decoder-drop-path-rate=${decoder_drop_path_rate} \ - --dropout=${dropout} \ - --attention-dropout=${attention_dropout} \ - --weight-decay=0.01 --optimizer=adam --adam-betas="(0.9,0.999)" --adam-eps=1e-08 --clip-norm=1.0 \ - --lr-scheduler=polynomial_decay --lr=${lr} \ - --max-epoch=${max_epoch} --warmup-ratio=${warmup_ratio} \ - --log-format=simple --log-interval=10 \ - --fixed-validation-seed=7 \ - --no-epoch-checkpoints --keep-best-checkpoints=1 \ - --save-interval=1 --validate-interval=1 \ - --save-interval-updates=500 --validate-interval-updates=500 \ - --eval-acc \ - --eval-args='{"beam":5,"min_len":4,"max_len_a":0,"max_len_b":4}' \ - --best-checkpoint-metric=score --maximize-best-checkpoint-metric \ - --max-src-length=${max_src_length} \ - --max-tgt-length=${max_tgt_length} \ - --find-unused-parameters \ - --add-type-embedding \ - --scale-attn \ - --scale-fc \ - --scale-heads \ - --disable-entangle \ - --num-bins=${num_bins} \ - --patch-image-size=${patch_image_size} \ - --fp16 \ - --fp16-scale-window=512 \ - --num-workers=0 >> ${log_file} 2>&1 - done - done -done \ No newline at end of file diff --git a/spaces/OlaWod/FreeVC/modules.py b/spaces/OlaWod/FreeVC/modules.py deleted file mode 100644 index 52ee14e41a5b6d67d875d1b694aecd2a51244897..0000000000000000000000000000000000000000 --- a/spaces/OlaWod/FreeVC/modules.py +++ /dev/null @@ -1,342 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -import commons -from commons import init_weights, get_padding - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." 
- - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential( - nn.ReLU(), - nn.Dropout(p_dropout)) - for _ in range(n_layers-1): - self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size ** i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size, - groups=channels, dilation=dilation, padding=padding - )) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0): - super(WN, self).__init__() - assert(kernel_size % 2 == 1) - self.hidden_channels =hidden_channels - self.kernel_size = kernel_size, - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = dilation_rate ** i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) 
- - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply( - x_in, - g_l, - n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:,:self.hidden_channels,:] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:,self.hidden_channels:,:] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = 
torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels,1)) - self.logs = nn.Parameter(torch.zeros(channels,1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1,2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/modeling/backbone/backbone.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/modeling/backbone/backbone.py deleted file mode 100644 index 369fb884930c5dd82f94024c45303dafaab14d66..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/modeling/backbone/backbone.py +++ /dev/null @@ -1,53 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from abc import ABCMeta, abstractmethod -import torch.nn as nn - -from detectron2.layers import ShapeSpec - -__all__ = ["Backbone"] - - -class Backbone(nn.Module, metaclass=ABCMeta): - """ - Abstract base class for network backbones. - """ - - def __init__(self): - """ - The `__init__` method of any subclass can specify its own set of arguments. - """ - super().__init__() - - @abstractmethod - def forward(self): - """ - Subclasses must override this method, but adhere to the same return type. - - Returns: - dict[str->Tensor]: mapping from feature name (e.g., "res2") to tensor - """ - pass - - @property - def size_divisibility(self) -> int: - """ - Some backbones require the input height and width to be divisible by a - specific integer. This is typically true for encoder / decoder type networks - with lateral connection (e.g., FPN) for which feature maps need to match - dimension in the "bottom up" and "top down" paths. Set to 0 if no specific - input size divisibility is required. 
- """ - return 0 - - def output_shape(self): - """ - Returns: - dict[str->ShapeSpec] - """ - # this is a backward-compatible default - return { - name: ShapeSpec( - channels=self._out_feature_channels[name], stride=self._out_feature_strides[name] - ) - for name in self._out_features - } diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/models/ade20k/segm_lib/nn/parallel/data_parallel.py b/spaces/OpenGVLab/InternGPT/third-party/lama/models/ade20k/segm_lib/nn/parallel/data_parallel.py deleted file mode 100644 index 376fc038919aa2a5bd696141e7bb6025d4981306..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/third-party/lama/models/ade20k/segm_lib/nn/parallel/data_parallel.py +++ /dev/null @@ -1,112 +0,0 @@ -# -*- coding: utf8 -*- - -import torch.cuda as cuda -import torch.nn as nn -import torch -import collections -from torch.nn.parallel._functions import Gather - - -__all__ = ['UserScatteredDataParallel', 'user_scattered_collate', 'async_copy_to'] - - -def async_copy_to(obj, dev, main_stream=None): - if torch.is_tensor(obj): - v = obj.cuda(dev, non_blocking=True) - if main_stream is not None: - v.data.record_stream(main_stream) - return v - elif isinstance(obj, collections.Mapping): - return {k: async_copy_to(o, dev, main_stream) for k, o in obj.items()} - elif isinstance(obj, collections.Sequence): - return [async_copy_to(o, dev, main_stream) for o in obj] - else: - return obj - - -def dict_gather(outputs, target_device, dim=0): - """ - Gathers variables from different GPUs on a specified device - (-1 means the CPU), with dictionary support. - """ - def gather_map(outputs): - out = outputs[0] - if torch.is_tensor(out): - # MJY(20180330) HACK:: force nr_dims > 0 - if out.dim() == 0: - outputs = [o.unsqueeze(0) for o in outputs] - return Gather.apply(target_device, dim, *outputs) - elif out is None: - return None - elif isinstance(out, collections.Mapping): - return {k: gather_map([o[k] for o in outputs]) for k in out} - elif isinstance(out, collections.Sequence): - return type(out)(map(gather_map, zip(*outputs))) - return gather_map(outputs) - - -class DictGatherDataParallel(nn.DataParallel): - def gather(self, outputs, output_device): - return dict_gather(outputs, output_device, dim=self.dim) - - -class UserScatteredDataParallel(DictGatherDataParallel): - def scatter(self, inputs, kwargs, device_ids): - assert len(inputs) == 1 - inputs = inputs[0] - inputs = _async_copy_stream(inputs, device_ids) - inputs = [[i] for i in inputs] - assert len(kwargs) == 0 - kwargs = [{} for _ in range(len(inputs))] - - return inputs, kwargs - - -def user_scattered_collate(batch): - return batch - - -def _async_copy(inputs, device_ids): - nr_devs = len(device_ids) - assert type(inputs) in (tuple, list) - assert len(inputs) == nr_devs - - outputs = [] - for i, dev in zip(inputs, device_ids): - with cuda.device(dev): - outputs.append(async_copy_to(i, dev)) - - return tuple(outputs) - - -def _async_copy_stream(inputs, device_ids): - nr_devs = len(device_ids) - assert type(inputs) in (tuple, list) - assert len(inputs) == nr_devs - - outputs = [] - streams = [_get_stream(d) for d in device_ids] - for i, dev, stream in zip(inputs, device_ids, streams): - with cuda.device(dev): - main_stream = cuda.current_stream() - with cuda.stream(stream): - outputs.append(async_copy_to(i, dev, main_stream=main_stream)) - main_stream.wait_stream(stream) - - return outputs - - -"""Adapted from: torch/nn/parallel/_functions.py""" -# background streams used for copying -_streams = None - - -def 
_get_stream(device): - """Gets a background stream for copying between CPU and GPU""" - global _streams - if device == -1: - return None - if _streams is None: - _streams = [None] * cuda.device_count() - if _streams[device] is None: _streams[device] = cuda.Stream(device) - return _streams[device] diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/saicinpainting/training/data/__init__.py b/spaces/OpenGVLab/InternGPT/third-party/lama/saicinpainting/training/data/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/configs/_base_/models/gcnet_r50-d8.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/configs/_base_/models/gcnet_r50-d8.py deleted file mode 100644 index 3d2ad69f5c22adfe79d5fdabf920217628987166..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/configs/_base_/models/gcnet_r50-d8.py +++ /dev/null @@ -1,46 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - type='EncoderDecoder', - pretrained='open-mmlab://resnet50_v1c', - backbone=dict( - type='ResNetV1c', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - dilations=(1, 1, 2, 4), - strides=(1, 2, 1, 1), - norm_cfg=norm_cfg, - norm_eval=False, - style='pytorch', - contract_dilation=True), - decode_head=dict( - type='GCHead', - in_channels=2048, - in_index=3, - channels=512, - ratio=1 / 4., - pooling_type='att', - fusion_types=('channel_add', ), - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)), - auxiliary_head=dict( - type='FCNHead', - in_channels=1024, - in_index=2, - channels=256, - num_convs=1, - concat_input=False, - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='whole')) diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/ops/masked_conv.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/ops/masked_conv.py deleted file mode 100644 index cd514cc204c1d571ea5dc7e74b038c0f477a008b..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/ops/masked_conv.py +++ /dev/null @@ -1,111 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import math - -import torch -import torch.nn as nn -from torch.autograd import Function -from torch.autograd.function import once_differentiable -from torch.nn.modules.utils import _pair - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext( - '_ext', ['masked_im2col_forward', 'masked_col2im_forward']) - - -class MaskedConv2dFunction(Function): - - @staticmethod - def symbolic(g, features, mask, weight, bias, padding, stride): - return g.op( - 'mmcv::MMCVMaskedConv2d', - features, - mask, - weight, - bias, - padding_i=padding, - stride_i=stride) - - @staticmethod - def forward(ctx, features, mask, weight, bias, padding=0, stride=1): - assert mask.dim() == 3 and mask.size(0) == 1 - assert features.dim() == 4 and features.size(0) == 1 - assert features.size()[2:] == mask.size()[1:] - pad_h, pad_w = _pair(padding) - stride_h, stride_w = _pair(stride) - if stride_h != 1 or stride_w != 1: - raise ValueError( - 'Stride could not only be 1 in masked_conv2d currently.') - out_channel, in_channel, kernel_h, kernel_w = weight.size() - - batch_size = features.size(0) - out_h = int( - math.floor((features.size(2) + 2 * pad_h - - (kernel_h - 1) - 1) / stride_h + 1)) - out_w = int( - math.floor((features.size(3) + 2 * pad_w - - (kernel_h - 1) - 1) / stride_w + 1)) - mask_inds = torch.nonzero(mask[0] > 0, as_tuple=False) - output = features.new_zeros(batch_size, out_channel, out_h, out_w) - if mask_inds.numel() > 0: - mask_h_idx = mask_inds[:, 0].contiguous() - mask_w_idx = mask_inds[:, 1].contiguous() - data_col = features.new_zeros(in_channel * kernel_h * kernel_w, - mask_inds.size(0)) - ext_module.masked_im2col_forward( - features, - mask_h_idx, - mask_w_idx, - data_col, - kernel_h=kernel_h, - kernel_w=kernel_w, - pad_h=pad_h, - pad_w=pad_w) - - masked_output = torch.addmm(1, bias[:, None], 1, - weight.view(out_channel, -1), data_col) - ext_module.masked_col2im_forward( - masked_output, - mask_h_idx, - mask_w_idx, - output, - height=out_h, - width=out_w, - channels=out_channel) - return output - - @staticmethod - @once_differentiable - def backward(ctx, grad_output): - return (None, ) * 5 - - -masked_conv2d = MaskedConv2dFunction.apply - - -class MaskedConv2d(nn.Conv2d): - """A MaskedConv2d which inherits the official Conv2d. - - The masked forward doesn't implement the backward function and only - supports the stride parameter to be 1 currently. 
- """ - - def __init__(self, - in_channels, - out_channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - groups=1, - bias=True): - super(MaskedConv2d, - self).__init__(in_channels, out_channels, kernel_size, stride, - padding, dilation, groups, bias) - - def forward(self, input, mask=None): - if mask is None: # fallback to the normal Conv2d - return super(MaskedConv2d, self).forward(input) - else: - return masked_conv2d(input, mask, self.weight, self.bias, - self.padding) diff --git a/spaces/PAIR/Text2Video-Zero/app_canny.py b/spaces/PAIR/Text2Video-Zero/app_canny.py deleted file mode 100644 index 8cf1d22adf9add87a351abb6eae306d4ce29fdb7..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/app_canny.py +++ /dev/null @@ -1,78 +0,0 @@ -import gradio as gr -from model import Model -import os -on_huggingspace = os.environ.get("SPACE_AUTHOR_NAME") == "PAIR" - - -def create_demo(model: Model): - - examples = [ - ["__assets__/canny_videos_edge/butterfly.mp4", - "white butterfly, a high-quality, detailed, and professional photo"], - ["__assets__/canny_videos_edge/deer.mp4", - "oil painting of a deer, a high-quality, detailed, and professional photo"], - ["__assets__/canny_videos_edge/fox.mp4", - "wild red fox is walking on the grass, a high-quality, detailed, and professional photo"], - ["__assets__/canny_videos_edge/girl_dancing.mp4", - "oil painting of a girl dancing close-up, masterpiece, a high-quality, detailed, and professional photo"], - ["__assets__/canny_videos_edge/girl_turning.mp4", - "oil painting of a beautiful girl, a high-quality, detailed, and professional photo"], - ["__assets__/canny_videos_edge/halloween.mp4", - "beautiful girl halloween style, a high-quality, detailed, and professional photo"], - ["__assets__/canny_videos_edge/santa.mp4", - "a santa claus, a high-quality, detailed, and professional photo"], - ] - - with gr.Blocks() as demo: - with gr.Row(): - gr.Markdown('## Text and Canny-Edge Conditional Video Generation') - with gr.Row(): - gr.HTML( - """ -
    -

    - Description: For performance reasons, this preview release supports any input video, but output videos are capped at 80 frames and input videos are scaled down before processing. -

    -
    - """) - - with gr.Row(): - with gr.Column(): - input_video = gr.Video( - label="Input Video", source='upload', format="mp4", visible=True).style(height="auto") - with gr.Column(): - prompt = gr.Textbox(label='Prompt') - run_button = gr.Button(label='Run') - with gr.Accordion('Advanced options', open=False): - watermark = gr.Radio(["Picsart AI Research", "Text2Video-Zero", - "None"], label="Watermark", value='Picsart AI Research') - chunk_size = gr.Slider( - label="Chunk size", minimum=2, maximum=16, value=2, step=1, visible=not on_huggingspace, - info="Number of frames processed at once. Reduce for lower memory usage.") - merging_ratio = gr.Slider( - label="Merging ratio", minimum=0.0, maximum=0.9, step=0.1, value=0.0, visible=not on_huggingspace, - info="Ratio of how many tokens are merged. The higher the more compression (less memory and faster inference).") - with gr.Column(): - result = gr.Video(label="Generated Video").style(height="auto") - - inputs = [ - input_video, - prompt, - chunk_size, - watermark, - merging_ratio, - ] - - gr.Examples(examples=examples, - inputs=inputs, - outputs=result, - fn=model.process_controlnet_canny, - # cache_examples=on_huggingspace, - cache_examples=False, - run_on_click=False, - ) - - run_button.click(fn=model.process_controlnet_canny, - inputs=inputs, - outputs=result,) - return demo diff --git a/spaces/PSLD/PSLD/diffusion-posterior-sampling/Dockerfile b/spaces/PSLD/PSLD/diffusion-posterior-sampling/Dockerfile deleted file mode 100644 index a021eeb1d3ca25fa2e0b7883355a98a57d4a462e..0000000000000000000000000000000000000000 --- a/spaces/PSLD/PSLD/diffusion-posterior-sampling/Dockerfile +++ /dev/null @@ -1,28 +0,0 @@ -FROM nvidia/cuda:11.3.1-devel-ubuntu20.04 - -ENV TZ=Asiz/Seoul -ENV TERM=xterm-256color - -RUN ln -fs /usr/share/zoneinfo/Asia/Seoul /etc/localtime - -#### 0. Install python and pip -RUN apt-get -y update && apt-get install -y git wget curl -RUN apt-get update -RUN apt-get upgrade python3 -y -RUN apt-get install python3-pip -y -RUN alias python='python3' - -#### 1. Install Pytorch -RUN pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0 --extra-index-url https://download.pytorch.org/whl/cu113 - -#### 2. Install other dependencies -WORKDIR /usr/app -COPY . ./ -RUN pip install -r ./requirements.txt - -#### 3. Clone external codes -RUN git clone https://github.com/VinAIResearch/blur-kernel-space-exploring bkse -RUN git clone https://github.com/LeviBorodenko/motionblur motionblur - -#### 4. 
change user -RUN useradd docker_user -u 1000 -m diff --git a/spaces/PSLD/PSLD/stable-diffusion/run/inverse_rip_ldm.sh b/spaces/PSLD/PSLD/stable-diffusion/run/inverse_rip_ldm.sh deleted file mode 100644 index ef02ac0f086fdbb1a4d94bcabb0697072b4b79a0..0000000000000000000000000000000000000000 --- a/spaces/PSLD/PSLD/stable-diffusion/run/inverse_rip_ldm.sh +++ /dev/null @@ -1,14 +0,0 @@ -export CUDA_VISIBLE_DEVICES='1' -python scripts/inverse.py \ - --file_id='00014.png' \ - --task_config='configs/inpainting_config.yaml' \ - --inpainting=1 \ - --general_inverse=0 \ - --gamma=1e-1 \ - --omega=1e-1 \ - --ffhq256 \ - --W=256 \ - --H=256 \ - --C=3 \ - --f=4 \ - --outdir='outputs/psld-ldm-samples-rip' \ No newline at end of file diff --git a/spaces/Patt/demo_hf/README.md b/spaces/Patt/demo_hf/README.md deleted file mode 100644 index 89581c65b26b462fc93fce82dc939c722c1092a5..0000000000000000000000000000000000000000 --- a/spaces/Patt/demo_hf/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Demo Hf -emoji: 📊 -colorFrom: green -colorTo: purple -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/PeepDaSlan9/facebook-wav2vec2-large-960h-lv60-self/README.md b/spaces/PeepDaSlan9/facebook-wav2vec2-large-960h-lv60-self/README.md deleted file mode 100644 index 021990cab9a9dbb936b6e6eeaeaf68d5c863f788..0000000000000000000000000000000000000000 --- a/spaces/PeepDaSlan9/facebook-wav2vec2-large-960h-lv60-self/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Facebook Wav2vec2 Large 960h Lv60 Self -emoji: 👀 -colorFrom: blue -colorTo: green -sdk: gradio -sdk_version: 3.17.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/utils/cv2_util.py b/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/utils/cv2_util.py deleted file mode 100644 index 0bbc0fb2d08337bfd8242cbedd514a41d8d7353f..0000000000000000000000000000000000000000 --- a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/utils/cv2_util.py +++ /dev/null @@ -1,24 +0,0 @@ -""" -Module for cv2 utility functions and maintaining version compatibility -between 3.x and 4.x -""" -import cv2 - - -def findContours(*args, **kwargs): - """ - Wraps cv2.findContours to maintain compatiblity between versions - 3 and 4 - - Returns: - contours, hierarchy - """ - if cv2.__version__.startswith('4'): - contours, hierarchy = cv2.findContours(*args, **kwargs) - elif cv2.__version__.startswith('3'): - _, contours, hierarchy = cv2.findContours(*args, **kwargs) - else: - raise AssertionError( - 'cv2 must be either version 3 or 4 to call this method') - - return contours, hierarchy diff --git a/spaces/PrabhuKiranKonda/fastapi-postgres-todo-api/models.py b/spaces/PrabhuKiranKonda/fastapi-postgres-todo-api/models.py deleted file mode 100644 index e8b6cfe31496b5f3cc3bc4ff925be3a235baf18b..0000000000000000000000000000000000000000 --- a/spaces/PrabhuKiranKonda/fastapi-postgres-todo-api/models.py +++ /dev/null @@ -1,41 +0,0 @@ -from sqlalchemy import create_engine, Column, Integer, String, Boolean, Date, ForeignKey, CheckConstraint -from sqlalchemy.ext.declarative import declarative_base -from sqlalchemy.orm import relationship -from psql_database import engine, Base -from passlib.hash import bcrypt - -# Define the User model -class User(Base): - 
__tablename__ = 'users' - user_id = Column(Integer, primary_key=True) - first_name = Column(String(50), nullable=False) - last_name = Column(String(50), nullable=False) - email = Column(String(100), nullable=False, unique=True) - password = Column(String(100), nullable=False) # HASHED PASSWORD - - todos = relationship('Todo', back_populates='user') - - - def verify_password(self, plain_password): - return bcrypt.verify(plain_password, self.password) - - - -# Define the Todo model -class Todo(Base): - __tablename__ = 'todos' - todo_id = Column(Integer, primary_key=True) - user_id = Column(Integer, ForeignKey('users.user_id'), nullable=False) - task_name = Column(String(100), nullable=False) - task_description = Column(String) - priority = Column(Integer, CheckConstraint('priority >= 1 AND priority <= 3', name="priority should be either 1 or 2 or 3"), nullable=False) # 1: high, 2: medium, 3: low - category = Column(String(50)) - due_date = Column(Date, nullable=False) - status = Column(Boolean, default=False) - - user = relationship('User', back_populates='todos') - -# Create the tables -def create_database_tables(): - return Base.metadata.create_all(bind=engine) - \ No newline at end of file diff --git "a/spaces/Qiukai/gpt/crazy_functions/\346\211\271\351\207\217\346\200\273\347\273\223PDF\346\226\207\346\241\243pdfminer.py" "b/spaces/Qiukai/gpt/crazy_functions/\346\211\271\351\207\217\346\200\273\347\273\223PDF\346\226\207\346\241\243pdfminer.py" deleted file mode 100644 index 3868885d4cd1d610bbc882ee191e6d7965c5f6ad..0000000000000000000000000000000000000000 --- "a/spaces/Qiukai/gpt/crazy_functions/\346\211\271\351\207\217\346\200\273\347\273\223PDF\346\226\207\346\241\243pdfminer.py" +++ /dev/null @@ -1,160 +0,0 @@ -from toolbox import update_ui -from toolbox import CatchException, report_execption, write_results_to_file -from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive - -fast_debug = False - -def readPdf(pdfPath): - """ - 读取pdf文件,返回文本内容 - """ - import pdfminer - from pdfminer.pdfparser import PDFParser - from pdfminer.pdfdocument import PDFDocument - from pdfminer.pdfpage import PDFPage, PDFTextExtractionNotAllowed - from pdfminer.pdfinterp import PDFResourceManager, PDFPageInterpreter - from pdfminer.pdfdevice import PDFDevice - from pdfminer.layout import LAParams - from pdfminer.converter import PDFPageAggregator - - fp = open(pdfPath, 'rb') - - # Create a PDF parser object associated with the file object - parser = PDFParser(fp) - - # Create a PDF document object that stores the document structure. - # Password for initialization as 2nd parameter - document = PDFDocument(parser) - # Check if the document allows text extraction. If not, abort. - if not document.is_extractable: - raise PDFTextExtractionNotAllowed - - # Create a PDF resource manager object that stores shared resources. - rsrcmgr = PDFResourceManager() - - # Create a PDF device object. - # device = PDFDevice(rsrcmgr) - - # BEGIN LAYOUT ANALYSIS. - # Set parameters for analysis. - laparams = LAParams( - char_margin=10.0, - line_margin=0.2, - boxes_flow=0.2, - all_texts=False, - ) - # Create a PDF page aggregator object. - device = PDFPageAggregator(rsrcmgr, laparams=laparams) - # Create a PDF interpreter object. 
- interpreter = PDFPageInterpreter(rsrcmgr, device) - - # loop over all pages in the document - outTextList = [] - for page in PDFPage.create_pages(document): - # read the page into a layout object - interpreter.process_page(page) - layout = device.get_result() - for obj in layout._objs: - if isinstance(obj, pdfminer.layout.LTTextBoxHorizontal): - # print(obj.get_text()) - outTextList.append(obj.get_text()) - - return outTextList - - -def 解析Paper(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt): - import time, glob, os - from bs4 import BeautifulSoup - print('begin analysis on:', file_manifest) - for index, fp in enumerate(file_manifest): - if ".tex" in fp: - with open(fp, 'r', encoding='utf-8') as f: - file_content = f.read() - if ".pdf" in fp.lower(): - file_content = readPdf(fp) - file_content = BeautifulSoup(''.join(file_content), features="lxml").body.text.encode('gbk', 'ignore').decode('gbk') - - prefix = "接下来请你逐文件分析下面的论文文件,概括其内容" if index==0 else "" - i_say = prefix + f'请对下面的文章片段用中文做一个概述,文件名是{os.path.relpath(fp, project_folder)},文章内容是 ```{file_content}```' - i_say_show_user = prefix + f'[{index}/{len(file_manifest)}] 请对下面的文章片段做一个概述: {os.path.abspath(fp)}' - chatbot.append((i_say_show_user, "[Local Message] waiting gpt response.")) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - if not fast_debug: - msg = '正常' - # ** gpt request ** - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=i_say, - inputs_show_user=i_say_show_user, - llm_kwargs=llm_kwargs, - chatbot=chatbot, - history=[], - sys_prompt="总结文章。" - ) # 带超时倒计时 - chatbot[-1] = (i_say_show_user, gpt_say) - history.append(i_say_show_user); history.append(gpt_say) - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 - if not fast_debug: time.sleep(2) - - all_file = ', '.join([os.path.relpath(fp, project_folder) for index, fp in enumerate(file_manifest)]) - i_say = f'根据以上你自己的分析,对全文进行概括,用学术性语言写一段中文摘要,然后再写一段英文摘要(包括{all_file})。' - chatbot.append((i_say, "[Local Message] waiting gpt response.")) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - if not fast_debug: - msg = '正常' - # ** gpt request ** - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=i_say, - inputs_show_user=i_say, - llm_kwargs=llm_kwargs, - chatbot=chatbot, - history=history, - sys_prompt="总结文章。" - ) # 带超时倒计时 - chatbot[-1] = (i_say, gpt_say) - history.append(i_say); history.append(gpt_say) - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 - res = write_results_to_file(history) - chatbot.append(("完成了吗?", res)) - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 - - - -@CatchException -def 批量总结PDF文档pdfminer(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - history = [] # 清空历史,以免输入溢出 - import glob, os - - # 基本信息:功能、贡献者 - chatbot.append([ - "函数插件功能?", - "批量总结PDF文档,此版本使用pdfminer插件,带token约简功能。函数插件贡献者: Euclid-Jie。"]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # 尝试导入依赖,如果缺少依赖,则给出安装建议 - try: - import pdfminer, bs4 - except: - report_execption(chatbot, history, - a = f"解析项目: {txt}", - b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pdfminer beautifulsoup4```。") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}") - yield from 
update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.pdf', recursive=True)] # + \ - # [f for f in glob.glob(f'{project_folder}/**/*.cpp', recursive=True)] + \ - # [f for f in glob.glob(f'{project_folder}/**/*.c', recursive=True)] - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex或pdf文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - yield from 解析Paper(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt) - diff --git a/spaces/Ralmao/glass_py/README.md b/spaces/Ralmao/glass_py/README.md deleted file mode 100644 index d3f94f569488fa766732dee5a2f07852964ab00f..0000000000000000000000000000000000000000 --- a/spaces/Ralmao/glass_py/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Glass Py -emoji: 🚀 -colorFrom: blue -colorTo: purple -sdk: gradio -sdk_version: 3.36.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/RamAnanth1/T2I-Adapter/ldm/modules/diffusionmodules/__init__.py b/spaces/RamAnanth1/T2I-Adapter/ldm/modules/diffusionmodules/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/RamAnanth1/conformer-asr/README.md b/spaces/RamAnanth1/conformer-asr/README.md deleted file mode 100644 index 0b2c6211960076c9f473d5cfd609fa91cbbff35c..0000000000000000000000000000000000000000 --- a/spaces/RamAnanth1/conformer-asr/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Conformer Asr -emoji: 📉 -colorFrom: yellow -colorTo: indigo -sdk: gradio -sdk_version: 3.21.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/RavenBloody/Prototype03/README.md b/spaces/RavenBloody/Prototype03/README.md deleted file mode 100644 index 7100c7ada7af20bf80459525cb4fd0950fbb9e50..0000000000000000000000000000000000000000 --- a/spaces/RavenBloody/Prototype03/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Prototype03 -emoji: 😻 -colorFrom: green -colorTo: indigo -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Reha2704/VToonify/vtoonify/model/stylegan/model.py b/spaces/Reha2704/VToonify/vtoonify/model/stylegan/model.py deleted file mode 100644 index 7a4b00e52902d850b78dea3736324198eb32e075..0000000000000000000000000000000000000000 --- a/spaces/Reha2704/VToonify/vtoonify/model/stylegan/model.py +++ /dev/null @@ -1,719 +0,0 @@ -import math -import random -import functools -import operator - -import torch -from torch import nn -from torch.nn import functional as F -from torch.autograd import Function - -from model.stylegan.op import FusedLeakyReLU, fused_leaky_relu, upfirdn2d, conv2d_gradfix - -class PixelNorm(nn.Module): - def __init__(self): - super().__init__() - - def forward(self, input): - return input * torch.rsqrt(torch.mean(input ** 2, dim=1, keepdim=True) + 1e-8) - - -def make_kernel(k): - k = torch.tensor(k, dtype=torch.float32) - - if k.ndim == 1: - k = k[None, :] * k[:, None] - - k /= k.sum() - - return k - - -class Upsample(nn.Module): - def __init__(self, kernel, factor=2): - super().__init__() - - self.factor = 
factor - kernel = make_kernel(kernel) * (factor ** 2) - self.register_buffer("kernel", kernel) - - p = kernel.shape[0] - factor - - pad0 = (p + 1) // 2 + factor - 1 - pad1 = p // 2 - - self.pad = (pad0, pad1) - - def forward(self, input): - out = upfirdn2d(input, self.kernel, up=self.factor, down=1, pad=self.pad) - - return out - - -class Downsample(nn.Module): - def __init__(self, kernel, factor=2): - super().__init__() - - self.factor = factor - kernel = make_kernel(kernel) - self.register_buffer("kernel", kernel) - - p = kernel.shape[0] - factor - - pad0 = (p + 1) // 2 - pad1 = p // 2 - - self.pad = (pad0, pad1) - - def forward(self, input): - out = upfirdn2d(input, self.kernel, up=1, down=self.factor, pad=self.pad) - - return out - - -class Blur(nn.Module): - def __init__(self, kernel, pad, upsample_factor=1): - super().__init__() - - kernel = make_kernel(kernel) - - if upsample_factor > 1: - kernel = kernel * (upsample_factor ** 2) - - self.register_buffer("kernel", kernel) - - self.pad = pad - - def forward(self, input): - out = upfirdn2d(input, self.kernel, pad=self.pad) - - return out - - -class EqualConv2d(nn.Module): - def __init__( - self, in_channel, out_channel, kernel_size, stride=1, padding=0, bias=True, dilation=1 ## modified - ): - super().__init__() - - self.weight = nn.Parameter( - torch.randn(out_channel, in_channel, kernel_size, kernel_size) - ) - self.scale = 1 / math.sqrt(in_channel * kernel_size ** 2) - - self.stride = stride - self.padding = padding - self.dilation = dilation ## modified - - if bias: - self.bias = nn.Parameter(torch.zeros(out_channel)) - - else: - self.bias = None - - def forward(self, input): - out = conv2d_gradfix.conv2d( - input, - self.weight * self.scale, - bias=self.bias, - stride=self.stride, - padding=self.padding, - dilation=self.dilation, ## modified - ) - - return out - - def __repr__(self): - return ( - f"{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]}," - f" {self.weight.shape[2]}, stride={self.stride}, padding={self.padding}, dilation={self.dilation})" ## modified - ) - - -class EqualLinear(nn.Module): - def __init__( - self, in_dim, out_dim, bias=True, bias_init=0, lr_mul=1, activation=None - ): - super().__init__() - - self.weight = nn.Parameter(torch.randn(out_dim, in_dim).div_(lr_mul)) - - if bias: - self.bias = nn.Parameter(torch.zeros(out_dim).fill_(bias_init)) - - else: - self.bias = None - - self.activation = activation - - self.scale = (1 / math.sqrt(in_dim)) * lr_mul - self.lr_mul = lr_mul - - def forward(self, input): - if self.activation: - out = F.linear(input, self.weight * self.scale) - out = fused_leaky_relu(out, self.bias * self.lr_mul) - - else: - out = F.linear( - input, self.weight * self.scale, bias=self.bias * self.lr_mul - ) - - return out - - def __repr__(self): - return ( - f"{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]})" - ) - - -class ModulatedConv2d(nn.Module): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - style_dim, - demodulate=True, - upsample=False, - downsample=False, - blur_kernel=[1, 3, 3, 1], - fused=True, - ): - super().__init__() - - self.eps = 1e-8 - self.kernel_size = kernel_size - self.in_channel = in_channel - self.out_channel = out_channel - self.upsample = upsample - self.downsample = downsample - - if upsample: - factor = 2 - p = (len(blur_kernel) - factor) - (kernel_size - 1) - pad0 = (p + 1) // 2 + factor - 1 - pad1 = p // 2 + 1 - - self.blur = Blur(blur_kernel, pad=(pad0, pad1), upsample_factor=factor) - - if 
downsample: - factor = 2 - p = (len(blur_kernel) - factor) + (kernel_size - 1) - pad0 = (p + 1) // 2 - pad1 = p // 2 - - self.blur = Blur(blur_kernel, pad=(pad0, pad1)) - - fan_in = in_channel * kernel_size ** 2 - self.scale = 1 / math.sqrt(fan_in) - self.padding = kernel_size // 2 - - self.weight = nn.Parameter( - torch.randn(1, out_channel, in_channel, kernel_size, kernel_size) - ) - - self.modulation = EqualLinear(style_dim, in_channel, bias_init=1) - - self.demodulate = demodulate - self.fused = fused - - def __repr__(self): - return ( - f"{self.__class__.__name__}({self.in_channel}, {self.out_channel}, {self.kernel_size}, " - f"upsample={self.upsample}, downsample={self.downsample})" - ) - - def forward(self, input, style, externalweight=None): - batch, in_channel, height, width = input.shape - - if not self.fused: - weight = self.scale * self.weight.squeeze(0) - style = self.modulation(style) - - if self.demodulate: - w = weight.unsqueeze(0) * style.view(batch, 1, in_channel, 1, 1) - dcoefs = (w.square().sum((2, 3, 4)) + 1e-8).rsqrt() - - input = input * style.reshape(batch, in_channel, 1, 1) - - if self.upsample: - weight = weight.transpose(0, 1) - out = conv2d_gradfix.conv_transpose2d( - input, weight, padding=0, stride=2 - ) - out = self.blur(out) - - elif self.downsample: - input = self.blur(input) - out = conv2d_gradfix.conv2d(input, weight, padding=0, stride=2) - - else: - out = conv2d_gradfix.conv2d(input, weight, padding=self.padding) - - if self.demodulate: - out = out * dcoefs.view(batch, -1, 1, 1) - - return out - - style = self.modulation(style).view(batch, 1, in_channel, 1, 1) - if externalweight is None: - weight = self.scale * self.weight * style - else: - weight = self.scale * (self.weight + externalweight) * style - - if self.demodulate: - demod = torch.rsqrt(weight.pow(2).sum([2, 3, 4]) + 1e-8) - weight = weight * demod.view(batch, self.out_channel, 1, 1, 1) - - weight = weight.view( - batch * self.out_channel, in_channel, self.kernel_size, self.kernel_size - ) - - if self.upsample: - input = input.view(1, batch * in_channel, height, width) - weight = weight.view( - batch, self.out_channel, in_channel, self.kernel_size, self.kernel_size - ) - weight = weight.transpose(1, 2).reshape( - batch * in_channel, self.out_channel, self.kernel_size, self.kernel_size - ) - out = conv2d_gradfix.conv_transpose2d( - input, weight, padding=0, stride=2, groups=batch - ) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - out = self.blur(out) - - elif self.downsample: - input = self.blur(input) - _, _, height, width = input.shape - input = input.view(1, batch * in_channel, height, width) - out = conv2d_gradfix.conv2d( - input, weight, padding=0, stride=2, groups=batch - ) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - - else: - input = input.view(1, batch * in_channel, height, width) - out = conv2d_gradfix.conv2d( - input, weight, padding=self.padding, groups=batch - ) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - - return out - - -class NoiseInjection(nn.Module): - def __init__(self): - super().__init__() - - self.weight = nn.Parameter(torch.zeros(1)) - - def forward(self, image, noise=None): - if noise is None: - batch, _, height, width = image.shape - noise = image.new_empty(batch, 1, height, width).normal_() - - return image + self.weight * noise - - -class ConstantInput(nn.Module): - def __init__(self, channel, size=4): - super().__init__() - - 
self.input = nn.Parameter(torch.randn(1, channel, size, size)) - - def forward(self, input): - batch = input.shape[0] - out = self.input.repeat(batch, 1, 1, 1) - - return out - - -class StyledConv(nn.Module): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - style_dim, - upsample=False, - blur_kernel=[1, 3, 3, 1], - demodulate=True, - ): - super().__init__() - - self.conv = ModulatedConv2d( - in_channel, - out_channel, - kernel_size, - style_dim, - upsample=upsample, - blur_kernel=blur_kernel, - demodulate=demodulate, - ) - - self.noise = NoiseInjection() - # self.bias = nn.Parameter(torch.zeros(1, out_channel, 1, 1)) - # self.activate = ScaledLeakyReLU(0.2) - self.activate = FusedLeakyReLU(out_channel) - - def forward(self, input, style, noise=None, externalweight=None): - out = self.conv(input, style, externalweight) - out = self.noise(out, noise=noise) - # out = out + self.bias - out = self.activate(out) - - return out - - -class ToRGB(nn.Module): - def __init__(self, in_channel, style_dim, upsample=True, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - if upsample: - self.upsample = Upsample(blur_kernel) - - self.conv = ModulatedConv2d(in_channel, 3, 1, style_dim, demodulate=False) - self.bias = nn.Parameter(torch.zeros(1, 3, 1, 1)) - - def forward(self, input, style, skip=None, externalweight=None): - out = self.conv(input, style, externalweight) - out = out + self.bias - - if skip is not None: - skip = self.upsample(skip) - - out = out + skip - - return out - - -class Generator(nn.Module): - def __init__( - self, - size, - style_dim, - n_mlp, - channel_multiplier=2, - blur_kernel=[1, 3, 3, 1], - lr_mlp=0.01, - ): - super().__init__() - - self.size = size - - self.style_dim = style_dim - - layers = [PixelNorm()] - - for i in range(n_mlp): - layers.append( - EqualLinear( - style_dim, style_dim, lr_mul=lr_mlp, activation="fused_lrelu" - ) - ) - - self.style = nn.Sequential(*layers) - - self.channels = { - 4: 512, - 8: 512, - 16: 512, - 32: 512, - 64: 256 * channel_multiplier, - 128: 128 * channel_multiplier, - 256: 64 * channel_multiplier, - 512: 32 * channel_multiplier, - 1024: 16 * channel_multiplier, - } - - self.input = ConstantInput(self.channels[4]) - self.conv1 = StyledConv( - self.channels[4], self.channels[4], 3, style_dim, blur_kernel=blur_kernel - ) - self.to_rgb1 = ToRGB(self.channels[4], style_dim, upsample=False) - - self.log_size = int(math.log(size, 2)) - self.num_layers = (self.log_size - 2) * 2 + 1 - - self.convs = nn.ModuleList() - self.upsamples = nn.ModuleList() - self.to_rgbs = nn.ModuleList() - self.noises = nn.Module() - - in_channel = self.channels[4] - - for layer_idx in range(self.num_layers): - res = (layer_idx + 5) // 2 - shape = [1, 1, 2 ** res, 2 ** res] - self.noises.register_buffer(f"noise_{layer_idx}", torch.randn(*shape)) - - for i in range(3, self.log_size + 1): - out_channel = self.channels[2 ** i] - - self.convs.append( - StyledConv( - in_channel, - out_channel, - 3, - style_dim, - upsample=True, - blur_kernel=blur_kernel, - ) - ) - - self.convs.append( - StyledConv( - out_channel, out_channel, 3, style_dim, blur_kernel=blur_kernel - ) - ) - - self.to_rgbs.append(ToRGB(out_channel, style_dim)) - - in_channel = out_channel - - self.n_latent = self.log_size * 2 - 2 - - def make_noise(self): - device = self.input.input.device - - noises = [torch.randn(1, 1, 2 ** 2, 2 ** 2, device=device)] - - for i in range(3, self.log_size + 1): - for _ in range(2): - noises.append(torch.randn(1, 1, 2 ** i, 2 ** i, device=device)) - - return 
noises - - def mean_latent(self, n_latent): - latent_in = torch.randn( - n_latent, self.style_dim, device=self.input.input.device - ) - latent = self.style(latent_in).mean(0, keepdim=True) - - return latent - - def get_latent(self, input): - return self.style(input) - - def forward( - self, - styles, - return_latents=False, - inject_index=None, - truncation=1, - truncation_latent=None, - input_is_latent=False, - noise=None, - randomize_noise=True, - z_plus_latent=False, - return_feature_ind=999, - ): - if not input_is_latent: - if not z_plus_latent: - styles = [self.style(s) for s in styles] - else: - styles_ = [] - for s in styles: - style_ = [] - for i in range(s.shape[1]): - style_.append(self.style(s[:,i]).unsqueeze(1)) - styles_.append(torch.cat(style_,dim=1)) - styles = styles_ - - if noise is None: - if randomize_noise: - noise = [None] * self.num_layers - else: - noise = [ - getattr(self.noises, f"noise_{i}") for i in range(self.num_layers) - ] - - if truncation < 1: - style_t = [] - - for style in styles: - style_t.append( - truncation_latent + truncation * (style - truncation_latent) - ) - - styles = style_t - - if len(styles) < 2: - inject_index = self.n_latent - - if styles[0].ndim < 3: - latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - - else: - latent = styles[0] - - else: - if inject_index is None: - inject_index = random.randint(1, self.n_latent - 1) - - if styles[0].ndim < 3: - latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - latent2 = styles[1].unsqueeze(1).repeat(1, self.n_latent - inject_index, 1) - - latent = torch.cat([latent, latent2], 1) - else: - latent = torch.cat([styles[0][:,0:inject_index], styles[1][:,inject_index:]], 1) - - out = self.input(latent) - out = self.conv1(out, latent[:, 0], noise=noise[0]) - - skip = self.to_rgb1(out, latent[:, 1]) - - i = 1 - for conv1, conv2, noise1, noise2, to_rgb in zip( - self.convs[::2], self.convs[1::2], noise[1::2], noise[2::2], self.to_rgbs - ): - out = conv1(out, latent[:, i], noise=noise1) - out = conv2(out, latent[:, i + 1], noise=noise2) - skip = to_rgb(out, latent[:, i + 2], skip) - - i += 2 - if i > return_feature_ind: - return out, skip - - image = skip - - if return_latents: - return image, latent - - else: - return image, None - - -class ConvLayer(nn.Sequential): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - downsample=False, - blur_kernel=[1, 3, 3, 1], - bias=True, - activate=True, - dilation=1, ## modified - ): - layers = [] - - if downsample: - factor = 2 - p = (len(blur_kernel) - factor) + (kernel_size - 1) - pad0 = (p + 1) // 2 - pad1 = p // 2 - - layers.append(Blur(blur_kernel, pad=(pad0, pad1))) - - stride = 2 - self.padding = 0 - - else: - stride = 1 - self.padding = kernel_size // 2 + dilation-1 ## modified - - layers.append( - EqualConv2d( - in_channel, - out_channel, - kernel_size, - padding=self.padding, - stride=stride, - bias=bias and not activate, - dilation=dilation, ## modified - ) - ) - - if activate: - layers.append(FusedLeakyReLU(out_channel, bias=bias)) - - super().__init__(*layers) - - -class ResBlock(nn.Module): - def __init__(self, in_channel, out_channel, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - self.conv1 = ConvLayer(in_channel, in_channel, 3) - self.conv2 = ConvLayer(in_channel, out_channel, 3, downsample=True) - - self.skip = ConvLayer( - in_channel, out_channel, 1, downsample=True, activate=False, bias=False - ) - - def forward(self, input): - out = self.conv1(input) - out = self.conv2(out) - - skip = self.skip(input) - out = 
(out + skip) / math.sqrt(2) - - return out - - -class Discriminator(nn.Module): - def __init__(self, size, channel_multiplier=2, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - channels = { - 4: 512, - 8: 512, - 16: 512, - 32: 512, - 64: 256 * channel_multiplier, - 128: 128 * channel_multiplier, - 256: 64 * channel_multiplier, - 512: 32 * channel_multiplier, - 1024: 16 * channel_multiplier, - } - - convs = [ConvLayer(3, channels[size], 1)] - - log_size = int(math.log(size, 2)) - - in_channel = channels[size] - - for i in range(log_size, 2, -1): - out_channel = channels[2 ** (i - 1)] - - convs.append(ResBlock(in_channel, out_channel, blur_kernel)) - - in_channel = out_channel - - self.convs = nn.Sequential(*convs) - - self.stddev_group = 4 - self.stddev_feat = 1 - - self.final_conv = ConvLayer(in_channel + 1, channels[4], 3) - self.final_linear = nn.Sequential( - EqualLinear(channels[4] * 4 * 4, channels[4], activation="fused_lrelu"), - EqualLinear(channels[4], 1), - ) - - def forward(self, input): - out = self.convs(input) - - batch, channel, height, width = out.shape - group = min(batch, self.stddev_group) - stddev = out.view( - group, -1, self.stddev_feat, channel // self.stddev_feat, height, width - ) - stddev = torch.sqrt(stddev.var(0, unbiased=False) + 1e-8) - stddev = stddev.mean([2, 3, 4], keepdims=True).squeeze(2) - stddev = stddev.repeat(group, 1, height, width) - out = torch.cat([out, stddev], 1) - - out = self.final_conv(out) - - out = out.view(batch, -1) - out = self.final_linear(out) - - return out \ No newline at end of file diff --git a/spaces/Ricecake123/RVC-demo/lib/infer_pack/transforms.py b/spaces/Ricecake123/RVC-demo/lib/infer_pack/transforms.py deleted file mode 100644 index a11f799e023864ff7082c1f49c0cc18351a13b47..0000000000000000000000000000000000000000 --- a/spaces/Ricecake123/RVC-demo/lib/infer_pack/transforms.py +++ /dev/null @@ -1,209 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = {"tails": tails, "tail_bound": tail_bound} - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum(inputs[..., None] >= bin_locations, dim=-1) - 1 - - -def unconstrained_rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails="linear", - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = 
torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == "linear": - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError("{} tails are not implemented.".format(tails)) - - ( - outputs[inside_interval_mask], - logabsdet[inside_interval_mask], - ) = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, - right=tail_bound, - bottom=-tail_bound, - top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - ) - - return outputs, logabsdet - - -def rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0.0, - right=1.0, - bottom=0.0, - top=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError("Input to a transform is not within its domain") - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError("Minimal bin width too large for the number of bins") - if min_bin_height * num_bins > 1.0: - raise ValueError("Minimal bin height too large for the number of bins") - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode="constant", value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode="constant", value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (inputs - input_cumheights) * ( - input_derivatives + input_derivatives_plus_one - 2 * input_delta - ) + input_heights * (input_delta - input_derivatives) - b = input_heights * input_derivatives - (inputs - input_cumheights) * ( - input_derivatives + input_derivatives_plus_one - 2 
* input_delta - ) - c = -input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ( - (input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta - ) - derivative_numerator = input_delta.pow(2) * ( - input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2) - ) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * ( - input_delta * theta.pow(2) + input_derivatives * theta_one_minus_theta - ) - denominator = input_delta + ( - (input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta - ) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * ( - input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2) - ) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/Riksarkivet/htr_demo/helper/text/overview/htrflow/htrflow_tab1.md b/spaces/Riksarkivet/htr_demo/helper/text/overview/htrflow/htrflow_tab1.md deleted file mode 100644 index 495ec9925b467de9fa59fae157b35ba1a8172d60..0000000000000000000000000000000000000000 --- a/spaces/Riksarkivet/htr_demo/helper/text/overview/htrflow/htrflow_tab1.md +++ /dev/null @@ -1,7 +0,0 @@ -### Binarization - -The reason for binarizing the images before processing them is that we want the models to generalize as well as possible. By training on only binarized images and by binarizing images before running them through the pipeline, we take the target domain closer to the training domain, and reduce negative effects of background variation, background noise etc., on the final results. The pipeline implements a simple adaptive thresholding algorithm for binarization. - -
-    [figure: HTR_tool]
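As a minimal illustration of the adaptive-threshold binarization described above (a sketch only; the function name and the `block_size`/`c` defaults are assumptions for this example, not the pipeline's actual parameters):

```python
import cv2

def binarize(image_path: str, block_size: int = 31, c: int = 11):
    # Sketch only: block_size and c are assumed example values,
    # not the parameters used by the actual HTR pipeline.
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # A local (adaptive) threshold copes with uneven lighting and paper texture
    # better than a single global threshold, reducing background noise in the result.
    binary = cv2.adaptiveThreshold(
        gray,
        255,                             # value assigned to foreground pixels
        cv2.ADAPTIVE_THRESH_GAUSSIAN_C,  # threshold from a Gaussian-weighted local mean
        cv2.THRESH_BINARY,
        block_size,                      # neighbourhood size, must be odd
        c,                               # constant subtracted from the local mean
    )
    return binary
```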
    diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/visualization/color.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/visualization/color.py deleted file mode 100644 index 9041e0e6b7581c3356795d6a3c5e84667c88f025..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/visualization/color.py +++ /dev/null @@ -1,51 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from enum import Enum - -import numpy as np - -from annotator.uniformer.mmcv.utils import is_str - - -class Color(Enum): - """An enum that defines common colors. - - Contains red, green, blue, cyan, yellow, magenta, white and black. - """ - red = (0, 0, 255) - green = (0, 255, 0) - blue = (255, 0, 0) - cyan = (255, 255, 0) - yellow = (0, 255, 255) - magenta = (255, 0, 255) - white = (255, 255, 255) - black = (0, 0, 0) - - -def color_val(color): - """Convert various input to color tuples. - - Args: - color (:obj:`Color`/str/tuple/int/ndarray): Color inputs - - Returns: - tuple[int]: A tuple of 3 integers indicating BGR channels. - """ - if is_str(color): - return Color[color].value - elif isinstance(color, Color): - return color.value - elif isinstance(color, tuple): - assert len(color) == 3 - for channel in color: - assert 0 <= channel <= 255 - return color - elif isinstance(color, int): - assert 0 <= color <= 255 - return color, color, color - elif isinstance(color, np.ndarray): - assert color.ndim == 1 and color.size == 3 - assert np.all((color >= 0) & (color <= 255)) - color = color.astype(np.uint8) - return tuple(color) - else: - raise TypeError(f'Invalid type for color: {type(color)}') diff --git a/spaces/Rojastopher/Image-to-3D/app.py b/spaces/Rojastopher/Image-to-3D/app.py deleted file mode 100644 index 20bdb836f38f77fb2d0a321650ffbbe5d03e2dc4..0000000000000000000000000000000000000000 --- a/spaces/Rojastopher/Image-to-3D/app.py +++ /dev/null @@ -1,264 +0,0 @@ -import os -from PIL import Image -import torch - -from point_e.diffusion.configs import DIFFUSION_CONFIGS, diffusion_from_config -from point_e.diffusion.sampler import PointCloudSampler -from point_e.models.download import load_checkpoint -from point_e.models.configs import MODEL_CONFIGS, model_from_config -from point_e.util.plotting import plot_point_cloud -from point_e.util.pc_to_mesh import marching_cubes_mesh - -import skimage.measure - -from pyntcloud import PyntCloud -import matplotlib.colors -import plotly.graph_objs as go - -import trimesh - -import gradio as gr - - -state = "" -device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') - -def set_state(s): - print(s) - global state - state = s - -def get_state(): - return state - -set_state('Creating txt2mesh model...') -t2m_name = 'base40M-textvec' -t2m_model = model_from_config(MODEL_CONFIGS[t2m_name], device) -t2m_model.eval() -base_diffusion_t2m = diffusion_from_config(DIFFUSION_CONFIGS[t2m_name]) - -set_state('Downloading txt2mesh checkpoint...') -t2m_model.load_state_dict(load_checkpoint(t2m_name, device)) - - -def load_img2mesh_model(model_name): - set_state(f'Creating img2mesh model {model_name}...') - i2m_name = model_name - i2m_model = model_from_config(MODEL_CONFIGS[i2m_name], device) - i2m_model.eval() - base_diffusion_i2m = diffusion_from_config(DIFFUSION_CONFIGS[i2m_name]) - - set_state(f'Downloading img2mesh checkpoint {model_name}...') - i2m_model.load_state_dict(load_checkpoint(i2m_name, device)) - - return i2m_model, base_diffusion_i2m - -img2mesh_model_name = 'base40M' 
#'base300M' #'base1B' -i2m_model, base_diffusion_i2m = load_img2mesh_model(img2mesh_model_name) - - -set_state('Creating upsample model...') -upsampler_model = model_from_config(MODEL_CONFIGS['upsample'], device) -upsampler_model.eval() -upsampler_diffusion = diffusion_from_config(DIFFUSION_CONFIGS['upsample']) - -set_state('Downloading upsampler checkpoint...') -upsampler_model.load_state_dict(load_checkpoint('upsample', device)) - -set_state('Creating SDF model...') -sdf_name = 'sdf' -sdf_model = model_from_config(MODEL_CONFIGS[sdf_name], device) -sdf_model.eval() - -set_state('Loading SDF model...') -sdf_model.load_state_dict(load_checkpoint(sdf_name, device)) - -stable_diffusion = gr.Blocks.load(name="spaces/runwayml/stable-diffusion-v1-5") - - -set_state('') - -def get_sampler(model_name, txt2obj, guidance_scale): - - global img2mesh_model_name - global base_diffusion_i2m - global i2m_model - if model_name != img2mesh_model_name: - img2mesh_model_name = model_name - i2m_model, base_diffusion_i2m = load_img2mesh_model(model_name) - - return PointCloudSampler( - device=device, - models=[t2m_model if txt2obj else i2m_model, upsampler_model], - diffusions=[base_diffusion_t2m if txt2obj else base_diffusion_i2m, upsampler_diffusion], - num_points=[1024, 4096 - 1024], - aux_channels=['R', 'G', 'B'], - guidance_scale=[guidance_scale, 0.0 if txt2obj else guidance_scale], - model_kwargs_key_filter=('texts', '') if txt2obj else ("*",) - ) - -def generate_txt2img(prompt): - - prompt = f"“a 3d rendering of {prompt}, full view, white background" - gallery_dir = stable_diffusion(prompt, fn_index=2) - imgs = [os.path.join(gallery_dir, img) for img in os.listdir(gallery_dir) if os.path.splitext(img)[1] == '.jpg'] - - return imgs[0], gr.update(visible=True) - -def generate_3D(input, model_name='base40M', guidance_scale=3.0, grid_size=32): - - set_state('Entered generate function...') - - if isinstance(input, Image.Image): - input = prepare_img(input) - - # if input is a string, it's a text prompt - sampler = get_sampler(model_name, txt2obj=True if isinstance(input, str) else False, guidance_scale=guidance_scale) - - # Produce a sample from the model. 
- set_state('Sampling...') - samples = None - kw_args = dict(texts=[input]) if isinstance(input, str) else dict(images=[input]) - for x in sampler.sample_batch_progressive(batch_size=1, model_kwargs=kw_args): - samples = x - - set_state('Converting to point cloud...') - pc = sampler.output_to_point_clouds(samples)[0] - - set_state('Saving point cloud...') - with open("point_cloud.ply", "wb") as f: - pc.write_ply(f) - - set_state('Converting to mesh...') - save_ply(pc, 'mesh.ply', grid_size) - - set_state('') - - return pc_to_plot(pc), ply_to_obj('mesh.ply', '3d_model.obj'), gr.update(value=['3d_model.obj', 'mesh.ply', 'point_cloud.ply'], visible=True) - -def prepare_img(img): - - w, h = img.size - if w > h: - img = img.crop((w - h) / 2, 0, w - (w - h) / 2, h) - else: - img = img.crop((0, (h - w) / 2, w, h - (h - w) / 2)) - - # resize to 256x256 - img = img.resize((256, 256)) - - return img - -def pc_to_plot(pc): - - return go.Figure( - data=[ - go.Scatter3d( - x=pc.coords[:,0], y=pc.coords[:,1], z=pc.coords[:,2], - mode='markers', - marker=dict( - size=2, - color=['rgb({},{},{})'.format(r,g,b) for r,g,b in zip(pc.channels["R"], pc.channels["G"], pc.channels["B"])], - ) - ) - ], - layout=dict( - scene=dict(xaxis=dict(visible=False), yaxis=dict(visible=False), zaxis=dict(visible=False)) - ), - ) - -def ply_to_obj(ply_file, obj_file): - mesh = trimesh.load(ply_file) - mesh.export(obj_file) - - return obj_file - -def save_ply(pc, file_name, grid_size): - - # Produce a mesh (with vertex colors) - mesh = marching_cubes_mesh( - pc=pc, - model=sdf_model, - batch_size=4096, - grid_size=grid_size, # increase to 128 for resolution used in evals - progress=True, - ) - - # Write the mesh to a PLY file to import into some other program. - with open(file_name, 'wb') as f: - mesh.write_ply(f) - - -with gr.Blocks() as app: - gr.Markdown("# Image-to-3D") - gr.Markdown("Turn any image or prompt to a 3D asset! Powered by StableDiffusion and OpenAI Point-E. Check out (https://twitter.com/angrypenguinPNG) for a tutorial on how to best use this space.") - gr.HTML("""To skip the queue you can duplicate this space: -
    [badge: Duplicate Space] -
    Don't forget to change space hardware to GPU after duplicating it.""") - - with gr.Row(): - with gr.Column(): - with gr.Tab("Image to 3D"): - img = gr.Image(label="Image") - gr.Markdown("Best results with images of 3D objects with no shadows on a white background.") - btn_generate_img2obj = gr.Button(value="Generate") - - with gr.Tab("Text to 3D"): - gr.Markdown("Generate an image with Stable Diffusion, then convert it to 3D. Just enter the object you want to generate.") - prompt_sd = gr.Textbox(label="Prompt", placeholder="a 3d rendering of [your prompt], full view, white background") - btn_generate_txt2sd = gr.Button(value="Generate image") - img_sd = gr.Image(label="Image") - btn_generate_sd2obj = gr.Button(value="Convert to 3D", visible=False) - - with gr.Accordion("Advanced settings", open=False): - dropdown_models = gr.Dropdown(label="Model", value="base40M", choices=["base40M", "base300M"]) #, "base1B"]) - guidance_scale = gr.Slider(label="Guidance scale", value=3.0, minimum=3.0, maximum=10.0, step=0.1) - grid_size = gr.Slider(label="Grid size (for .obj 3D model)", value=32, minimum=16, maximum=128, step=16) - - with gr.Column(): - plot = gr.Plot(label="Point cloud") - # btn_pc_to_obj = gr.Button(value="Convert to OBJ", visible=False) - model_3d = gr.Model3D(value=None) - file_out = gr.File(label="Files", visible=False) - - # state_info = state_info = gr.Textbox(label="State", show_label=False).style(container=False) - - - # inputs = [dropdown_models, prompt, img, guidance_scale, grid_size] - outputs = [plot, model_3d, file_out] - - btn_generate_img2obj.click(generate_3D, inputs=[img, dropdown_models, guidance_scale, grid_size], outputs=outputs) - - prompt_sd.submit(generate_txt2img, inputs=prompt_sd, outputs=[img_sd, btn_generate_sd2obj]) - btn_generate_txt2sd.click(generate_txt2img, inputs=prompt_sd, outputs=[img_sd, btn_generate_sd2obj], queue=False) - btn_generate_sd2obj.click(generate_3D, inputs=[img, dropdown_models, guidance_scale, grid_size], outputs=outputs) - - # btn_pc_to_obj.click(ply_to_obj, inputs=plot, outputs=[model_3d, file_out]) - - gr.Examples( - examples=[ - ["images/corgi.png"], - ["images/cube_stack.jpg"], - ["images/chair.png"], - ], - inputs=[img], - outputs=outputs, - fn=generate_3D, - cache_examples=False - ) - - # app.load(get_state, inputs=[], outputs=state_info, every=0.5, show_progress=False) - - gr.HTML(""" -

-    Space by: [badges: Twitter Follow, GitHub followers, Buy Me A Coffee, visitors]
    - """) - -app.queue(max_size=250, concurrency_count=6).launch() diff --git a/spaces/Ryukijano/canny_coyo1m/app.py b/spaces/Ryukijano/canny_coyo1m/app.py deleted file mode 100644 index 1d443658fa36c38c223cf36e1299765275231315..0000000000000000000000000000000000000000 --- a/spaces/Ryukijano/canny_coyo1m/app.py +++ /dev/null @@ -1,59 +0,0 @@ -import gradio as gr -import jax -import numpy as np -import jax.numpy as jnp -from flax.jax_utils import replicate -from flax.training.common_utils import shard -from PIL import Image -from diffusers import FlaxStableDiffusionControlNetPipeline, FlaxControlNetModel -import cv2 - -def create_key(seed=0): - return jax.random.PRNGKey(seed) - -def canny_filter(image): - gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) - blurred_image = cv2.GaussianBlur(gray_image, (5, 5), 0) - edges_image = cv2.Canny(blurred_image, 50, 150) - return edges_image - -# load control net and stable diffusion v1-5 -controlnet, controlnet_params = FlaxControlNetModel.from_pretrained( - "jax-diffusers-event/canny-coyo1m", dtype=jnp.bfloat16 -) -pipe, params = FlaxStableDiffusionControlNetPipeline.from_pretrained( - "runwayml/stable-diffusion-v1-5", controlnet=controlnet, revision="flax", dtype=jnp.bfloat16 -) - -def infer(prompts, negative_prompts, image): - params["controlnet"] = controlnet_params - - num_samples = 1 #jax.device_count() - rng = create_key(0) - rng = jax.random.split(rng, jax.device_count()) - im = canny_filter(image) - canny_image = Image.fromarray(im) - - prompt_ids = pipe.prepare_text_inputs([prompts] * num_samples) - negative_prompt_ids = pipe.prepare_text_inputs([negative_prompts] * num_samples) - processed_image = pipe.prepare_image_inputs([canny_image] * num_samples) - - p_params = replicate(params) - prompt_ids = shard(prompt_ids) - negative_prompt_ids = shard(negative_prompt_ids) - processed_image = shard(processed_image) - - output = pipe( - prompt_ids=prompt_ids, - image=processed_image, - params=p_params, - prng_seed=rng, - num_inference_steps=50, - neg_prompt_ids=negative_prompt_ids, - jit=True, - ).images - - output_images = pipe.numpy_to_pil(np.asarray(output.reshape((num_samples,) + output.shape[-3:]))) - return output_images - -gr.Interface(infer, inputs=["text", "text", "image"], outputs="gallery").launch() diff --git a/spaces/ScottRobertsXR/image-captioning-01/vit_gpt2/configuration_vit_gpt2.py b/spaces/ScottRobertsXR/image-captioning-01/vit_gpt2/configuration_vit_gpt2.py deleted file mode 100644 index e78c09e2af38130aaff70dde1817c957749283d2..0000000000000000000000000000000000000000 --- a/spaces/ScottRobertsXR/image-captioning-01/vit_gpt2/configuration_vit_gpt2.py +++ /dev/null @@ -1,45 +0,0 @@ -import copy - -from transformers import GPT2Config, ViTConfig -from transformers.configuration_utils import PretrainedConfig -from transformers.utils import logging - -logger = logging.get_logger(__name__) - - -class ViTGPT2Config(PretrainedConfig): - - model_type = "vit-gpt2" - is_composition = True - - def __init__(self, **kwargs): - super().__init__(**kwargs) - - if "vit_config" not in kwargs: - raise ValueError("`vit_config` can not be `None`.") - - if "gpt2_config" not in kwargs: - raise ValueError("`gpt2_config` can not be `None`.") - - vit_config = kwargs.pop("vit_config") - gpt2_config = kwargs.pop("gpt2_config") - - self.vit_config = ViTConfig(**vit_config) - self.gpt2_config = GPT2Config(**gpt2_config) - - @classmethod - def from_vit_gpt2_configs( - cls, vit_config: PretrainedConfig, gpt2_config: PretrainedConfig, **kwargs - ): - return 
cls( - vit_config=vit_config.to_dict(), - gpt2_config=gpt2_config.to_dict(), - **kwargs - ) - - def to_dict(self): - output = copy.deepcopy(self.__dict__) - output["vit_config"] = self.vit_config.to_dict() - output["gpt2_config"] = self.gpt2_config.to_dict() - output["model_type"] = self.__class__.model_type - return output \ No newline at end of file diff --git a/spaces/SeViLA/SeViLA/lavis/models/albef_models/albef_retrieval.py b/spaces/SeViLA/SeViLA/lavis/models/albef_models/albef_retrieval.py deleted file mode 100644 index dafea6d806445bb851dc6b4d47281d65d81508cf..0000000000000000000000000000000000000000 --- a/spaces/SeViLA/SeViLA/lavis/models/albef_models/albef_retrieval.py +++ /dev/null @@ -1,344 +0,0 @@ -""" - Copyright (c) 2022, salesforce.com, inc. - All rights reserved. - SPDX-License-Identifier: BSD-3-Clause - For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause -""" - -from copy import deepcopy - -import torch -import torch.nn.functional as F -from lavis.common.registry import registry -from lavis.models.albef_models import AlbefBase, compute_sim_matrix -from lavis.models.albef_models.albef_outputs import ( - AlbefIntermediateOutput, - AlbefOutput, - AlbefSimilarity, -) -from lavis.models.base_model import MomentumDistilationMixin, SharedQueueMixin -from lavis.models.med import XBertEncoder -from lavis.models.vit import VisionTransformerEncoder -from torch import nn - - -@registry.register_model("albef_retrieval") -class AlbefRetrieval(AlbefBase, MomentumDistilationMixin, SharedQueueMixin): - """ - ALBEF retrieval model. - - Supported model types: - - coco: fine-tuned ALBEF base model on COCO dataset (Karparthy split). - - flickr: fine-tuned ALBEF base model on Flickr30k dataset. - - Usage: - >>> from lavis.models import load_model - >>> model = load_model("albef_retrieval", "coco") - >>> model = load_model("albef_retrieval", "flickr") - """ - - PRETRAINED_MODEL_CONFIG_DICT = { - "coco": "configs/models/albef_retrieval_coco.yaml", - "flickr": "configs/models/albef_retrieval_flickr.yaml", - } - - def __init__( - self, - image_encoder, - text_encoder, - queue_size, - embed_dim=256, - temp=0.07, - use_distill=True, - momentum=0.995, - alpha=0.4, - max_txt_len=30, - ): - super().__init__() - - self.tokenizer = self.init_tokenizer() - - self.visual_encoder = image_encoder - self.text_encoder = text_encoder - - text_width = text_encoder.config.hidden_size - vision_width = image_encoder.vision_width - - self.vision_proj = nn.Linear(vision_width, embed_dim) - self.text_proj = nn.Linear(text_width, embed_dim) - - self.itm_head = nn.Linear(text_width, 2) - - # create the momentum encoder - self.visual_encoder_m = deepcopy(self.visual_encoder) - self.text_encoder_m = deepcopy(self.text_encoder) - - self.vision_proj_m = deepcopy(self.vision_proj) - self.text_proj_m = deepcopy(self.text_proj) - - self.model_pairs = [ - [self.visual_encoder, self.visual_encoder_m], - [self.text_encoder, self.text_encoder_m], - [self.vision_proj, self.vision_proj_m], - [self.text_proj, self.text_proj_m], - ] - self.copy_params() - - # create the queue - self.register_buffer("image_queue", torch.randn(embed_dim, queue_size)) - self.register_buffer("text_queue", torch.randn(embed_dim, queue_size)) - self.register_buffer("idx_queue", torch.full((1, queue_size), -100)) - self.register_buffer("queue_ptr", torch.zeros(1, dtype=torch.long)) - - self.image_queue = nn.functional.normalize(self.image_queue, dim=0) - self.text_queue = 
nn.functional.normalize(self.text_queue, dim=0) - - self.queue_size = queue_size - self.momentum = momentum - self.temp = nn.Parameter(temp * torch.ones([])) - - self.alpha = alpha - self.max_txt_len = max_txt_len - self.use_distill = use_distill - - def _rampup_factor(self, epoch, iters, num_iters_per_epoch): - return min(1, (epoch * num_iters_per_epoch + iters) / (2 * num_iters_per_epoch)) - - def forward(self, samples): - """ - Args: - samples (dict): A dictionary containing the following keys: - - image (torch.Tensor): A tensor of shape (batch_size, 3, H, W). The input images. - - text_input (list): A list of length batch_size, each element is a string of text/caption. - - image_id (torch.Tensor): A tensor of shape (batch_size, ). The image ids, used to identify same images in batch. - - epoch (int): The current epoch. - - iters (int): The current iteration. - - num_iters_per_epoch (int): The number of iterations per epoch. - - Returns: - BlipOutput: A BlipOutput object. See ``lavis.models.blip_models.blip_outputs.BlipOutput`` for more details. - - Examples: - >>> import torch - >>> from lavis.models import load_model - >>> model = load_model("albef_retrieval", "coco") - >>> images = torch.randn(4, 3, 384, 384) - >>> text_input = ["caption of image 1", "another caption of image 1", "caption of image 2", "caption of image 3"] - >>> image_id = torch.tensor([1, 1, 2, 3]) - >>> samples = {"image": images, "text_input": text_input, "image_id": image_id, "epoch": 0, "iters": 0, "num_iters_per_epoch": 100} - >>> output = model(samples) - >>> output.keys() - odict_keys(['sims', 'intermediate_output', 'loss', 'loss_itc', 'loss_itm']) - """ - image = samples["image"] - caption = samples["text_input"] - idx = samples["image_id"] - - alpha = self.alpha * self._rampup_factor( - epoch=samples["epoch"], - iters=samples["iters"], - num_iters_per_epoch=samples["num_iters_per_epoch"], - ) - - with torch.no_grad(): - self.temp.clamp_(0.001, 0.5) - - image_embeds = self.visual_encoder.forward_features(image) - image_atts = torch.ones(image_embeds.size()[:-1], dtype=torch.long).to( - self.device - ) - - image_feat = F.normalize(self.vision_proj(image_embeds[:, 0, :]), dim=-1) - - text = self.tokenizer( - caption, - padding="max_length", - truncation=True, - max_length=self.max_txt_len, - return_tensors="pt", - ).to(self.device) - - text_output = self.text_encoder.forward_text(text) - - text_embeds = text_output.last_hidden_state - text_feat = F.normalize(self.text_proj(text_embeds[:, 0, :]), dim=-1) - - idx = idx.view(-1, 1) - idx_all = torch.cat([idx.t(), self.idx_queue.clone().detach()], dim=1) - pos_idx = torch.eq(idx, idx_all).float() - sim_targets = pos_idx / pos_idx.sum(1, keepdim=True) - - with torch.no_grad(): - self._momentum_update() - image_embeds_m = self.visual_encoder_m(image) - image_feat_m = F.normalize( - self.vision_proj_m(image_embeds_m[:, 0, :]), dim=-1 - ) - image_feat_all = torch.cat( - [image_feat_m.t(), self.image_queue.clone().detach()], dim=1 - ) - text_output_m = self.text_encoder_m.forward_text(text) - text_embeds_m = text_output_m.last_hidden_state - text_feat_m = F.normalize(self.text_proj_m(text_embeds_m[:, 0, :]), dim=-1) - text_feat_all = torch.cat( - [text_feat_m.t(), self.text_queue.clone().detach()], dim=1 - ) - - if self.use_distill: - sim_i2t_m = image_feat_m @ text_feat_all / self.temp - sim_t2i_m = text_feat_m @ image_feat_all / self.temp - - sim_i2t_targets = ( - alpha * F.softmax(sim_i2t_m, dim=1) + (1 - alpha) * sim_targets - ) - sim_t2i_targets = ( - alpha * 
F.softmax(sim_t2i_m, dim=1) + (1 - alpha) * sim_targets - ) - - sim_i2t = image_feat @ text_feat_all / self.temp - sim_t2i = text_feat @ image_feat_all / self.temp - - if self.use_distill: - loss_i2t = -torch.sum( - F.log_softmax(sim_i2t, dim=1) * sim_i2t_targets, dim=1 - ).mean() - loss_t2i = -torch.sum( - F.log_softmax(sim_t2i, dim=1) * sim_t2i_targets, dim=1 - ).mean() - else: - loss_i2t = -torch.sum( - F.log_softmax(sim_i2t, dim=1) * sim_targets, dim=1 - ).mean() - loss_t2i = -torch.sum( - F.log_softmax(sim_t2i, dim=1) * sim_targets, dim=1 - ).mean() - - loss_itc = (loss_i2t + loss_t2i) / 2 - - self._dequeue_and_enqueue(image_feat_m, text_feat_m, idx) - - encoder_output_pos = self.text_encoder( - encoder_embeds=text_embeds, - attention_mask=text.attention_mask, - encoder_hidden_states=image_embeds, - encoder_attention_mask=image_atts, - return_dict=True, - mode="fusion", - ) - - with torch.no_grad(): - bs = image.size(0) - weights_i2t = F.softmax(sim_i2t[:, :bs] + 1e-4, dim=1) - weights_t2i = F.softmax(sim_t2i[:, :bs] + 1e-4, dim=1) - - mask = torch.eq(idx, idx.T) - weights_i2t.masked_fill_(mask, 0) - weights_t2i.masked_fill_(mask, 0) - - # select a negative image for each text - image_embeds_neg = [] - for b in range(bs): - neg_idx = torch.multinomial(weights_t2i[b], 1).item() - image_embeds_neg.append(image_embeds[neg_idx]) - image_embeds_neg = torch.stack(image_embeds_neg, dim=0) - - # select a negative text for each image - text_embeds_neg = [] - text_atts_neg = [] - for b in range(bs): - neg_idx = torch.multinomial(weights_i2t[b], 1).item() - text_embeds_neg.append(text_embeds[neg_idx]) - text_atts_neg.append(text.attention_mask[neg_idx]) - text_embeds_neg = torch.stack(text_embeds_neg, dim=0) - text_atts_neg = torch.stack(text_atts_neg, dim=0) - - text_embeds_all = torch.cat([text_embeds, text_embeds_neg], dim=0) - text_atts_all = torch.cat([text.attention_mask, text_atts_neg], dim=0) - - image_embeds_all = torch.cat([image_embeds_neg, image_embeds], dim=0) - image_atts_all = torch.cat([image_atts, image_atts], dim=0) - - encoder_output_neg = self.text_encoder( - encoder_embeds=text_embeds_all, - attention_mask=text_atts_all, - encoder_hidden_states=image_embeds_all, - encoder_attention_mask=image_atts_all, - return_dict=True, - mode="fusion", - ) - - vl_embeddings = torch.cat( - [ - encoder_output_pos.last_hidden_state[:, 0, :], - encoder_output_neg.last_hidden_state[:, 0, :], - ], - dim=0, - ) - itm_logits = self.itm_head(vl_embeddings) - - itm_labels = torch.cat( - [torch.ones(bs, dtype=torch.long), torch.zeros(2 * bs, dtype=torch.long)], - dim=0, - ).to(self.device) - loss_itm = F.cross_entropy(itm_logits, itm_labels) - - return AlbefOutput( - loss=loss_itc + loss_itm, - loss_itc=loss_itc, - loss_itm=loss_itm, - sims=AlbefSimilarity( - sim_i2t=sim_i2t, - sim_t2i=sim_t2i, - sim_i2t_m=sim_i2t_m, - sim_t2i_m=sim_t2i_m, - sim_i2t_targets=sim_i2t_targets, - sim_t2i_targets=sim_t2i_targets, - ), - intermediate_output=AlbefIntermediateOutput( - image_embeds=image_embeds, - image_embeds_m=image_embeds_m, - text_embeds=text_embeds, - text_embeds_m=text_embeds_m, - encoder_output=encoder_output_pos, - encoder_output_neg=encoder_output_neg, - itm_logits=itm_logits, - itm_labels=itm_labels, - ), - ) - - @classmethod - def from_config(cls, cfg=None): - image_encoder = VisionTransformerEncoder.from_config(cfg, from_pretrained=False) - text_encoder = XBertEncoder.from_config(cfg) - - embed_dim = cfg.get("embed_dim", 256) - momentum = cfg.get("momentum", 0.995) - alpha = cfg.get("alpha", 
0.4) - temp = cfg.get("temp", 0.07) - max_txt_len = cfg.get("max_txt_len", 30) - queue_size = cfg.get("queue_size", 0) - use_distill = cfg.get("use_distill", True) - - model = cls( - image_encoder=image_encoder, - text_encoder=text_encoder, - queue_size=queue_size, - embed_dim=embed_dim, - temp=temp, - momentum=momentum, - alpha=alpha, - max_txt_len=max_txt_len, - use_distill=use_distill, - ) - - model.load_checkpoint_from_config(cfg) - - return model - - def compute_sim_matrix(self, data_loader, task_cfg): - """ - Compute similarity i2t, t2i matrix for the given data loader. - """ - k_test = task_cfg.k_test - - return compute_sim_matrix(model=self, data_loader=data_loader, k_test=k_test) diff --git a/spaces/SeViLA/SeViLA/lavis/models/blip2_models/blip2_image_text_matching.py b/spaces/SeViLA/SeViLA/lavis/models/blip2_models/blip2_image_text_matching.py deleted file mode 100644 index c9dd78d945662784a0c83ac8fdb8c3c389ab094b..0000000000000000000000000000000000000000 --- a/spaces/SeViLA/SeViLA/lavis/models/blip2_models/blip2_image_text_matching.py +++ /dev/null @@ -1,111 +0,0 @@ -""" - Copyright (c) 2022, salesforce.com, inc. - All rights reserved. - SPDX-License-Identifier: BSD-3-Clause - For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause -""" - -import torch -import torch.nn.functional as F -from lavis.common.registry import registry -from lavis.models.blip2_models.blip2_qformer import Blip2Qformer - - -@registry.register_model("blip2_image_text_matching") -class Blip2ITM(Blip2Qformer): - """ - BLIP Image-Text Matching (ITM) model. - Supported model types: - - pretrained: pretrained model - - coco: fintuned model on coco - Usage: - >>> from lavis.models import load_model - >>> model = load_model("blip2_image_text_matching", "pretrained") - >>> model = load_model("blip2_image_text_matching", "coco") - """ - - def __init__( - self, - img_size=224, - drop_path_rate=0, - use_grad_checkpoint=False, - vit_precision="fp16", - freeze_vit=True, - num_query_token=32, - embed_dim=256, - max_txt_len=32, - ): - super().__init__( - img_size=img_size, - drop_path_rate=drop_path_rate, - use_grad_checkpoint=use_grad_checkpoint, - vit_precision=vit_precision, - freeze_vit=freeze_vit, - num_query_token=num_query_token, - embed_dim=embed_dim, - max_txt_len=max_txt_len, - ) - - def forward(self, samples, match_head="itm"): - image = samples["image"] - caption = samples["text_input"] - - with torch.cuda.amp.autocast(enabled=(self.device != torch.device("cpu"))): - image_embeds = self.ln_vision(self.visual_encoder(image)) - image_atts = torch.ones(image_embeds.size()[:-1], dtype=torch.long).to( - image.device - ) - - text = self.tokenizer( - caption, - truncation=True, - max_length=self.max_txt_len, - return_tensors="pt", - ).to(image.device) - - if match_head == "itm": - query_tokens = self.query_tokens.expand(image_embeds.shape[0], -1, -1) - query_atts = torch.ones(query_tokens.size()[:-1], dtype=torch.long).to( - image.device - ) - attention_mask = torch.cat([query_atts, text.attention_mask], dim=1) - output_itm = self.Qformer.bert( - text.input_ids, - query_embeds=query_tokens, - attention_mask=attention_mask, - encoder_hidden_states=image_embeds, - encoder_attention_mask=image_atts, - return_dict=True, - ) - itm_embeddings = output_itm.last_hidden_state[:, : query_tokens.size(1), :] - itm_logit = self.itm_head(itm_embeddings) - itm_logit = itm_logit.mean(dim=1) - - return itm_logit - - elif match_head == "itc": - query_tokens = 
self.query_tokens.expand(image_embeds.shape[0], -1, -1) - - query_output = self.Qformer.bert( - query_embeds=query_tokens, - encoder_hidden_states=image_embeds, - encoder_attention_mask=image_atts, - return_dict=True, - ) - image_feats = F.normalize( - self.vision_proj(query_output.last_hidden_state), dim=-1 - ) - - text_output = self.Qformer.bert( - text.input_ids, - attention_mask=text.attention_mask, - return_dict=True, - ) - text_feat = F.normalize( - self.text_proj(text_output.last_hidden_state[:, 0, :]), dim=-1 - ) - - sims = torch.bmm(image_feats, text_feat.unsqueeze(-1)) - sim, _ = torch.max(sims, dim=1) - - return sim diff --git a/spaces/ServerX/PorcoDiaz/infer/modules/train/train.py b/spaces/ServerX/PorcoDiaz/infer/modules/train/train.py deleted file mode 100644 index 550bef391444c9b6c0d8c44ae3a3809b3ade4218..0000000000000000000000000000000000000000 --- a/spaces/ServerX/PorcoDiaz/infer/modules/train/train.py +++ /dev/null @@ -1,723 +0,0 @@ -import os -import sys -import logging - -logger = logging.getLogger(__name__) - -now_dir = os.getcwd() -sys.path.append(os.path.join(now_dir)) - -import datetime - -from infer.lib.train import utils - -hps = utils.get_hparams() -os.environ["CUDA_VISIBLE_DEVICES"] = hps.gpus.replace("-", ",") -n_gpus = len(hps.gpus.split("-")) -from random import randint, shuffle - -import torch -try: - import intel_extension_for_pytorch as ipex # pylint: disable=import-error, unused-import - if torch.xpu.is_available(): - from infer.modules.ipex import ipex_init - from infer.modules.ipex.gradscaler import gradscaler_init - from torch.xpu.amp import autocast - GradScaler = gradscaler_init() - ipex_init() - else: - from torch.cuda.amp import GradScaler, autocast -except Exception: - from torch.cuda.amp import GradScaler, autocast - -torch.backends.cudnn.deterministic = False -torch.backends.cudnn.benchmark = False -from time import sleep -from time import time as ttime - -import torch.distributed as dist -import torch.multiprocessing as mp - -from torch.nn import functional as F -from torch.nn.parallel import DistributedDataParallel as DDP -from torch.utils.data import DataLoader -from torch.utils.tensorboard import SummaryWriter - -from infer.lib.infer_pack import commons -from infer.lib.train.data_utils import ( - DistributedBucketSampler, - TextAudioCollate, - TextAudioCollateMultiNSFsid, - TextAudioLoader, - TextAudioLoaderMultiNSFsid, -) - -if hps.version == "v1": - from infer.lib.infer_pack.models import MultiPeriodDiscriminator - from infer.lib.infer_pack.models import SynthesizerTrnMs256NSFsid as RVC_Model_f0 - from infer.lib.infer_pack.models import ( - SynthesizerTrnMs256NSFsid_nono as RVC_Model_nof0, - ) -else: - from infer.lib.infer_pack.models import ( - SynthesizerTrnMs768NSFsid as RVC_Model_f0, - SynthesizerTrnMs768NSFsid_nono as RVC_Model_nof0, - MultiPeriodDiscriminatorV2 as MultiPeriodDiscriminator, - ) - -from infer.lib.train.losses import ( - discriminator_loss, - feature_loss, - generator_loss, - kl_loss, -) -from infer.lib.train.mel_processing import mel_spectrogram_torch, spec_to_mel_torch -from infer.lib.train.process_ckpt import savee - -global_step = 0 -import csv - -class EpochRecorder: - def __init__(self): - self.last_time = ttime() - - def record(self): - now_time = ttime() - elapsed_time = now_time - self.last_time - self.last_time = now_time - elapsed_time_str = str(datetime.timedelta(seconds=elapsed_time)) - current_time = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S") - return f"[{current_time}] | ({elapsed_time_str})" - 
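The `EpochRecorder` class just above is only a wall-clock timer used for per-epoch log lines. A minimal, self-contained sketch of how it behaves follows (the class is repeated here so the snippet runs on its own; the `sleep` is a stand-in for one real training epoch):

```python
# Standalone sketch of the EpochRecorder timing pattern from the deleted train.py.
# It timestamps successive calls and reports the wall-clock time between them.
import datetime
import time


class EpochRecorder:
    def __init__(self):
        self.last_time = time.time()

    def record(self):
        now_time = time.time()
        elapsed = now_time - self.last_time
        self.last_time = now_time
        elapsed_str = str(datetime.timedelta(seconds=elapsed))
        current = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
        return f"[{current}] | ({elapsed_str})"


recorder = EpochRecorder()
for epoch in range(3):
    time.sleep(0.1)  # pretend one epoch of training happened here
    print(f"Epoch {epoch} {recorder.record()}")
```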
-def reset_stop_flag(): - with open("csvdb/stop.csv", "w+", newline="") as STOPCSVwrite: - csv_writer = csv.writer(STOPCSVwrite, delimiter=",") - csv_writer.writerow(["False"]) - -def create_model(hps, model_f0, model_nof0): - filter_length_adjusted = hps.data.filter_length // 2 + 1 - segment_size_adjusted = hps.train.segment_size // hps.data.hop_length - is_half = hps.train.fp16_run - sr = hps.sample_rate - - model = model_f0 if hps.if_f0 == 1 else model_nof0 - - return model( - filter_length_adjusted, - segment_size_adjusted, - **hps.model, - is_half=is_half, - sr=sr - ) - -def move_model_to_cuda_if_available(model, rank): - if torch.cuda.is_available(): - return model.cuda(rank) - else: - return model - -def create_optimizer(model, hps): - return torch.optim.AdamW( - model.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps, - ) - -def create_ddp_model(model, rank): - if torch.cuda.is_available(): - return DDP(model, device_ids=[rank]) - else: - return DDP(model) - -def create_dataset(hps, if_f0=True): - return TextAudioLoaderMultiNSFsid(hps.data.training_files, hps.data) if if_f0 else TextAudioLoader(hps.data.training_files, hps.data) - -def create_sampler(dataset, batch_size, n_gpus, rank): - return DistributedBucketSampler( - dataset, - batch_size * n_gpus, - # [100, 200, 300, 400, 500, 600, 700, 800, 900, 1000, 1200,1400], # 16s - [100, 200, 300, 400, 500, 600, 700, 800, 900], # 16s - num_replicas=n_gpus, - rank=rank, - shuffle=True, - ) - -def set_collate_fn(if_f0=True): - return TextAudioCollateMultiNSFsid() if if_f0 else TextAudioCollate() - - -def main(): - n_gpus = torch.cuda.device_count() - - if torch.cuda.is_available() == False and torch.backends.mps.is_available() == True: - n_gpus = 1 - if n_gpus < 1: - # patch to unblock people without gpus. there is probably a better way. - logger.warn("NO GPU DETECTED: falling back to CPU - this may take a while") - n_gpus = 1 - os.environ["MASTER_ADDR"] = "localhost" - os.environ["MASTER_PORT"] = str(randint(20000, 55555)) - children = [] - for i in range(n_gpus): - subproc = mp.Process( - target=run, - args=( - i, - n_gpus, - hps, - ), - ) - children.append(subproc) - subproc.start() - - for i in range(n_gpus): - children[i].join() - - -def run(rank, n_gpus, hps): - global global_step - if rank == 0: - logger = utils.get_logger(hps.model_dir) - logger.info(hps) - # utils.check_git_hash(hps.model_dir) - writer = SummaryWriter(log_dir=hps.model_dir) - writer_eval = SummaryWriter(log_dir=os.path.join(hps.model_dir, "eval")) - - dist.init_process_group( - backend="gloo", init_method="env://", world_size=n_gpus, rank=rank - ) - torch.manual_seed(hps.train.seed) - if torch.cuda.is_available(): - torch.cuda.set_device(rank) - - if hps.if_f0 == 1: - train_dataset = TextAudioLoaderMultiNSFsid(hps.data.training_files, hps.data) - else: - train_dataset = TextAudioLoader(hps.data.training_files, hps.data) - train_sampler = DistributedBucketSampler( - train_dataset, - hps.train.batch_size * n_gpus, - # [100, 200, 300, 400, 500, 600, 700, 800, 900, 1000, 1200,1400], # 16s - [100, 200, 300, 400, 500, 600, 700, 800, 900], # 16s - num_replicas=n_gpus, - rank=rank, - shuffle=True, - ) - # It is possible that dataloader's workers are out of shared memory. Please try to raise your shared memory limit. 
- # num_workers=8 -> num_workers=4 - if hps.if_f0 == 1: - collate_fn = TextAudioCollateMultiNSFsid() - else: - collate_fn = TextAudioCollate() - train_loader = DataLoader( - train_dataset, - num_workers=4, - shuffle=False, - pin_memory=True, - collate_fn=collate_fn, - batch_sampler=train_sampler, - persistent_workers=True, - prefetch_factor=8, - ) - if hps.if_f0 == 1: - net_g = RVC_Model_f0( - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - **hps.model, - is_half=hps.train.fp16_run, - sr=hps.sample_rate, - ) - else: - net_g = RVC_Model_nof0( - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - **hps.model, - is_half=hps.train.fp16_run, - ) - if torch.cuda.is_available(): - net_g = net_g.cuda(rank) - net_d = MultiPeriodDiscriminator(hps.model.use_spectral_norm) - if torch.cuda.is_available(): - net_d = net_d.cuda(rank) - optim_g = torch.optim.AdamW( - net_g.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps, - ) - optim_d = torch.optim.AdamW( - net_d.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps, - ) - # net_g = DDP(net_g, device_ids=[rank], find_unused_parameters=True) - # net_d = DDP(net_d, device_ids=[rank], find_unused_parameters=True) - if hasattr(torch, "xpu") and torch.xpu.is_available(): - pass - elif torch.cuda.is_available(): - net_g = DDP(net_g, device_ids=[rank]) - net_d = DDP(net_d, device_ids=[rank]) - else: - net_g = DDP(net_g) - net_d = DDP(net_d) - - try: # 如果能加载自动resume - _, _, _, epoch_str = utils.load_checkpoint( - utils.latest_checkpoint_path(hps.model_dir, "D_*.pth"), net_d, optim_d - ) # D多半加载没事 - if rank == 0: - logger.info("loaded D") - # _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "G_*.pth"), net_g, optim_g,load_opt=0) - _, _, _, epoch_str = utils.load_checkpoint( - utils.latest_checkpoint_path(hps.model_dir, "G_*.pth"), net_g, optim_g - ) - global_step = (epoch_str - 1) * len(train_loader) - # epoch_str = 1 - # global_step = 0 - except: # 如果首次不能加载,加载pretrain - # traceback.print_exc() - epoch_str = 1 - global_step = 0 - if hps.pretrainG != "": - if rank == 0: - logger.info("loaded pretrained %s" % (hps.pretrainG)) - if hasattr(net_g, "module"): - logger.info( - net_g.module.load_state_dict( - torch.load(hps.pretrainG, map_location="cpu")["model"] - ) - ) ##测试不加载优化器 - else: - logger.info( - net_g.load_state_dict( - torch.load(hps.pretrainG, map_location="cpu")["model"] - ) - ) ##测试不加载优化器 - if hps.pretrainD != "": - if rank == 0: - logger.info("loaded pretrained %s" % (hps.pretrainD)) - if hasattr(net_d, "module"): - logger.info( - net_d.module.load_state_dict( - torch.load(hps.pretrainD, map_location="cpu")["model"] - ) - ) - else: - logger.info( - net_d.load_state_dict( - torch.load(hps.pretrainD, map_location="cpu")["model"] - ) - ) - - scheduler_g = torch.optim.lr_scheduler.ExponentialLR( - optim_g, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2 - ) - scheduler_d = torch.optim.lr_scheduler.ExponentialLR( - optim_d, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2 - ) - - scaler = GradScaler(enabled=hps.train.fp16_run) - - cache = [] - for epoch in range(epoch_str, hps.train.epochs + 1): - if rank == 0: - train_and_evaluate( - rank, - epoch, - hps, - [net_g, net_d], - [optim_g, optim_d], - [scheduler_g, scheduler_d], - scaler, - [train_loader, None], - logger, - [writer, writer_eval], - cache, - ) - else: - train_and_evaluate( - rank, - epoch, - hps, - [net_g, net_d], - 
[optim_g, optim_d], - [scheduler_g, scheduler_d], - scaler, - [train_loader, None], - None, - None, - cache, - ) - scheduler_g.step() - scheduler_d.step() - - -def train_and_evaluate( - rank, epoch, hps, nets, optims, schedulers, scaler, loaders, logger, writers, cache -): - net_g, net_d = nets - optim_g, optim_d = optims - train_loader, eval_loader = loaders - if writers is not None: - writer, writer_eval = writers - - train_loader.batch_sampler.set_epoch(epoch) - global global_step - - net_g.train() - net_d.train() - - # Prepare data iterator - if hps.if_cache_data_in_gpu == True: - # Use Cache - data_iterator = cache - if cache == []: - # Make new cache - for batch_idx, info in enumerate(train_loader): - # Unpack - if hps.if_f0 == 1: - ( - phone, - phone_lengths, - pitch, - pitchf, - spec, - spec_lengths, - wave, - wave_lengths, - sid, - ) = info - else: - ( - phone, - phone_lengths, - spec, - spec_lengths, - wave, - wave_lengths, - sid, - ) = info - # Load on CUDA - if torch.cuda.is_available(): - phone = phone.cuda(rank, non_blocking=True) - phone_lengths = phone_lengths.cuda(rank, non_blocking=True) - if hps.if_f0 == 1: - pitch = pitch.cuda(rank, non_blocking=True) - pitchf = pitchf.cuda(rank, non_blocking=True) - sid = sid.cuda(rank, non_blocking=True) - spec = spec.cuda(rank, non_blocking=True) - spec_lengths = spec_lengths.cuda(rank, non_blocking=True) - wave = wave.cuda(rank, non_blocking=True) - wave_lengths = wave_lengths.cuda(rank, non_blocking=True) - # Cache on list - if hps.if_f0 == 1: - cache.append( - ( - batch_idx, - ( - phone, - phone_lengths, - pitch, - pitchf, - spec, - spec_lengths, - wave, - wave_lengths, - sid, - ), - ) - ) - else: - cache.append( - ( - batch_idx, - ( - phone, - phone_lengths, - spec, - spec_lengths, - wave, - wave_lengths, - sid, - ), - ) - ) - else: - # Load shuffled cache - shuffle(cache) - else: - # Loader - data_iterator = enumerate(train_loader) - - # Run steps - epoch_recorder = EpochRecorder() - for batch_idx, info in data_iterator: - # Data - ## Unpack - if hps.if_f0 == 1: - ( - phone, - phone_lengths, - pitch, - pitchf, - spec, - spec_lengths, - wave, - wave_lengths, - sid, - ) = info - else: - phone, phone_lengths, spec, spec_lengths, wave, wave_lengths, sid = info - ## Load on CUDA - if (hps.if_cache_data_in_gpu == False) and torch.cuda.is_available(): - phone = phone.cuda(rank, non_blocking=True) - phone_lengths = phone_lengths.cuda(rank, non_blocking=True) - if hps.if_f0 == 1: - pitch = pitch.cuda(rank, non_blocking=True) - pitchf = pitchf.cuda(rank, non_blocking=True) - sid = sid.cuda(rank, non_blocking=True) - spec = spec.cuda(rank, non_blocking=True) - spec_lengths = spec_lengths.cuda(rank, non_blocking=True) - wave = wave.cuda(rank, non_blocking=True) - # wave_lengths = wave_lengths.cuda(rank, non_blocking=True) - - # Calculate - with autocast(enabled=hps.train.fp16_run): - if hps.if_f0 == 1: - ( - y_hat, - ids_slice, - x_mask, - z_mask, - (z, z_p, m_p, logs_p, m_q, logs_q), - ) = net_g(phone, phone_lengths, pitch, pitchf, spec, spec_lengths, sid) - else: - ( - y_hat, - ids_slice, - x_mask, - z_mask, - (z, z_p, m_p, logs_p, m_q, logs_q), - ) = net_g(phone, phone_lengths, spec, spec_lengths, sid) - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax, - ) - y_mel = commons.slice_segments( - mel, ids_slice, hps.train.segment_size // hps.data.hop_length - ) - with autocast(enabled=False): - y_hat_mel = mel_spectrogram_torch( - 
y_hat.float().squeeze(1), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax, - ) - if hps.train.fp16_run == True: - y_hat_mel = y_hat_mel.half() - wave = commons.slice_segments( - wave, ids_slice * hps.data.hop_length, hps.train.segment_size - ) # slice - - # Discriminator - y_d_hat_r, y_d_hat_g, _, _ = net_d(wave, y_hat.detach()) - with autocast(enabled=False): - loss_disc, losses_disc_r, losses_disc_g = discriminator_loss( - y_d_hat_r, y_d_hat_g - ) - optim_d.zero_grad() - scaler.scale(loss_disc).backward() - scaler.unscale_(optim_d) - grad_norm_d = commons.clip_grad_value_(net_d.parameters(), None) - scaler.step(optim_d) - - with autocast(enabled=hps.train.fp16_run): - # Generator - y_d_hat_r, y_d_hat_g, fmap_r, fmap_g = net_d(wave, y_hat) - with autocast(enabled=False): - loss_mel = F.l1_loss(y_mel, y_hat_mel) * hps.train.c_mel - loss_kl = kl_loss(z_p, logs_q, m_p, logs_p, z_mask) * hps.train.c_kl - loss_fm = feature_loss(fmap_r, fmap_g) - loss_gen, losses_gen = generator_loss(y_d_hat_g) - loss_gen_all = loss_gen + loss_fm + loss_mel + loss_kl - optim_g.zero_grad() - scaler.scale(loss_gen_all).backward() - scaler.unscale_(optim_g) - grad_norm_g = commons.clip_grad_value_(net_g.parameters(), None) - scaler.step(optim_g) - scaler.update() - - if rank == 0: - if global_step % hps.train.log_interval == 0: - lr = optim_g.param_groups[0]["lr"] - logger.info( - "Train Epoch: {} [{:.0f}%]".format( - epoch, 100.0 * batch_idx / len(train_loader) - ) - ) - # Amor For Tensorboard display - if loss_mel > 75: - loss_mel = 75 - if loss_kl > 9: - loss_kl = 9 - - logger.info([global_step, lr]) - logger.info( - f"loss_disc={loss_disc:.3f}, loss_gen={loss_gen:.3f}, loss_fm={loss_fm:.3f},loss_mel={loss_mel:.3f}, loss_kl={loss_kl:.3f}" - ) - scalar_dict = { - "loss/g/total": loss_gen_all, - "loss/d/total": loss_disc, - "learning_rate": lr, - "grad_norm_d": grad_norm_d, - "grad_norm_g": grad_norm_g, - } - scalar_dict.update( - { - "loss/g/fm": loss_fm, - "loss/g/mel": loss_mel, - "loss/g/kl": loss_kl, - } - ) - - scalar_dict.update( - {"loss/g/{}".format(i): v for i, v in enumerate(losses_gen)} - ) - scalar_dict.update( - {"loss/d_r/{}".format(i): v for i, v in enumerate(losses_disc_r)} - ) - scalar_dict.update( - {"loss/d_g/{}".format(i): v for i, v in enumerate(losses_disc_g)} - ) - image_dict = { - "slice/mel_org": utils.plot_spectrogram_to_numpy( - y_mel[0].data.cpu().numpy() - ), - "slice/mel_gen": utils.plot_spectrogram_to_numpy( - y_hat_mel[0].data.cpu().numpy() - ), - "all/mel": utils.plot_spectrogram_to_numpy( - mel[0].data.cpu().numpy() - ), - } - utils.summarize( - writer=writer, - global_step=global_step, - images=image_dict, - scalars=scalar_dict, - ) - global_step += 1 - # /Run steps - - if epoch % hps.save_every_epoch == 0 and rank == 0: - if hps.if_latest == 0: - utils.save_checkpoint( - net_g, - optim_g, - hps.train.learning_rate, - epoch, - os.path.join(hps.model_dir, "G_{}.pth".format(global_step)), - ) - utils.save_checkpoint( - net_d, - optim_d, - hps.train.learning_rate, - epoch, - os.path.join(hps.model_dir, "D_{}.pth".format(global_step)), - ) - else: - utils.save_checkpoint( - net_g, - optim_g, - hps.train.learning_rate, - epoch, - os.path.join(hps.model_dir, "G_{}.pth".format(2333333)), - ) - utils.save_checkpoint( - net_d, - optim_d, - hps.train.learning_rate, - epoch, - os.path.join(hps.model_dir, "D_{}.pth".format(2333333)), - ) - if rank == 0 and 
hps.save_every_weights == "1": - if hasattr(net_g, "module"): - ckpt = net_g.module.state_dict() - else: - ckpt = net_g.state_dict() - logger.info( - "saving ckpt %s_e%s:%s" - % ( - hps.name, - epoch, - savee( - ckpt, - hps.sample_rate, - hps.if_f0, - hps.name + "_e%s_s%s" % (epoch, global_step), - epoch, - hps.version, - hps, - ), - ) - ) - - stopbtn = False - try: - with open("csvdb/stop.csv", 'r') as csv_file: - stopbtn_str = next(csv.reader(csv_file), [None])[0] - if stopbtn_str is not None: stopbtn = stopbtn_str.lower() == 'true' - except (ValueError, TypeError, FileNotFoundError, IndexError) as e: - print(f"Handling exception: {e}") - stopbtn = False - - if stopbtn: - logger.info("Stop Button was pressed. The program is closed.") - ckpt = net_g.module.state_dict() if hasattr(net_g, "module") else net_g.state_dict() - logger.info( - "saving final ckpt:%s" - % ( - savee( - ckpt, hps.sample_rate, hps.if_f0, hps.name, epoch, hps.version, hps - ) - ) - ) - sleep(1) - reset_stop_flag() - os._exit(2333333) - - if rank == 0: - logger.info("====> Epoch: {} {}".format(epoch, epoch_recorder.record())) - if epoch >= hps.total_epoch and rank == 0: - logger.info("Training is done. The program is closed.") - - if hasattr(net_g, "module"): - ckpt = net_g.module.state_dict() - else: - ckpt = net_g.state_dict() - logger.info( - "saving final ckpt:%s" - % ( - savee( - ckpt, hps.sample_rate, hps.if_f0, hps.name, epoch, hps.version, hps - ) - ) - ) - sleep(1) - os._exit(2333333) - - -if __name__ == "__main__": - torch.multiprocessing.set_start_method("spawn") - main() diff --git a/spaces/Sharathhebbar24/One-stop-for-Open-source-models/README.md b/spaces/Sharathhebbar24/One-stop-for-Open-source-models/README.md deleted file mode 100644 index 23c463341cba096b20f29671d2fbbfc463dd6c11..0000000000000000000000000000000000000000 --- a/spaces/Sharathhebbar24/One-stop-for-Open-source-models/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: One Stop For Open Source Models -emoji: 😻 -colorFrom: gray -colorTo: yellow -sdk: streamlit -sdk_version: 1.26.0 -app_file: app.py -pinned: false -license: other ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ShawnAI/VectorDB-ChatBot/app.py b/spaces/ShawnAI/VectorDB-ChatBot/app.py deleted file mode 100644 index 2bdd42c698202568e67e47931e54d581dfca6904..0000000000000000000000000000000000000000 --- a/spaces/ShawnAI/VectorDB-ChatBot/app.py +++ /dev/null @@ -1,442 +0,0 @@ -import gradio as gr -import random -import time - -from langchain import PromptTemplate -from langchain.llms import OpenAI -from langchain.chat_models import ChatOpenAI -from langchain.embeddings import HuggingFaceEmbeddings, HuggingFaceInstructEmbeddings, OpenAIEmbeddings -from langchain.vectorstores import Pinecone -from langchain.chains import LLMChain -from langchain.chains.question_answering import load_qa_chain -import pinecone - -import os -os.environ["TOKENIZERS_PARALLELISM"] = "false" - -#OPENAI_API_KEY = "" -OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY", "") -OPENAI_TEMP = 1 -OPENAI_API_LINK = "[OpenAI API Key](https://platform.openai.com/account/api-keys)" -OPENAI_LINK = "[OpenAI](https://openai.com)" - -PINECONE_KEY = os.environ.get("PINECONE_KEY", "") -PINECONE_ENV = os.environ.get("PINECONE_ENV", "asia-northeast1-gcp") -PINECONE_INDEX = os.environ.get("PINECONE_INDEX", '3gpp-r16') - -PINECONE_LINK = "[Pinecone](https://www.pinecone.io)" -LANGCHAIN_LINK = 
"[LangChain](https://python.langchain.com/en/latest/index.html)" - -EMBEDDING_MODEL = os.environ.get("EMBEDDING_MODEL", "hkunlp/instructor-large") -EMBEDDING_LOADER = os.environ.get("EMBEDDING_LOADER", "HuggingFaceInstructEmbeddings") -EMBEDDING_LIST = ["HuggingFaceInstructEmbeddings", "HuggingFaceEmbeddings", "OpenAIEmbeddings"] - -# return top-k text chunks from vector store -TOP_K_DEFAULT = 15 -TOP_K_MAX = 30 -SCORE_DEFAULT = 0.33 - - -BUTTON_MIN_WIDTH = 215 - -LLM_NULL = "LLM-UNLOAD-critical" -LLM_DONE = "LLM-LOADED-9cf" - -DB_NULL = "DB-UNLOAD-critical" -DB_DONE = "DB-LOADED-9cf" - -FORK_BADGE = "Fork-HuggingFace Space-9cf" - - -def get_logo(inputs, logo) -> str: - return f"""https://img.shields.io/badge/{inputs}?style=flat&logo={logo}&logoColor=white""" - -def get_status(inputs, logo, pos) -> str: - return f"""""" - - -KEY_INIT = "Initialize Model" -KEY_SUBMIT = "Submit" -KEY_CLEAR = "Clear" - -MODEL_NULL = get_status(LLM_NULL, "openai", "right") -MODEL_DONE = get_status(LLM_DONE, "openai", "right") - -DOCS_NULL = get_status(DB_NULL, "processingfoundation", "right") -DOCS_DONE = get_status(DB_DONE, "processingfoundation", "right") - -TAB_1 = "Chatbot" -TAB_2 = "Details" -TAB_3 = "Database" -TAB_4 = "TODO" - - - -FAVICON = './icon.svg' - -LLM_LIST = ["gpt-3.5-turbo", "text-davinci-003"] - - -DOC_1 = '3GPP' -DOC_2 = 'HTTP2' - -DOC_SUPPORTED = [DOC_1] -DOC_DEFAULT = [DOC_1] -DOC_LABEL = "Reference Docs" - - -MODEL_WARNING = f"Please paste your **{OPENAI_API_LINK}** and then **{KEY_INIT}**" - -DOCS_WARNING = f"""Database Unloaded -Please check your **{TAB_3}** config and then **{KEY_INIT}** -Or you could uncheck **{DOC_LABEL}** to ask LLM directly""" - - -webui_title = """ -# OpenAI Chatbot Based on Vector Database -""" - -dup_link = f''' -''' - -init_message = f"""This demonstration website is based on \ -**{OPENAI_LINK}** with **{LANGCHAIN_LINK}** and **{PINECONE_LINK}** - 1. Insert your **{OPENAI_API_LINK}** and click `{KEY_INIT}` - 2. 
Insert your **Question** and click `{KEY_SUBMIT}` -""" - -PROMPT_DOC = PromptTemplate( - input_variables=["context", "chat_history", "question"], - template="""Context: -## -{context} -## - -Chat History: -## -{chat_history} -## - -Question: -{question} - -Answer:""" -) - -PROMPT_BASE = PromptTemplate( - input_variables=['question', "chat_history"], - template="""Chat History: -## -{chat_history} -## - -Question: -## -{question} -## - -Answer:""" -) - -#---------------------------------------------------------------------------------------------------------- -#---------------------------------------------------------------------------------------------------------- -def init_rwkv(): - try: - import rwkv - return True - except Exception: - print("RWKV not found, skip local llm") - return False - - -def init_model(api_key, emb_name, emb_loader, db_api_key, db_env, db_index): - init_rwkv() - try: - if not (api_key and api_key.startswith("sk-") and len(api_key) > 50): - return None,MODEL_NULL+DOCS_NULL,None,None,None,None - - - - llm_dict = {} - for llm_name in LLM_LIST: - if llm_name == "gpt-3.5-turbo": - llm_dict[llm_name] = ChatOpenAI(model_name=llm_name, - temperature = OPENAI_TEMP, - openai_api_key = api_key - ) - else: - llm_dict[llm_name] = OpenAI(model_name=llm_name, - temperature = OPENAI_TEMP, - openai_api_key = api_key) - - if not (emb_name and db_api_key and db_env and db_index): - return api_key,MODEL_DONE+DOCS_NULL,llm_dict,None,None,None - - if emb_loader == "OpenAIEmbeddings": - embeddings = eval(emb_loader)(openai_api_key=api_key) - else: - embeddings = eval(emb_loader)(model_name=emb_name) - - pinecone.init(api_key = db_api_key, - environment = db_env) - db = Pinecone.from_existing_index(index_name = db_index, - embedding = embeddings) - - return api_key, MODEL_DONE+DOCS_DONE, llm_dict, None, db, None - - except Exception as e: - print(e) - return None,MODEL_NULL+DOCS_NULL,None,None,None,None - - -def get_chat_history(inputs) -> str: - res = [] - for human, ai in inputs: - res.append(f"Q: {human}\nA: {ai}") - return "\n".join(res) - -def remove_duplicates(documents, score_min): - seen_content = set() - unique_documents = [] - for (doc, score) in documents: - if (doc.page_content not in seen_content) and (score >= score_min): - seen_content.add(doc.page_content) - unique_documents.append(doc) - return unique_documents - -def doc_similarity(query, db, top_k, score): - docs = db.similarity_search_with_score(query = query, - k=top_k) - #docsearch = db.as_retriever(search_kwargs={'k':top_k}) - #docs = docsearch.get_relevant_documents(query) - udocs = remove_duplicates(docs, score) - return udocs - -def user(user_message, history): - return "", history+[[user_message, None]] - -def bot(box_message, ref_message, - llm_dropdown, llm_dict, doc_list, - db, top_k, score): - - # bot_message = random.choice(["Yes", "No"]) - # 0 is user question, 1 is bot response - question = box_message[-1][0] - history = box_message[:-1] - - if (not llm_dict): - box_message[-1][1] = MODEL_WARNING - return box_message, "", "" - - if not ref_message: - ref_message = question - details = f"Q: {question}" - else: - details = f"Q: {question}\nR: {ref_message}" - - - llm = llm_dict[llm_dropdown] - - if DOC_1 in doc_list: - if (not db): - box_message[-1][1] = DOCS_WARNING - return box_message, "", "" - - docs = doc_similarity(ref_message, db, top_k, score) - delta_top_k = top_k - len(docs) - - if delta_top_k > 0: - docs = doc_similarity(ref_message, db, top_k+delta_top_k, score) - - prompt = PROMPT_DOC - 
#chain = load_qa_chain(llm, chain_type="stuff") - - else: - prompt = PROMPT_BASE - docs = [] - - chain = LLMChain(llm = llm, - prompt = prompt, - output_key = 'output_text') - - all_output = chain({"question": question, - "context": docs, - "chat_history": get_chat_history(history) - }) - - - bot_message = all_output['output_text'] - - source = "".join([f"""
    {doc.metadata["source"]} -{doc.page_content} - -
    """ for i, doc in enumerate(docs)]) - - #print(source) - - box_message[-1][1] = bot_message - return box_message, "", [[details, bot_message + '\n\nMetadata:\n' + source]] - -#---------------------------------------------------------------------------------------------------------- -#---------------------------------------------------------------------------------------------------------- - -with gr.Blocks( - title = TAB_1, - theme = "Base", - css = """.bigbox { - min-height:250px; -} -""") as demo: - llm = gr.State() - chain_2 = gr.State() # not inuse - vector_db = gr.State() - gr.Markdown(webui_title) - gr.Markdown(dup_link) - gr.Markdown(init_message) - - with gr.Row(): - with gr.Column(scale=10): - llm_api_textbox = gr.Textbox( - label = "OpenAI API Key", - # show_label = False, - value = OPENAI_API_KEY, - placeholder = "Paste Your OpenAI API Key (sk-...) and Hit ENTER", - lines=1, - type='password') - - with gr.Column(scale=1, min_width=BUTTON_MIN_WIDTH): - - init = gr.Button(KEY_INIT) #.style(full_width=False) - model_statusbox = gr.HTML(MODEL_NULL+DOCS_NULL) - - with gr.Tab(TAB_1): - with gr.Row(): - with gr.Column(scale=10): - chatbot = gr.Chatbot(elem_classes="bigbox") - #with gr.Column(scale=1): - with gr.Column(scale=1, min_width=BUTTON_MIN_WIDTH): - doc_check = gr.CheckboxGroup(choices = DOC_SUPPORTED, - value = DOC_DEFAULT, - label = DOC_LABEL, - interactive=True) - llm_dropdown = gr.Dropdown(LLM_LIST, - value=LLM_LIST[0], - multiselect=False, - interactive=True, - label="LLM Selection", - ) - with gr.Row(): - with gr.Column(scale=10): - query = gr.Textbox(label="Question:", - lines=2) - ref = gr.Textbox(label="Reference(optional):") - - with gr.Column(scale=1, min_width=BUTTON_MIN_WIDTH): - - clear = gr.Button(KEY_CLEAR) - submit = gr.Button(KEY_SUBMIT,variant="primary") - - - with gr.Tab(TAB_2): - with gr.Row(): - with gr.Column(): - top_k = gr.Slider(1, - TOP_K_MAX, - value=TOP_K_DEFAULT, - step=1, - label="Vector similarity top_k", - interactive=True) - with gr.Column(): - score = gr.Slider(0.01, - 0.99, - value=SCORE_DEFAULT, - step=0.01, - label="Vector similarity score", - interactive=True) - detail_panel = gr.Chatbot(label="Related Docs") - - with gr.Tab(TAB_3): - with gr.Row(): - with gr.Column(): - emb_textbox = gr.Textbox( - label = "Embedding Model", - # show_label = False, - value = EMBEDDING_MODEL, - placeholder = "Paste Your Embedding Model Repo on HuggingFace", - lines=1, - interactive=True, - type='email') - - with gr.Column(): - emb_dropdown = gr.Dropdown( - EMBEDDING_LIST, - value=EMBEDDING_LOADER, - multiselect=False, - interactive=True, - label="Embedding Loader") - - with gr.Accordion("Pinecone Database for "+DOC_1): - with gr.Row(): - db_api_textbox = gr.Textbox( - label = "Pinecone API Key", - # show_label = False, - value = PINECONE_KEY, - placeholder = "Paste Your Pinecone API Key (xx-xx-xx-xx-xx) and Hit ENTER", - lines=1, - interactive=True, - type='password') - with gr.Row(): - db_env_textbox = gr.Textbox( - label = "Pinecone Environment", - # show_label = False, - value = PINECONE_ENV, - placeholder = "Paste Your Pinecone Environment (xx-xx-xx) and Hit ENTER", - lines=1, - interactive=True, - type='email') - db_index_textbox = gr.Textbox( - label = "Pinecone Index", - # show_label = False, - value = PINECONE_INDEX, - placeholder = "Paste Your Pinecone Index (xxxx) and Hit ENTER", - lines=1, - interactive=True, - type='email') - with gr.Tab(TAB_4): - "TODO" - - - - init_input = [llm_api_textbox, emb_textbox, emb_dropdown, db_api_textbox, 
db_env_textbox, db_index_textbox] - init_output = [llm_api_textbox, model_statusbox, - llm, chain_2, - vector_db, chatbot] - - llm_api_textbox.submit(init_model, init_input, init_output) - init.click(init_model, init_input, init_output) - - submit.click(user, - [query, chatbot], - [query, chatbot], - queue=False).then( - bot, - [chatbot, ref, - llm_dropdown, llm, doc_check, - vector_db, top_k, score], - [chatbot, ref, detail_panel] - ) - - clear.click(lambda: (None,None,None), None, [query, ref, chatbot], queue=False) - -#---------------------------------------------------------------------------------------------------------- -#---------------------------------------------------------------------------------------------------------- - -if __name__ == "__main__": - demo.launch(share = False, - inbrowser = True, - favicon_path = FAVICON) - diff --git a/spaces/SidKarthik/multi_doc_retrieval_agent/app.py b/spaces/SidKarthik/multi_doc_retrieval_agent/app.py deleted file mode 100644 index f62242188d355d47f7158eb9d27af51e51be07f9..0000000000000000000000000000000000000000 --- a/spaces/SidKarthik/multi_doc_retrieval_agent/app.py +++ /dev/null @@ -1,102 +0,0 @@ -import streamlit as st -from PyPDF2 import PdfReader -from langchain.text_splitter import CharacterTextSplitter -from langchain.embeddings import OpenAIEmbeddings, HuggingFaceInstructEmbeddings -from langchain.vectorstores import FAISS -from langchain.chat_models import ChatOpenAI -from langchain.memory import ConversationBufferMemory -from langchain.chains import ConversationalRetrievalChain -from htmlTemplates import css, bot_template, user_template -from langchain.llms import HuggingFaceHub - -def get_pdf_text(pdf_docs): - text = "" - for pdf in pdf_docs: - pdf_reader = PdfReader(pdf) - for page in pdf_reader.pages: - text += page.extract_text() - return text - - -def get_text_chunks(text): - text_splitter = CharacterTextSplitter( - separator="\n", - chunk_size=1000, - chunk_overlap=200, - length_function=len - ) - chunks = text_splitter.split_text(text) - return chunks - - -def get_vectorstore(text_chunks): - embeddings = OpenAIEmbeddings() - # embeddings = HuggingFaceInstructEmbeddings(model_name="hkunlp/instructor-xl") - vectorstore = FAISS.from_texts(texts=text_chunks, embedding=embeddings) - return vectorstore - - -def get_conversation_chain(vectorstore): - llm = ChatOpenAI() - # llm = HuggingFaceHub(repo_id="google/flan-t5-xxl", model_kwargs={"temperature":0.5, "max_length":512}) - - memory = ConversationBufferMemory( - memory_key='chat_history', return_messages=True) - conversation_chain = ConversationalRetrievalChain.from_llm( - llm=llm, - retriever=vectorstore.as_retriever(), - memory=memory - ) - return conversation_chain - - -def handle_userinput(user_question): - response = st.session_state.conversation({'question': user_question}) - st.session_state.chat_history = response['chat_history'] - - for i, message in enumerate(st.session_state.chat_history): - if i % 2 == 0: - st.write(user_template.replace( - "{{MSG}}", message.content), unsafe_allow_html=True) - else: - st.write(bot_template.replace( - "{{MSG}}", message.content), unsafe_allow_html=True) - - -def main(): - st.set_page_config(page_title="Chat with multiple PDFs", - page_icon=":books:") - st.write(css, unsafe_allow_html=True) - - if "conversation" not in st.session_state: - st.session_state.conversation = None - if "chat_history" not in st.session_state: - st.session_state.chat_history = None - - st.header("Chat with multiple PDFs :books:") - user_question = 
st.text_input("Ask a question about your documents:") - if user_question: - handle_userinput(user_question) - - with st.sidebar: - st.subheader("Your documents") - pdf_docs = st.file_uploader( - "Upload your PDFs here and click on 'Process'", accept_multiple_files=True) - if st.button("Process"): - with st.spinner("Processing"): - # get pdf text - raw_text = get_pdf_text(pdf_docs) - - # get the text chunks - text_chunks = get_text_chunks(raw_text) - - # create vector store - vectorstore = get_vectorstore(text_chunks) - - # create conversation chain - st.session_state.conversation = get_conversation_chain( - vectorstore) - - -if __name__ == '__main__': - main() \ No newline at end of file diff --git a/spaces/Silentlin/DiffSinger/modules/hifigan/hifigan.py b/spaces/Silentlin/DiffSinger/modules/hifigan/hifigan.py deleted file mode 100644 index ae7e61f56b00d60bcc49a18ece3edbe54746f7ea..0000000000000000000000000000000000000000 --- a/spaces/Silentlin/DiffSinger/modules/hifigan/hifigan.py +++ /dev/null @@ -1,365 +0,0 @@ -import torch -import torch.nn.functional as F -import torch.nn as nn -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm - -from modules.parallel_wavegan.layers import UpsampleNetwork, ConvInUpsampleNetwork -from modules.parallel_wavegan.models.source import SourceModuleHnNSF -import numpy as np - -LRELU_SLOPE = 0.1 - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def apply_weight_norm(m): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - weight_norm(m) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size * dilation - dilation) / 2) - - -class ResBlock1(torch.nn.Module): - def __init__(self, h, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.h = h - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - xt = c2(xt) - x = xt + x - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, h, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.h = h - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - 
padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - xt = c(xt) - x = xt + x - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Conv1d1x1(Conv1d): - """1x1 Conv1d with customized initialization.""" - - def __init__(self, in_channels, out_channels, bias): - """Initialize 1x1 Conv1d module.""" - super(Conv1d1x1, self).__init__(in_channels, out_channels, - kernel_size=1, padding=0, - dilation=1, bias=bias) - - -class HifiGanGenerator(torch.nn.Module): - def __init__(self, h, c_out=1): - super(HifiGanGenerator, self).__init__() - self.h = h - self.num_kernels = len(h['resblock_kernel_sizes']) - self.num_upsamples = len(h['upsample_rates']) - - if h['use_pitch_embed']: - self.harmonic_num = 8 - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(h['upsample_rates'])) - self.m_source = SourceModuleHnNSF( - sampling_rate=h['audio_sample_rate'], - harmonic_num=self.harmonic_num) - self.noise_convs = nn.ModuleList() - self.conv_pre = weight_norm(Conv1d(80, h['upsample_initial_channel'], 7, 1, padding=3)) - resblock = ResBlock1 if h['resblock'] == '1' else ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(h['upsample_rates'], h['upsample_kernel_sizes'])): - c_cur = h['upsample_initial_channel'] // (2 ** (i + 1)) - self.ups.append(weight_norm( - ConvTranspose1d(c_cur * 2, c_cur, k, u, padding=(k - u) // 2))) - if h['use_pitch_embed']: - if i + 1 < len(h['upsample_rates']): - stride_f0 = np.prod(h['upsample_rates'][i + 1:]) - self.noise_convs.append(Conv1d( - 1, c_cur, kernel_size=stride_f0 * 2, stride=stride_f0, padding=stride_f0 // 2)) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = h['upsample_initial_channel'] // (2 ** (i + 1)) - for j, (k, d) in enumerate(zip(h['resblock_kernel_sizes'], h['resblock_dilation_sizes'])): - self.resblocks.append(resblock(h, ch, k, d)) - - self.conv_post = weight_norm(Conv1d(ch, c_out, 7, 1, padding=3)) - self.ups.apply(init_weights) - self.conv_post.apply(init_weights) - - def forward(self, x, f0=None): - if f0 is not None: - # harmonic-source signal, noise-source signal, uv flag - f0 = self.f0_upsamp(f0[:, None]).transpose(1, 2) - har_source, noi_source, uv = self.m_source(f0) - har_source = har_source.transpose(1, 2) - - x = self.conv_pre(x) - for i in range(self.num_upsamples): - x = F.leaky_relu(x, LRELU_SLOPE) - x = self.ups[i](x) - if f0 is not None: - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - remove_weight_norm(self.conv_pre) - remove_weight_norm(self.conv_post) - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False, use_cond=False, c_in=1): - super(DiscriminatorP, self).__init__() - self.use_cond = use_cond - if use_cond: - from utils.hparams import hparams - t = hparams['hop_size'] - self.cond_net = torch.nn.ConvTranspose1d(80, 1, t * 2, 
stride=t, padding=t // 2) - c_in = 2 - - self.period = period - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(c_in, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(2, 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x, mel): - fmap = [] - if self.use_cond: - x_mel = self.cond_net(mel) - x = torch.cat([x_mel, x], 1) - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_cond=False, c_in=1): - super(MultiPeriodDiscriminator, self).__init__() - self.discriminators = nn.ModuleList([ - DiscriminatorP(2, use_cond=use_cond, c_in=c_in), - DiscriminatorP(3, use_cond=use_cond, c_in=c_in), - DiscriminatorP(5, use_cond=use_cond, c_in=c_in), - DiscriminatorP(7, use_cond=use_cond, c_in=c_in), - DiscriminatorP(11, use_cond=use_cond, c_in=c_in), - ]) - - def forward(self, y, y_hat, mel=None): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y, mel) - y_d_g, fmap_g = d(y_hat, mel) - y_d_rs.append(y_d_r) - fmap_rs.append(fmap_r) - y_d_gs.append(y_d_g) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False, use_cond=False, upsample_rates=None, c_in=1): - super(DiscriminatorS, self).__init__() - self.use_cond = use_cond - if use_cond: - t = np.prod(upsample_rates) - self.cond_net = torch.nn.ConvTranspose1d(80, 1, t * 2, stride=t, padding=t // 2) - c_in = 2 - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(c_in, 128, 15, 1, padding=7)), - norm_f(Conv1d(128, 128, 41, 2, groups=4, padding=20)), - norm_f(Conv1d(128, 256, 41, 2, groups=16, padding=20)), - norm_f(Conv1d(256, 512, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(512, 1024, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 1, groups=16, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x, mel): - if self.use_cond: - x_mel = self.cond_net(mel) - x = torch.cat([x_mel, x], 1) - fmap = [] - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiScaleDiscriminator(torch.nn.Module): - def __init__(self, use_cond=False, c_in=1): - super(MultiScaleDiscriminator, self).__init__() - from utils.hparams import hparams - self.discriminators = nn.ModuleList([ - DiscriminatorS(use_spectral_norm=True, use_cond=use_cond, - upsample_rates=[4, 4, hparams['hop_size'] // 16], - c_in=c_in), - 
DiscriminatorS(use_cond=use_cond, - upsample_rates=[4, 4, hparams['hop_size'] // 32], - c_in=c_in), - DiscriminatorS(use_cond=use_cond, - upsample_rates=[4, 4, hparams['hop_size'] // 64], - c_in=c_in), - ]) - self.meanpools = nn.ModuleList([ - AvgPool1d(4, 2, padding=1), - AvgPool1d(4, 2, padding=1) - ]) - - def forward(self, y, y_hat, mel=None): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - if i != 0: - y = self.meanpools[i - 1](y) - y_hat = self.meanpools[i - 1](y_hat) - y_d_r, fmap_r = d(y, mel) - y_d_g, fmap_g = d(y_hat, mel) - y_d_rs.append(y_d_r) - fmap_rs.append(fmap_r) - y_d_gs.append(y_d_g) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -def feature_loss(fmap_r, fmap_g): - loss = 0 - for dr, dg in zip(fmap_r, fmap_g): - for rl, gl in zip(dr, dg): - loss += torch.mean(torch.abs(rl - gl)) - - return loss * 2 - - -def discriminator_loss(disc_real_outputs, disc_generated_outputs): - r_losses = 0 - g_losses = 0 - for dr, dg in zip(disc_real_outputs, disc_generated_outputs): - r_loss = torch.mean((1 - dr) ** 2) - g_loss = torch.mean(dg ** 2) - r_losses += r_loss - g_losses += g_loss - r_losses = r_losses / len(disc_real_outputs) - g_losses = g_losses / len(disc_real_outputs) - return r_losses, g_losses - - -def cond_discriminator_loss(outputs): - loss = 0 - for dg in outputs: - g_loss = torch.mean(dg ** 2) - loss += g_loss - loss = loss / len(outputs) - return loss - - -def generator_loss(disc_outputs): - loss = 0 - for dg in disc_outputs: - l = torch.mean((1 - dg) ** 2) - loss += l - loss = loss / len(disc_outputs) - return loss diff --git a/spaces/Sowmyashetty/Mygenaibot/app.py b/spaces/Sowmyashetty/Mygenaibot/app.py deleted file mode 100644 index a362dcc7d0ddd1eee86961f1bc3db6d894fbd3d5..0000000000000000000000000000000000000000 --- a/spaces/Sowmyashetty/Mygenaibot/app.py +++ /dev/null @@ -1,34 +0,0 @@ -import os -import gradio as gr -from langchain.chat_models import ChatOpenAI -from langchain import LLMChain, PromptTemplate -from langchain.memory import ConversationBufferMemory - -OPENAI_API_KEY=os.getenv('OPENAI_API_KEY') - -template = """You are a helpful assistant to answer all user queries. -{chat_history} -User: {user_message} -Chatbot:""" - -prompt = PromptTemplate( - input_variables=["chat_history", "user_message"], template=template -) - -memory = ConversationBufferMemory(memory_key="chat_history") - -llm_chain = LLMChain( - llm=ChatOpenAI(temperature='0.5', model_name="gpt-3.5-turbo"), - prompt=prompt, - verbose=True, - memory=memory, -) - -def get_text_response(user_message,history): - response = llm_chain.predict(user_message = user_message) - return response - -demo = gr.ChatInterface(get_text_response) - -if __name__ == "__main__": - demo.launch() #To create a public link, set `share=True` in `launch()`. To enable errors and logs, set `debug=True` in `launch()`. 
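The Mygenaibot app deleted above wires a `ConversationBufferMemory`-backed `LLMChain` into `gr.ChatInterface`. Below is a minimal sketch of that same memory pattern exercised directly, without Gradio. It reuses the legacy `langchain` 0.0.x import paths the app itself used; the model name, temperature, and the two sample questions are illustrative assumptions, and a valid `OPENAI_API_KEY` environment variable is assumed to be set:

```python
# Minimal sketch of the memory-backed LLMChain pattern from the deleted app above,
# called directly instead of through gr.ChatInterface. Assumes legacy langchain
# 0.0.x import paths and OPENAI_API_KEY in the environment.
from langchain.chat_models import ChatOpenAI
from langchain import LLMChain, PromptTemplate
from langchain.memory import ConversationBufferMemory

template = """You are a helpful assistant to answer all user queries.
{chat_history}
User: {user_message}
Chatbot:"""

prompt = PromptTemplate(
    input_variables=["chat_history", "user_message"], template=template
)

# ConversationBufferMemory re-injects the accumulated dialogue through the
# {chat_history} prompt variable on every call.
memory = ConversationBufferMemory(memory_key="chat_history")

chain = LLMChain(
    llm=ChatOpenAI(temperature=0.5, model_name="gpt-3.5-turbo"),
    prompt=prompt,
    memory=memory,
)

print(chain.predict(user_message="What does a vector database store?"))
print(chain.predict(user_message="Summarize your previous answer in one sentence."))
print(memory.buffer)  # both turns should now appear as accumulated chat history
```

Note that `temperature` is passed here as a float; the deleted file passed the string `'0.5'`, which pydantic will usually coerce but is best avoided.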
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/tests/test_ultratb.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/tests/test_ultratb.py deleted file mode 100644 index c4de95d564a569378809a22cb8556b2023a96d28..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/tests/test_ultratb.py +++ /dev/null @@ -1,430 +0,0 @@ -# encoding: utf-8 -"""Tests for IPython.core.ultratb -""" -import io -import os.path -import platform -import re -import sys -import traceback -import unittest -from textwrap import dedent - -from tempfile import TemporaryDirectory - -from IPython.core.ultratb import ColorTB, VerboseTB -from IPython.testing import tools as tt -from IPython.testing.decorators import onlyif_unicode_paths -from IPython.utils.syspathcontext import prepended_to_syspath - -file_1 = """1 -2 -3 -def f(): - 1/0 -""" - -file_2 = """def f(): - 1/0 -""" - - -def recursionlimit(frames): - """ - decorator to set the recursion limit temporarily - """ - - def inner(test_function): - def wrapper(*args, **kwargs): - rl = sys.getrecursionlimit() - sys.setrecursionlimit(frames) - try: - return test_function(*args, **kwargs) - finally: - sys.setrecursionlimit(rl) - - return wrapper - - return inner - - -class ChangedPyFileTest(unittest.TestCase): - def test_changing_py_file(self): - """Traceback produced if the line where the error occurred is missing? - - https://github.com/ipython/ipython/issues/1456 - """ - with TemporaryDirectory() as td: - fname = os.path.join(td, "foo.py") - with open(fname, "w", encoding="utf-8") as f: - f.write(file_1) - - with prepended_to_syspath(td): - ip.run_cell("import foo") - - with tt.AssertPrints("ZeroDivisionError"): - ip.run_cell("foo.f()") - - # Make the file shorter, so the line of the error is missing. - with open(fname, "w", encoding="utf-8") as f: - f.write(file_2) - - # For some reason, this was failing on the *second* call after - # changing the file, so we call f() twice. - with tt.AssertNotPrints("Internal Python error", channel='stderr'): - with tt.AssertPrints("ZeroDivisionError"): - ip.run_cell("foo.f()") - with tt.AssertPrints("ZeroDivisionError"): - ip.run_cell("foo.f()") - -iso_8859_5_file = u'''# coding: iso-8859-5 - -def fail(): - """дбИЖ""" - 1/0 # дбИЖ -''' - -class NonAsciiTest(unittest.TestCase): - @onlyif_unicode_paths - def test_nonascii_path(self): - # Non-ascii directory name as well. 
- with TemporaryDirectory(suffix=u'é') as td: - fname = os.path.join(td, u"fooé.py") - with open(fname, "w", encoding="utf-8") as f: - f.write(file_1) - - with prepended_to_syspath(td): - ip.run_cell("import foo") - - with tt.AssertPrints("ZeroDivisionError"): - ip.run_cell("foo.f()") - - def test_iso8859_5(self): - with TemporaryDirectory() as td: - fname = os.path.join(td, 'dfghjkl.py') - - with io.open(fname, 'w', encoding='iso-8859-5') as f: - f.write(iso_8859_5_file) - - with prepended_to_syspath(td): - ip.run_cell("from dfghjkl import fail") - - with tt.AssertPrints("ZeroDivisionError"): - with tt.AssertPrints(u'дбИЖ', suppress=False): - ip.run_cell('fail()') - - def test_nonascii_msg(self): - cell = u"raise Exception('é')" - expected = u"Exception('é')" - ip.run_cell("%xmode plain") - with tt.AssertPrints(expected): - ip.run_cell(cell) - - ip.run_cell("%xmode verbose") - with tt.AssertPrints(expected): - ip.run_cell(cell) - - ip.run_cell("%xmode context") - with tt.AssertPrints(expected): - ip.run_cell(cell) - - ip.run_cell("%xmode minimal") - with tt.AssertPrints(u"Exception: é"): - ip.run_cell(cell) - - # Put this back into Context mode for later tests. - ip.run_cell("%xmode context") - -class NestedGenExprTestCase(unittest.TestCase): - """ - Regression test for the following issues: - https://github.com/ipython/ipython/issues/8293 - https://github.com/ipython/ipython/issues/8205 - """ - def test_nested_genexpr(self): - code = dedent( - """\ - class SpecificException(Exception): - pass - - def foo(x): - raise SpecificException("Success!") - - sum(sum(foo(x) for _ in [0]) for x in [0]) - """ - ) - with tt.AssertPrints('SpecificException: Success!', suppress=False): - ip.run_cell(code) - - -indentationerror_file = """if True: -zoon() -""" - -class IndentationErrorTest(unittest.TestCase): - def test_indentationerror_shows_line(self): - # See issue gh-2398 - with tt.AssertPrints("IndentationError"): - with tt.AssertPrints("zoon()", suppress=False): - ip.run_cell(indentationerror_file) - - with TemporaryDirectory() as td: - fname = os.path.join(td, "foo.py") - with open(fname, "w", encoding="utf-8") as f: - f.write(indentationerror_file) - - with tt.AssertPrints("IndentationError"): - with tt.AssertPrints("zoon()", suppress=False): - ip.magic('run %s' % fname) - -se_file_1 = """1 -2 -7/ -""" - -se_file_2 = """7/ -""" - -class SyntaxErrorTest(unittest.TestCase): - - def test_syntaxerror_no_stacktrace_at_compile_time(self): - syntax_error_at_compile_time = """ -def foo(): - .. 
-""" - with tt.AssertPrints("SyntaxError"): - ip.run_cell(syntax_error_at_compile_time) - - with tt.AssertNotPrints("foo()"): - ip.run_cell(syntax_error_at_compile_time) - - def test_syntaxerror_stacktrace_when_running_compiled_code(self): - syntax_error_at_runtime = """ -def foo(): - eval("..") - -def bar(): - foo() - -bar() -""" - with tt.AssertPrints("SyntaxError"): - ip.run_cell(syntax_error_at_runtime) - # Assert syntax error during runtime generate stacktrace - with tt.AssertPrints(["foo()", "bar()"]): - ip.run_cell(syntax_error_at_runtime) - del ip.user_ns['bar'] - del ip.user_ns['foo'] - - def test_changing_py_file(self): - with TemporaryDirectory() as td: - fname = os.path.join(td, "foo.py") - with open(fname, "w", encoding="utf-8") as f: - f.write(se_file_1) - - with tt.AssertPrints(["7/", "SyntaxError"]): - ip.magic("run " + fname) - - # Modify the file - with open(fname, "w", encoding="utf-8") as f: - f.write(se_file_2) - - # The SyntaxError should point to the correct line - with tt.AssertPrints(["7/", "SyntaxError"]): - ip.magic("run " + fname) - - def test_non_syntaxerror(self): - # SyntaxTB may be called with an error other than a SyntaxError - # See e.g. gh-4361 - try: - raise ValueError('QWERTY') - except ValueError: - with tt.AssertPrints('QWERTY'): - ip.showsyntaxerror() - -import sys - -if platform.python_implementation() != "PyPy": - """ - New 3.9 Pgen Parser does not raise Memory error, except on failed malloc. - """ - class MemoryErrorTest(unittest.TestCase): - def test_memoryerror(self): - memoryerror_code = "(" * 200 + ")" * 200 - ip.run_cell(memoryerror_code) - - -class Python3ChainedExceptionsTest(unittest.TestCase): - DIRECT_CAUSE_ERROR_CODE = """ -try: - x = 1 + 2 - print(not_defined_here) -except Exception as e: - x += 55 - x - 1 - y = {} - raise KeyError('uh') from e - """ - - EXCEPTION_DURING_HANDLING_CODE = """ -try: - x = 1 + 2 - print(not_defined_here) -except Exception as e: - x += 55 - x - 1 - y = {} - raise KeyError('uh') - """ - - SUPPRESS_CHAINING_CODE = """ -try: - 1/0 -except Exception: - raise ValueError("Yikes") from None - """ - - def test_direct_cause_error(self): - with tt.AssertPrints(["KeyError", "NameError", "direct cause"]): - ip.run_cell(self.DIRECT_CAUSE_ERROR_CODE) - - def test_exception_during_handling_error(self): - with tt.AssertPrints(["KeyError", "NameError", "During handling"]): - ip.run_cell(self.EXCEPTION_DURING_HANDLING_CODE) - - def test_suppress_exception_chaining(self): - with tt.AssertNotPrints("ZeroDivisionError"), \ - tt.AssertPrints("ValueError", suppress=False): - ip.run_cell(self.SUPPRESS_CHAINING_CODE) - - def test_plain_direct_cause_error(self): - with tt.AssertPrints(["KeyError", "NameError", "direct cause"]): - ip.run_cell("%xmode Plain") - ip.run_cell(self.DIRECT_CAUSE_ERROR_CODE) - ip.run_cell("%xmode Verbose") - - def test_plain_exception_during_handling_error(self): - with tt.AssertPrints(["KeyError", "NameError", "During handling"]): - ip.run_cell("%xmode Plain") - ip.run_cell(self.EXCEPTION_DURING_HANDLING_CODE) - ip.run_cell("%xmode Verbose") - - def test_plain_suppress_exception_chaining(self): - with tt.AssertNotPrints("ZeroDivisionError"), \ - tt.AssertPrints("ValueError", suppress=False): - ip.run_cell("%xmode Plain") - ip.run_cell(self.SUPPRESS_CHAINING_CODE) - ip.run_cell("%xmode Verbose") - - -class RecursionTest(unittest.TestCase): - DEFINITIONS = """ -def non_recurs(): - 1/0 - -def r1(): - r1() - -def r3a(): - r3b() - -def r3b(): - r3c() - -def r3c(): - r3a() - -def r3o1(): - r3a() - -def r3o2(): - 
r3o1() -""" - def setUp(self): - ip.run_cell(self.DEFINITIONS) - - def test_no_recursion(self): - with tt.AssertNotPrints("skipping similar frames"): - ip.run_cell("non_recurs()") - - @recursionlimit(200) - def test_recursion_one_frame(self): - with tt.AssertPrints(re.compile( - r"\[\.\.\. skipping similar frames: r1 at line 5 \(\d{2,3} times\)\]") - ): - ip.run_cell("r1()") - - @recursionlimit(160) - def test_recursion_three_frames(self): - with tt.AssertPrints("[... skipping similar frames: "), \ - tt.AssertPrints(re.compile(r"r3a at line 8 \(\d{2} times\)"), suppress=False), \ - tt.AssertPrints(re.compile(r"r3b at line 11 \(\d{2} times\)"), suppress=False), \ - tt.AssertPrints(re.compile(r"r3c at line 14 \(\d{2} times\)"), suppress=False): - ip.run_cell("r3o2()") - - -class PEP678NotesReportingTest(unittest.TestCase): - ERROR_WITH_NOTE = """ -try: - raise AssertionError("Message") -except Exception as e: - try: - e.add_note("This is a PEP-678 note.") - except AttributeError: # Python <= 3.10 - e.__notes__ = ("This is a PEP-678 note.",) - raise - """ - - def test_verbose_reports_notes(self): - with tt.AssertPrints(["AssertionError", "Message", "This is a PEP-678 note."]): - ip.run_cell(self.ERROR_WITH_NOTE) - - def test_plain_reports_notes(self): - with tt.AssertPrints(["AssertionError", "Message", "This is a PEP-678 note."]): - ip.run_cell("%xmode Plain") - ip.run_cell(self.ERROR_WITH_NOTE) - ip.run_cell("%xmode Verbose") - - -#---------------------------------------------------------------------------- - -# module testing (minimal) -def test_handlers(): - def spam(c, d_e): - (d, e) = d_e - x = c + d - y = c * d - foo(x, y) - - def foo(a, b, bar=1): - eggs(a, b + bar) - - def eggs(f, g, z=globals()): - h = f + g - i = f - g - return h / i - - buff = io.StringIO() - - buff.write('') - buff.write('*** Before ***') - try: - buff.write(spam(1, (2, 3))) - except: - traceback.print_exc(file=buff) - - handler = ColorTB(ostream=buff) - buff.write('*** ColorTB ***') - try: - buff.write(spam(1, (2, 3))) - except: - handler(*sys.exc_info()) - buff.write('') - - handler = VerboseTB(ostream=buff) - buff.write('*** VerboseTB ***') - try: - buff.write(spam(1, (2, 3))) - except: - handler(*sys.exc_info()) - buff.write('') diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/jsonutil.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/jsonutil.py deleted file mode 100644 index 2672e09e16970b490a70f003cb1d596e6d20b941..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/jsonutil.py +++ /dev/null @@ -1,5 +0,0 @@ -from warnings import warn - -warn("IPython.utils.jsonutil has moved to jupyter_client.jsonutil", stacklevel=2) - -from jupyter_client.jsonutil import * diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/colorama/tests/initialise_test.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/colorama/tests/initialise_test.py deleted file mode 100644 index 89f9b07511c8fee74686d9cc434bf66345a46d6d..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/colorama/tests/initialise_test.py +++ /dev/null @@ -1,189 +0,0 @@ -# Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file. 
-import sys -from unittest import TestCase, main, skipUnless - -try: - from unittest.mock import patch, Mock -except ImportError: - from mock import patch, Mock - -from ..ansitowin32 import StreamWrapper -from ..initialise import init, just_fix_windows_console, _wipe_internal_state_for_tests -from .utils import osname, replace_by - -orig_stdout = sys.stdout -orig_stderr = sys.stderr - - -class InitTest(TestCase): - - @skipUnless(sys.stdout.isatty(), "sys.stdout is not a tty") - def setUp(self): - # sanity check - self.assertNotWrapped() - - def tearDown(self): - _wipe_internal_state_for_tests() - sys.stdout = orig_stdout - sys.stderr = orig_stderr - - def assertWrapped(self): - self.assertIsNot(sys.stdout, orig_stdout, 'stdout should be wrapped') - self.assertIsNot(sys.stderr, orig_stderr, 'stderr should be wrapped') - self.assertTrue(isinstance(sys.stdout, StreamWrapper), - 'bad stdout wrapper') - self.assertTrue(isinstance(sys.stderr, StreamWrapper), - 'bad stderr wrapper') - - def assertNotWrapped(self): - self.assertIs(sys.stdout, orig_stdout, 'stdout should not be wrapped') - self.assertIs(sys.stderr, orig_stderr, 'stderr should not be wrapped') - - @patch('colorama.initialise.reset_all') - @patch('colorama.ansitowin32.winapi_test', lambda *_: True) - @patch('colorama.ansitowin32.enable_vt_processing', lambda *_: False) - def testInitWrapsOnWindows(self, _): - with osname("nt"): - init() - self.assertWrapped() - - @patch('colorama.initialise.reset_all') - @patch('colorama.ansitowin32.winapi_test', lambda *_: False) - def testInitDoesntWrapOnEmulatedWindows(self, _): - with osname("nt"): - init() - self.assertNotWrapped() - - def testInitDoesntWrapOnNonWindows(self): - with osname("posix"): - init() - self.assertNotWrapped() - - def testInitDoesntWrapIfNone(self): - with replace_by(None): - init() - # We can't use assertNotWrapped here because replace_by(None) - # changes stdout/stderr already. 
- self.assertIsNone(sys.stdout) - self.assertIsNone(sys.stderr) - - def testInitAutoresetOnWrapsOnAllPlatforms(self): - with osname("posix"): - init(autoreset=True) - self.assertWrapped() - - def testInitWrapOffDoesntWrapOnWindows(self): - with osname("nt"): - init(wrap=False) - self.assertNotWrapped() - - def testInitWrapOffIncompatibleWithAutoresetOn(self): - self.assertRaises(ValueError, lambda: init(autoreset=True, wrap=False)) - - @patch('colorama.win32.SetConsoleTextAttribute') - @patch('colorama.initialise.AnsiToWin32') - def testAutoResetPassedOn(self, mockATW32, _): - with osname("nt"): - init(autoreset=True) - self.assertEqual(len(mockATW32.call_args_list), 2) - self.assertEqual(mockATW32.call_args_list[1][1]['autoreset'], True) - self.assertEqual(mockATW32.call_args_list[0][1]['autoreset'], True) - - @patch('colorama.initialise.AnsiToWin32') - def testAutoResetChangeable(self, mockATW32): - with osname("nt"): - init() - - init(autoreset=True) - self.assertEqual(len(mockATW32.call_args_list), 4) - self.assertEqual(mockATW32.call_args_list[2][1]['autoreset'], True) - self.assertEqual(mockATW32.call_args_list[3][1]['autoreset'], True) - - init() - self.assertEqual(len(mockATW32.call_args_list), 6) - self.assertEqual( - mockATW32.call_args_list[4][1]['autoreset'], False) - self.assertEqual( - mockATW32.call_args_list[5][1]['autoreset'], False) - - - @patch('colorama.initialise.atexit.register') - def testAtexitRegisteredOnlyOnce(self, mockRegister): - init() - self.assertTrue(mockRegister.called) - mockRegister.reset_mock() - init() - self.assertFalse(mockRegister.called) - - -class JustFixWindowsConsoleTest(TestCase): - def _reset(self): - _wipe_internal_state_for_tests() - sys.stdout = orig_stdout - sys.stderr = orig_stderr - - def tearDown(self): - self._reset() - - @patch("colorama.ansitowin32.winapi_test", lambda: True) - def testJustFixWindowsConsole(self): - if sys.platform != "win32": - # just_fix_windows_console should be a no-op - just_fix_windows_console() - self.assertIs(sys.stdout, orig_stdout) - self.assertIs(sys.stderr, orig_stderr) - else: - def fake_std(): - # Emulate stdout=not a tty, stderr=tty - # to check that we handle both cases correctly - stdout = Mock() - stdout.closed = False - stdout.isatty.return_value = False - stdout.fileno.return_value = 1 - sys.stdout = stdout - - stderr = Mock() - stderr.closed = False - stderr.isatty.return_value = True - stderr.fileno.return_value = 2 - sys.stderr = stderr - - for native_ansi in [False, True]: - with patch( - 'colorama.ansitowin32.enable_vt_processing', - lambda *_: native_ansi - ): - self._reset() - fake_std() - - # Regular single-call test - prev_stdout = sys.stdout - prev_stderr = sys.stderr - just_fix_windows_console() - self.assertIs(sys.stdout, prev_stdout) - if native_ansi: - self.assertIs(sys.stderr, prev_stderr) - else: - self.assertIsNot(sys.stderr, prev_stderr) - - # second call without resetting is always a no-op - prev_stdout = sys.stdout - prev_stderr = sys.stderr - just_fix_windows_console() - self.assertIs(sys.stdout, prev_stdout) - self.assertIs(sys.stderr, prev_stderr) - - self._reset() - fake_std() - - # If init() runs first, just_fix_windows_console should be a no-op - init() - prev_stdout = sys.stdout - prev_stderr = sys.stderr - just_fix_windows_console() - self.assertIs(prev_stdout, sys.stdout) - self.assertIs(prev_stderr, sys.stderr) - - -if __name__ == '__main__': - main() diff --git 
a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydev_bundle/_pydev_saved_modules.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydev_bundle/_pydev_saved_modules.py deleted file mode 100644 index bcf5f9b26cd7d675e30ba2e217be62917ed8a4a7..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydev_bundle/_pydev_saved_modules.py +++ /dev/null @@ -1,110 +0,0 @@ -import sys -import os - - -def find_in_pythonpath(module_name): - # Check all the occurrences where we could match the given module/package in the PYTHONPATH. - # - # This is a simplistic approach, but probably covers most of the cases we're interested in - # (i.e.: this may fail in more elaborate cases of import customization or .zip imports, but - # this should be rare in general). - found_at = [] - - parts = module_name.split('.') # split because we need to convert mod.name to mod/name - for path in sys.path: - target = os.path.join(path, *parts) - target_py = target + '.py' - if os.path.isdir(target): - found_at.append(target) - if os.path.exists(target_py): - found_at.append(target_py) - return found_at - - -class DebuggerInitializationError(Exception): - pass - - -class VerifyShadowedImport(object): - - def __init__(self, import_name): - self.import_name = import_name - - def __enter__(self): - return self - - def __exit__(self, exc_type, exc_val, exc_tb): - if exc_type is not None: - if exc_type == DebuggerInitializationError: - return False # It's already an error we generated. - - # We couldn't even import it... - found_at = find_in_pythonpath(self.import_name) - - if len(found_at) <= 1: - # It wasn't found anywhere or there was just 1 occurrence. - # Let's just return to show the original error. - return False - - # We found more than 1 occurrence of the same module in the PYTHONPATH - # (the user module and the standard library module). - # Let's notify the user as it seems that the module was shadowed. - msg = self._generate_shadowed_import_message(found_at) - raise DebuggerInitializationError(msg) - - def _generate_shadowed_import_message(self, found_at): - msg = '''It was not possible to initialize the debugger due to a module name conflict. 
- -i.e.: the module "%(import_name)s" could not be imported because it is shadowed by: -%(found_at)s -Please rename this file/folder so that the original module from the standard library can be imported.''' % { - 'import_name': self.import_name, 'found_at': found_at[0]} - - return msg - - def check(self, module, expected_attributes): - msg = '' - for expected_attribute in expected_attributes: - try: - getattr(module, expected_attribute) - except: - msg = self._generate_shadowed_import_message([module.__file__]) - break - - if msg: - raise DebuggerInitializationError(msg) - - -with VerifyShadowedImport('threading') as verify_shadowed: - import threading; verify_shadowed.check(threading, ['Thread', 'settrace', 'setprofile', 'Lock', 'RLock', 'current_thread']) - -with VerifyShadowedImport('time') as verify_shadowed: - import time; verify_shadowed.check(time, ['sleep', 'time', 'mktime']) - -with VerifyShadowedImport('socket') as verify_shadowed: - import socket; verify_shadowed.check(socket, ['socket', 'gethostname', 'getaddrinfo']) - -with VerifyShadowedImport('select') as verify_shadowed: - import select; verify_shadowed.check(select, ['select']) - -with VerifyShadowedImport('code') as verify_shadowed: - import code as _code; verify_shadowed.check(_code, ['compile_command', 'InteractiveInterpreter']) - -with VerifyShadowedImport('_thread') as verify_shadowed: - import _thread as thread; verify_shadowed.check(thread, ['start_new_thread', 'start_new', 'allocate_lock']) - -with VerifyShadowedImport('queue') as verify_shadowed: - import queue as _queue; verify_shadowed.check(_queue, ['Queue', 'LifoQueue', 'Empty', 'Full', 'deque']) - -with VerifyShadowedImport('xmlrpclib') as verify_shadowed: - import xmlrpc.client as xmlrpclib; verify_shadowed.check(xmlrpclib, ['ServerProxy', 'Marshaller', 'Server']) - -with VerifyShadowedImport('xmlrpc.server') as verify_shadowed: - import xmlrpc.server as xmlrpcserver; verify_shadowed.check(xmlrpcserver, ['SimpleXMLRPCServer']) - -with VerifyShadowedImport('http.server') as verify_shadowed: - import http.server as BaseHTTPServer; verify_shadowed.check(BaseHTTPServer, ['BaseHTTPRequestHandler']) - -# If set, this is a version of the threading.enumerate that doesn't have the patching to remove the pydevd threads. -# Note: as it can't be set during execution, don't import the name (import the module and access it through its name). 
-pydevd_saved_threading_enumerate = None diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_trace_api.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_trace_api.py deleted file mode 100644 index 77e8b3fadffd145d87af1aaf7d7ee60bd499af2f..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_trace_api.py +++ /dev/null @@ -1,62 +0,0 @@ -def add_line_breakpoint(plugin, pydb, type, canonical_normalized_filename, breakpoint_id, line, condition, expression, func_name, hit_condition=None, is_logpoint=False, add_breakpoint_result=None, on_changed_breakpoint_state=None): - return None - - -def after_breakpoints_consolidated(py_db, canonical_normalized_filename, id_to_pybreakpoint, file_to_line_to_breakpoints): - return None - - -def add_exception_breakpoint(plugin, pydb, type, exception): - return False - - -def remove_exception_breakpoint(plugin, pydb, type, exception): - return False - - -def remove_all_exception_breakpoints(plugin, pydb): - return False - - -def get_breakpoints(plugin, pydb): - return None - - -def can_skip(plugin, pydb, frame): - return True - - -def has_exception_breaks(plugin): - return False - - -def has_line_breaks(plugin): - return False - - -def cmd_step_into(plugin, pydb, frame, event, args, stop_info, stop): - return False - - -def cmd_step_over(plugin, pydb, frame, event, args, stop_info, stop): - return False - - -def stop(plugin, pydb, frame, event, args, stop_info, arg, step_cmd): - return False - - -def get_breakpoint(plugin, pydb, pydb_frame, frame, event, args): - return None - - -def suspend(plugin, pydb, thread, frame): - return None - - -def exception_break(plugin, pydb, pydb_frame, frame, args, arg): - return None - - -def change_variable(plugin, frame, attr, expression): - return False diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/typing/tensor/__init__.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/typing/tensor/__init__.py deleted file mode 100644 index 4c4077f3cdbc565953bf2998d718871087447c93..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/typing/tensor/__init__.py +++ /dev/null @@ -1,69 +0,0 @@ -import types - -from typing_extensions import TYPE_CHECKING - -from docarray.typing.tensor.audio import AudioNdArray -from docarray.typing.tensor.embedding import AnyEmbedding, NdArrayEmbedding -from docarray.typing.tensor.image import ImageNdArray, ImageTensor -from docarray.typing.tensor.ndarray import NdArray -from docarray.typing.tensor.tensor import AnyTensor -from docarray.typing.tensor.video import VideoNdArray -from docarray.utils._internal.misc import ( - _get_path_from_docarray_root_level, - import_library, -) - -if TYPE_CHECKING: - from docarray.typing.tensor.audio import AudioTensorFlowTensor # noqa: F401 - from docarray.typing.tensor.audio import AudioTorchTensor # noqa: F401 - from docarray.typing.tensor.embedding import TensorFlowEmbedding # noqa: F401 - from docarray.typing.tensor.embedding import TorchEmbedding # noqa: F401 - from docarray.typing.tensor.image import ImageTensorFlowTensor # noqa: F401 - from docarray.typing.tensor.image import ImageTorchTensor # noqa: F401 - from docarray.typing.tensor.tensorflow_tensor import TensorFlowTensor # noqa: F401 - from docarray.typing.tensor.torch_tensor import 
TorchTensor # noqa: F401 - from docarray.typing.tensor.video import VideoTensorFlowTensor # noqa: F401 - from docarray.typing.tensor.video import VideoTorchTensor # noqa: F401 - -__all__ = [ - 'NdArray', - 'AnyTensor', - 'AnyEmbedding', - 'NdArrayEmbedding', - 'ImageNdArray', - 'ImageTensor', - 'AudioNdArray', - 'VideoNdArray', -] - - -def __getattr__(name: str): - if 'Torch' in name: - import_library('torch', raise_error=True) - elif 'TensorFlow' in name: - import_library('tensorflow', raise_error=True) - - lib: types.ModuleType - if name == 'TorchTensor': - import docarray.typing.tensor.torch_tensor as lib - elif name == 'TensorFlowTensor': - import docarray.typing.tensor.tensorflow_tensor as lib - elif name in ['TorchEmbedding', 'TensorFlowEmbedding']: - import docarray.typing.tensor.embedding as lib - elif name in ['ImageTorchTensor', 'ImageTensorFlowTensor']: - import docarray.typing.tensor.image as lib - elif name in ['AudioTorchTensor', 'AudioTensorFlowTensor']: - import docarray.typing.tensor.audio as lib - elif name in ['VideoTorchTensor', 'VideoTensorFlowTensor']: - import docarray.typing.tensor.video as lib - else: - raise ImportError( - f'cannot import name \'{name}\' from \'{_get_path_from_docarray_root_level(__file__)}\'' - ) - - tensor_cls = getattr(lib, name) - - if name not in __all__: - __all__.append(name) - - return tensor_cls diff --git a/spaces/Suniilkumaar/MusicGen-updated/audiocraft/models/musicgen.py b/spaces/Suniilkumaar/MusicGen-updated/audiocraft/models/musicgen.py deleted file mode 100644 index 007dd9e0ed1cfd359fb4889e7f4108248e189941..0000000000000000000000000000000000000000 --- a/spaces/Suniilkumaar/MusicGen-updated/audiocraft/models/musicgen.py +++ /dev/null @@ -1,362 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Main model for using MusicGen. This will combine all the required components -and provide easy access to the generation API. -""" - -import os -import typing as tp - -import torch - -from .encodec import CompressionModel -from .lm import LMModel -from .builders import get_debug_compression_model, get_debug_lm_model -from .loaders import load_compression_model, load_lm_model, HF_MODEL_CHECKPOINTS_MAP -from ..data.audio_utils import convert_audio -from ..modules.conditioners import ConditioningAttributes, WavCondition -from ..utils.autocast import TorchAutocast - - -MelodyList = tp.List[tp.Optional[torch.Tensor]] -MelodyType = tp.Union[torch.Tensor, MelodyList] - - -class MusicGen: - """MusicGen main model with convenient generation API. - - Args: - name (str): name of the model. - compression_model (CompressionModel): Compression model - used to map audio to invertible discrete representations. - lm (LMModel): Language model over discrete representations. 
- """ - def __init__(self, name: str, compression_model: CompressionModel, lm: LMModel, - max_duration: float = 30): - self.name = name - self.compression_model = compression_model - self.lm = lm - self.max_duration = max_duration - self.device = next(iter(lm.parameters())).device - self.generation_params: dict = {} - self.set_generation_params(duration=15) # 15 seconds by default - self._progress_callback: tp.Optional[tp.Callable[[int, int], None]] = None - if self.device.type == 'cpu': - self.autocast = TorchAutocast(enabled=False) - else: - self.autocast = TorchAutocast( - enabled=True, device_type=self.device.type, dtype=torch.float16) - - @property - def frame_rate(self) -> int: - """Roughly the number of AR steps per seconds.""" - return self.compression_model.frame_rate - - @property - def sample_rate(self) -> int: - """Sample rate of the generated audio.""" - return self.compression_model.sample_rate - - @property - def audio_channels(self) -> int: - """Audio channels of the generated audio.""" - return self.compression_model.channels - - @staticmethod - def get_pretrained(name: str = 'melody', device=None): - """Return pretrained model, we provide four models: - - small (300M), text to music, # see: https://huggingface.co/facebook/musicgen-small - - medium (1.5B), text to music, # see: https://huggingface.co/facebook/musicgen-medium - - melody (1.5B) text to music and text+melody to music, # see: https://huggingface.co/facebook/musicgen-melody - - large (3.3B), text to music, # see: https://huggingface.co/facebook/musicgen-large - """ - - if device is None: - if torch.cuda.device_count(): - device = 'cuda' - else: - device = 'cpu' - - if name == 'debug': - # used only for unit tests - compression_model = get_debug_compression_model(device) - lm = get_debug_lm_model(device) - return MusicGen(name, compression_model, lm) - - if name not in HF_MODEL_CHECKPOINTS_MAP: - if not os.path.isfile(name) and not os.path.isdir(name): - raise ValueError( - f"{name} is not a valid checkpoint name. " - f"Choose one of {', '.join(HF_MODEL_CHECKPOINTS_MAP.keys())}" - ) - - cache_dir = os.environ.get('MUSICGEN_ROOT', None) - compression_model = load_compression_model(name, device=device, cache_dir=cache_dir) - lm = load_lm_model(name, device=device, cache_dir=cache_dir) - if name == 'melody': - lm.condition_provider.conditioners['self_wav'].match_len_on_eval = True - - return MusicGen(name, compression_model, lm) - - def set_generation_params(self, use_sampling: bool = True, top_k: int = 250, - top_p: float = 0.0, temperature: float = 1.0, - duration: float = 30.0, cfg_coef: float = 3.0, - two_step_cfg: bool = False, extend_stride: float = 18): - """Set the generation parameters for MusicGen. - - Args: - use_sampling (bool, optional): Use sampling if True, else do argmax decoding. Defaults to True. - top_k (int, optional): top_k used for sampling. Defaults to 250. - top_p (float, optional): top_p used for sampling, when set to 0 top_k is used. Defaults to 0.0. - temperature (float, optional): Softmax temperature parameter. Defaults to 1.0. - duration (float, optional): Duration of the generated waveform. Defaults to 30.0. - cfg_coef (float, optional): Coefficient used for classifier free guidance. Defaults to 3.0. - two_step_cfg (bool, optional): If True, performs 2 forward for Classifier Free Guidance, - instead of batching together the two. This has some impact on how things - are padded but seems to have little impact in practice. - extend_stride: when doing extended generation (i.e. 
more than 30 seconds), by how much - should we extend the audio each time. Larger values will mean less context is - preserved, and shorter value will require extra computations. - """ - assert extend_stride < self.max_duration, "Cannot stride by more than max generation duration." - self.extend_stride = extend_stride - self.duration = duration - self.generation_params = { - 'use_sampling': use_sampling, - 'temp': temperature, - 'top_k': top_k, - 'top_p': top_p, - 'cfg_coef': cfg_coef, - 'two_step_cfg': two_step_cfg, - } - - def set_custom_progress_callback(self, progress_callback: tp.Optional[tp.Callable[[int, int], None]] = None): - """Override the default progress callback.""" - self._progress_callback = progress_callback - - def generate_unconditional(self, num_samples: int, progress: bool = False) -> torch.Tensor: - """Generate samples in an unconditional manner. - - Args: - num_samples (int): Number of samples to be generated. - progress (bool, optional): Flag to display progress of the generation process. Defaults to False. - """ - descriptions: tp.List[tp.Optional[str]] = [None] * num_samples - attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions, None) - return self._generate_tokens(attributes, prompt_tokens, progress) - - def generate(self, descriptions: tp.List[str], progress: bool = False) -> torch.Tensor: - """Generate samples conditioned on text. - - Args: - descriptions (tp.List[str]): A list of strings used as text conditioning. - progress (bool, optional): Flag to display progress of the generation process. Defaults to False. - """ - attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions, None) - assert prompt_tokens is None - return self._generate_tokens(attributes, prompt_tokens, progress) - - def generate_with_chroma(self, descriptions: tp.List[str], melody_wavs: MelodyType, - melody_sample_rate: int, progress: bool = False) -> torch.Tensor: - """Generate samples conditioned on text and melody. - - Args: - descriptions (tp.List[str]): A list of strings used as text conditioning. - melody_wavs: (torch.Tensor or list of Tensor): A batch of waveforms used as - melody conditioning. Should have shape [B, C, T] with B matching the description length, - C=1 or 2. It can be [C, T] if there is a single description. It can also be - a list of [C, T] tensors. - melody_sample_rate: (int): Sample rate of the melody waveforms. - progress (bool, optional): Flag to display progress of the generation process. Defaults to False. - """ - if isinstance(melody_wavs, torch.Tensor): - if melody_wavs.dim() == 2: - melody_wavs = melody_wavs[None] - if melody_wavs.dim() != 3: - raise ValueError("Melody wavs should have a shape [B, C, T].") - melody_wavs = list(melody_wavs) - else: - for melody in melody_wavs: - if melody is not None: - assert melody.dim() == 2, "One melody in the list has the wrong number of dims." - - melody_wavs = [ - convert_audio(wav, melody_sample_rate, self.sample_rate, self.audio_channels) - if wav is not None else None - for wav in melody_wavs] - attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions=descriptions, prompt=None, - melody_wavs=melody_wavs) - assert prompt_tokens is None - return self._generate_tokens(attributes, prompt_tokens, progress) - - def generate_continuation(self, prompt: torch.Tensor, prompt_sample_rate: int, - descriptions: tp.Optional[tp.List[tp.Optional[str]]] = None, - progress: bool = False) -> torch.Tensor: - """Generate samples conditioned on audio prompts. 
- - Args: - prompt (torch.Tensor): A batch of waveforms used for continuation. - Prompt should be [B, C, T], or [C, T] if only one sample is generated. - prompt_sample_rate (int): Sampling rate of the given audio waveforms. - descriptions (tp.List[str], optional): A list of strings used as text conditioning. Defaults to None. - progress (bool, optional): Flag to display progress of the generation process. Defaults to False. - """ - if prompt.dim() == 2: - prompt = prompt[None] - if prompt.dim() != 3: - raise ValueError("prompt should have 3 dimensions: [B, C, T] (C = 1).") - prompt = convert_audio(prompt, prompt_sample_rate, self.sample_rate, self.audio_channels) - if descriptions is None: - descriptions = [None] * len(prompt) - attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions, prompt) - assert prompt_tokens is not None - return self._generate_tokens(attributes, prompt_tokens, progress) - - @torch.no_grad() - def _prepare_tokens_and_attributes( - self, - descriptions: tp.Sequence[tp.Optional[str]], - prompt: tp.Optional[torch.Tensor], - melody_wavs: tp.Optional[MelodyList] = None, - ) -> tp.Tuple[tp.List[ConditioningAttributes], tp.Optional[torch.Tensor]]: - """Prepare model inputs. - - Args: - descriptions (tp.List[str]): A list of strings used as text conditioning. - prompt (torch.Tensor): A batch of waveforms used for continuation. - melody_wavs (tp.Optional[torch.Tensor], optional): A batch of waveforms - used as melody conditioning. Defaults to None. - """ - attributes = [ - ConditioningAttributes(text={'description': description}) - for description in descriptions] - - if melody_wavs is None: - for attr in attributes: - attr.wav['self_wav'] = WavCondition( - torch.zeros((1, 1), device=self.device), - torch.tensor([0], device=self.device), - path='null_wav') # type: ignore - else: - if self.name != "melody": - raise RuntimeError("This model doesn't support melody conditioning. " - "Use the `melody` model.") - assert len(melody_wavs) == len(descriptions), \ - f"number of melody wavs must match number of descriptions! " \ - f"got melody len={len(melody_wavs)}, and descriptions len={len(descriptions)}" - for attr, melody in zip(attributes, melody_wavs): - if melody is None: - attr.wav['self_wav'] = WavCondition( - torch.zeros((1, 1), device=self.device), - torch.tensor([0], device=self.device), - path='null_wav') # type: ignore - else: - attr.wav['self_wav'] = WavCondition( - melody.to(device=self.device), - torch.tensor([melody.shape[-1]], device=self.device)) - - if prompt is not None: - if descriptions is not None: - assert len(descriptions) == len(prompt), "Prompt and nb. descriptions doesn't match" - prompt = prompt.to(self.device) - prompt_tokens, scale = self.compression_model.encode(prompt) - assert scale is None - else: - prompt_tokens = None - return attributes, prompt_tokens - - def _generate_tokens(self, attributes: tp.List[ConditioningAttributes], - prompt_tokens: tp.Optional[torch.Tensor], progress: bool = False) -> torch.Tensor: - """Generate discrete audio tokens given audio prompt and/or conditions. - - Args: - attributes (tp.List[ConditioningAttributes]): Conditions used for generation (text/melody). - prompt_tokens (tp.Optional[torch.Tensor]): Audio prompt used for continuation. - progress (bool, optional): Flag to display progress of the generation process. Defaults to False. - Returns: - torch.Tensor: Generated audio, of shape [B, C, T], T is defined by the generation params. 
- """ - total_gen_len = int(self.duration * self.frame_rate) - max_prompt_len = int(min(self.duration, self.max_duration) * self.frame_rate) - current_gen_offset: int = 0 - - def _progress_callback(generated_tokens: int, tokens_to_generate: int): - generated_tokens += current_gen_offset - if self._progress_callback is not None: - # Note that total_gen_len might be quite wrong depending on the - # codebook pattern used, but with delay it is almost accurate. - self._progress_callback(generated_tokens, total_gen_len) - else: - print(f'{generated_tokens: 6d} / {total_gen_len: 6d}', end='\r') - - if prompt_tokens is not None: - assert max_prompt_len >= prompt_tokens.shape[-1], \ - "Prompt is longer than audio to generate" - - callback = None - if progress: - callback = _progress_callback - - if self.duration <= self.max_duration: - # generate by sampling from LM, simple case. - with self.autocast: - gen_tokens = self.lm.generate( - prompt_tokens, attributes, - callback=callback, max_gen_len=total_gen_len, **self.generation_params) - - else: - # now this gets a bit messier, we need to handle prompts, - # melody conditioning etc. - ref_wavs = [attr.wav['self_wav'] for attr in attributes] - all_tokens = [] - if prompt_tokens is None: - prompt_length = 0 - else: - all_tokens.append(prompt_tokens) - prompt_length = prompt_tokens.shape[-1] - - stride_tokens = int(self.frame_rate * self.extend_stride) - - while current_gen_offset + prompt_length < total_gen_len: - time_offset = current_gen_offset / self.frame_rate - chunk_duration = min(self.duration - time_offset, self.max_duration) - max_gen_len = int(chunk_duration * self.frame_rate) - for attr, ref_wav in zip(attributes, ref_wavs): - wav_length = ref_wav.length.item() - if wav_length == 0: - continue - # We will extend the wav periodically if it not long enough. - # we have to do it here rather than in conditioners.py as otherwise - # we wouldn't have the full wav. 
- initial_position = int(time_offset * self.sample_rate) - wav_target_length = int(self.max_duration * self.sample_rate) - print(initial_position / self.sample_rate, wav_target_length / self.sample_rate) - positions = torch.arange(initial_position, - initial_position + wav_target_length, device=self.device) - attr.wav['self_wav'] = WavCondition( - ref_wav[0][:, positions % wav_length], - torch.full_like(ref_wav[1], wav_target_length)) - with self.autocast: - gen_tokens = self.lm.generate( - prompt_tokens, attributes, - callback=callback, max_gen_len=max_gen_len, **self.generation_params) - if prompt_tokens is None: - all_tokens.append(gen_tokens) - else: - all_tokens.append(gen_tokens[:, :, prompt_tokens.shape[-1]:]) - prompt_tokens = gen_tokens[:, :, stride_tokens:] - prompt_length = prompt_tokens.shape[-1] - current_gen_offset += stride_tokens - - gen_tokens = torch.cat(all_tokens, dim=-1) - - # generate audio - assert gen_tokens.dim() == 3 - with torch.no_grad(): - gen_audio = self.compression_model.decode(gen_tokens, None) - return gen_audio diff --git a/spaces/Superlang/ImageProcessor/annotator/denthPose/__init__.py b/spaces/Superlang/ImageProcessor/annotator/denthPose/__init__.py deleted file mode 100644 index f38fb4535d2774c81a19e630b1fa4d7983db3c37..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/denthPose/__init__.py +++ /dev/null @@ -1,20 +0,0 @@ -import os - - -def install_package(): - print(os.system("pip install git+https://github.com/facebookresearch/detectron2.git")) - print(os.system("pip install git+https://github.com/facebookresearch/detectron2@main#subdirectory=projects/DensePose")) - - -install_package() - -print("finished") - - -class DenthPoseProcessor: - def __init__(self): - from .depth_pose import DenthPoseProcess - self.processor = DenthPoseProcess() - - def __call__(self, img): - return self.processor(img) diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/oneformer/evaluation/cityscapes_evaluation.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/oneformer/evaluation/cityscapes_evaluation.py deleted file mode 100644 index 19b1cb779e5f493cf75c8e6913a90da5c174735f..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/oneformer/oneformer/evaluation/cityscapes_evaluation.py +++ /dev/null @@ -1,201 +0,0 @@ -# ------------------------------------------------------------------------------ -# Reference: https://github.com/facebookresearch/detectron2/blob/main/detectron2/evaluation/cityscapes_evaluation.py -# Modified by Jitesh Jain (https://github.com/praeclarumjj3) -# ------------------------------------------------------------------------------ - -import glob -import logging -import numpy as np -import os -import tempfile -from collections import OrderedDict -import torch -from PIL import Image - -from annotator.oneformer.detectron2.data import MetadataCatalog -from annotator.oneformer.detectron2.utils import comm -from annotator.oneformer.detectron2.utils.file_io import PathManager - -from .evaluator import DatasetEvaluator - - -class CityscapesEvaluator(DatasetEvaluator): - """ - Base class for evaluation using cityscapes API. - """ - - def __init__(self, dataset_name): - """ - Args: - dataset_name (str): the name of the dataset. - It must have the following metadata associated with it: - "thing_classes", "gt_dir". 
- """ - self._metadata = MetadataCatalog.get(dataset_name) - self._cpu_device = torch.device("cpu") - self._logger = logging.getLogger(__name__) - - def reset(self): - self._working_dir = tempfile.TemporaryDirectory(prefix="cityscapes_eval_") - self._temp_dir = self._working_dir.name - # All workers will write to the same results directory - # TODO this does not work in distributed training - assert ( - comm.get_local_size() == comm.get_world_size() - ), "CityscapesEvaluator currently do not work with multiple machines." - self._temp_dir = comm.all_gather(self._temp_dir)[0] - if self._temp_dir != self._working_dir.name: - self._working_dir.cleanup() - self._logger.info( - "Writing cityscapes results to temporary directory {} ...".format(self._temp_dir) - ) - - -class CityscapesInstanceEvaluator(CityscapesEvaluator): - """ - Evaluate instance segmentation results on cityscapes dataset using cityscapes API. - - Note: - * It does not work in multi-machine distributed training. - * It contains a synchronization, therefore has to be used on all ranks. - * Only the main process runs evaluation. - """ - - def process(self, inputs, outputs): - from cityscapesscripts.helpers.labels import name2label - - for input, output in zip(inputs, outputs): - file_name = input["file_name"] - basename = os.path.splitext(os.path.basename(file_name))[0] - pred_txt = os.path.join(self._temp_dir, basename + "_pred.txt") - - if "instances" in output: - output = output["instances"].to(self._cpu_device) - num_instances = len(output) - with open(pred_txt, "w") as fout: - for i in range(num_instances): - pred_class = output.pred_classes[i] - classes = self._metadata.stuff_classes[pred_class] - class_id = name2label[classes].id - score = output.scores[i] - mask = output.pred_masks[i].numpy().astype("uint8") - png_filename = os.path.join( - self._temp_dir, basename + "_{}_{}.png".format(i, classes) - ) - - Image.fromarray(mask * 255).save(png_filename) - fout.write( - "{} {} {}\n".format(os.path.basename(png_filename), class_id, score) - ) - else: - # Cityscapes requires a prediction file for every ground truth image. - with open(pred_txt, "w") as fout: - pass - - def evaluate(self): - """ - Returns: - dict: has a key "segm", whose value is a dict of "AP" and "AP50". - """ - comm.synchronize() - if comm.get_rank() > 0: - return - import cityscapesscripts.evaluation.evalInstanceLevelSemanticLabeling as cityscapes_eval - - self._logger.info("Evaluating results under {} ...".format(self._temp_dir)) - - # set some global states in cityscapes evaluation API, before evaluating - cityscapes_eval.args.predictionPath = os.path.abspath(self._temp_dir) - cityscapes_eval.args.predictionWalk = None - cityscapes_eval.args.JSONOutput = False - cityscapes_eval.args.colorized = False - cityscapes_eval.args.gtInstancesFile = os.path.join(self._temp_dir, "gtInstances.json") - - # These lines are adopted from - # https://github.com/mcordts/cityscapesScripts/blob/master/cityscapesscripts/evaluation/evalInstanceLevelSemanticLabeling.py # noqa - gt_dir = PathManager.get_local_path(self._metadata.gt_dir) - groundTruthImgList = glob.glob(os.path.join(gt_dir, "*", "*_gtFine_instanceIds.png")) - assert len( - groundTruthImgList - ), "Cannot find any ground truth images to use for evaluation. 
Searched for: {}".format( - cityscapes_eval.args.groundTruthSearch - ) - predictionImgList = [] - for gt in groundTruthImgList: - predictionImgList.append(cityscapes_eval.getPrediction(gt, cityscapes_eval.args)) - results = cityscapes_eval.evaluateImgLists( - predictionImgList, groundTruthImgList, cityscapes_eval.args - )["averages"] - - ret = OrderedDict() - ret["segm"] = {"AP": results["allAp"] * 100, "AP50": results["allAp50%"] * 100} - self._working_dir.cleanup() - return ret - - -class CityscapesSemSegEvaluator(CityscapesEvaluator): - """ - Evaluate semantic segmentation results on cityscapes dataset using cityscapes API. - - Note: - * It does not work in multi-machine distributed training. - * It contains a synchronization, therefore has to be used on all ranks. - * Only the main process runs evaluation. - """ - - def process(self, inputs, outputs): - from cityscapesscripts.helpers.labels import trainId2label - - for input, output in zip(inputs, outputs): - file_name = input["file_name"] - basename = os.path.splitext(os.path.basename(file_name))[0] - pred_filename = os.path.join(self._temp_dir, basename + "_pred.png") - - output = output["sem_seg"].argmax(dim=0).to(self._cpu_device).numpy() - pred = 255 * np.ones(output.shape, dtype=np.uint8) - for train_id, label in trainId2label.items(): - if label.ignoreInEval: - continue - pred[output == train_id] = label.id - Image.fromarray(pred).save(pred_filename) - - def evaluate(self): - comm.synchronize() - if comm.get_rank() > 0: - return - # Load the Cityscapes eval script *after* setting the required env var, - # since the script reads CITYSCAPES_DATASET into global variables at load time. - import cityscapesscripts.evaluation.evalPixelLevelSemanticLabeling as cityscapes_eval - - self._logger.info("Evaluating results under {} ...".format(self._temp_dir)) - - # set some global states in cityscapes evaluation API, before evaluating - cityscapes_eval.args.predictionPath = os.path.abspath(self._temp_dir) - cityscapes_eval.args.predictionWalk = None - cityscapes_eval.args.JSONOutput = False - cityscapes_eval.args.colorized = False - - # These lines are adopted from - # https://github.com/mcordts/cityscapesScripts/blob/master/cityscapesscripts/evaluation/evalPixelLevelSemanticLabeling.py # noqa - gt_dir = PathManager.get_local_path(self._metadata.gt_dir) - groundTruthImgList = glob.glob(os.path.join(gt_dir, "*", "*_gtFine_labelIds.png")) - assert len( - groundTruthImgList - ), "Cannot find any ground truth images to use for evaluation. 
Searched for: {}".format( - cityscapes_eval.args.groundTruthSearch - ) - predictionImgList = [] - for gt in groundTruthImgList: - predictionImgList.append(cityscapes_eval.getPrediction(cityscapes_eval.args, gt)) - results = cityscapes_eval.evaluateImgLists( - predictionImgList, groundTruthImgList, cityscapes_eval.args - ) - ret = OrderedDict() - ret["sem_seg"] = { - "IoU": 100.0 * results["averageScoreClasses"], - "iIoU": 100.0 * results["averageScoreInstClasses"], - "IoU_sup": 100.0 * results["averageScoreCategories"], - "iIoU_sup": 100.0 * results["averageScoreInstCategories"], - } - self._working_dir.cleanup() - return ret diff --git a/spaces/TRI-ML/risk_biased_prediction/risk_biased/scene_dataset/scene_plotter.py b/spaces/TRI-ML/risk_biased_prediction/risk_biased/scene_dataset/scene_plotter.py deleted file mode 100644 index dfcd27af81e27e207c860a02c635793d8667b1ab..0000000000000000000000000000000000000000 --- a/spaces/TRI-ML/risk_biased_prediction/risk_biased/scene_dataset/scene_plotter.py +++ /dev/null @@ -1,276 +0,0 @@ -import os -from typing import Optional - -from matplotlib.axes import Axes -from matplotlib.collections import PatchCollection -from matplotlib.lines import Line2D -from matplotlib.patches import Rectangle, Ellipse -import matplotlib -import matplotlib.pyplot as plt -import numpy as np - -from risk_biased.scene_dataset.scene import RandomScene, RandomSceneParams - - -class ScenePlotter: - """ - This class defines plotting functions that takes in a scene and an optional axes to plot road agents and trajectories. - - Args: - scene: The scene to use for plotting - ax: Matplotlib axes in which the drawing is made - """ - - def __init__(self, scene: RandomScene, ax: Optional[Axes] = None) -> None: - self.scene = scene - if ax is None: - self.ax = plt.subplot() - else: - self.ax = ax - self._sidewalks_boxes = PatchCollection( - [ - Rectangle( - xy=[-scene.ego_length, scene.bottom], - height=scene.sidewalks_width, - width=scene.road_length + scene.ego_length, - ), - Rectangle( - xy=[-scene.ego_length, 3 * scene.lane_width / 2], - height=scene.sidewalks_width, - width=scene.road_length + scene.ego_length, - ), - ], - facecolor="gray", - alpha=0.3, - edgecolor="black", - ) - self._center_line = Line2D( - [-scene.ego_length / 2, scene.road_length], - [scene.lane_width / 2, scene.lane_width / 2], - linewidth=4, - color="black", - dashes=[10, 5], - ) - - self._set_agent_patches() - self._set_agent_paths() - self.ax.set_aspect("equal") - - def _set_current_time(self, time: float): - """ - Set the current time to draw the agents at the proper time along the trajectory. - - Args: - time: the present time in second - """ - self.scene.set_current_time(time) - self._set_agent_patches() - - def _set_agent_paths(self): - """ - Defines path as lines. - """ - self._ego_path = Line2D( - [0, self.scene.ego_ref_speed * self.scene.time_scene], - [0, 0], - linewidth=2, - color="red", - dashes=[4, 4], - alpha=0.3, - ) - - self._pedestrian_path = [ - [ - Line2D( - [init[agent, 0], final[agent, 0]], - [init[agent, 1], final[agent, 1]], - linewidth=2, - dashes=[4, 4], - alpha=0.3, - ) - for (init, final) in zip( - self.scene.pedestrians_positions, - self.scene.final_pedestrians_positions, - ) - ] - for agent in range(self.scene.pedestrians_positions.shape[1]) - ] - - def _set_agent_patches(self): - """ - Set the agent patches at their current position in the scene. 
- """ - current_step = int(round(self.scene.current_time / self.scene.dt)) - self._ego_box = Rectangle( - xy=( - -self.scene.ego_length / 2 - + self.scene.ego_ref_speed * self.scene.current_time, - -self.scene.ego_width / 2, - ), - height=self.scene.ego_width, - width=self.scene.ego_length, - fill=True, - facecolor="red", - alpha=0.4, - edgecolor="black", - ) - self._pedestrian_patches = [ - [ - Ellipse( - xy=xy, - width=1, - height=0.5, - angle=angle * 180 / np.pi + 90, - facecolor="blue", - alpha=0.4, - edgecolor="black", - ) - for xy, angle in zip( - self.scene.pedestrians_trajectories[:, agent, current_step], - self.scene.pedestrians.angle[:, agent], - ) - ] - for agent in range(self.scene.pedestrians_trajectories.shape[1]) - ] - - def plot_road(self) -> None: - """ - Plot the road as a two lanes, two sidewalks in straight lines with the ego vehicle. Plot is made in given ax. - """ - self.ax.add_collection(self._sidewalks_boxes) - self.ax.add_patch(self._ego_box) - self.ax.add_line(self._center_line) - self.ax.add_line(self._ego_path) - self.rescale() - - def draw_scene(self, index: int, time=None, prediction=None) -> None: - """ - Plot the scene of given index (road, ego vehicle with its path, pedestrian with its path) - Args: - index: index of the pedestrian in the batch - time: set current time to this value if not None - prediction: draw this instead of the actual future if not None - """ - if time is not None: - self._set_current_time(time) - self.plot_road() - for agent_patch in self._pedestrian_patches: - self.ax.add_patch(agent_patch[index]) - for agent_patch in self._pedestrian_path: - self.ax.add_line(agent_patch[index]) - if prediction is not None: - self.draw_trajectory(prediction) - - def rescale(self): - """ - Set the x and y limits to the road shape with a margin. - """ - self.ax.set_xlim( - left=-2 * self.scene.ego_length, - right=self.scene.road_length + self.scene.ego_length, - ) - self.ax.set_ylim( - bottom=self.scene.bottom - self.scene.lane_width, - top=2 * self.scene.lane_width + 2 * self.scene.sidewalks_width, - ) - - def draw_trajectory(self, prediction, color="b") -> None: - """ - Plot the given prediction in the scene. 
- """ - self.ax.scatter(prediction[..., 0], prediction[..., 1], color=color, alpha=0.3) - - def draw_all_trajectories( - self, - prediction: np.ndarray, - color=None, - color_value: np.ndarray = None, - alpha: float = 0.05, - label: str = "trajectory", - ) -> None: - """ - Plot all the given predictions in the scene - Args: - prediction : (batch, n_agents, time, 2) batch of trajectories - color: regular color name - color_value : (batch) Optional batch of values for coloring from green to red - """ - - if color_value is not None: - min = color_value.min() - max = color_value.max() - color_value = 0.9 * (color_value - min) / (max - min) - for agent in range(prediction.shape[1]): - for traj, val in zip(prediction[:, agent], color_value[:, agent]): - color = (val, 1 - val, 0.1) - self.ax.plot( - traj[:, 0], traj[:, 1], color=color, alpha=alpha, label=label - ) - self.ax.scatter(traj[-1, 0], traj[-1, 1], color=color, alpha=alpha) - cmap = matplotlib.colors.ListedColormap( - np.linspace( - [color_value.min(), 1 - color_value.min(), 0.1], - [color_value.max(), 1 - color_value.max(), 0.1], - 128, - ) - ) - norm = matplotlib.colors.Normalize(vmin=min, vmax=max, clip=True) - sm = plt.cm.ScalarMappable(cmap=cmap, norm=norm) - plt.colorbar(sm, label="TTC cost") - else: - for agent in range(prediction.shape[1]): - for traj in prediction: - self.ax.plot( - traj[agent, :, 0], - traj[agent, :, 1], - color=color, - alpha=alpha, - label=label, - ) - self.ax.scatter( - prediction[:, agent, -1, 0], - prediction[:, agent, -1, 1], - color=color, - alpha=alpha, - ) - - def draw_legend(self): - """Draw legend without repeats and without transparency.""" - - handles, labels = self.ax.get_legend_handles_labels() - i = np.arange(len(labels)) - filter = np.array([]) - unique_labels = list(set(labels)) - for ul in unique_labels: - filter = np.append(filter, [i[np.array(labels) == ul][0]]) - filtered_handles = [] - for f in filter: - handles[int(f)].set_alpha(1) - filtered_handles.append(handles[int(f)]) - filtered_labels = [labels[int(f)] for f in filter] - self.ax.legend(filtered_handles, filtered_labels) - - -# Draw a random scene -if __name__ == "__main__": - from risk_biased.utils.config_argparse import config_argparse - - working_dir = os.path.dirname(os.path.realpath(__file__)) - config_path = os.path.join( - working_dir, "..", "..", "risk_biased", "config", "learning_config.py" - ) - config = config_argparse(config_path) - n_samples = 100 - - scene_params = RandomSceneParams.from_config(config) - scene_params.batch_size = n_samples - scene = RandomScene( - scene_params, - is_torch=False, - ) - - plotter = ScenePlotter(scene) - - plotter.draw_scene(0, time=1) - plt.tight_layout() - plt.show() diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/metadata/base.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/metadata/base.py deleted file mode 100644 index cafb79fb3dcf43744393e2964056fe32c350bbc1..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/metadata/base.py +++ /dev/null @@ -1,688 +0,0 @@ -import csv -import email.message -import functools -import json -import logging -import pathlib -import re -import zipfile -from typing import ( - IO, - TYPE_CHECKING, - Any, - Collection, - Container, - Dict, - Iterable, - Iterator, - List, - NamedTuple, - Optional, - Tuple, - Union, -) - -from pip._vendor.packaging.requirements import Requirement -from 
pip._vendor.packaging.specifiers import InvalidSpecifier, SpecifierSet -from pip._vendor.packaging.utils import NormalizedName -from pip._vendor.packaging.version import LegacyVersion, Version - -from pip._internal.exceptions import NoneMetadataError -from pip._internal.locations import site_packages, user_site -from pip._internal.models.direct_url import ( - DIRECT_URL_METADATA_NAME, - DirectUrl, - DirectUrlValidationError, -) -from pip._internal.utils.compat import stdlib_pkgs # TODO: Move definition here. -from pip._internal.utils.egg_link import egg_link_path_from_sys_path -from pip._internal.utils.misc import is_local, normalize_path -from pip._internal.utils.packaging import safe_extra -from pip._internal.utils.urls import url_to_path - -from ._json import msg_to_json - -if TYPE_CHECKING: - from typing import Protocol -else: - Protocol = object - -DistributionVersion = Union[LegacyVersion, Version] - -InfoPath = Union[str, pathlib.PurePath] - -logger = logging.getLogger(__name__) - - -class BaseEntryPoint(Protocol): - @property - def name(self) -> str: - raise NotImplementedError() - - @property - def value(self) -> str: - raise NotImplementedError() - - @property - def group(self) -> str: - raise NotImplementedError() - - -def _convert_installed_files_path( - entry: Tuple[str, ...], - info: Tuple[str, ...], -) -> str: - """Convert a legacy installed-files.txt path into modern RECORD path. - - The legacy format stores paths relative to the info directory, while the - modern format stores paths relative to the package root, e.g. the - site-packages directory. - - :param entry: Path parts of the installed-files.txt entry. - :param info: Path parts of the egg-info directory relative to package root. - :returns: The converted entry. - - For best compatibility with symlinks, this does not use ``abspath()`` or - ``Path.resolve()``, but tries to work with path parts: - - 1. While ``entry`` starts with ``..``, remove the equal amounts of parts - from ``info``; if ``info`` is empty, start appending ``..`` instead. - 2. Join the two directly. - """ - while entry and entry[0] == "..": - if not info or info[-1] == "..": - info += ("..",) - else: - info = info[:-1] - entry = entry[1:] - return str(pathlib.Path(*info, *entry)) - - -class RequiresEntry(NamedTuple): - requirement: str - extra: str - marker: str - - -class BaseDistribution(Protocol): - @classmethod - def from_directory(cls, directory: str) -> "BaseDistribution": - """Load the distribution from a metadata directory. - - :param directory: Path to a metadata directory, e.g. ``.dist-info``. - """ - raise NotImplementedError() - - @classmethod - def from_metadata_file_contents( - cls, - metadata_contents: bytes, - filename: str, - project_name: str, - ) -> "BaseDistribution": - """Load the distribution from the contents of a METADATA file. - - This is used to implement PEP 658 by generating a "shallow" dist object that can - be used for resolution without downloading or building the actual dist yet. - - :param metadata_contents: The contents of a METADATA file. - :param filename: File name for the dist with this metadata. - :param project_name: Name of the project this dist represents. - """ - raise NotImplementedError() - - @classmethod - def from_wheel(cls, wheel: "Wheel", name: str) -> "BaseDistribution": - """Load the distribution from a given wheel. - - :param wheel: A concrete wheel definition. - :param name: File name of the wheel. 
- - :raises InvalidWheel: Whenever loading of the wheel causes a - :py:exc:`zipfile.BadZipFile` exception to be thrown. - :raises UnsupportedWheel: If the wheel is a valid zip, but malformed - internally. - """ - raise NotImplementedError() - - def __repr__(self) -> str: - return f"{self.raw_name} {self.version} ({self.location})" - - def __str__(self) -> str: - return f"{self.raw_name} {self.version}" - - @property - def location(self) -> Optional[str]: - """Where the distribution is loaded from. - - A string value is not necessarily a filesystem path, since distributions - can be loaded from other sources, e.g. arbitrary zip archives. ``None`` - means the distribution is created in-memory. - - Do not canonicalize this value with e.g. ``pathlib.Path.resolve()``. If - this is a symbolic link, we want to preserve the relative path between - it and files in the distribution. - """ - raise NotImplementedError() - - @property - def editable_project_location(self) -> Optional[str]: - """The project location for editable distributions. - - This is the directory where pyproject.toml or setup.py is located. - None if the distribution is not installed in editable mode. - """ - # TODO: this property is relatively costly to compute, memoize it ? - direct_url = self.direct_url - if direct_url: - if direct_url.is_local_editable(): - return url_to_path(direct_url.url) - else: - # Search for an .egg-link file by walking sys.path, as it was - # done before by dist_is_editable(). - egg_link_path = egg_link_path_from_sys_path(self.raw_name) - if egg_link_path: - # TODO: get project location from second line of egg_link file - # (https://github.com/pypa/pip/issues/10243) - return self.location - return None - - @property - def installed_location(self) -> Optional[str]: - """The distribution's "installed" location. - - This should generally be a ``site-packages`` directory. This is - usually ``dist.location``, except for legacy develop-installed packages, - where ``dist.location`` is the source code location, and this is where - the ``.egg-link`` file is. - - The returned location is normalized (in particular, with symlinks removed). - """ - raise NotImplementedError() - - @property - def info_location(self) -> Optional[str]: - """Location of the .[egg|dist]-info directory or file. - - Similarly to ``location``, a string value is not necessarily a - filesystem path. ``None`` means the distribution is created in-memory. - - For a modern .dist-info installation on disk, this should be something - like ``{location}/{raw_name}-{version}.dist-info``. - - Do not canonicalize this value with e.g. ``pathlib.Path.resolve()``. If - this is a symbolic link, we want to preserve the relative path between - it and other files in the distribution. - """ - raise NotImplementedError() - - @property - def installed_by_distutils(self) -> bool: - """Whether this distribution is installed with legacy distutils format. - - A distribution installed with "raw" distutils not patched by setuptools - uses one single file at ``info_location`` to store metadata. We need to - treat this specially on uninstallation. - """ - info_location = self.info_location - if not info_location: - return False - return pathlib.Path(info_location).is_file() - - @property - def installed_as_egg(self) -> bool: - """Whether this distribution is installed as an egg. - - This usually indicates the distribution was installed by (older versions - of) easy_install. 
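- In practice this is detected simply by the ``location`` path ending in
- ``.egg``.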
- """ - location = self.location - if not location: - return False - return location.endswith(".egg") - - @property - def installed_with_setuptools_egg_info(self) -> bool: - """Whether this distribution is installed with the ``.egg-info`` format. - - This usually indicates the distribution was installed with setuptools - with an old pip version or with ``single-version-externally-managed``. - - Note that this ensure the metadata store is a directory. distutils can - also installs an ``.egg-info``, but as a file, not a directory. This - property is *False* for that case. Also see ``installed_by_distutils``. - """ - info_location = self.info_location - if not info_location: - return False - if not info_location.endswith(".egg-info"): - return False - return pathlib.Path(info_location).is_dir() - - @property - def installed_with_dist_info(self) -> bool: - """Whether this distribution is installed with the "modern format". - - This indicates a "modern" installation, e.g. storing metadata in the - ``.dist-info`` directory. This applies to installations made by - setuptools (but through pip, not directly), or anything using the - standardized build backend interface (PEP 517). - """ - info_location = self.info_location - if not info_location: - return False - if not info_location.endswith(".dist-info"): - return False - return pathlib.Path(info_location).is_dir() - - @property - def canonical_name(self) -> NormalizedName: - raise NotImplementedError() - - @property - def version(self) -> DistributionVersion: - raise NotImplementedError() - - @property - def setuptools_filename(self) -> str: - """Convert a project name to its setuptools-compatible filename. - - This is a copy of ``pkg_resources.to_filename()`` for compatibility. - """ - return self.raw_name.replace("-", "_") - - @property - def direct_url(self) -> Optional[DirectUrl]: - """Obtain a DirectUrl from this distribution. - - Returns None if the distribution has no `direct_url.json` metadata, - or if `direct_url.json` is invalid. - """ - try: - content = self.read_text(DIRECT_URL_METADATA_NAME) - except FileNotFoundError: - return None - try: - return DirectUrl.from_json(content) - except ( - UnicodeDecodeError, - json.JSONDecodeError, - DirectUrlValidationError, - ) as e: - logger.warning( - "Error parsing %s for %s: %s", - DIRECT_URL_METADATA_NAME, - self.canonical_name, - e, - ) - return None - - @property - def installer(self) -> str: - try: - installer_text = self.read_text("INSTALLER") - except (OSError, ValueError, NoneMetadataError): - return "" # Fail silently if the installer file cannot be read. - for line in installer_text.splitlines(): - cleaned_line = line.strip() - if cleaned_line: - return cleaned_line - return "" - - @property - def requested(self) -> bool: - return self.is_file("REQUESTED") - - @property - def editable(self) -> bool: - return bool(self.editable_project_location) - - @property - def local(self) -> bool: - """If distribution is installed in the current virtual environment. - - Always True if we're not in a virtualenv. 
- """ - if self.installed_location is None: - return False - return is_local(self.installed_location) - - @property - def in_usersite(self) -> bool: - if self.installed_location is None or user_site is None: - return False - return self.installed_location.startswith(normalize_path(user_site)) - - @property - def in_site_packages(self) -> bool: - if self.installed_location is None or site_packages is None: - return False - return self.installed_location.startswith(normalize_path(site_packages)) - - def is_file(self, path: InfoPath) -> bool: - """Check whether an entry in the info directory is a file.""" - raise NotImplementedError() - - def iter_distutils_script_names(self) -> Iterator[str]: - """Find distutils 'scripts' entries metadata. - - If 'scripts' is supplied in ``setup.py``, distutils records those in the - installed distribution's ``scripts`` directory, a file for each script. - """ - raise NotImplementedError() - - def read_text(self, path: InfoPath) -> str: - """Read a file in the info directory. - - :raise FileNotFoundError: If ``path`` does not exist in the directory. - :raise NoneMetadataError: If ``path`` exists in the info directory, but - cannot be read. - """ - raise NotImplementedError() - - def iter_entry_points(self) -> Iterable[BaseEntryPoint]: - raise NotImplementedError() - - def _metadata_impl(self) -> email.message.Message: - raise NotImplementedError() - - @functools.lru_cache(maxsize=1) - def _metadata_cached(self) -> email.message.Message: - # When we drop python 3.7 support, move this to the metadata property and use - # functools.cached_property instead of lru_cache. - metadata = self._metadata_impl() - self._add_egg_info_requires(metadata) - return metadata - - @property - def metadata(self) -> email.message.Message: - """Metadata of distribution parsed from e.g. METADATA or PKG-INFO. - - This should return an empty message if the metadata file is unavailable. - - :raises NoneMetadataError: If the metadata file is available, but does - not contain valid metadata. - """ - return self._metadata_cached() - - @property - def metadata_dict(self) -> Dict[str, Any]: - """PEP 566 compliant JSON-serializable representation of METADATA or PKG-INFO. - - This should return an empty dict if the metadata file is unavailable. - - :raises NoneMetadataError: If the metadata file is available, but does - not contain valid metadata. - """ - return msg_to_json(self.metadata) - - @property - def metadata_version(self) -> Optional[str]: - """Value of "Metadata-Version:" in distribution metadata, if available.""" - return self.metadata.get("Metadata-Version") - - @property - def raw_name(self) -> str: - """Value of "Name:" in distribution metadata.""" - # The metadata should NEVER be missing the Name: key, but if it somehow - # does, fall back to the known canonical name. - return self.metadata.get("Name", self.canonical_name) - - @property - def requires_python(self) -> SpecifierSet: - """Value of "Requires-Python:" in distribution metadata. - - If the key does not exist or contains an invalid value, an empty - SpecifierSet should be returned. - """ - value = self.metadata.get("Requires-Python") - if value is None: - return SpecifierSet() - try: - # Convert to str to satisfy the type checker; this can be a Header object. 
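- # For example, a metadata value of ">=3.7,<4" parses into a SpecifierSet
- # that later checks can evaluate (e.g. "3.10" in SpecifierSet(">=3.7,<4")),
- # while an unparsable value raises InvalidSpecifier and is logged below.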
- spec = SpecifierSet(str(value)) - except InvalidSpecifier as e: - message = "Package %r has an invalid Requires-Python: %s" - logger.warning(message, self.raw_name, e) - return SpecifierSet() - return spec - - def iter_dependencies(self, extras: Collection[str] = ()) -> Iterable[Requirement]: - """Dependencies of this distribution. - - For modern .dist-info distributions, this is the collection of - "Requires-Dist:" entries in distribution metadata. - """ - raise NotImplementedError() - - def iter_provided_extras(self) -> Iterable[str]: - """Extras provided by this distribution. - - For modern .dist-info distributions, this is the collection of - "Provides-Extra:" entries in distribution metadata. - """ - raise NotImplementedError() - - def _iter_declared_entries_from_record(self) -> Optional[Iterator[str]]: - try: - text = self.read_text("RECORD") - except FileNotFoundError: - return None - # This extra Path-str cast normalizes entries. - return (str(pathlib.Path(row[0])) for row in csv.reader(text.splitlines())) - - def _iter_declared_entries_from_legacy(self) -> Optional[Iterator[str]]: - try: - text = self.read_text("installed-files.txt") - except FileNotFoundError: - return None - paths = (p for p in text.splitlines(keepends=False) if p) - root = self.location - info = self.info_location - if root is None or info is None: - return paths - try: - info_rel = pathlib.Path(info).relative_to(root) - except ValueError: # info is not relative to root. - return paths - if not info_rel.parts: # info *is* root. - return paths - return ( - _convert_installed_files_path(pathlib.Path(p).parts, info_rel.parts) - for p in paths - ) - - def iter_declared_entries(self) -> Optional[Iterator[str]]: - """Iterate through file entries declared in this distribution. - - For modern .dist-info distributions, this is the files listed in the - ``RECORD`` metadata file. For legacy setuptools distributions, this - comes from ``installed-files.txt``, with entries normalized to be - compatible with the format used by ``RECORD``. - - :return: An iterator for listed entries, or None if the distribution - contains neither ``RECORD`` nor ``installed-files.txt``. - """ - return ( - self._iter_declared_entries_from_record() - or self._iter_declared_entries_from_legacy() - ) - - def _iter_requires_txt_entries(self) -> Iterator[RequiresEntry]: - """Parse a ``requires.txt`` in an egg-info directory. - - This is an INI-ish format where an egg-info stores dependencies. A - section name describes extra other environment markers, while each entry - is an arbitrary string (not a key-value pair) representing a dependency - as a requirement string (no markers). - - There is a construct in ``importlib.metadata`` called ``Sectioned`` that - does mostly the same, but the format is currently considered private. - """ - try: - content = self.read_text("requires.txt") - except FileNotFoundError: - return - extra = marker = "" # Section-less entries don't have markers. - for line in content.splitlines(): - line = line.strip() - if not line or line.startswith("#"): # Comment; ignored. - continue - if line.startswith("[") and line.endswith("]"): # A section header. 
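- # e.g. "[security:python_version < '3.8']" yields extra="security" and
- # marker="python_version < '3.8'", while a bare "[security]" leaves the
- # marker empty.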
- extra, _, marker = line.strip("[]").partition(":") - continue - yield RequiresEntry(requirement=line, extra=extra, marker=marker) - - def _iter_egg_info_extras(self) -> Iterable[str]: - """Get extras from the egg-info directory.""" - known_extras = {""} - for entry in self._iter_requires_txt_entries(): - if entry.extra in known_extras: - continue - known_extras.add(entry.extra) - yield entry.extra - - def _iter_egg_info_dependencies(self) -> Iterable[str]: - """Get distribution dependencies from the egg-info directory. - - To ease parsing, this converts a legacy dependency entry into a PEP 508 - requirement string. Like ``_iter_requires_txt_entries()``, there is code - in ``importlib.metadata`` that does mostly the same, but not do exactly - what we need. - - Namely, ``importlib.metadata`` does not normalize the extra name before - putting it into the requirement string, which causes marker comparison - to fail because the dist-info format do normalize. This is consistent in - all currently available PEP 517 backends, although not standardized. - """ - for entry in self._iter_requires_txt_entries(): - if entry.extra and entry.marker: - marker = f'({entry.marker}) and extra == "{safe_extra(entry.extra)}"' - elif entry.extra: - marker = f'extra == "{safe_extra(entry.extra)}"' - elif entry.marker: - marker = entry.marker - else: - marker = "" - if marker: - yield f"{entry.requirement} ; {marker}" - else: - yield entry.requirement - - def _add_egg_info_requires(self, metadata: email.message.Message) -> None: - """Add egg-info requires.txt information to the metadata.""" - if not metadata.get_all("Requires-Dist"): - for dep in self._iter_egg_info_dependencies(): - metadata["Requires-Dist"] = dep - if not metadata.get_all("Provides-Extra"): - for extra in self._iter_egg_info_extras(): - metadata["Provides-Extra"] = extra - - -class BaseEnvironment: - """An environment containing distributions to introspect.""" - - @classmethod - def default(cls) -> "BaseEnvironment": - raise NotImplementedError() - - @classmethod - def from_paths(cls, paths: Optional[List[str]]) -> "BaseEnvironment": - raise NotImplementedError() - - def get_distribution(self, name: str) -> Optional["BaseDistribution"]: - """Given a requirement name, return the installed distributions. - - The name may not be normalized. The implementation must canonicalize - it for lookup. - """ - raise NotImplementedError() - - def _iter_distributions(self) -> Iterator["BaseDistribution"]: - """Iterate through installed distributions. - - This function should be implemented by subclass, but never called - directly. Use the public ``iter_distribution()`` instead, which - implements additional logic to make sure the distributions are valid. - """ - raise NotImplementedError() - - def iter_all_distributions(self) -> Iterator[BaseDistribution]: - """Iterate through all installed distributions without any filtering.""" - for dist in self._iter_distributions(): - # Make sure the distribution actually comes from a valid Python - # packaging distribution. Pip's AdjacentTempDirectory leaves folders - # e.g. ``~atplotlib.dist-info`` if cleanup was interrupted. The - # valid project name pattern is taken from PEP 508. 
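- # e.g. "requests" and "zope.interface" pass this check, while a leftover
- # temporary-directory name such as "~atplotlib" does not and is skipped.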
- project_name_valid = re.match( - r"^([A-Z0-9]|[A-Z0-9][A-Z0-9._-]*[A-Z0-9])$", - dist.canonical_name, - flags=re.IGNORECASE, - ) - if not project_name_valid: - logger.warning( - "Ignoring invalid distribution %s (%s)", - dist.canonical_name, - dist.location, - ) - continue - yield dist - - def iter_installed_distributions( - self, - local_only: bool = True, - skip: Container[str] = stdlib_pkgs, - include_editables: bool = True, - editables_only: bool = False, - user_only: bool = False, - ) -> Iterator[BaseDistribution]: - """Return a list of installed distributions. - - This is based on ``iter_all_distributions()`` with additional filtering - options. Note that ``iter_installed_distributions()`` without arguments - is *not* equal to ``iter_all_distributions()``, since some of the - configurations exclude packages by default. - - :param local_only: If True (default), only return installations - local to the current virtualenv, if in a virtualenv. - :param skip: An iterable of canonicalized project names to ignore; - defaults to ``stdlib_pkgs``. - :param include_editables: If False, don't report editables. - :param editables_only: If True, only report editables. - :param user_only: If True, only report installations in the user - site directory. - """ - it = self.iter_all_distributions() - if local_only: - it = (d for d in it if d.local) - if not include_editables: - it = (d for d in it if not d.editable) - if editables_only: - it = (d for d in it if d.editable) - if user_only: - it = (d for d in it if d.in_usersite) - return (d for d in it if d.canonical_name not in skip) - - -class Wheel(Protocol): - location: str - - def as_zipfile(self) -> zipfile.ZipFile: - raise NotImplementedError() - - -class FilesystemWheel(Wheel): - def __init__(self, location: str) -> None: - self.location = location - - def as_zipfile(self) -> zipfile.ZipFile: - return zipfile.ZipFile(self.location, allowZip64=True) - - -class MemoryWheel(Wheel): - def __init__(self, location: str, stream: IO[bytes]) -> None: - self.location = location - self.stream = stream - - def as_zipfile(self) -> zipfile.ZipFile: - return zipfile.ZipFile(self.stream, allowZip64=True) diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/operations/build/metadata_legacy.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/operations/build/metadata_legacy.py deleted file mode 100644 index e60988d643e007801f79e8718354e7d00c7acf18..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/operations/build/metadata_legacy.py +++ /dev/null @@ -1,74 +0,0 @@ -"""Metadata generation logic for legacy source distributions. 
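-
-This path is used for projects that only ship a setup.py: pip invokes
-``setup.py egg_info`` with its output directed to a temporary directory and
-then reads the generated ``.egg-info`` directory as the metadata source.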
-""" - -import logging -import os - -from pip._internal.build_env import BuildEnvironment -from pip._internal.cli.spinners import open_spinner -from pip._internal.exceptions import ( - InstallationError, - InstallationSubprocessError, - MetadataGenerationFailed, -) -from pip._internal.utils.setuptools_build import make_setuptools_egg_info_args -from pip._internal.utils.subprocess import call_subprocess -from pip._internal.utils.temp_dir import TempDirectory - -logger = logging.getLogger(__name__) - - -def _find_egg_info(directory: str) -> str: - """Find an .egg-info subdirectory in `directory`.""" - filenames = [f for f in os.listdir(directory) if f.endswith(".egg-info")] - - if not filenames: - raise InstallationError(f"No .egg-info directory found in {directory}") - - if len(filenames) > 1: - raise InstallationError( - "More than one .egg-info directory found in {}".format(directory) - ) - - return os.path.join(directory, filenames[0]) - - -def generate_metadata( - build_env: BuildEnvironment, - setup_py_path: str, - source_dir: str, - isolated: bool, - details: str, -) -> str: - """Generate metadata using setup.py-based defacto mechanisms. - - Returns the generated metadata directory. - """ - logger.debug( - "Running setup.py (path:%s) egg_info for package %s", - setup_py_path, - details, - ) - - egg_info_dir = TempDirectory(kind="pip-egg-info", globally_managed=True).path - - args = make_setuptools_egg_info_args( - setup_py_path, - egg_info_dir=egg_info_dir, - no_user_config=isolated, - ) - - with build_env: - with open_spinner("Preparing metadata (setup.py)") as spinner: - try: - call_subprocess( - args, - cwd=source_dir, - command_desc="python setup.py egg_info", - spinner=spinner, - ) - except InstallationSubprocessError as error: - raise MetadataGenerationFailed(package_details=details) from error - - # Return the .egg-info directory. - return _find_egg_info(egg_info_dir) diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/command/bdist_rpm.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/command/bdist_rpm.py deleted file mode 100644 index 047a6d08c2f08f4510337c0f00dc5e734f74ebce..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/command/bdist_rpm.py +++ /dev/null @@ -1,43 +0,0 @@ -import distutils.command.bdist_rpm as orig - -from ..warnings import SetuptoolsDeprecationWarning - - -class bdist_rpm(orig.bdist_rpm): - """ - Override the default bdist_rpm behavior to do the following: - - 1. Run egg_info to ensure the name and version are properly calculated. - 2. Always run 'install' using --single-version-externally-managed to - disable eggs in RPM distributions. - """ - - def run(self): - SetuptoolsDeprecationWarning.emit( - "Deprecated command", - """ - bdist_rpm is deprecated and will be removed in a future version. - Use bdist_wheel (wheel packages) instead. - """, - see_url="https://github.com/pypa/setuptools/issues/1988", - due_date=(2023, 10, 30) # Deprecation introduced in 22 Oct 2021. 
- ) - - # ensure distro name is up-to-date - self.run_command('egg_info') - - orig.bdist_rpm.run(self) - - def _make_spec_file(self): - spec = orig.bdist_rpm._make_spec_file(self) - spec = [ - line.replace( - "setup.py install ", - "setup.py install --single-version-externally-managed " - ).replace( - "%setup", - "%setup -n %{name}-%{unmangled_version}" - ) - for line in spec - ] - return spec diff --git a/spaces/TencentARC/VLog/models/blip2_model.py b/spaces/TencentARC/VLog/models/blip2_model.py deleted file mode 100644 index ef45282ce2a791abdfd39799e8383ac9a02162bf..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/blip2_model.py +++ /dev/null @@ -1,47 +0,0 @@ -import torch -from PIL import Image -from transformers import Blip2Processor, Blip2ForConditionalGeneration, BlipProcessor, BlipForConditionalGeneration - -class ImageCaptioner: - def __init__(self, model_name="blip2-opt", device="cpu"): - self.model_name = model_name - self.device = device - self.processor, self.model = self.initialize_model() - - def initialize_model(self): - if self.device == 'cpu': - self.data_type = torch.float32 - else: - self.data_type = torch.float16 - processor, model = None, None - if self.model_name == "blip2-opt": - processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b-coco") - model = Blip2ForConditionalGeneration.from_pretrained( - "Salesforce/blip2-opt-2.7b-coco", torch_dtype=self.data_type, low_cpu_mem_usage=True) - - elif self.model_name == "blip2-flan-t5": - processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xl") - model = Blip2ForConditionalGeneration.from_pretrained( - "Salesforce/blip2-flan-t5-xl", torch_dtype=self.data_type, low_cpu_mem_usage=True) - - # for gpu with small memory - elif self.model_name == "blip": - processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base") - model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base") - - else: - raise NotImplementedError(f"{self.model_name} not implemented.") - model.to(self.device) - - if self.device != 'cpu': - model.half() - return processor, model - - def image_caption(self, image): - inputs = self.processor(images=image, return_tensors="pt").to(self.device, self.data_type) - generated_ids = self.model.generate(**inputs) - generated_text = self.processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip() - return generated_text - - def image_caption_debug(self, image_src): - return "A dish with salmon, broccoli, and something yellow." diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/.github/ISSUE_TEMPLATE/unexpected-problems-bugs.md b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/.github/ISSUE_TEMPLATE/unexpected-problems-bugs.md deleted file mode 100644 index 5db8f22415ff5c857ce83fb0d3de68211f775080..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/.github/ISSUE_TEMPLATE/unexpected-problems-bugs.md +++ /dev/null @@ -1,44 +0,0 @@ ---- -name: "😩 Unexpected behaviors" -about: Report unexpected behaviors when using detectron2 -title: Please read & provide the following - ---- - -If you do not know the root cause of the problem, please post according to this template: - -## Instructions To Reproduce the Issue: - -Check https://stackoverflow.com/help/minimal-reproducible-example for how to ask good questions. 
-Simplify the steps to reproduce the issue using suggestions from the above link, and provide them below: - -1. Full runnable code or full changes you made: -``` -If making changes to the project itself, please use output of the following command: -git rev-parse HEAD; git diff - - -``` -2. What exact command you run: -3. __Full logs__ or other relevant observations: -``` - -``` - -## Expected behavior: - -If there are no obvious crash in "full logs" provided above, -please tell us the expected behavior. - -If you expect a model to converge / work better, we do not help with such issues, unless -a model fails to reproduce the results in detectron2 model zoo, or proves existence of bugs. - -## Environment: - -Paste the output of the following command: -``` -wget -nc -nv https://github.com/facebookresearch/detectron2/raw/main/detectron2/utils/collect_env.py && python collect_env.py -``` - -If your issue looks like an installation issue / environment issue, -please first check common issues in https://detectron2.readthedocs.io/tutorials/install.html#common-installation-issues diff --git a/spaces/Vrk/SeeFood/utils.py b/spaces/Vrk/SeeFood/utils.py deleted file mode 100644 index 2c71134ea563c038568394a1ce071a9ebb4fdf31..0000000000000000000000000000000000000000 --- a/spaces/Vrk/SeeFood/utils.py +++ /dev/null @@ -1,118 +0,0 @@ -import tensorflow as tf -import numpy as np -import torch -import torch.nn as nn -import timm -from torchvision import transforms -import os - -import requests -import json - -classes = ['apple pie', 'baby back ribs', 'baklava', 'beef carpaccio', 'beef tartare', - 'beet salad', 'beignets', 'bibimbap', 'bread pudding', 'breakfast burrito', - 'bruschetta', 'caesar_salad', 'cannoli', 'caprese salad', 'carrot cake', - 'ceviche', 'cheese plate', 'cheesecake', 'chicken curry', - 'chicken quesadilla', 'chicken wings', 'chocolate cake', 'chocolate mousse', - 'churros', 'clam chowder', 'club sandwich', 'crab cakes', 'creme brulee', - 'croque madame', 'cup cakes', 'deviled eggs', 'donuts', 'dumplings', 'edamame', - 'eggs benedict', 'escargots', 'falafel', 'filet mignon', 'fish and chips', - 'foie gras', 'french fries', 'french onion soup', 'french toast', - 'fried calamari', 'fried rice', 'frozen yogurt', 'garlic bread', 'gnocchi', - 'greek salad', 'grilled cheese sandwich', 'grilled salmon', 'guacamole', - 'gyoza', 'hamburger', 'hot and sour soup', 'hot dog', 'huevos rancheros', - 'hummus', 'ice cream', 'lasagna', 'lobster bisque', 'lobster roll sandwich', - 'macaroni and cheese', 'macarons', 'miso soup', 'mussels', 'nachos', - 'omelette', 'onion rings', 'oysters', 'pad thai', 'paella', 'pancakes', - 'panna cotta', 'peking duck', 'pho', 'pizza', 'pork chop', 'poutine', - 'prime rib', 'pulled pork sandwich', 'ramen', 'ravioli', 'red velvet cake', - 'risotto', 'samosa', 'sashimi', 'scallops', 'seaweed salad', - 'shrimp and grits', 'spaghetti bolognese', 'spaghetti carbonara', - 'spring rolls', 'steak', 'strawberry_shortcake', 'sushi', 'tacos', 'takoyaki', - 'tiramisu', 'tuna tartare', 'waffles'] - -########################################################################## -# TENSORFLOW FUNCTIONS # -########################################################################## - -def load_prepare_image_tf(filepath, img_size, rescale=False): - img = tf.io.decode_image(filepath, channels=3) - img = tf.image.resize(img, img_size) - - if rescale: - return img/255. - else: - return img - -def model_pred_tf(model_path, img, class_names=classes): - # Load TFLite model and allocate tensors. 
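- # TFLite inference is a fixed sequence: build the Interpreter, allocate its
- # tensors, copy the preprocessed image into the input tensor, call invoke(),
- # then read the class scores back from the output tensor.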
- interpreter = tf.lite.Interpreter(model_path=model_path) - #allocate the tensors - interpreter.allocate_tensors() - - input_tensor= np.array(np.expand_dims(img,0), dtype=np.float32) - input_index = interpreter.get_input_details()[0]["index"] - - # setting input tensor - interpreter.set_tensor(input_index, input_tensor) - - #Run the inference - interpreter.invoke() - output_details = interpreter.get_output_details() - - # output data of image - output_data = interpreter.get_tensor(output_details[0]['index']) - - pred = output_data.argmax() - - food_name = class_names[pred] - - return food_name - -########################################################################## -# PyTorch FUNCTIONS # -########################################################################## - -def get_model_pt(model_path): - model = timm.create_model('vit_base_patch16_224', pretrained=False) - model.head = nn.Linear(in_features=768, out_features=len(classes), bias=True) - model.load_state_dict(torch.load(model_path, map_location='cpu')) - return model - -def load_prepare_image_pt(input_image): - normalize = transforms.Normalize( - [0.485, 0.456, 0.406], - [0.229, 0.224, 0.225] - ) - img_transform = transforms.Compose([ - transforms.Resize((225, 225)), - transforms.CenterCrop(224), - transforms.ToTensor(), - normalize, - ]) - input_image = img_transform(input_image).unsqueeze(0) - return input_image - - -def model_pred_pt(input_image, model_path): - model = get_model_pt(model_path) - probs = model(input_image) - y_preds = torch.softmax(probs, dim=1).detach().numpy().argmax() - pred = classes[y_preds] - return pred - -def fetch_recipe(food_name): - url = "https://recipesapi2.p.rapidapi.com/recipes/"+food_name - querystring = {"maxRecipes":"1"} - - headers = { - 'x-rapidapi-host': "recipesapi2.p.rapidapi.com", - 'x-rapidapi-key': "f6f6823b91msh9e92fed91d5356ap136f5djsn494d8f582fb3" - } - - response = requests.request("GET", url, headers=headers, params=querystring) - json_data = json.loads(response.text) - - recipe_data = json_data['data'][0] - - return recipe_data \ No newline at end of file diff --git a/spaces/Wataru/Miipher/README.md b/spaces/Wataru/Miipher/README.md deleted file mode 100644 index e37f00432c7f0b91d4f3df99cb56102575836566..0000000000000000000000000000000000000000 --- a/spaces/Wataru/Miipher/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: Miipher -emoji: 💻 -colorFrom: red -colorTo: blue -sdk: gradio -sdk_version: 3.45.2 -app_file: app.py -pinned: false -license: cc-by-nc-2.0 ---- - - -This repository is a demo of unofficial implementation of Miipher proposed by Koizumi et. al. [arxiv](https://arxiv.org/abs/2303.01664) -The weights are privided in CC-BY-NC-2.0 License. 
diff --git a/spaces/Wootang01/chatbot_three/README.md b/spaces/Wootang01/chatbot_three/README.md deleted file mode 100644 index 0c63113cfb704873c09aa7b0bdcb219b4eceec12..0000000000000000000000000000000000000000 --- a/spaces/Wootang01/chatbot_three/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Chatbot_three -emoji: 🌖 -colorFrom: yellow -colorTo: yellow -sdk: gradio -sdk_version: 2.8.10 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/Wootang01/text_generator/app.py b/spaces/Wootang01/text_generator/app.py deleted file mode 100644 index ed092b52d5bb3bb1b386f8992a54f9cd5b0deef0..0000000000000000000000000000000000000000 --- a/spaces/Wootang01/text_generator/app.py +++ /dev/null @@ -1,65 +0,0 @@ -#import libraries and dependencies -#from gradio.mix import Parallel - -import gradio as gr -import torch -from transformers import pipeline - -#instantiate variables as strings -title="Text Generator" -#title1="Level 1 Text Generator" -#title2="Level 3 Text Generator" -description="This text generator has been trained to chat and to respond to natural language instructions." -#description1="This is the basic text generator all students were taught to code using an older, smaller language model. Input text, submit, and the text generator will generate one output text instance." -#description2="This is a more advanced text generator that many students were taught to code. Input text and the text generator generates three output text instances from three language models. Importantly, two of these language models were designed to process explicit instructions." -#description3="This is the most advanced text generator that a few students were taught to code. Input text and the text generator generates an output text instance. You can resubmit to include that new text as input text." 
-examples = [ - ["What is the capital of China?"], - ["How do I apply for an Australian visa?"], - ["Write a short story."], - ["Once upon a time, "] -] - -#instantiate variables as functions -#pipe = pipeline("text-generation", model='EleutherAI/gpt-neo-2.7B', trust_remote_code=True) - -ans = pipeline(model="databricks/dolly-v2-3b", torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto") - -#model1 = gr.Interface.load("huggingface/bigscience/bloom-560m") -#model2 = gr.Interface.load("huggingface/google/flan-t5-xl") -#model3 = gr.Interface.load("huggingface/bigscience/bloomz-7b1") -#model4 = gr.Interface.load("huggingface/EleutherAI/gpt-j-6B") - -#togethercomputer/GPT-NeoXT-Chat-Base-20B -#decapoda-research/llama-7b-hf - -#define functions - -def answer(query): - out=ans(query) - return out - -#def complete_with_gpt(text): -# # Use the last 50 characters of the text as context -# return text[:-50] + model4(text[-50:]) - -#with gr.Blocks() as demo: -# with gr.Row(): -# textbox = gr.Textbox(placeholder=description3, lines=8) -# with gr.Column(): -# btn = gr.Button("Submit") - -# btn.click(complete_with_gpt, textbox, textbox) - -#tab1 = gr.Interface.load("huggingface/gpt2", title=title1, description=description1, examples=examples) -#tab2 = gr.Parallel(model1, model2, model3, inputs=gr.Textbox(lines=5, label="Input explicit or implicit instructions"), title=title2, description=description2, examples=examples) -#tab3 = demo - -#demo1 = gr.TabbedInterface([tab1, tab2, tab3], ["Level 1", "Level 3", "Level 5"], title=title) - -#if __name__ == "__main__": -# demo1.launch(debug=True) -#gr.Interface.from_pipeline(pipe).launch() - -Demo = gr.Interface(fn=answer,inputs='text',outputs='text', title=title, description=description, examples=examples) -Demo.launch() diff --git a/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/fastai/gen_doc/convert2html.py b/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/fastai/gen_doc/convert2html.py deleted file mode 100644 index a3a9ec2c59edd0bad9cf74296c0f8e038cdabdbd..0000000000000000000000000000000000000000 --- a/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/fastai/gen_doc/convert2html.py +++ /dev/null @@ -1,51 +0,0 @@ -import os.path, re, nbformat, jupyter_contrib_nbextensions -from nbconvert.preprocessors import Preprocessor -from nbconvert import HTMLExporter -from traitlets.config import Config -from pathlib import Path - -__all__ = ['read_nb', 'convert_nb', 'convert_all'] - -exporter = HTMLExporter(Config()) -exporter.exclude_input_prompt=True -exporter.exclude_output_prompt=True -#Loads the template to deal with hidden cells. -exporter.template_file = 'jekyll.tpl' -path = Path(__file__).parent -exporter.template_path.append(str(path)) - -def read_nb(fname): - "Read the notebook in `fname`." - with open(fname,'r') as f: return nbformat.reads(f.read(), as_version=4) - -def convert_nb(fname, dest_path='.'): - "Convert a notebook `fname` to html file in `dest_path`." 
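- # The notebook is read, stripped of undocumented cells and widget-state
- # output, rendered through the jekyll.tpl HTMLExporter configured above,
- # and written as <dest_path>/<notebook name>.html with the notebook's
- # jekyll metadata passed along as template resources.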
- from .gen_notebooks import remove_undoc_cells, remove_code_cell_jupyter_widget_state_elem - nb = read_nb(fname) - nb['cells'] = remove_undoc_cells(nb['cells']) - nb['cells'] = remove_code_cell_jupyter_widget_state_elem(nb['cells']) - fname = Path(fname).absolute() - dest_name = fname.with_suffix('.html').name - meta = nb['metadata'] - meta_jekyll = meta['jekyll'] if 'jekyll' in meta else {'title': fname.with_suffix('').name} - meta_jekyll['nb_path'] = f'{fname.parent.name}/{fname.name}' - with open(f'{dest_path}/{dest_name}','w') as f: - f.write(exporter.from_notebook_node(nb, resources=meta_jekyll)[0]) - -def convert_all(folder, dest_path='.', force_all=False): - "Convert modified notebooks in `folder` to html pages in `dest_path`." - path = Path(folder) - - changed_cnt = 0 - for fname in path.glob("*.ipynb"): - # only rebuild modified files - fname_out = Path(dest_path)/fname.with_suffix('.html').name - if not force_all and fname_out.exists(): - in_mod = os.path.getmtime(fname) - out_mod = os.path.getmtime(fname_out) - if in_mod < out_mod: continue - - print(f"converting: {fname} => {fname_out}") - changed_cnt += 1 - convert_nb(fname, dest_path=dest_path) - if not changed_cnt: print("No notebooks were modified") diff --git a/spaces/XzJosh/Gun-Bert-VITS2/bert/chinese-roberta-wwm-ext-large/README.md b/spaces/XzJosh/Gun-Bert-VITS2/bert/chinese-roberta-wwm-ext-large/README.md deleted file mode 100644 index 7bce039b7f81ee328fdf8efe3f14409200aacbef..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Gun-Bert-VITS2/bert/chinese-roberta-wwm-ext-large/README.md +++ /dev/null @@ -1,57 +0,0 @@ ---- -language: -- zh -tags: -- bert -license: "apache-2.0" ---- - -# Please use 'Bert' related functions to load this model! - -## Chinese BERT with Whole Word Masking -For further accelerating Chinese natural language processing, we provide **Chinese pre-trained BERT with Whole Word Masking**. - -**[Pre-Training with Whole Word Masking for Chinese BERT](https://arxiv.org/abs/1906.08101)** -Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu - -This repository is developed based on:https://github.com/google-research/bert - -You may also interested in, -- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm -- Chinese MacBERT: https://github.com/ymcui/MacBERT -- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA -- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet -- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer - -More resources by HFL: https://github.com/ymcui/HFL-Anthology - -## Citation -If you find the technical report or resource is useful, please cite the following technical report in your paper. 
-- Primary: https://arxiv.org/abs/2004.13922 -``` -@inproceedings{cui-etal-2020-revisiting, - title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing", - author = "Cui, Yiming and - Che, Wanxiang and - Liu, Ting and - Qin, Bing and - Wang, Shijin and - Hu, Guoping", - booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", - month = nov, - year = "2020", - address = "Online", - publisher = "Association for Computational Linguistics", - url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58", - pages = "657--668", -} -``` -- Secondary: https://arxiv.org/abs/1906.08101 -``` -@article{chinese-bert-wwm, - title={Pre-Training with Whole Word Masking for Chinese BERT}, - author={Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Yang, Ziqing and Wang, Shijin and Hu, Guoping}, - journal={arXiv preprint arXiv:1906.08101}, - year={2019} - } -``` \ No newline at end of file diff --git a/spaces/XzJosh/TianDou-Bert-VITS2/monotonic_align/__init__.py b/spaces/XzJosh/TianDou-Bert-VITS2/monotonic_align/__init__.py deleted file mode 100644 index 75603d26cf2b8d6196f5a68a89f9e49d8e519bc8..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/TianDou-Bert-VITS2/monotonic_align/__init__.py +++ /dev/null @@ -1,15 +0,0 @@ -from numpy import zeros, int32, float32 -from torch import from_numpy - -from .core import maximum_path_jit - -def maximum_path(neg_cent, mask): - device = neg_cent.device - dtype = neg_cent.dtype - neg_cent = neg_cent.data.cpu().numpy().astype(float32) - path = zeros(neg_cent.shape, dtype=int32) - - t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(int32) - t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(int32) - maximum_path_jit(path, neg_cent, t_t_max, t_s_max) - return from_numpy(path).to(device=device, dtype=dtype) diff --git a/spaces/YONG627/456123/yolov5-code-main/segment/train.py b/spaces/YONG627/456123/yolov5-code-main/segment/train.py deleted file mode 100644 index 43d13c007dd3755d94f73ada4718e042dfa17f39..0000000000000000000000000000000000000000 --- a/spaces/YONG627/456123/yolov5-code-main/segment/train.py +++ /dev/null @@ -1,665 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -Train a YOLOv5 segment model on a segment dataset -Models and datasets download automatically from the latest YOLOv5 release. 
- -Usage - Single-GPU training: - $ python segment/train.py --data coco128-seg.yaml --weights yolov5s-seg.pt --img 640 # from pretrained (recommended) - $ python segment/train.py --data coco128-seg.yaml --weights '' --cfg yolov5s-seg.yaml --img 640 # from scratch - -Usage - Multi-GPU DDP training: - $ python -m torch.distributed.run --nproc_per_node 4 --master_port 1 segment/train.py --data coco128-seg.yaml --weights yolov5s-seg.pt --img 640 --device 0,1,2,3 - -Models: https://github.com/ultralytics/yolov5/tree/master/models -Datasets: https://github.com/ultralytics/yolov5/tree/master/data -Tutorial: https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data -""" - -import argparse -import math -import os -os.environ["GIT_PYTHON_REFRESH"] = "quiet" -import random -import subprocess -import sys -import time -from copy import deepcopy -from datetime import datetime -from pathlib import Path - -import numpy as np -import torch -import torch.distributed as dist -import torch.nn as nn -import yaml -from torch.optim import lr_scheduler -from tqdm import tqdm - -FILE = Path(__file__).resolve() -ROOT = FILE.parents[1] # YOLOv5 root directory -if str(ROOT) not in sys.path: - sys.path.append(str(ROOT)) # add ROOT to PATH -ROOT = Path(os.path.relpath(ROOT, Path.cwd())) # relative - -import segment.val as validate # for end-of-epoch mAP -from models.experimental import attempt_load -from models.yolo import SegmentationModel -from utils.autoanchor import check_anchors -from utils.autobatch import check_train_batch_size -from utils.callbacks import Callbacks -from utils.downloads import attempt_download, is_url -from utils.general import (LOGGER, TQDM_BAR_FORMAT, check_amp, check_dataset, check_file, check_git_info, - check_git_status, check_img_size, check_requirements, check_suffix, check_yaml, colorstr, - get_latest_run, increment_path, init_seeds, intersect_dicts, labels_to_class_weights, - labels_to_image_weights, one_cycle, print_args, print_mutation, strip_optimizer, yaml_save) -from utils.loggers import GenericLogger -from utils.plots import plot_evolve, plot_labels -from utils.segment.dataloaders import create_dataloader -from utils.segment.loss import ComputeLoss -from utils.segment.metrics import KEYS, fitness -from utils.segment.plots import plot_images_and_masks, plot_results_with_masks -from utils.torch_utils import (EarlyStopping, ModelEMA, de_parallel, select_device, smart_DDP, smart_optimizer, - smart_resume, torch_distributed_zero_first) - -LOCAL_RANK = int(os.getenv('LOCAL_RANK', -1)) # https://pytorch.org/docs/stable/elastic/run.html -RANK = int(os.getenv('RANK', -1)) -WORLD_SIZE = int(os.getenv('WORLD_SIZE', 1)) -GIT_INFO = check_git_info() - - -def train(hyp, opt, device, callbacks): # hyp is path/to/hyp.yaml or hyp dictionary - save_dir, epochs, batch_size, weights, single_cls, evolve, data, cfg, resume, noval, nosave, workers, freeze, mask_ratio = \ - Path(opt.save_dir), opt.epochs, opt.batch_size, opt.weights, opt.single_cls, opt.evolve, opt.data, opt.cfg, \ - opt.resume, opt.noval, opt.nosave, opt.workers, opt.freeze, opt.mask_ratio - # callbacks.run('on_pretrain_routine_start') - - # Directories - w = save_dir / 'weights' # weights dir - (w.parent if evolve else w).mkdir(parents=True, exist_ok=True) # make dir - last, best = w / 'last.pt', w / 'best.pt' - - # Hyperparameters - if isinstance(hyp, str): - with open(hyp, errors='ignore') as f: - hyp = yaml.safe_load(f) # load hyps dict - LOGGER.info(colorstr('hyperparameters: ') + ', '.join(f'{k}={v}' for k, v in hyp.items())) 
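- # At this point `hyp` is always a plain dict: it may have been passed in as a
- # path to a YAML file (e.g. data/hyps/hyp.scratch-low.yaml) or as an already
- # parsed dict during hyperparameter evolution.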
- opt.hyp = hyp.copy() # for saving hyps to checkpoints - - # Save run settings - if not evolve: - yaml_save(save_dir / 'hyp.yaml', hyp) - yaml_save(save_dir / 'opt.yaml', vars(opt)) - - # Loggers - data_dict = None - if RANK in {-1, 0}: - logger = GenericLogger(opt=opt, console_logger=LOGGER) - - # Config - plots = not evolve and not opt.noplots # create plots - overlap = not opt.no_overlap - cuda = device.type != 'cpu' - init_seeds(opt.seed + 1 + RANK, deterministic=True) - with torch_distributed_zero_first(LOCAL_RANK): - data_dict = data_dict or check_dataset(data) # check if None - train_path, val_path = data_dict['train'], data_dict['val'] - nc = 1 if single_cls else int(data_dict['nc']) # number of classes - names = {0: 'item'} if single_cls and len(data_dict['names']) != 1 else data_dict['names'] # class names - is_coco = isinstance(val_path, str) and val_path.endswith('coco/val2017.txt') # COCO dataset - - # Model - check_suffix(weights, '.pt') # check weights - pretrained = weights.endswith('.pt') - if pretrained: - with torch_distributed_zero_first(LOCAL_RANK): - weights = attempt_download(weights) # download if not found locally - ckpt = torch.load(weights, map_location='cpu') # load checkpoint to CPU to avoid CUDA memory leak - model = SegmentationModel(cfg or ckpt['model'].yaml, ch=3, nc=nc, anchors=hyp.get('anchors')).to(device) - exclude = ['anchor'] if (cfg or hyp.get('anchors')) and not resume else [] # exclude keys - csd = ckpt['model'].float().state_dict() # checkpoint state_dict as FP32 - csd = intersect_dicts(csd, model.state_dict(), exclude=exclude) # intersect - model.load_state_dict(csd, strict=False) # load - LOGGER.info(f'Transferred {len(csd)}/{len(model.state_dict())} items from {weights}') # report - else: - model = SegmentationModel(cfg, ch=3, nc=nc, anchors=hyp.get('anchors')).to(device) # create - amp = check_amp(model) # check AMP - - # Freeze - freeze = [f'model.{x}.' 
for x in (freeze if len(freeze) > 1 else range(freeze[0]))] # layers to freeze - for k, v in model.named_parameters(): - v.requires_grad = True # train all layers - # v.register_hook(lambda x: torch.nan_to_num(x)) # NaN to 0 (commented for erratic training results) - if any(x in k for x in freeze): - LOGGER.info(f'freezing {k}') - v.requires_grad = False - - # Image size - gs = max(int(model.stride.max()), 32) # grid size (max stride) - imgsz = check_img_size(opt.imgsz, gs, floor=gs * 2) # verify imgsz is gs-multiple - - # Batch size - if RANK == -1 and batch_size == -1: # single-GPU only, estimate best batch size - batch_size = check_train_batch_size(model, imgsz, amp) - logger.update_params({'batch_size': batch_size}) - # loggers.on_params_update({"batch_size": batch_size}) - - # Optimizer - nbs = 64 # nominal batch size - accumulate = max(round(nbs / batch_size), 1) # accumulate loss before optimizing - hyp['weight_decay'] *= batch_size * accumulate / nbs # scale weight_decay - optimizer = smart_optimizer(model, opt.optimizer, hyp['lr0'], hyp['momentum'], hyp['weight_decay']) - - # Scheduler - if opt.cos_lr: - lf = one_cycle(1, hyp['lrf'], epochs) # cosine 1->hyp['lrf'] - else: - lf = lambda x: (1 - x / epochs) * (1.0 - hyp['lrf']) + hyp['lrf'] # linear - scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lf) # plot_lr_scheduler(optimizer, scheduler, epochs) - - # EMA - ema = ModelEMA(model) if RANK in {-1, 0} else None - - # Resume - best_fitness, start_epoch = 0.0, 0 - if pretrained: - if resume: - best_fitness, start_epoch, epochs = smart_resume(ckpt, optimizer, ema, weights, epochs, resume) - del ckpt, csd - - # DP mode - if cuda and RANK == -1 and torch.cuda.device_count() > 1: - LOGGER.warning('WARNING ⚠️ DP not recommended, use torch.distributed.run for best DDP Multi-GPU results.\n' - 'See Multi-GPU Tutorial at https://github.com/ultralytics/yolov5/issues/475 to get started.') - model = torch.nn.DataParallel(model) - - # SyncBatchNorm - if opt.sync_bn and cuda and RANK != -1: - model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model).to(device) - LOGGER.info('Using SyncBatchNorm()') - - # Trainloader - train_loader, dataset = create_dataloader( - train_path, - imgsz, - batch_size // WORLD_SIZE, - gs, - single_cls, - hyp=hyp, - augment=True, - cache=None if opt.cache == 'val' else opt.cache, - rect=opt.rect, - rank=LOCAL_RANK, - workers=workers, - image_weights=opt.image_weights, - quad=opt.quad, - prefix=colorstr('train: '), - shuffle=True, - mask_downsample_ratio=mask_ratio, - overlap_mask=overlap, - ) - labels = np.concatenate(dataset.labels, 0) - mlc = int(labels[:, 0].max()) # max label class - assert mlc < nc, f'Label class {mlc} exceeds nc={nc} in {data}. 
Possible class labels are 0-{nc - 1}' - - # Process 0 - if RANK in {-1, 0}: - val_loader = create_dataloader(val_path, - imgsz, - batch_size // WORLD_SIZE * 2, - gs, - single_cls, - hyp=hyp, - cache=None if noval else opt.cache, - rect=True, - rank=-1, - workers=workers * 2, - pad=0.5, - mask_downsample_ratio=mask_ratio, - overlap_mask=overlap, - prefix=colorstr('val: '))[0] - - if not resume: - if not opt.noautoanchor: - check_anchors(dataset, model=model, thr=hyp['anchor_t'], imgsz=imgsz) # run AutoAnchor - model.half().float() # pre-reduce anchor precision - - if plots: - plot_labels(labels, names, save_dir) - # callbacks.run('on_pretrain_routine_end', labels, names) - - # DDP mode - if cuda and RANK != -1: - model = smart_DDP(model) - - # Model attributes - nl = de_parallel(model).model[-1].nl # number of detection layers (to scale hyps) - hyp['box'] *= 3 / nl # scale to layers - hyp['cls'] *= nc / 80 * 3 / nl # scale to classes and layers - hyp['obj'] *= (imgsz / 640) ** 2 * 3 / nl # scale to image size and layers - hyp['label_smoothing'] = opt.label_smoothing - model.nc = nc # attach number of classes to model - model.hyp = hyp # attach hyperparameters to model - model.class_weights = labels_to_class_weights(dataset.labels, nc).to(device) * nc # attach class weights - model.names = names - - # Start training - t0 = time.time() - nb = len(train_loader) # number of batches - nw = max(round(hyp['warmup_epochs'] * nb), 100) # number of warmup iterations, max(3 epochs, 100 iterations) - # nw = min(nw, (epochs - start_epoch) / 2 * nb) # limit warmup to < 1/2 of training - last_opt_step = -1 - maps = np.zeros(nc) # mAP per class - results = (0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) # P, R, mAP@.5, mAP@.5-.95, val_loss(box, obj, cls) - scheduler.last_epoch = start_epoch - 1 # do not move - scaler = torch.cuda.amp.GradScaler(enabled=amp) - stopper, stop = EarlyStopping(patience=opt.patience), False - compute_loss = ComputeLoss(model, overlap=overlap) # init loss class - # callbacks.run('on_train_start') - LOGGER.info(f'Image sizes {imgsz} train, {imgsz} val\n' - f'Using {train_loader.num_workers * WORLD_SIZE} dataloader workers\n' - f"Logging results to {colorstr('bold', save_dir)}\n" - f'Starting training for {epochs} epochs...') - for epoch in range(start_epoch, epochs): # epoch ------------------------------------------------------------------ - # callbacks.run('on_train_epoch_start') - model.train() - - # Update image weights (optional, single-GPU only) - if opt.image_weights: - cw = model.class_weights.cpu().numpy() * (1 - maps) ** 2 / nc # class weights - iw = labels_to_image_weights(dataset.labels, nc=nc, class_weights=cw) # image weights - dataset.indices = random.choices(range(dataset.n), weights=iw, k=dataset.n) # rand weighted idx - - # Update mosaic border (optional) - # b = int(random.uniform(0.25 * imgsz, 0.75 * imgsz + gs) // gs * gs) - # dataset.mosaic_border = [b - imgsz, -b] # height, width borders - - mloss = torch.zeros(4, device=device) # mean losses - if RANK != -1: - train_loader.sampler.set_epoch(epoch) - pbar = enumerate(train_loader) - LOGGER.info(('\n' + '%11s' * 8) % - ('Epoch', 'GPU_mem', 'box_loss', 'seg_loss', 'obj_loss', 'cls_loss', 'Instances', 'Size')) - if RANK in {-1, 0}: - pbar = tqdm(pbar, total=nb, bar_format=TQDM_BAR_FORMAT) # progress bar - optimizer.zero_grad() - for i, (imgs, targets, paths, _, masks) in pbar: # batch ------------------------------------------------------ - # callbacks.run('on_train_batch_start') - ni = i + nb * epoch # number integrated 
batches (since train start) - imgs = imgs.to(device, non_blocking=True).float() / 255 # uint8 to float32, 0-255 to 0.0-1.0 - - # Warmup - if ni <= nw: - xi = [0, nw] # x interp - # compute_loss.gr = np.interp(ni, xi, [0.0, 1.0]) # iou loss ratio (obj_loss = 1.0 or iou) - accumulate = max(1, np.interp(ni, xi, [1, nbs / batch_size]).round()) - for j, x in enumerate(optimizer.param_groups): - # bias lr falls from 0.1 to lr0, all other lrs rise from 0.0 to lr0 - x['lr'] = np.interp(ni, xi, [hyp['warmup_bias_lr'] if j == 0 else 0.0, x['initial_lr'] * lf(epoch)]) - if 'momentum' in x: - x['momentum'] = np.interp(ni, xi, [hyp['warmup_momentum'], hyp['momentum']]) - - # Multi-scale - if opt.multi_scale: - sz = random.randrange(imgsz * 0.5, imgsz * 1.5 + gs) // gs * gs # size - sf = sz / max(imgs.shape[2:]) # scale factor - if sf != 1: - ns = [math.ceil(x * sf / gs) * gs for x in imgs.shape[2:]] # new shape (stretched to gs-multiple) - imgs = nn.functional.interpolate(imgs, size=ns, mode='bilinear', align_corners=False) - - # Forward - with torch.cuda.amp.autocast(amp): - pred = model(imgs) # forward - loss, loss_items = compute_loss(pred, targets.to(device), masks=masks.to(device).float()) - if RANK != -1: - loss *= WORLD_SIZE # gradient averaged between devices in DDP mode - if opt.quad: - loss *= 4. - - # Backward - scaler.scale(loss).backward() - - # Optimize - https://pytorch.org/docs/master/notes/amp_examples.html - if ni - last_opt_step >= accumulate: - scaler.unscale_(optimizer) # unscale gradients - torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=10.0) # clip gradients - scaler.step(optimizer) # optimizer.step - scaler.update() - optimizer.zero_grad() - if ema: - ema.update(model) - last_opt_step = ni - - # Log - if RANK in {-1, 0}: - mloss = (mloss * i + loss_items) / (i + 1) # update mean losses - mem = f'{torch.cuda.memory_reserved() / 1E9 if torch.cuda.is_available() else 0:.3g}G' # (GB) - pbar.set_description(('%11s' * 2 + '%11.4g' * 6) % - (f'{epoch}/{epochs - 1}', mem, *mloss, targets.shape[0], imgs.shape[-1])) - # callbacks.run('on_train_batch_end', model, ni, imgs, targets, paths) - # if callbacks.stop_training: - # return - - # Mosaic plots - if plots: - if ni < 3: - plot_images_and_masks(imgs, targets, masks, paths, save_dir / f'train_batch{ni}.jpg') - if ni == 10: - files = sorted(save_dir.glob('train*.jpg')) - logger.log_images(files, 'Mosaics', epoch) - # end batch ------------------------------------------------------------------------------------------------ - - # Scheduler - lr = [x['lr'] for x in optimizer.param_groups] # for loggers - scheduler.step() - - if RANK in {-1, 0}: - # mAP - # callbacks.run('on_train_epoch_end', epoch=epoch) - ema.update_attr(model, include=['yaml', 'nc', 'hyp', 'names', 'stride', 'class_weights']) - final_epoch = (epoch + 1 == epochs) or stopper.possible_stop - if not noval or final_epoch: # Calculate mAP - results, maps, _ = validate.run(data_dict, - batch_size=batch_size // WORLD_SIZE * 2, - imgsz=imgsz, - half=amp, - model=ema.ema, - single_cls=single_cls, - dataloader=val_loader, - save_dir=save_dir, - plots=False, - callbacks=callbacks, - compute_loss=compute_loss, - mask_downsample_ratio=mask_ratio, - overlap=overlap) - - # Update best mAP - fi = fitness(np.array(results).reshape(1, -1)) # weighted combination of [P, R, mAP@.5, mAP@.5-.95] - stop = stopper(epoch=epoch, fitness=fi) # early stop check - if fi > best_fitness: - best_fitness = fi - log_vals = list(mloss) + list(results) + lr - # callbacks.run('on_fit_epoch_end', 
log_vals, epoch, best_fitness, fi) - # Log val metrics and media - metrics_dict = dict(zip(KEYS, log_vals)) - logger.log_metrics(metrics_dict, epoch) - - # Save model - if (not nosave) or (final_epoch and not evolve): # if save - ckpt = { - 'epoch': epoch, - 'best_fitness': best_fitness, - 'model': deepcopy(de_parallel(model)).half(), - 'ema': deepcopy(ema.ema).half(), - 'updates': ema.updates, - 'optimizer': optimizer.state_dict(), - 'opt': vars(opt), - 'git': GIT_INFO, # {remote, branch, commit} if a git repo - 'date': datetime.now().isoformat()} - - # Save last, best and delete - torch.save(ckpt, last) - if best_fitness == fi: - torch.save(ckpt, best) - if opt.save_period > 0 and epoch % opt.save_period == 0: - torch.save(ckpt, w / f'epoch{epoch}.pt') - logger.log_model(w / f'epoch{epoch}.pt') - del ckpt - # callbacks.run('on_model_save', last, epoch, final_epoch, best_fitness, fi) - - # EarlyStopping - if RANK != -1: # if DDP training - broadcast_list = [stop if RANK == 0 else None] - dist.broadcast_object_list(broadcast_list, 0) # broadcast 'stop' to all ranks - if RANK != 0: - stop = broadcast_list[0] - if stop: - break # must break all DDP ranks - - # end epoch ---------------------------------------------------------------------------------------------------- - # end training ----------------------------------------------------------------------------------------------------- - if RANK in {-1, 0}: - LOGGER.info(f'\n{epoch - start_epoch + 1} epochs completed in {(time.time() - t0) / 3600:.3f} hours.') - for f in last, best: - if f.exists(): - strip_optimizer(f) # strip optimizers - if f is best: - LOGGER.info(f'\nValidating {f}...') - results, _, _ = validate.run( - data_dict, - batch_size=batch_size // WORLD_SIZE * 2, - imgsz=imgsz, - model=attempt_load(f, device).half(), - iou_thres=0.65 if is_coco else 0.60, # best pycocotools at iou 0.65 - single_cls=single_cls, - dataloader=val_loader, - save_dir=save_dir, - save_json=is_coco, - verbose=True, - plots=plots, - callbacks=callbacks, - compute_loss=compute_loss, - mask_downsample_ratio=mask_ratio, - overlap=overlap) # val best model with plots - if is_coco: - # callbacks.run('on_fit_epoch_end', list(mloss) + list(results) + lr, epoch, best_fitness, fi) - metrics_dict = dict(zip(KEYS, list(mloss) + list(results) + lr)) - logger.log_metrics(metrics_dict, epoch) - - # callbacks.run('on_train_end', last, best, epoch, results) - # on train end callback using genericLogger - logger.log_metrics(dict(zip(KEYS[4:16], results)), epochs) - if not opt.evolve: - logger.log_model(best, epoch) - if plots: - plot_results_with_masks(file=save_dir / 'results.csv') # save results.png - files = ['results.png', 'confusion_matrix.png', *(f'{x}_curve.png' for x in ('F1', 'PR', 'P', 'R'))] - files = [(save_dir / f) for f in files if (save_dir / f).exists()] # filter - LOGGER.info(f"Results saved to {colorstr('bold', save_dir)}") - logger.log_images(files, 'Results', epoch + 1) - logger.log_images(sorted(save_dir.glob('val*.jpg')), 'Validation', epoch + 1) - torch.cuda.empty_cache() - return results - - -def parse_opt(known=False): - parser = argparse.ArgumentParser() - parser.add_argument('--weights', type=str, default=ROOT / 'yolov5s-seg.pt', help='initial weights path') - parser.add_argument('--cfg', type=str, default='', help='model.yaml path') - parser.add_argument('--data', type=str, default="C:/Users/yong/Desktop/yolov5-code-main/data/coco128-seg.yaml", help='dataset.yaml path') - parser.add_argument('--hyp', type=str, default=ROOT / 
'data/hyps/hyp.scratch-low.yaml', help='hyperparameters path') - parser.add_argument('--epochs', type=int, default=25, help='total training epochs') - parser.add_argument('--batch-size', type=int, default=1, help='total batch size for all GPUs, -1 for autobatch') - parser.add_argument('--imgsz', '--img', '--img-size', type=int, default=640, help='train, val image size (pixels)') - parser.add_argument('--rect', action='store_true', help='rectangular training') - parser.add_argument('--resume', nargs='?', const=True, default=False, help='resume most recent training') - parser.add_argument('--nosave', action='store_true', help='only save final checkpoint') - parser.add_argument('--noval', action='store_true', help='only validate final epoch') - parser.add_argument('--noautoanchor', action='store_true', help='disable AutoAnchor') - parser.add_argument('--noplots', action='store_true', help='save no plot files') - parser.add_argument('--evolve', type=int, nargs='?', const=300, help='evolve hyperparameters for x generations') - parser.add_argument('--bucket', type=str, default='', help='gsutil bucket') - parser.add_argument('--cache', type=str, nargs='?', const='ram', help='image --cache ram/disk') - parser.add_argument('--image-weights', action='store_true', help='use weighted image selection for training') - parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu') - parser.add_argument('--multi-scale', action='store_true', help='vary img-size +/- 50%%') - parser.add_argument('--single-cls', action='store_true', help='train multi-class data as single-class') - parser.add_argument('--optimizer', type=str, choices=['SGD', 'Adam', 'AdamW'], default='SGD', help='optimizer') - parser.add_argument('--sync-bn', action='store_true', help='use SyncBatchNorm, only available in DDP mode') - parser.add_argument('--workers', type=int, default=8, help='max dataloader workers (per RANK in DDP mode)') - parser.add_argument('--project', default=ROOT / 'runs/train-seg', help='save to project/name') - parser.add_argument('--name', default='exp', help='save to project/name') - parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment') - parser.add_argument('--quad', action='store_true', help='quad dataloader') - parser.add_argument('--cos-lr', action='store_true', help='cosine LR scheduler') - parser.add_argument('--label-smoothing', type=float, default=0.0, help='Label smoothing epsilon') - parser.add_argument('--patience', type=int, default=100, help='EarlyStopping patience (epochs without improvement)') - parser.add_argument('--freeze', nargs='+', type=int, default=[0], help='Freeze layers: backbone=10, first3=0 1 2') - parser.add_argument('--save-period', type=int, default=-1, help='Save checkpoint every x epochs (disabled if < 1)') - parser.add_argument('--seed', type=int, default=0, help='Global training seed') - parser.add_argument('--local_rank', type=int, default=-1, help='Automatic DDP Multi-GPU argument, do not modify') - - # Instance Segmentation Args - parser.add_argument('--mask-ratio', type=int, default=4, help='Downsample the truth masks to saving memory') - parser.add_argument('--no-overlap', action='store_true', help='Overlap masks train faster at slightly less mAP') - - return parser.parse_known_args()[0] if known else parser.parse_args() - - -def main(opt, callbacks=Callbacks()): - # Checks - if RANK in {-1, 0}: - print_args(vars(opt)) - check_git_status() - check_requirements() - - # Resume - if opt.resume and not 
opt.evolve: # resume from specified or most recent last.pt - last = Path(check_file(opt.resume) if isinstance(opt.resume, str) else get_latest_run()) - opt_yaml = last.parent.parent / 'opt.yaml' # train options yaml - opt_data = opt.data # original dataset - if opt_yaml.is_file(): - with open(opt_yaml, errors='ignore') as f: - d = yaml.safe_load(f) - else: - d = torch.load(last, map_location='cpu')['opt'] - opt = argparse.Namespace(**d) # replace - opt.cfg, opt.weights, opt.resume = '', str(last), True # reinstate - if is_url(opt_data): - opt.data = check_file(opt_data) # avoid HUB resume auth timeout - else: - opt.data, opt.cfg, opt.hyp, opt.weights, opt.project = \ - check_file(opt.data), check_yaml(opt.cfg), check_yaml(opt.hyp), str(opt.weights), str(opt.project) # checks - assert len(opt.cfg) or len(opt.weights), 'either --cfg or --weights must be specified' - if opt.evolve: - if opt.project == str(ROOT / 'runs/train'): # if default project name, rename to runs/evolve - opt.project = str(ROOT / 'runs/evolve') - opt.exist_ok, opt.resume = opt.resume, False # pass resume to exist_ok and disable resume - if opt.name == 'cfg': - opt.name = Path(opt.cfg).stem # use model.yaml as name - opt.save_dir = str(increment_path(Path(opt.project) / opt.name, exist_ok=opt.exist_ok)) - - # DDP mode - device = select_device(opt.device, batch_size=opt.batch_size) - if LOCAL_RANK != -1: - msg = 'is not compatible with YOLOv5 Multi-GPU DDP training' - assert not opt.image_weights, f'--image-weights {msg}' - assert not opt.evolve, f'--evolve {msg}' - assert opt.batch_size != -1, f'AutoBatch with --batch-size -1 {msg}, please pass a valid --batch-size' - assert opt.batch_size % WORLD_SIZE == 0, f'--batch-size {opt.batch_size} must be multiple of WORLD_SIZE' - assert torch.cuda.device_count() > LOCAL_RANK, 'insufficient CUDA devices for DDP command' - torch.cuda.set_device(LOCAL_RANK) - device = torch.device('cuda', LOCAL_RANK) - dist.init_process_group(backend='nccl' if dist.is_nccl_available() else 'gloo') - - # Train - if not opt.evolve: - train(opt.hyp, opt, device, callbacks) - - # Evolve hyperparameters (optional) - else: - # Hyperparameter evolution metadata (mutation scale 0-1, lower_limit, upper_limit) - meta = { - 'lr0': (1, 1e-5, 1e-1), # initial learning rate (SGD=1E-2, Adam=1E-3) - 'lrf': (1, 0.01, 1.0), # final OneCycleLR learning rate (lr0 * lrf) - 'momentum': (0.3, 0.6, 0.98), # SGD momentum/Adam beta1 - 'weight_decay': (1, 0.0, 0.001), # optimizer weight decay - 'warmup_epochs': (1, 0.0, 5.0), # warmup epochs (fractions ok) - 'warmup_momentum': (1, 0.0, 0.95), # warmup initial momentum - 'warmup_bias_lr': (1, 0.0, 0.2), # warmup initial bias lr - 'box': (1, 0.02, 0.2), # box loss gain - 'cls': (1, 0.2, 4.0), # cls loss gain - 'cls_pw': (1, 0.5, 2.0), # cls BCELoss positive_weight - 'obj': (1, 0.2, 4.0), # obj loss gain (scale with pixels) - 'obj_pw': (1, 0.5, 2.0), # obj BCELoss positive_weight - 'iou_t': (0, 0.1, 0.7), # IoU training threshold - 'anchor_t': (1, 2.0, 8.0), # anchor-multiple threshold - 'anchors': (2, 2.0, 10.0), # anchors per output grid (0 to ignore) - 'fl_gamma': (0, 0.0, 2.0), # focal loss gamma (efficientDet default gamma=1.5) - 'hsv_h': (1, 0.0, 0.1), # image HSV-Hue augmentation (fraction) - 'hsv_s': (1, 0.0, 0.9), # image HSV-Saturation augmentation (fraction) - 'hsv_v': (1, 0.0, 0.9), # image HSV-Value augmentation (fraction) - 'degrees': (1, 0.0, 45.0), # image rotation (+/- deg) - 'translate': (1, 0.0, 0.9), # image translation (+/- fraction) - 'scale': (1, 0.0, 
0.9), # image scale (+/- gain) - 'shear': (1, 0.0, 10.0), # image shear (+/- deg) - 'perspective': (0, 0.0, 0.001), # image perspective (+/- fraction), range 0-0.001 - 'flipud': (1, 0.0, 1.0), # image flip up-down (probability) - 'fliplr': (0, 0.0, 1.0), # image flip left-right (probability) - 'mosaic': (1, 0.0, 1.0), # image mixup (probability) - 'mixup': (1, 0.0, 1.0), # image mixup (probability) - 'copy_paste': (1, 0.0, 1.0)} # segment copy-paste (probability) - - with open(opt.hyp, errors='ignore') as f: - hyp = yaml.safe_load(f) # load hyps dict - if 'anchors' not in hyp: # anchors commented in hyp.yaml - hyp['anchors'] = 3 - if opt.noautoanchor: - del hyp['anchors'], meta['anchors'] - opt.noval, opt.nosave, save_dir = True, True, Path(opt.save_dir) # only val/save final epoch - # ei = [isinstance(x, (int, float)) for x in hyp.values()] # evolvable indices - evolve_yaml, evolve_csv = save_dir / 'hyp_evolve.yaml', save_dir / 'evolve.csv' - if opt.bucket: - # download evolve.csv if exists - subprocess.run([ - 'gsutil', - 'cp', - f'gs://{opt.bucket}/evolve.csv', - str(evolve_csv),]) - - for _ in range(opt.evolve): # generations to evolve - if evolve_csv.exists(): # if evolve.csv exists: select best hyps and mutate - # Select parent(s) - parent = 'single' # parent selection method: 'single' or 'weighted' - x = np.loadtxt(evolve_csv, ndmin=2, delimiter=',', skiprows=1) - n = min(5, len(x)) # number of previous results to consider - x = x[np.argsort(-fitness(x))][:n] # top n mutations - w = fitness(x) - fitness(x).min() + 1E-6 # weights (sum > 0) - if parent == 'single' or len(x) == 1: - # x = x[random.randint(0, n - 1)] # random selection - x = x[random.choices(range(n), weights=w)[0]] # weighted selection - elif parent == 'weighted': - x = (x * w.reshape(n, 1)).sum(0) / w.sum() # weighted combination - - # Mutate - mp, s = 0.8, 0.2 # mutation probability, sigma - npr = np.random - npr.seed(int(time.time())) - g = np.array([meta[k][0] for k in hyp.keys()]) # gains 0-1 - ng = len(meta) - v = np.ones(ng) - while all(v == 1): # mutate until a change occurs (prevent duplicates) - v = (g * (npr.random(ng) < mp) * npr.randn(ng) * npr.random() * s + 1).clip(0.3, 3.0) - for i, k in enumerate(hyp.keys()): # plt.hist(v.ravel(), 300) - hyp[k] = float(x[i + 7] * v[i]) # mutate - - # Constrain to limits - for k, v in meta.items(): - hyp[k] = max(hyp[k], v[1]) # lower limit - hyp[k] = min(hyp[k], v[2]) # upper limit - hyp[k] = round(hyp[k], 5) # significant digits - - # Train mutation - results = train(hyp.copy(), opt, device, callbacks) - callbacks = Callbacks() - # Write mutation results - print_mutation(KEYS, results, hyp.copy(), save_dir, opt.bucket) - - # Plot results - plot_evolve(evolve_csv) - LOGGER.info(f'Hyperparameter evolution finished {opt.evolve} generations\n' - f"Results saved to {colorstr('bold', save_dir)}\n" - f'Usage example: $ python train.py --hyp {evolve_yaml}') - - -def run(**kwargs): - # Usage: import train; train.run(data='coco128.yaml', imgsz=320, weights='yolov5m.pt') - opt = parse_opt(True) - for k, v in kwargs.items(): - setattr(opt, k, v) - main(opt) - return opt - - -if __name__ == '__main__': - opt = parse_opt() - main(opt) diff --git a/spaces/Yuliang/ICON/lib/common/render_utils.py b/spaces/Yuliang/ICON/lib/common/render_utils.py deleted file mode 100644 index 09b38cadc8a5b66d765f9f62596709fa7325c773..0000000000000000000000000000000000000000 --- a/spaces/Yuliang/ICON/lib/common/render_utils.py +++ /dev/null @@ -1,221 +0,0 @@ - -# -*- coding: utf-8 -*- - -# 
Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V. (MPG) is -# holder of all proprietary rights on this computer program. -# You can only use this computer program if you have closed -# a license agreement with MPG or you get the right to use the computer -# program from someone who is authorized to grant you that right. -# Any use of the computer program without a valid license is prohibited and -# liable to prosecution. -# -# Copyright©2019 Max-Planck-Gesellschaft zur Förderung -# der Wissenschaften e.V. (MPG). acting on behalf of its Max Planck Institute -# for Intelligent Systems. All rights reserved. -# -# Contact: ps-license@tuebingen.mpg.de - -import torch -from torch import nn -import trimesh -import math -from typing import NewType -from pytorch3d.structures import Meshes -from pytorch3d.renderer.mesh import rasterize_meshes - -Tensor = NewType('Tensor', torch.Tensor) - - -def solid_angles(points: Tensor, - triangles: Tensor, - thresh: float = 1e-8) -> Tensor: - ''' Compute solid angle between the input points and triangles - Follows the method described in: - The Solid Angle of a Plane Triangle - A. VAN OOSTEROM AND J. STRACKEE - IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING, - VOL. BME-30, NO. 2, FEBRUARY 1983 - Parameters - ----------- - points: BxQx3 - Tensor of input query points - triangles: BxFx3x3 - Target triangles - thresh: float - float threshold - Returns - ------- - solid_angles: BxQxF - A tensor containing the solid angle between all query points - and input triangles - ''' - # Center the triangles on the query points. Size should be BxQxFx3x3 - centered_tris = triangles[:, None] - points[:, :, None, None] - - # BxQxFx3 - norms = torch.norm(centered_tris, dim=-1) - - # Should be BxQxFx3 - cross_prod = torch.cross(centered_tris[:, :, :, 1], - centered_tris[:, :, :, 2], - dim=-1) - # Should be BxQxF - numerator = (centered_tris[:, :, :, 0] * cross_prod).sum(dim=-1) - del cross_prod - - dot01 = (centered_tris[:, :, :, 0] * centered_tris[:, :, :, 1]).sum(dim=-1) - dot12 = (centered_tris[:, :, :, 1] * centered_tris[:, :, :, 2]).sum(dim=-1) - dot02 = (centered_tris[:, :, :, 0] * centered_tris[:, :, :, 2]).sum(dim=-1) - del centered_tris - - denominator = (norms.prod(dim=-1) + dot01 * norms[:, :, :, 2] + - dot02 * norms[:, :, :, 1] + dot12 * norms[:, :, :, 0]) - del dot01, dot12, dot02, norms - - # Should be BxQ - solid_angle = torch.atan2(numerator, denominator) - del numerator, denominator - - torch.cuda.empty_cache() - - return 2 * solid_angle - - -def winding_numbers(points: Tensor, - triangles: Tensor, - thresh: float = 1e-8) -> Tensor: - ''' Uses winding_numbers to compute inside/outside - Robust inside-outside segmentation using generalized winding numbers - Alec Jacobson, - Ladislav Kavan, - Olga Sorkine-Hornung - Fast Winding Numbers for Soups and Clouds SIGGRAPH 2018 - Gavin Barill - NEIL G. Dickson - Ryan Schmidt - David I.W. Levin - and Alec Jacobson - Parameters - ----------- - points: BxQx3 - Tensor of input query points - triangles: BxFx3x3 - Target triangles - thresh: float - float threshold - Returns - ------- - winding_numbers: BxQ - A tensor containing the Generalized winding numbers - ''' - # The generalized winding number is the sum of solid angles of the point - # with respect to all triangles. 
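-    # For a watertight mesh the result is close to 1 for query points inside the
-    # surface and close to 0 outside (fractional values appear for open or noisy
-    # meshes), so a 0.5 threshold gives an approximate inside/outside test,
-    # e.g. inside = winding_numbers(points, triangles) > 0.5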
- return 1 / (4 * math.pi) * solid_angles(points, triangles, - thresh=thresh).sum(dim=-1) - - -def batch_contains(verts, faces, points): - - B = verts.shape[0] - N = points.shape[1] - - verts = verts.detach().cpu() - faces = faces.detach().cpu() - points = points.detach().cpu() - contains = torch.zeros(B, N) - - for i in range(B): - contains[i] = torch.as_tensor( - trimesh.Trimesh(verts[i], faces[i]).contains(points[i])) - - return 2.0 * (contains - 0.5) - - -def dict2obj(d): - # if isinstance(d, list): - # d = [dict2obj(x) for x in d] - if not isinstance(d, dict): - return d - - class C(object): - pass - - o = C() - for k in d: - o.__dict__[k] = dict2obj(d[k]) - return o - - -def face_vertices(vertices, faces): - """ - :param vertices: [batch size, number of vertices, 3] - :param faces: [batch size, number of faces, 3] - :return: [batch size, number of faces, 3, 3] - """ - - bs, nv = vertices.shape[:2] - bs, nf = faces.shape[:2] - device = vertices.device - faces = faces + (torch.arange(bs, dtype=torch.int32).to(device) * - nv)[:, None, None] - vertices = vertices.reshape((bs * nv, vertices.shape[-1])) - - return vertices[faces.long()] - - -class Pytorch3dRasterizer(nn.Module): - """ Borrowed from https://github.com/facebookresearch/pytorch3d - Notice: - x,y,z are in image space, normalized - can only render squared image now - """ - - def __init__(self, image_size=224): - """ - use fixed raster_settings for rendering faces - """ - super().__init__() - raster_settings = { - 'image_size': image_size, - 'blur_radius': 0.0, - 'faces_per_pixel': 1, - 'bin_size': None, - 'max_faces_per_bin': None, - 'perspective_correct': True, - 'cull_backfaces': True, - } - raster_settings = dict2obj(raster_settings) - self.raster_settings = raster_settings - - def forward(self, vertices, faces, attributes=None): - fixed_vertices = vertices.clone() - fixed_vertices[..., :2] = -fixed_vertices[..., :2] - meshes_screen = Meshes(verts=fixed_vertices.float(), - faces=faces.long()) - raster_settings = self.raster_settings - pix_to_face, zbuf, bary_coords, dists = rasterize_meshes( - meshes_screen, - image_size=raster_settings.image_size, - blur_radius=raster_settings.blur_radius, - faces_per_pixel=raster_settings.faces_per_pixel, - bin_size=raster_settings.bin_size, - max_faces_per_bin=raster_settings.max_faces_per_bin, - perspective_correct=raster_settings.perspective_correct, - ) - vismask = (pix_to_face > -1).float() - D = attributes.shape[-1] - attributes = attributes.clone() - attributes = attributes.view(attributes.shape[0] * attributes.shape[1], - 3, attributes.shape[-1]) - N, H, W, K, _ = bary_coords.shape - mask = pix_to_face == -1 - pix_to_face = pix_to_face.clone() - pix_to_face[mask] = 0 - idx = pix_to_face.view(N * H * W * K, 1, 1).expand(N * H * W * K, 3, D) - pixel_face_vals = attributes.gather(0, idx).view(N, H, W, K, 3, D) - pixel_vals = (bary_coords[..., None] * pixel_face_vals).sum(dim=-2) - pixel_vals[mask] = 0 # Replace masked values in output. 
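-        # pixel_vals is (N, H, W, K, D); keep the single rasterized face per pixel
-        # (faces_per_pixel=1), move channels first to (N, D, H, W), then append the
-        # visibility mask as one extra output channel.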
- pixel_vals = pixel_vals[:, :, :, 0].permute(0, 3, 1, 2) - pixel_vals = torch.cat( - [pixel_vals, vismask[:, :, :, 0][:, None, :, :]], dim=1) - return pixel_vals diff --git a/spaces/Yuzu22/rvc-models/infer_pack/attentions.py b/spaces/Yuzu22/rvc-models/infer_pack/attentions.py deleted file mode 100644 index 77cb63ffccf3e33badf22d50862a64ba517b487f..0000000000000000000000000000000000000000 --- a/spaces/Yuzu22/rvc-models/infer_pack/attentions.py +++ /dev/null @@ -1,417 +0,0 @@ -import copy -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -from infer_pack import commons -from infer_pack import modules -from infer_pack.modules import LayerNorm - - -class Encoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - window_size=10, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - window_size=window_size, - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - proximal_bias=False, - proximal_init=True, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - proximal_bias=proximal_bias, - proximal_init=proximal_init, - ) - ) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append( - MultiHeadAttention( - hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - causal=True, - ) - ) - 
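-            # The decoder FFN above is causal (left-only padding), so its
-            # convolutions never look ahead of the current timestep.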
self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to( - device=x.device, dtype=x.dtype - ) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__( - self, - channels, - out_channels, - n_heads, - p_dropout=0.0, - window_size=None, - heads_share=True, - block_length=None, - proximal_bias=False, - proximal_init=False, - ): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - self.emb_rel_v = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert ( - t_s == t_t - ), "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys( - query / math.sqrt(self.k_channels), key_relative_embeddings - ) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." 
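-            # Proximal bias adds -log(1 + |i - j|) to each attention score,
-            # softly favouring keys close to the query position
-            # (see _attention_bias_proximal below).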
- scores = scores + self._attention_bias_proximal(t_s).to( - device=scores.device, dtype=scores.dtype - ) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert ( - t_s == t_t - ), "Local attention is only available for self-attention." - block_mask = ( - torch.ones_like(scores) - .triu(-self.block_length) - .tril(self.block_length) - ) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings( - self.emb_rel_v, t_s - ) - output = output + self._matmul_with_relative_values( - relative_weights, value_relative_embeddings - ) - output = ( - output.transpose(2, 3).contiguous().view(b, d, t_t) - ) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]), - ) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[ - :, slice_start_position:slice_end_position - ] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad( - x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [0, length - 1]]) - ) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[ - :, :, :length, length - 1 : - ] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad( - x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length - 1]]) - ) - x_flat = x.view([batch, heads, length**2 + length * (length - 1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. 
- Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__( - self, - in_channels, - out_channels, - filter_channels, - kernel_size, - p_dropout=0.0, - activation=None, - causal=False, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/ZJunTvT/ZJunChat/chatgpt - macOS.command b/spaces/ZJunTvT/ZJunChat/chatgpt - macOS.command deleted file mode 100644 index fa015edca9e6916f24394813ce8ba77d2072e296..0000000000000000000000000000000000000000 --- a/spaces/ZJunTvT/ZJunChat/chatgpt - macOS.command +++ /dev/null @@ -1,7 +0,0 @@ -#!/bin/bash -echo Opening ChuanhuChatGPT... -cd "$(dirname "${BASH_SOURCE[0]}")" -nohup python3 ChuanhuChatbot.py >/dev/null 2>&1 & -sleep 5 -open http://127.0.0.1:7860 -echo Finished opening ChuanhuChatGPT (http://127.0.0.1:7860/). If you kill ChuanhuChatbot, Use "pkill -f 'ChuanhuChatbot'" command in terminal. \ No newline at end of file diff --git a/spaces/aadnk/faster-whisper-webui/src/modelCache.py b/spaces/aadnk/faster-whisper-webui/src/modelCache.py deleted file mode 100644 index 680a4b386fc37e17ed2353e72d04a646ece2c4a6..0000000000000000000000000000000000000000 --- a/spaces/aadnk/faster-whisper-webui/src/modelCache.py +++ /dev/null @@ -1,17 +0,0 @@ -class ModelCache: - def __init__(self): - self._cache = dict() - - def get(self, model_key: str, model_factory): - result = self._cache.get(model_key) - - if result is None: - result = model_factory() - self._cache[model_key] = result - return result - - def clear(self): - self._cache.clear() - -# A global cache of models. This is mainly used by the daemon processes to avoid loading the same model multiple times. -GLOBAL_MODEL_CACHE = ModelCache() \ No newline at end of file diff --git a/spaces/abhishek/sketch-to-image/annotator/midas/midas/vit.py b/spaces/abhishek/sketch-to-image/annotator/midas/midas/vit.py deleted file mode 100644 index f861ea8bd64a46a9c647534fc7aa777691eaab83..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/midas/midas/vit.py +++ /dev/null @@ -1,501 +0,0 @@ -''' - * Copyright (c) 2023 Salesforce, Inc. - * All rights reserved. 
- * SPDX-License-Identifier: Apache License 2.0 - * For full license text, see LICENSE.txt file in the repo root or http://www.apache.org/licenses/ - * By Can Qin - * Modified from ControlNet repo: https://github.com/lllyasviel/ControlNet - * Copyright (c) 2023 Lvmin Zhang and Maneesh Agrawala -''' - -import torch -import torch.nn as nn -import timm -import types -import math -import torch.nn.functional as F - - -class Slice(nn.Module): - def __init__(self, start_index=1): - super(Slice, self).__init__() - self.start_index = start_index - - def forward(self, x): - return x[:, self.start_index :] - - -class AddReadout(nn.Module): - def __init__(self, start_index=1): - super(AddReadout, self).__init__() - self.start_index = start_index - - def forward(self, x): - if self.start_index == 2: - readout = (x[:, 0] + x[:, 1]) / 2 - else: - readout = x[:, 0] - return x[:, self.start_index :] + readout.unsqueeze(1) - - -class ProjectReadout(nn.Module): - def __init__(self, in_features, start_index=1): - super(ProjectReadout, self).__init__() - self.start_index = start_index - - self.project = nn.Sequential(nn.Linear(2 * in_features, in_features), nn.GELU()) - - def forward(self, x): - readout = x[:, 0].unsqueeze(1).expand_as(x[:, self.start_index :]) - features = torch.cat((x[:, self.start_index :], readout), -1) - - return self.project(features) - - -class Transpose(nn.Module): - def __init__(self, dim0, dim1): - super(Transpose, self).__init__() - self.dim0 = dim0 - self.dim1 = dim1 - - def forward(self, x): - x = x.transpose(self.dim0, self.dim1) - return x - - -def forward_vit(pretrained, x): - b, c, h, w = x.shape - - glob = pretrained.model.forward_flex(x) - - layer_1 = pretrained.activations["1"] - layer_2 = pretrained.activations["2"] - layer_3 = pretrained.activations["3"] - layer_4 = pretrained.activations["4"] - - layer_1 = pretrained.act_postprocess1[0:2](layer_1) - layer_2 = pretrained.act_postprocess2[0:2](layer_2) - layer_3 = pretrained.act_postprocess3[0:2](layer_3) - layer_4 = pretrained.act_postprocess4[0:2](layer_4) - - unflatten = nn.Sequential( - nn.Unflatten( - 2, - torch.Size( - [ - h // pretrained.model.patch_size[1], - w // pretrained.model.patch_size[0], - ] - ), - ) - ) - - if layer_1.ndim == 3: - layer_1 = unflatten(layer_1) - if layer_2.ndim == 3: - layer_2 = unflatten(layer_2) - if layer_3.ndim == 3: - layer_3 = unflatten(layer_3) - if layer_4.ndim == 3: - layer_4 = unflatten(layer_4) - - layer_1 = pretrained.act_postprocess1[3 : len(pretrained.act_postprocess1)](layer_1) - layer_2 = pretrained.act_postprocess2[3 : len(pretrained.act_postprocess2)](layer_2) - layer_3 = pretrained.act_postprocess3[3 : len(pretrained.act_postprocess3)](layer_3) - layer_4 = pretrained.act_postprocess4[3 : len(pretrained.act_postprocess4)](layer_4) - - return layer_1, layer_2, layer_3, layer_4 - - -def _resize_pos_embed(self, posemb, gs_h, gs_w): - posemb_tok, posemb_grid = ( - posemb[:, : self.start_index], - posemb[0, self.start_index :], - ) - - gs_old = int(math.sqrt(len(posemb_grid))) - - posemb_grid = posemb_grid.reshape(1, gs_old, gs_old, -1).permute(0, 3, 1, 2) - posemb_grid = F.interpolate(posemb_grid, size=(gs_h, gs_w), mode="bilinear") - posemb_grid = posemb_grid.permute(0, 2, 3, 1).reshape(1, gs_h * gs_w, -1) - - posemb = torch.cat([posemb_tok, posemb_grid], dim=1) - - return posemb - - -def forward_flex(self, x): - b, c, h, w = x.shape - - pos_embed = self._resize_pos_embed( - self.pos_embed, h // self.patch_size[1], w // self.patch_size[0] - ) - - B = x.shape[0] - - if 
hasattr(self.patch_embed, "backbone"): - x = self.patch_embed.backbone(x) - if isinstance(x, (list, tuple)): - x = x[-1] # last feature if backbone outputs list/tuple of features - - x = self.patch_embed.proj(x).flatten(2).transpose(1, 2) - - if getattr(self, "dist_token", None) is not None: - cls_tokens = self.cls_token.expand( - B, -1, -1 - ) # stole cls_tokens impl from Phil Wang, thanks - dist_token = self.dist_token.expand(B, -1, -1) - x = torch.cat((cls_tokens, dist_token, x), dim=1) - else: - cls_tokens = self.cls_token.expand( - B, -1, -1 - ) # stole cls_tokens impl from Phil Wang, thanks - x = torch.cat((cls_tokens, x), dim=1) - - x = x + pos_embed - x = self.pos_drop(x) - - for blk in self.blocks: - x = blk(x) - - x = self.norm(x) - - return x - - -activations = {} - - -def get_activation(name): - def hook(model, input, output): - activations[name] = output - - return hook - - -def get_readout_oper(vit_features, features, use_readout, start_index=1): - if use_readout == "ignore": - readout_oper = [Slice(start_index)] * len(features) - elif use_readout == "add": - readout_oper = [AddReadout(start_index)] * len(features) - elif use_readout == "project": - readout_oper = [ - ProjectReadout(vit_features, start_index) for out_feat in features - ] - else: - assert ( - False - ), "wrong operation for readout token, use_readout can be 'ignore', 'add', or 'project'" - - return readout_oper - - -def _make_vit_b16_backbone( - model, - features=[96, 192, 384, 768], - size=[384, 384], - hooks=[2, 5, 8, 11], - vit_features=768, - use_readout="ignore", - start_index=1, -): - pretrained = nn.Module() - - pretrained.model = model - pretrained.model.blocks[hooks[0]].register_forward_hook(get_activation("1")) - pretrained.model.blocks[hooks[1]].register_forward_hook(get_activation("2")) - pretrained.model.blocks[hooks[2]].register_forward_hook(get_activation("3")) - pretrained.model.blocks[hooks[3]].register_forward_hook(get_activation("4")) - - pretrained.activations = activations - - readout_oper = get_readout_oper(vit_features, features, use_readout, start_index) - - # 32, 48, 136, 384 - pretrained.act_postprocess1 = nn.Sequential( - readout_oper[0], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[0], - kernel_size=1, - stride=1, - padding=0, - ), - nn.ConvTranspose2d( - in_channels=features[0], - out_channels=features[0], - kernel_size=4, - stride=4, - padding=0, - bias=True, - dilation=1, - groups=1, - ), - ) - - pretrained.act_postprocess2 = nn.Sequential( - readout_oper[1], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[1], - kernel_size=1, - stride=1, - padding=0, - ), - nn.ConvTranspose2d( - in_channels=features[1], - out_channels=features[1], - kernel_size=2, - stride=2, - padding=0, - bias=True, - dilation=1, - groups=1, - ), - ) - - pretrained.act_postprocess3 = nn.Sequential( - readout_oper[2], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[2], - kernel_size=1, - stride=1, - padding=0, - ), - ) - - pretrained.act_postprocess4 = nn.Sequential( - readout_oper[3], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[3], - kernel_size=1, - stride=1, - padding=0, - ), - nn.Conv2d( - 
in_channels=features[3], - out_channels=features[3], - kernel_size=3, - stride=2, - padding=1, - ), - ) - - pretrained.model.start_index = start_index - pretrained.model.patch_size = [16, 16] - - # We inject this function into the VisionTransformer instances so that - # we can use it with interpolated position embeddings without modifying the library source. - pretrained.model.forward_flex = types.MethodType(forward_flex, pretrained.model) - pretrained.model._resize_pos_embed = types.MethodType( - _resize_pos_embed, pretrained.model - ) - - return pretrained - - -def _make_pretrained_vitl16_384(pretrained, use_readout="ignore", hooks=None): - model = timm.create_model("vit_large_patch16_384", pretrained=pretrained) - - hooks = [5, 11, 17, 23] if hooks == None else hooks - return _make_vit_b16_backbone( - model, - features=[256, 512, 1024, 1024], - hooks=hooks, - vit_features=1024, - use_readout=use_readout, - ) - - -def _make_pretrained_vitb16_384(pretrained, use_readout="ignore", hooks=None): - model = timm.create_model("vit_base_patch16_384", pretrained=pretrained) - - hooks = [2, 5, 8, 11] if hooks == None else hooks - return _make_vit_b16_backbone( - model, features=[96, 192, 384, 768], hooks=hooks, use_readout=use_readout - ) - - -def _make_pretrained_deitb16_384(pretrained, use_readout="ignore", hooks=None): - model = timm.create_model("vit_deit_base_patch16_384", pretrained=pretrained) - - hooks = [2, 5, 8, 11] if hooks == None else hooks - return _make_vit_b16_backbone( - model, features=[96, 192, 384, 768], hooks=hooks, use_readout=use_readout - ) - - -def _make_pretrained_deitb16_distil_384(pretrained, use_readout="ignore", hooks=None): - model = timm.create_model( - "vit_deit_base_distilled_patch16_384", pretrained=pretrained - ) - - hooks = [2, 5, 8, 11] if hooks == None else hooks - return _make_vit_b16_backbone( - model, - features=[96, 192, 384, 768], - hooks=hooks, - use_readout=use_readout, - start_index=2, - ) - - -def _make_vit_b_rn50_backbone( - model, - features=[256, 512, 768, 768], - size=[384, 384], - hooks=[0, 1, 8, 11], - vit_features=768, - use_vit_only=False, - use_readout="ignore", - start_index=1, -): - pretrained = nn.Module() - - pretrained.model = model - - if use_vit_only == True: - pretrained.model.blocks[hooks[0]].register_forward_hook(get_activation("1")) - pretrained.model.blocks[hooks[1]].register_forward_hook(get_activation("2")) - else: - pretrained.model.patch_embed.backbone.stages[0].register_forward_hook( - get_activation("1") - ) - pretrained.model.patch_embed.backbone.stages[1].register_forward_hook( - get_activation("2") - ) - - pretrained.model.blocks[hooks[2]].register_forward_hook(get_activation("3")) - pretrained.model.blocks[hooks[3]].register_forward_hook(get_activation("4")) - - pretrained.activations = activations - - readout_oper = get_readout_oper(vit_features, features, use_readout, start_index) - - if use_vit_only == True: - pretrained.act_postprocess1 = nn.Sequential( - readout_oper[0], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[0], - kernel_size=1, - stride=1, - padding=0, - ), - nn.ConvTranspose2d( - in_channels=features[0], - out_channels=features[0], - kernel_size=4, - stride=4, - padding=0, - bias=True, - dilation=1, - groups=1, - ), - ) - - pretrained.act_postprocess2 = nn.Sequential( - readout_oper[1], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - 
in_channels=vit_features, - out_channels=features[1], - kernel_size=1, - stride=1, - padding=0, - ), - nn.ConvTranspose2d( - in_channels=features[1], - out_channels=features[1], - kernel_size=2, - stride=2, - padding=0, - bias=True, - dilation=1, - groups=1, - ), - ) - else: - pretrained.act_postprocess1 = nn.Sequential( - nn.Identity(), nn.Identity(), nn.Identity() - ) - pretrained.act_postprocess2 = nn.Sequential( - nn.Identity(), nn.Identity(), nn.Identity() - ) - - pretrained.act_postprocess3 = nn.Sequential( - readout_oper[2], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[2], - kernel_size=1, - stride=1, - padding=0, - ), - ) - - pretrained.act_postprocess4 = nn.Sequential( - readout_oper[3], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[3], - kernel_size=1, - stride=1, - padding=0, - ), - nn.Conv2d( - in_channels=features[3], - out_channels=features[3], - kernel_size=3, - stride=2, - padding=1, - ), - ) - - pretrained.model.start_index = start_index - pretrained.model.patch_size = [16, 16] - - # We inject this function into the VisionTransformer instances so that - # we can use it with interpolated position embeddings without modifying the library source. - pretrained.model.forward_flex = types.MethodType(forward_flex, pretrained.model) - - # We inject this function into the VisionTransformer instances so that - # we can use it with interpolated position embeddings without modifying the library source. - pretrained.model._resize_pos_embed = types.MethodType( - _resize_pos_embed, pretrained.model - ) - - return pretrained - - -def _make_pretrained_vitb_rn50_384( - pretrained, use_readout="ignore", hooks=None, use_vit_only=False -): - model = timm.create_model("vit_base_resnet50_384", pretrained=pretrained) - - hooks = [0, 1, 8, 11] if hooks == None else hooks - return _make_vit_b_rn50_backbone( - model, - features=[256, 512, 768, 768], - size=[384, 384], - hooks=hooks, - use_vit_only=use_vit_only, - use_readout=use_readout, - ) diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/roi_heads/mask_heads/global_context_head.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/roi_heads/mask_heads/global_context_head.py deleted file mode 100644 index d8e8cbca95d69e86ec7a2a1e7ed7f158be1b5753..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/roi_heads/mask_heads/global_context_head.py +++ /dev/null @@ -1,102 +0,0 @@ -import torch.nn as nn -from mmcv.cnn import ConvModule -from mmcv.runner import auto_fp16, force_fp32 - -from mmdet.models.builder import HEADS -from mmdet.models.utils import ResLayer, SimplifiedBasicBlock - - -@HEADS.register_module() -class GlobalContextHead(nn.Module): - """Global context head used in `SCNet `_. - - Args: - num_convs (int, optional): number of convolutional layer in GlbCtxHead. - Default: 4. - in_channels (int, optional): number of input channels. Default: 256. - conv_out_channels (int, optional): number of output channels before - classification layer. Default: 256. - num_classes (int, optional): number of classes. Default: 80. - loss_weight (float, optional): global context loss weight. Default: 1. - conv_cfg (dict, optional): config to init conv layer. Default: None. - norm_cfg (dict, optional): config to init norm layer. Default: None. 
- conv_to_res (bool, optional): if True, 2 convs will be grouped into - 1 `SimplifiedBasicBlock` using a skip connection. Default: False. - """ - - def __init__(self, - num_convs=4, - in_channels=256, - conv_out_channels=256, - num_classes=80, - loss_weight=1.0, - conv_cfg=None, - norm_cfg=None, - conv_to_res=False): - super(GlobalContextHead, self).__init__() - self.num_convs = num_convs - self.in_channels = in_channels - self.conv_out_channels = conv_out_channels - self.num_classes = num_classes - self.loss_weight = loss_weight - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.conv_to_res = conv_to_res - self.fp16_enabled = False - - if self.conv_to_res: - num_res_blocks = num_convs // 2 - self.convs = ResLayer( - SimplifiedBasicBlock, - in_channels, - self.conv_out_channels, - num_res_blocks, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg) - self.num_convs = num_res_blocks - else: - self.convs = nn.ModuleList() - for i in range(self.num_convs): - in_channels = self.in_channels if i == 0 else conv_out_channels - self.convs.append( - ConvModule( - in_channels, - conv_out_channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - - self.pool = nn.AdaptiveAvgPool2d(1) - self.fc = nn.Linear(conv_out_channels, num_classes) - - self.criterion = nn.BCEWithLogitsLoss() - - def init_weights(self): - """Init weights for the head.""" - nn.init.normal_(self.fc.weight, 0, 0.01) - nn.init.constant_(self.fc.bias, 0) - - @auto_fp16() - def forward(self, feats): - """Forward function.""" - x = feats[-1] - for i in range(self.num_convs): - x = self.convs[i](x) - x = self.pool(x) - - # multi-class prediction - mc_pred = x.reshape(x.size(0), -1) - mc_pred = self.fc(mc_pred) - - return mc_pred, x - - @force_fp32(apply_to=('pred', )) - def loss(self, pred, labels): - """Loss function.""" - labels = [lbl.unique() for lbl in labels] - targets = pred.new_zeros(pred.size()) - for i, label in enumerate(labels): - targets[i, label] = 1.0 - loss = self.loss_weight * self.criterion(pred, targets) - return loss diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/core/mask/mask_target.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/core/mask/mask_target.py deleted file mode 100644 index 15d26a88bbf3710bd92813335918407db8c4e053..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/core/mask/mask_target.py +++ /dev/null @@ -1,122 +0,0 @@ -import numpy as np -import torch -from torch.nn.modules.utils import _pair - - -def mask_target(pos_proposals_list, pos_assigned_gt_inds_list, gt_masks_list, - cfg): - """Compute mask target for positive proposals in multiple images. - - Args: - pos_proposals_list (list[Tensor]): Positive proposals in multiple - images. - pos_assigned_gt_inds_list (list[Tensor]): Assigned GT indices for each - positive proposals. - gt_masks_list (list[:obj:`BaseInstanceMasks`]): Ground truth masks of - each image. - cfg (dict): Config dict that specifies the mask size. - - Returns: - list[Tensor]: Mask target of each image. 
- - Example: - >>> import mmcv - >>> import mmdet - >>> from mmdet.core.mask import BitmapMasks - >>> from mmdet.core.mask.mask_target import * - >>> H, W = 17, 18 - >>> cfg = mmcv.Config({'mask_size': (13, 14)}) - >>> rng = np.random.RandomState(0) - >>> # Positive proposals (tl_x, tl_y, br_x, br_y) for each image - >>> pos_proposals_list = [ - >>> torch.Tensor([ - >>> [ 7.2425, 5.5929, 13.9414, 14.9541], - >>> [ 7.3241, 3.6170, 16.3850, 15.3102], - >>> ]), - >>> torch.Tensor([ - >>> [ 4.8448, 6.4010, 7.0314, 9.7681], - >>> [ 5.9790, 2.6989, 7.4416, 4.8580], - >>> [ 0.0000, 0.0000, 0.1398, 9.8232], - >>> ]), - >>> ] - >>> # Corresponding class index for each proposal for each image - >>> pos_assigned_gt_inds_list = [ - >>> torch.LongTensor([7, 0]), - >>> torch.LongTensor([5, 4, 1]), - >>> ] - >>> # Ground truth mask for each true object for each image - >>> gt_masks_list = [ - >>> BitmapMasks(rng.rand(8, H, W), height=H, width=W), - >>> BitmapMasks(rng.rand(6, H, W), height=H, width=W), - >>> ] - >>> mask_targets = mask_target( - >>> pos_proposals_list, pos_assigned_gt_inds_list, - >>> gt_masks_list, cfg) - >>> assert mask_targets.shape == (5,) + cfg['mask_size'] - """ - cfg_list = [cfg for _ in range(len(pos_proposals_list))] - mask_targets = map(mask_target_single, pos_proposals_list, - pos_assigned_gt_inds_list, gt_masks_list, cfg_list) - mask_targets = list(mask_targets) - if len(mask_targets) > 0: - mask_targets = torch.cat(mask_targets) - return mask_targets - - -def mask_target_single(pos_proposals, pos_assigned_gt_inds, gt_masks, cfg): - """Compute mask target for each positive proposal in the image. - - Args: - pos_proposals (Tensor): Positive proposals. - pos_assigned_gt_inds (Tensor): Assigned GT inds of positive proposals. - gt_masks (:obj:`BaseInstanceMasks`): GT masks in the format of Bitmap - or Polygon. - cfg (dict): Config dict that indicate the mask size. - - Returns: - Tensor: Mask target of each positive proposals in the image. 
- - Example: - >>> import mmcv - >>> import mmdet - >>> from mmdet.core.mask import BitmapMasks - >>> from mmdet.core.mask.mask_target import * # NOQA - >>> H, W = 32, 32 - >>> cfg = mmcv.Config({'mask_size': (7, 11)}) - >>> rng = np.random.RandomState(0) - >>> # Masks for each ground truth box (relative to the image) - >>> gt_masks_data = rng.rand(3, H, W) - >>> gt_masks = BitmapMasks(gt_masks_data, height=H, width=W) - >>> # Predicted positive boxes in one image - >>> pos_proposals = torch.FloatTensor([ - >>> [ 16.2, 5.5, 19.9, 20.9], - >>> [ 17.3, 13.6, 19.3, 19.3], - >>> [ 14.8, 16.4, 17.0, 23.7], - >>> [ 0.0, 0.0, 16.0, 16.0], - >>> [ 4.0, 0.0, 20.0, 16.0], - >>> ]) - >>> # For each predicted proposal, its assignment to a gt mask - >>> pos_assigned_gt_inds = torch.LongTensor([0, 1, 2, 1, 1]) - >>> mask_targets = mask_target_single( - >>> pos_proposals, pos_assigned_gt_inds, gt_masks, cfg) - >>> assert mask_targets.shape == (5,) + cfg['mask_size'] - """ - device = pos_proposals.device - mask_size = _pair(cfg.mask_size) - num_pos = pos_proposals.size(0) - if num_pos > 0: - proposals_np = pos_proposals.cpu().numpy() - maxh, maxw = gt_masks.height, gt_masks.width - proposals_np[:, [0, 2]] = np.clip(proposals_np[:, [0, 2]], 0, maxw) - proposals_np[:, [1, 3]] = np.clip(proposals_np[:, [1, 3]], 0, maxh) - pos_assigned_gt_inds = pos_assigned_gt_inds.cpu().numpy() - - mask_targets = gt_masks.crop_and_resize( - proposals_np, mask_size, device=device, - inds=pos_assigned_gt_inds).to_ndarray() - - mask_targets = torch.from_numpy(mask_targets).float().to(device) - else: - mask_targets = pos_proposals.new_zeros((0, ) + mask_size) - - return mask_targets diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/models/decode_heads/point_head.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/models/decode_heads/point_head.py deleted file mode 100644 index 3342aa28bb8d264b2c3d01cbf5098d145943c193..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/models/decode_heads/point_head.py +++ /dev/null @@ -1,349 +0,0 @@ -# Modified from https://github.com/facebookresearch/detectron2/tree/master/projects/PointRend/point_head/point_head.py # noqa - -import torch -import torch.nn as nn -from annotator.uniformer.mmcv.cnn import ConvModule, normal_init -from annotator.uniformer.mmcv.ops import point_sample - -from annotator.uniformer.mmseg.models.builder import HEADS -from annotator.uniformer.mmseg.ops import resize -from ..losses import accuracy -from .cascade_decode_head import BaseCascadeDecodeHead - - -def calculate_uncertainty(seg_logits): - """Estimate uncertainty based on seg logits. - - For each location of the prediction ``seg_logits`` we estimate - uncertainty as the difference between top first and top second - predicted logits. - - Args: - seg_logits (Tensor): Semantic segmentation logits, - shape (batch_size, num_classes, height, width). - - Returns: - scores (Tensor): T uncertainty scores with the most uncertain - locations having the highest uncertainty score, shape ( - batch_size, 1, height, width) - """ - top2_scores = torch.topk(seg_logits, k=2, dim=1)[0] - return (top2_scores[:, 1] - top2_scores[:, 0]).unsqueeze(1) - - -@HEADS.register_module() -class PointHead(BaseCascadeDecodeHead): - """A mask point head use in PointRend. - - ``PointHead`` use shared multi-layer perceptron (equivalent to - nn.Conv1d) to predict the logit of input points. 
The fine-grained feature - and coarse feature will be concatenate together for predication. - - Args: - num_fcs (int): Number of fc layers in the head. Default: 3. - in_channels (int): Number of input channels. Default: 256. - fc_channels (int): Number of fc channels. Default: 256. - num_classes (int): Number of classes for logits. Default: 80. - class_agnostic (bool): Whether use class agnostic classification. - If so, the output channels of logits will be 1. Default: False. - coarse_pred_each_layer (bool): Whether concatenate coarse feature with - the output of each fc layer. Default: True. - conv_cfg (dict|None): Dictionary to construct and config conv layer. - Default: dict(type='Conv1d')) - norm_cfg (dict|None): Dictionary to construct and config norm layer. - Default: None. - loss_point (dict): Dictionary to construct and config loss layer of - point head. Default: dict(type='CrossEntropyLoss', use_mask=True, - loss_weight=1.0). - """ - - def __init__(self, - num_fcs=3, - coarse_pred_each_layer=True, - conv_cfg=dict(type='Conv1d'), - norm_cfg=None, - act_cfg=dict(type='ReLU', inplace=False), - **kwargs): - super(PointHead, self).__init__( - input_transform='multiple_select', - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - **kwargs) - - self.num_fcs = num_fcs - self.coarse_pred_each_layer = coarse_pred_each_layer - - fc_in_channels = sum(self.in_channels) + self.num_classes - fc_channels = self.channels - self.fcs = nn.ModuleList() - for k in range(num_fcs): - fc = ConvModule( - fc_in_channels, - fc_channels, - kernel_size=1, - stride=1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - self.fcs.append(fc) - fc_in_channels = fc_channels - fc_in_channels += self.num_classes if self.coarse_pred_each_layer \ - else 0 - self.fc_seg = nn.Conv1d( - fc_in_channels, - self.num_classes, - kernel_size=1, - stride=1, - padding=0) - if self.dropout_ratio > 0: - self.dropout = nn.Dropout(self.dropout_ratio) - delattr(self, 'conv_seg') - - def init_weights(self): - """Initialize weights of classification layer.""" - normal_init(self.fc_seg, std=0.001) - - def cls_seg(self, feat): - """Classify each pixel with fc.""" - if self.dropout is not None: - feat = self.dropout(feat) - output = self.fc_seg(feat) - return output - - def forward(self, fine_grained_point_feats, coarse_point_feats): - x = torch.cat([fine_grained_point_feats, coarse_point_feats], dim=1) - for fc in self.fcs: - x = fc(x) - if self.coarse_pred_each_layer: - x = torch.cat((x, coarse_point_feats), dim=1) - return self.cls_seg(x) - - def _get_fine_grained_point_feats(self, x, points): - """Sample from fine grained features. - - Args: - x (list[Tensor]): Feature pyramid from by neck or backbone. - points (Tensor): Point coordinates, shape (batch_size, - num_points, 2). - - Returns: - fine_grained_feats (Tensor): Sampled fine grained feature, - shape (batch_size, sum(channels of x), num_points). - """ - - fine_grained_feats_list = [ - point_sample(_, points, align_corners=self.align_corners) - for _ in x - ] - if len(fine_grained_feats_list) > 1: - fine_grained_feats = torch.cat(fine_grained_feats_list, dim=1) - else: - fine_grained_feats = fine_grained_feats_list[0] - - return fine_grained_feats - - def _get_coarse_point_feats(self, prev_output, points): - """Sample from fine grained features. - - Args: - prev_output (list[Tensor]): Prediction of previous decode head. - points (Tensor): Point coordinates, shape (batch_size, - num_points, 2). 
- - Returns: - coarse_feats (Tensor): Sampled coarse feature, shape (batch_size, - num_classes, num_points). - """ - - coarse_feats = point_sample( - prev_output, points, align_corners=self.align_corners) - - return coarse_feats - - def forward_train(self, inputs, prev_output, img_metas, gt_semantic_seg, - train_cfg): - """Forward function for training. - Args: - inputs (list[Tensor]): List of multi-level img features. - prev_output (Tensor): The output of previous decode head. - img_metas (list[dict]): List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmseg/datasets/pipelines/formatting.py:Collect`. - gt_semantic_seg (Tensor): Semantic segmentation masks - used if the architecture supports semantic segmentation task. - train_cfg (dict): The training config. - - Returns: - dict[str, Tensor]: a dictionary of loss components - """ - x = self._transform_inputs(inputs) - with torch.no_grad(): - points = self.get_points_train( - prev_output, calculate_uncertainty, cfg=train_cfg) - fine_grained_point_feats = self._get_fine_grained_point_feats( - x, points) - coarse_point_feats = self._get_coarse_point_feats(prev_output, points) - point_logits = self.forward(fine_grained_point_feats, - coarse_point_feats) - point_label = point_sample( - gt_semantic_seg.float(), - points, - mode='nearest', - align_corners=self.align_corners) - point_label = point_label.squeeze(1).long() - - losses = self.losses(point_logits, point_label) - - return losses - - def forward_test(self, inputs, prev_output, img_metas, test_cfg): - """Forward function for testing. - - Args: - inputs (list[Tensor]): List of multi-level img features. - prev_output (Tensor): The output of previous decode head. - img_metas (list[dict]): List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmseg/datasets/pipelines/formatting.py:Collect`. - test_cfg (dict): The testing config. - - Returns: - Tensor: Output segmentation map. 
- """ - - x = self._transform_inputs(inputs) - refined_seg_logits = prev_output.clone() - for _ in range(test_cfg.subdivision_steps): - refined_seg_logits = resize( - refined_seg_logits, - scale_factor=test_cfg.scale_factor, - mode='bilinear', - align_corners=self.align_corners) - batch_size, channels, height, width = refined_seg_logits.shape - point_indices, points = self.get_points_test( - refined_seg_logits, calculate_uncertainty, cfg=test_cfg) - fine_grained_point_feats = self._get_fine_grained_point_feats( - x, points) - coarse_point_feats = self._get_coarse_point_feats( - prev_output, points) - point_logits = self.forward(fine_grained_point_feats, - coarse_point_feats) - - point_indices = point_indices.unsqueeze(1).expand(-1, channels, -1) - refined_seg_logits = refined_seg_logits.reshape( - batch_size, channels, height * width) - refined_seg_logits = refined_seg_logits.scatter_( - 2, point_indices, point_logits) - refined_seg_logits = refined_seg_logits.view( - batch_size, channels, height, width) - - return refined_seg_logits - - def losses(self, point_logits, point_label): - """Compute segmentation loss.""" - loss = dict() - loss['loss_point'] = self.loss_decode( - point_logits, point_label, ignore_index=self.ignore_index) - loss['acc_point'] = accuracy(point_logits, point_label) - return loss - - def get_points_train(self, seg_logits, uncertainty_func, cfg): - """Sample points for training. - - Sample points in [0, 1] x [0, 1] coordinate space based on their - uncertainty. The uncertainties are calculated for each point using - 'uncertainty_func' function that takes point's logit prediction as - input. - - Args: - seg_logits (Tensor): Semantic segmentation logits, shape ( - batch_size, num_classes, height, width). - uncertainty_func (func): uncertainty calculation function. - cfg (dict): Training config of point head. - - Returns: - point_coords (Tensor): A tensor of shape (batch_size, num_points, - 2) that contains the coordinates of ``num_points`` sampled - points. - """ - num_points = cfg.num_points - oversample_ratio = cfg.oversample_ratio - importance_sample_ratio = cfg.importance_sample_ratio - assert oversample_ratio >= 1 - assert 0 <= importance_sample_ratio <= 1 - batch_size = seg_logits.shape[0] - num_sampled = int(num_points * oversample_ratio) - point_coords = torch.rand( - batch_size, num_sampled, 2, device=seg_logits.device) - point_logits = point_sample(seg_logits, point_coords) - # It is crucial to calculate uncertainty based on the sampled - # prediction value for the points. Calculating uncertainties of the - # coarse predictions first and sampling them for points leads to - # incorrect results. To illustrate this: assume uncertainty func( - # logits)=-abs(logits), a sampled point between two coarse - # predictions with -1 and 1 logits has 0 logits, and therefore 0 - # uncertainty value. However, if we calculate uncertainties for the - # coarse predictions first, both will have -1 uncertainty, - # and sampled point will get -1 uncertainty. 
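        # Editorial sketch, not part of the original mmseg file: a minimal
        # numeric check of the comment above, assuming
        # uncertainty_func(logits) = -abs(logits).
        #   coarse = torch.tensor([-1.0, 1.0])   # two neighbouring coarse logits
        #   sampled = coarse.mean()              # a point halfway between them -> 0.0
        #   -sampled.abs()                       # sample-then-score -> 0.0 (most uncertain)
        #   (-coarse.abs()).mean()               # score-then-sample -> -1.0 (wrongly confident)
        # Hence the uncertainties below are computed on point_logits, i.e. on
        # values sampled at the candidate coordinates.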
- point_uncertainties = uncertainty_func(point_logits) - num_uncertain_points = int(importance_sample_ratio * num_points) - num_random_points = num_points - num_uncertain_points - idx = torch.topk( - point_uncertainties[:, 0, :], k=num_uncertain_points, dim=1)[1] - shift = num_sampled * torch.arange( - batch_size, dtype=torch.long, device=seg_logits.device) - idx += shift[:, None] - point_coords = point_coords.view(-1, 2)[idx.view(-1), :].view( - batch_size, num_uncertain_points, 2) - if num_random_points > 0: - rand_point_coords = torch.rand( - batch_size, num_random_points, 2, device=seg_logits.device) - point_coords = torch.cat((point_coords, rand_point_coords), dim=1) - return point_coords - - def get_points_test(self, seg_logits, uncertainty_func, cfg): - """Sample points for testing. - - Find ``num_points`` most uncertain points from ``uncertainty_map``. - - Args: - seg_logits (Tensor): A tensor of shape (batch_size, num_classes, - height, width) for class-specific or class-agnostic prediction. - uncertainty_func (func): uncertainty calculation function. - cfg (dict): Testing config of point head. - - Returns: - point_indices (Tensor): A tensor of shape (batch_size, num_points) - that contains indices from [0, height x width) of the most - uncertain points. - point_coords (Tensor): A tensor of shape (batch_size, num_points, - 2) that contains [0, 1] x [0, 1] normalized coordinates of the - most uncertain points from the ``height x width`` grid . - """ - - num_points = cfg.subdivision_num_points - uncertainty_map = uncertainty_func(seg_logits) - batch_size, _, height, width = uncertainty_map.shape - h_step = 1.0 / height - w_step = 1.0 / width - - uncertainty_map = uncertainty_map.view(batch_size, height * width) - num_points = min(height * width, num_points) - point_indices = uncertainty_map.topk(num_points, dim=1)[1] - point_coords = torch.zeros( - batch_size, - num_points, - 2, - dtype=torch.float, - device=seg_logits.device) - point_coords[:, :, 0] = w_step / 2.0 + (point_indices % - width).float() * w_step - point_coords[:, :, 1] = h_step / 2.0 + (point_indices // - width).float() * h_step - return point_indices, point_coords diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/utils/logger.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/utils/logger.py deleted file mode 100644 index 4149d9eda3dfef07490352d22ac40c42460315e4..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/utils/logger.py +++ /dev/null @@ -1,27 +0,0 @@ -import logging - -from annotator.uniformer.mmcv.utils import get_logger - - -def get_root_logger(log_file=None, log_level=logging.INFO): - """Get the root logger. - - The logger will be initialized if it has not been initialized. By default a - StreamHandler will be added. If `log_file` is specified, a FileHandler will - also be added. The name of the root logger is the top-level package name, - e.g., "mmseg". - - Args: - log_file (str | None): The log filename. If specified, a FileHandler - will be added to the root logger. - log_level (int): The root logger level. Note that only the process of - rank 0 is affected, while other processes will set the level to - "Error" and be silent most of the time. - - Returns: - logging.Logger: The root logger. 
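    Example (editorial sketch, not part of the original mmseg source; the
    log path below is a placeholder)::

        >>> logger = get_root_logger(log_file='work_dirs/run.log')
        >>> logger.info('training started')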
- """ - - logger = get_logger(name='mmseg', log_file=log_file, log_level=log_level) - - return logger diff --git a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/gl/win32.py b/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/gl/win32.py deleted file mode 100644 index 2133d99bf3b14e24ade949bfc9e7d3adada2f203..0000000000000000000000000000000000000000 --- a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/gl/win32.py +++ /dev/null @@ -1,255 +0,0 @@ -from pyglet.canvas.win32 import Win32Canvas -from .base import Config, CanvasConfig, Context - -from pyglet import gl -from pyglet.gl import gl_info -from pyglet.gl import wgl -from pyglet.gl import wglext_arb -from pyglet.gl import wgl_info - -from pyglet.libs.win32 import _user32, _kernel32, _gdi32 -from pyglet.libs.win32.constants import * -from pyglet.libs.win32.types import * - - -class Win32Config(Config): - def match(self, canvas): - if not isinstance(canvas, Win32Canvas): - raise RuntimeError('Canvas must be instance of Win32Canvas') - - # Use ARB API if available - if gl_info.have_context() and wgl_info.have_extension('WGL_ARB_pixel_format'): - return self._get_arb_pixel_format_matching_configs(canvas) - else: - return self._get_pixel_format_descriptor_matching_configs(canvas) - - def _get_pixel_format_descriptor_matching_configs(self, canvas): - """Get matching configs using standard PIXELFORMATDESCRIPTOR - technique.""" - pfd = PIXELFORMATDESCRIPTOR() - pfd.nSize = sizeof(PIXELFORMATDESCRIPTOR) - pfd.nVersion = 1 - pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL - - if self.double_buffer: - pfd.dwFlags |= PFD_DOUBLEBUFFER - else: - pfd.dwFlags |= PFD_DOUBLEBUFFER_DONTCARE - - if self.stereo: - pfd.dwFlags |= PFD_STEREO - else: - pfd.dwFlags |= PFD_STEREO_DONTCARE - - # Not supported in pyglet API - # if attributes.get('swap_copy', False): - # pfd.dwFlags |= PFD_SWAP_COPY - # if attributes.get('swap_exchange', False): - # pfd.dwFlags |= PFD_SWAP_EXCHANGE - - if not self.depth_size: - pfd.dwFlags |= PFD_DEPTH_DONTCARE - - pfd.iPixelType = PFD_TYPE_RGBA - pfd.cColorBits = self.buffer_size or 0 - pfd.cRedBits = self.red_size or 0 - pfd.cGreenBits = self.green_size or 0 - pfd.cBlueBits = self.blue_size or 0 - pfd.cAlphaBits = self.alpha_size or 0 - pfd.cAccumRedBits = self.accum_red_size or 0 - pfd.cAccumGreenBits = self.accum_green_size or 0 - pfd.cAccumBlueBits = self.accum_blue_size or 0 - pfd.cAccumAlphaBits = self.accum_alpha_size or 0 - pfd.cDepthBits = self.depth_size or 0 - pfd.cStencilBits = self.stencil_size or 0 - pfd.cAuxBuffers = self.aux_buffers or 0 - - pf = _gdi32.ChoosePixelFormat(canvas.hdc, byref(pfd)) - if pf: - return [Win32CanvasConfig(canvas, pf, self)] - else: - return [] - - def _get_arb_pixel_format_matching_configs(self, canvas): - """Get configs using the WGL_ARB_pixel_format extension. 
- This method assumes a (dummy) GL context is already created.""" - - # Check for required extensions - if self.sample_buffers or self.samples: - if not gl_info.have_extension('GL_ARB_multisample'): - return [] - - # Construct array of attributes - attrs = [] - for name, value in self.get_gl_attributes(): - attr = Win32CanvasConfigARB.attribute_ids.get(name, None) - if attr and value is not None: - attrs.extend([attr, int(value)]) - attrs.append(0) - attrs = (c_int * len(attrs))(*attrs) - - pformats = (c_int * 16)() - nformats = c_uint(16) - wglext_arb.wglChoosePixelFormatARB(canvas.hdc, attrs, None, nformats, pformats, nformats) - - formats = [Win32CanvasConfigARB(canvas, pf, self) for pf in pformats[:nformats.value]] - return formats - - -class Win32CanvasConfig(CanvasConfig): - def __init__(self, canvas, pf, config): - super(Win32CanvasConfig, self).__init__(canvas, config) - self._pf = pf - self._pfd = PIXELFORMATDESCRIPTOR() - - _gdi32.DescribePixelFormat(canvas.hdc, pf, sizeof(PIXELFORMATDESCRIPTOR), byref(self._pfd)) - - self.double_buffer = bool(self._pfd.dwFlags & PFD_DOUBLEBUFFER) - self.sample_buffers = 0 - self.samples = 0 - self.stereo = bool(self._pfd.dwFlags & PFD_STEREO) - self.buffer_size = self._pfd.cColorBits - self.red_size = self._pfd.cRedBits - self.green_size = self._pfd.cGreenBits - self.blue_size = self._pfd.cBlueBits - self.alpha_size = self._pfd.cAlphaBits - self.accum_red_size = self._pfd.cAccumRedBits - self.accum_green_size = self._pfd.cAccumGreenBits - self.accum_blue_size = self._pfd.cAccumBlueBits - self.accum_alpha_size = self._pfd.cAccumAlphaBits - self.depth_size = self._pfd.cDepthBits - self.stencil_size = self._pfd.cStencilBits - self.aux_buffers = self._pfd.cAuxBuffers - - def compatible(self, canvas): - # TODO more careful checking - return isinstance(canvas, Win32Canvas) - - def create_context(self, share): - return Win32Context(self, share) - - def _set_pixel_format(self, canvas): - _gdi32.SetPixelFormat(canvas.hdc, self._pf, byref(self._pfd)) - - -class Win32CanvasConfigARB(CanvasConfig): - attribute_ids = { - 'double_buffer': wglext_arb.WGL_DOUBLE_BUFFER_ARB, - 'stereo': wglext_arb.WGL_STEREO_ARB, - 'buffer_size': wglext_arb.WGL_COLOR_BITS_ARB, - 'aux_buffers': wglext_arb.WGL_AUX_BUFFERS_ARB, - 'sample_buffers': wglext_arb.WGL_SAMPLE_BUFFERS_ARB, - 'samples': wglext_arb.WGL_SAMPLES_ARB, - 'red_size': wglext_arb.WGL_RED_BITS_ARB, - 'green_size': wglext_arb.WGL_GREEN_BITS_ARB, - 'blue_size': wglext_arb.WGL_BLUE_BITS_ARB, - 'alpha_size': wglext_arb.WGL_ALPHA_BITS_ARB, - 'depth_size': wglext_arb.WGL_DEPTH_BITS_ARB, - 'stencil_size': wglext_arb.WGL_STENCIL_BITS_ARB, - 'accum_red_size': wglext_arb.WGL_ACCUM_RED_BITS_ARB, - 'accum_green_size': wglext_arb.WGL_ACCUM_GREEN_BITS_ARB, - 'accum_blue_size': wglext_arb.WGL_ACCUM_BLUE_BITS_ARB, - 'accum_alpha_size': wglext_arb.WGL_ACCUM_ALPHA_BITS_ARB, - } - - def __init__(self, canvas, pf, config): - super(Win32CanvasConfigARB, self).__init__(canvas, config) - self._pf = pf - - names = list(self.attribute_ids.keys()) - attrs = list(self.attribute_ids.values()) - attrs = (c_int * len(attrs))(*attrs) - values = (c_int * len(attrs))() - - wglext_arb.wglGetPixelFormatAttribivARB(canvas.hdc, pf, 0, len(attrs), attrs, values) - - for name, value in zip(names, values): - setattr(self, name, value) - - def compatible(self, canvas): - # TODO more careful checking - return isinstance(canvas, Win32Canvas) - - def create_context(self, share): - if wgl_info.have_extension('WGL_ARB_create_context'): - # Graphics adapters 
that ONLY support up to OpenGL 3.1/3.2 - # should be using the Win32ARBContext class. - return Win32ARBContext(self, share) - else: - return Win32Context(self, share) - - def _set_pixel_format(self, canvas): - _gdi32.SetPixelFormat(canvas.hdc, self._pf, None) - - -class Win32Context(Context): - def __init__(self, config, share): - super(Win32Context, self).__init__(config, share) - self._context = None - - def attach(self, canvas): - super(Win32Context, self).attach(canvas) - - if not self._context: - self.config._set_pixel_format(canvas) - self._context = wgl.wglCreateContext(canvas.hdc) - - share = self.context_share - if share: - if not share.canvas: - raise RuntimeError('Share context has no canvas.') - if not wgl.wglShareLists(share._context, self._context): - raise gl.ContextException('Unable to share contexts.') - - def set_current(self): - if self._context is not None and self != gl.current_context: - wgl.wglMakeCurrent(self.canvas.hdc, self._context) - super(Win32Context, self).set_current() - - def detach(self): - if self.canvas: - wgl.wglDeleteContext(self._context) - self._context = None - super(Win32Context, self).detach() - - def flip(self): - _gdi32.SwapBuffers(self.canvas.hdc) - - def get_vsync(self): - if wgl_info.have_extension('WGL_EXT_swap_control'): - return bool(wglext_arb.wglGetSwapIntervalEXT()) - - def set_vsync(self, vsync): - if wgl_info.have_extension('WGL_EXT_swap_control'): - wglext_arb.wglSwapIntervalEXT(int(vsync)) - - -class Win32ARBContext(Win32Context): - def __init__(self, config, share): - super(Win32ARBContext, self).__init__(config, share) - - def attach(self, canvas): - share = self.context_share - if share: - if not share.canvas: - raise RuntimeError('Share context has no canvas.') - share = share._context - - attribs = [] - if self.config.major_version is not None: - attribs.extend([wglext_arb.WGL_CONTEXT_MAJOR_VERSION_ARB, self.config.major_version]) - if self.config.minor_version is not None: - attribs.extend([wglext_arb.WGL_CONTEXT_MINOR_VERSION_ARB, self.config.minor_version]) - flags = 0 - if self.config.forward_compatible: - flags |= wglext_arb.WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB - if self.config.debug: - flags |= wglext_arb.WGL_DEBUG_BIT_ARB - if flags: - attribs.extend([wglext_arb.WGL_CONTEXT_FLAGS_ARB, flags]) - attribs.append(0) - attribs = (c_int * len(attribs))(*attribs) - - self.config._set_pixel_format(canvas) - self._context = wglext_arb.wglCreateContextAttribsARB(canvas.hdc, share, attribs) - super(Win32ARBContext, self).attach(canvas) diff --git a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/image/codecs/png.py b/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/image/codecs/png.py deleted file mode 100644 index ca8cc04fefb64dda5b41e39cae8bbb758389a27a..0000000000000000000000000000000000000000 --- a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/image/codecs/png.py +++ /dev/null @@ -1,78 +0,0 @@ -"""Encoder and decoder for PNG files, using PyPNG (png.py). 
-""" - -import array -import itertools - -from pyglet.image import ImageData, ImageDecodeException -from pyglet.image.codecs import ImageDecoder, ImageEncoder - -import pyglet.extlibs.png as pypng - - -class PNGImageDecoder(ImageDecoder): - def get_file_extensions(self): - return ['.png'] - - def decode(self, filename, file): - if not file: - file = open(filename, 'rb') - - try: - reader = pypng.Reader(file=file) - width, height, pixels, metadata = reader.asDirect() - except Exception as e: - raise ImageDecodeException('PyPNG cannot read %r: %s' % (filename or file, e)) - - if metadata['greyscale']: - if metadata['alpha']: - fmt = 'LA' - else: - fmt = 'L' - else: - if metadata['alpha']: - fmt = 'RGBA' - else: - fmt = 'RGB' - pitch = len(fmt) * width - - pixels = array.array('BH'[metadata['bitdepth'] > 8], itertools.chain(*pixels)) - return ImageData(width, height, fmt, pixels.tobytes(), -pitch) - - -class PNGImageEncoder(ImageEncoder): - def get_file_extensions(self): - return ['.png'] - - def encode(self, image, filename, file): - image = image.get_image_data() - - has_alpha = 'A' in image.format - greyscale = len(image.format) < 3 - if has_alpha: - if greyscale: - image.format = 'LA' - else: - image.format = 'RGBA' - else: - if greyscale: - image.format = 'L' - else: - image.format = 'RGB' - - image.pitch = -(image.width * len(image.format)) - - writer = pypng.Writer(image.width, image.height, greyscale=greyscale, alpha=has_alpha) - - data = array.array('B') - data.frombytes(image.get_data(image.format, image.pitch)) - - writer.write_array(file, data) - - -def get_decoders(): - return [PNGImageDecoder()] - - -def get_encoders(): - return [PNGImageEncoder()] diff --git a/spaces/aijack/jojo/e4e/scripts/calc_losses_on_images.py b/spaces/aijack/jojo/e4e/scripts/calc_losses_on_images.py deleted file mode 100644 index 32b6bcee854da7ae357daf82bd986f30db9fb72c..0000000000000000000000000000000000000000 --- a/spaces/aijack/jojo/e4e/scripts/calc_losses_on_images.py +++ /dev/null @@ -1,87 +0,0 @@ -from argparse import ArgumentParser -import os -import json -import sys -from tqdm import tqdm -import numpy as np -import torch -from torch.utils.data import DataLoader -import torchvision.transforms as transforms - -sys.path.append(".") -sys.path.append("..") - -from criteria.lpips.lpips import LPIPS -from datasets.gt_res_dataset import GTResDataset - - -def parse_args(): - parser = ArgumentParser(add_help=False) - parser.add_argument('--mode', type=str, default='lpips', choices=['lpips', 'l2']) - parser.add_argument('--data_path', type=str, default='results') - parser.add_argument('--gt_path', type=str, default='gt_images') - parser.add_argument('--workers', type=int, default=4) - parser.add_argument('--batch_size', type=int, default=4) - parser.add_argument('--is_cars', action='store_true') - args = parser.parse_args() - return args - - -def run(args): - resize_dims = (256, 256) - if args.is_cars: - resize_dims = (192, 256) - transform = transforms.Compose([transforms.Resize(resize_dims), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) - - print('Loading dataset') - dataset = GTResDataset(root_path=args.data_path, - gt_dir=args.gt_path, - transform=transform) - - dataloader = DataLoader(dataset, - batch_size=args.batch_size, - shuffle=False, - num_workers=int(args.workers), - drop_last=True) - - if args.mode == 'lpips': - loss_func = LPIPS(net_type='alex') - elif args.mode == 'l2': - loss_func = torch.nn.MSELoss() - else: - raise Exception('Not a valid mode!') - 
loss_func.cuda() - - global_i = 0 - scores_dict = {} - all_scores = [] - for result_batch, gt_batch in tqdm(dataloader): - for i in range(args.batch_size): - loss = float(loss_func(result_batch[i:i + 1].cuda(), gt_batch[i:i + 1].cuda())) - all_scores.append(loss) - im_path = dataset.pairs[global_i][0] - scores_dict[os.path.basename(im_path)] = loss - global_i += 1 - - all_scores = list(scores_dict.values()) - mean = np.mean(all_scores) - std = np.std(all_scores) - result_str = 'Average loss is {:.2f}+-{:.2f}'.format(mean, std) - print('Finished with ', args.data_path) - print(result_str) - - out_path = os.path.join(os.path.dirname(args.data_path), 'inference_metrics') - if not os.path.exists(out_path): - os.makedirs(out_path) - - with open(os.path.join(out_path, 'stat_{}.txt'.format(args.mode)), 'w') as f: - f.write(result_str) - with open(os.path.join(out_path, 'scores_{}.json'.format(args.mode)), 'w') as f: - json.dump(scores_dict, f) - - -if __name__ == '__main__': - args = parse_args() - run(args) diff --git a/spaces/akhaliq/GPEN/distributed.py b/spaces/akhaliq/GPEN/distributed.py deleted file mode 100644 index 51fa243257ef302e2015d5ff36ac531b86a9a0ce..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/GPEN/distributed.py +++ /dev/null @@ -1,126 +0,0 @@ -import math -import pickle - -import torch -from torch import distributed as dist -from torch.utils.data.sampler import Sampler - - -def get_rank(): - if not dist.is_available(): - return 0 - - if not dist.is_initialized(): - return 0 - - return dist.get_rank() - - -def synchronize(): - if not dist.is_available(): - return - - if not dist.is_initialized(): - return - - world_size = dist.get_world_size() - - if world_size == 1: - return - - dist.barrier() - - -def get_world_size(): - if not dist.is_available(): - return 1 - - if not dist.is_initialized(): - return 1 - - return dist.get_world_size() - - -def reduce_sum(tensor): - if not dist.is_available(): - return tensor - - if not dist.is_initialized(): - return tensor - - tensor = tensor.clone() - dist.all_reduce(tensor, op=dist.ReduceOp.SUM) - - return tensor - - -def gather_grad(params): - world_size = get_world_size() - - if world_size == 1: - return - - for param in params: - if param.grad is not None: - dist.all_reduce(param.grad.data, op=dist.ReduceOp.SUM) - param.grad.data.div_(world_size) - - -def all_gather(data): - world_size = get_world_size() - - if world_size == 1: - return [data] - - buffer = pickle.dumps(data) - storage = torch.ByteStorage.from_buffer(buffer) - tensor = torch.ByteTensor(storage).to('cuda') - - local_size = torch.IntTensor([tensor.numel()]).to('cuda') - size_list = [torch.IntTensor([0]).to('cuda') for _ in range(world_size)] - dist.all_gather(size_list, local_size) - size_list = [int(size.item()) for size in size_list] - max_size = max(size_list) - - tensor_list = [] - for _ in size_list: - tensor_list.append(torch.ByteTensor(size=(max_size,)).to('cuda')) - - if local_size != max_size: - padding = torch.ByteTensor(size=(max_size - local_size,)).to('cuda') - tensor = torch.cat((tensor, padding), 0) - - dist.all_gather(tensor_list, tensor) - - data_list = [] - - for size, tensor in zip(size_list, tensor_list): - buffer = tensor.cpu().numpy().tobytes()[:size] - data_list.append(pickle.loads(buffer)) - - return data_list - - -def reduce_loss_dict(loss_dict): - world_size = get_world_size() - - if world_size < 2: - return loss_dict - - with torch.no_grad(): - keys = [] - losses = [] - - for k in sorted(loss_dict.keys()): - keys.append(k) - 
losses.append(loss_dict[k]) - - losses = torch.stack(losses, 0) - dist.reduce(losses, dst=0) - - if dist.get_rank() == 0: - losses /= world_size - - reduced_losses = {k: v for k, v in zip(keys, losses)} - - return reduced_losses diff --git a/spaces/akhaliq/SummerTime/model/third_party/HMNet/Evaluation/ROUGEEval.py b/spaces/akhaliq/SummerTime/model/third_party/HMNet/Evaluation/ROUGEEval.py deleted file mode 100644 index e5fb9a95319404cb2ed1d87711947599a1fb7a46..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/SummerTime/model/third_party/HMNet/Evaluation/ROUGEEval.py +++ /dev/null @@ -1,354 +0,0 @@ -# Copyright (c) Microsoft Corporation. -# Licensed under the MIT license. - -import os -import re -import shutil -from string import ascii_uppercase -from tqdm.auto import tqdm -from model.third_party.HMNet.Evaluation.OldROUGEEval import rouge -from model.third_party.HMNet.ThirdParty.ROUGE import pyrouge -from shutil import copyfile -from mpi4py import MPI -import torch -import logging -import json - - -def write_json_res( - output_file, tokenizers, x_ids, y_ids, x_tokens, y_tokens, predictions, gts -): - data = [] - - # for x_id, y_id, x_token, y_token, preds, gt in zip(x_ids, y_ids, x_tokens, y_tokens, predictions, gts): - # x_id = tokenizers[0].decode(x_id, skip_special_tokens=False) if x_id.dim() == 1 else tokenizers[0].convert_tokens_to_string(x_token) - # y_id = tokenizers[1].decode(y_id, skip_special_tokens=False) if y_id.dim() == 1 else tokenizers[1].convert_tokens_to_string(y_token) - for x_token, y_token, preds, gt in zip(x_tokens, y_tokens, predictions, gts): - data.append( - { - # 'x_ids': x_id, - # 'y_ids': y_id, - "x_tokens": x_token if isinstance(x_token, str) else " ".join(x_token), - "y_tokens": y_token if isinstance(y_token, str) else " ".join(y_token), - "predictions": preds, - "gt": gt, - } - ) - - json.dump(data, output_file, indent=4, ensure_ascii=False) - - -logger = logging.getLogger(__name__) - -""" -This code can only be run within docker "rouge", because of the usage of rouge-perl -""" - - -"""" In ROUGE parlance, your summaries are ‘system’ summaries and the gold standard summaries are ‘model’ summaries. -The summaries should be in separate folders, whose paths are set with the system_dir and model_dir variables. -All summaries should contain one sentence per line.""" - - -class ROUGEEval: - """ - Wrapper class for pyrouge. - Compute ROUGE given predictions and references for summarization evaluation. 
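    Example (editorial sketch, not part of the original HMNet source; the
    directories and the empty ``opt`` dict are placeholders, and the
    ROUGE-1.5.5 perl setup mentioned above must be available)::

        evaluator = ROUGEEval(run_dir='runs/exp', save_dir='runs/exp/eval', opt={})
        results = evaluator.eval(['the generated summary .'],
                                 ['the reference summary .'])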
- """ - - def __init__(self, run_dir, save_dir, opt): - self.run_dir = run_dir - self.save_dir = save_dir - self.opt = opt - - # use relative path to make it work on Philly - self.pyrouge_dir = os.path.join( - os.path.dirname(__file__), "../ThirdParty/ROUGE/ROUGE-1.5.5/" - ) - - self.eval_batches_num = self.opt.get("EVAL_BATCHES_NUM", float("Inf")) - self.best_score = -float("Inf") - self.best_res = {} - - def reset_best_score(self, set_high=False): - if set_high: - self.best_score = float("Inf") - else: - self.best_score = -float("Inf") - - def make_html_safe(self, s): - s = s.replace("<", "<") - s = s.replace(">", ">") - return s - - def print_to_rouge_dir( - self, summaries, dir, suffix, split_chars, special_char_dict=None - ): - for idx, summary in enumerate(summaries): - fname = os.path.join(dir, "%06d_%s.txt" % (idx, suffix)) - with open(fname, "wb") as f: - sents = re.split(r"(?') - # else: - # new_predicitons.append(pred) - # return new_predicitons, new_groundtruths - - def _convert_tokens_to_string(self, tokenizer, tokens): - if "EVAL_TOKENIZED" in self.opt: - tokens = [t for t in tokens if t not in tokenizer.all_special_tokens] - if "EVAL_LOWERCASE" in self.opt: - tokens = [t.lower() for t in tokens] - if "EVAL_TOKENIZED" in self.opt: - return " ".join(tokens) - else: - return tokenizer.decode( - tokenizer.convert_tokens_to_ids(tokens), skip_special_tokens=True - ) - - def eval_batches(self, module, dev_batches, save_folder, label=""): - max_sent_len = int(self.opt["MAX_GEN_LENGTH"]) - - logger.info( - "Decoding current model ... \nSaving folder is {}".format(save_folder) - ) - - predictions = [] # prediction of tokens from model - x_tokens = [] # input tokens - y_tokens = [] # groundtruths tokens - x_ids = [] # input token ids - y_ids = [] # groundtruths token ids - gts = [] # groundtruths string - got_better_score = False - # err = 0 - if not isinstance(module.tokenizer, list): - encoder_tokenizer = module.tokenizer - decoder_tokenizer = module.tokenizer - elif len(module.tokenizer) == 1: - encoder_tokenizer = module.tokenizer[0] - decoder_tokenizer = module.tokenizer[0] - elif len(module.tokenizer) == 2: - encoder_tokenizer = module.tokenizer[0] - decoder_tokenizer = module.tokenizer[1] - else: - assert False, f"len(module.tokenizer) > 2" - - with torch.no_grad(): - for j, dev_batch in enumerate(dev_batches): - for b in dev_batch: - if torch.is_tensor(dev_batch[b]): - dev_batch[b] = dev_batch[b].to(self.opt["device"]) - - beam_search_res = module( - dev_batch, beam_search=True, max_sent_len=max_sent_len - ) - pred = [ - [t[0] for t in x] if len(x) > 0 else [[]] for x in beam_search_res - ] - predictions.extend( - [ - [ - self._convert_tokens_to_string(decoder_tokenizer, tt) - for tt in t - ] - for t in pred - ] - ) - - gts.extend( - [ - self._convert_tokens_to_string(decoder_tokenizer, t) - for t in dev_batch["decoder_tokens"] - ] - ) - x_tokens.extend(dev_batch["encoder_tokens"]) - y_tokens.extend(dev_batch["decoder_tokens"]) - - if ("DEBUG" in self.opt and j >= 10) or j >= self.eval_batches_num: - # in debug mode (decode first 10 batches) ortherwise decode first self.eval_batches_num bathes - break - - # use MPI to gather results from all processes / GPUs - # the result of the gather operation is a list of sublists - # each sublist corresponds to the list created on one of the MPI processes (or GPUs, respectively) - # we flatten this list into a "simple" list - assert len(predictions) == len( - gts - ), "len(predictions): {0}, len(gts): {1}".format(len(predictions), 
len(gts)) - comm = MPI.COMM_WORLD - predictions = comm.gather(predictions, root=0) - x_tokens = comm.gather(x_tokens, root=0) - y_tokens = comm.gather(y_tokens, root=0) - # if GPU numbers are high (>=8), passing x_ids, y_ids to a rank 0 will cause out of memory - # x_ids = comm.gather(x_ids, root=0) - # y_ids = comm.gather(y_ids, root=0) - gts = comm.gather(gts, root=0) - if self.opt["rank"] == 0: - # flatten lists - predictions = [item for sublist in predictions for item in sublist] - y_tokens = [item for sublist in y_tokens for item in sublist] - x_tokens = [item for sublist in x_tokens for item in sublist] - # x_ids = [item for sublist in x_ids for item in sublist] - # y_ids = [item for sublist in y_ids for item in sublist] - gts = [item for sublist in gts for item in sublist] - # import pdb; pdb.set_trace() - assert ( - len(predictions) == len(y_tokens) == len(x_tokens) == len(gts) - ), "len(predictions): {0}, len(y_tokens): {1}, len(x_tokens): {2}, len(gts): {3}".format( - len(predictions), len(y_tokens), len(x_tokens), len(gts) - ) - - # write intermediate results only on rank 0 - if not os.path.isdir(os.path.join(save_folder, "intermediate_results")): - os.makedirs(os.path.join(save_folder, "intermediate_results")) - top_1_predictions = [pred[0] for pred in predictions] - with open( - os.path.join( - save_folder, "intermediate_results", "res_" + label + ".json" - ), - "w", - encoding="utf-8", - ) as output_file: - write_json_res( - output_file, - [encoder_tokenizer, decoder_tokenizer], - x_ids, - y_ids, - x_tokens, - y_tokens, - predictions, - gts, - ) - try: - result = self.eval(top_1_predictions, gts) - except Exception as e: - logger.exception("ROUGE Eval ERROR") - result = {} - score = -float("Inf") - pass # this happens when no overlapping between pred and gts - else: - rouge_su4 = rouge(top_1_predictions, gts) # f, prec, recall - result = { - "ROUGE_1": result["rouge_1_f_score"] * 100.0, - "ROUGE_1_Prc": result["rouge_1_precision"] * 100.0, - "ROUGE_1_Rcl": result["rouge_1_recall"] * 100.0, - "ROUGE_2": result["rouge_2_f_score"] * 100.0, - "ROUGE_2_Prc": result["rouge_2_precision"] * 100.0, - "ROUGE_2_Rcl": result["rouge_2_recall"] * 100.0, - "ROUGE_L": result["rouge_l_f_score"] * 100.0, - "ROUGE_L_Prc": result["rouge_l_precision"] * 100.0, - "ROUGE_L_Rcl": result["rouge_l_recall"] * 100.0, - "ROUGE_SU4": rouge_su4["rouge_su4_f_score"] * 100.0, - } - - score = result["ROUGE_1"] - if score > self.best_score: - copyfile( - os.path.join( - save_folder, - "intermediate_results", - "res_" + label + ".json", - ), - os.path.join( - save_folder, - "intermediate_results", - "res_" + label + ".best.json", - ), - ) - self.best_score = score - self.best_res = result - got_better_score = True - - else: - result = {} - score = -float("Inf") - got_better_score = False - - return result, score, got_better_score - - def eval(self, predictions, groundtruths): - # predictions, groundtruths = self.filter_empty(predictions, groundtruths) - predictions = [self.make_html_safe(w) for w in predictions] - groundtruths = [self.make_html_safe(w) for w in groundtruths] - pred_dir = os.path.join(self.save_dir, "predictions") - if os.path.exists(pred_dir): - shutil.rmtree(pred_dir) - os.makedirs(pred_dir) - - gt_dir = os.path.join(self.save_dir, "groundtruths") - if os.path.exists(gt_dir): - shutil.rmtree(gt_dir) - os.makedirs(gt_dir) - - special_char_dict = self.print_to_rouge_dir_gt( - groundtruths, gt_dir, "gt", "SPLIT_CHARS_FOR_EVAL" in self.opt - ) - self.print_to_rouge_dir( - predictions, - pred_dir, 
- "pred", - "SPLIT_CHARS_FOR_EVAL" in self.opt, - special_char_dict, - ) - - r = pyrouge.Rouge155(self.pyrouge_dir) - r.system_dir = pred_dir - r.model_dir = gt_dir - r.system_filename_pattern = "(\d+)_pred.txt" - r.model_filename_pattern = "[A-Z].#ID#_gt.txt" - results = r.output_to_dict(r.convert_and_evaluate()) - return results diff --git a/spaces/ali-ghamdan/realesrgan-models/docs/CONTRIBUTING.md b/spaces/ali-ghamdan/realesrgan-models/docs/CONTRIBUTING.md deleted file mode 100644 index 6638d219ffd1aba9bcad6c2a8c51659dbbe658a0..0000000000000000000000000000000000000000 --- a/spaces/ali-ghamdan/realesrgan-models/docs/CONTRIBUTING.md +++ /dev/null @@ -1,42 +0,0 @@ -# Contributing to Real-ESRGAN - -We like open-source and want to develop practical algorithms for general image restoration. However, individual strength is limited. So, any kinds of contributions are welcome, such as: - -- New features -- New models (your fine-tuned models) -- Bug fixes -- Typo fixes -- Suggestions -- Maintenance -- Documents -- *etc* - -## Workflow - -1. Fork and pull the latest Real-ESRGAN repository -1. Checkout a new branch (do not use master branch for PRs) -1. Commit your changes -1. Create a PR - -**Note**: - -1. Please check the code style and linting - 1. The style configuration is specified in [setup.cfg](setup.cfg) - 1. If you use VSCode, the settings are configured in [.vscode/settings.json](.vscode/settings.json) -1. Strongly recommend using `pre-commit hook`. It will check your code style and linting before your commit. - 1. In the root path of project folder, run `pre-commit install` - 1. The pre-commit configuration is listed in [.pre-commit-config.yaml](.pre-commit-config.yaml) -1. Better to [open a discussion](https://github.com/xinntao/Real-ESRGAN/discussions) before large changes. - 1. Welcome to discuss :sunglasses:. I will try my best to join the discussion. - -## TODO List - -:zero: The most straightforward way of improving model performance is to fine-tune on some specific datasets. - -Here are some TODOs: - -- [ ] optimize for human faces -- [ ] optimize for texts -- [ ] support controllable restoration strength - -:one: There are also [several issues](https://github.com/xinntao/Real-ESRGAN/issues) that require helpers to improve. 
If you can help, please let me know :smile: diff --git a/spaces/alx-ai/Real-ESRGAN-Demo/README.md b/spaces/alx-ai/Real-ESRGAN-Demo/README.md deleted file mode 100644 index 9194e5c042521332ec069b39c82bdfaba8babaad..0000000000000000000000000000000000000000 --- a/spaces/alx-ai/Real-ESRGAN-Demo/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Real-ESRGAN Demo for Image Restoration and Upscaling -emoji: 🖼️ -colorFrom: blue -colorTo: indigo -sdk: gradio -sdk_version: 3.3.1 -app_file: app.py -pinned: true -duplicated_from: syedusama5556/Real-ESRGAN-Demo ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/antonovmaxim/text-generation-webui-space/extensions/silero_tts/tts_preprocessor.py b/spaces/antonovmaxim/text-generation-webui-space/extensions/silero_tts/tts_preprocessor.py deleted file mode 100644 index daefdcbda6c9b20a87c6f3d84d2a759c2c51289c..0000000000000000000000000000000000000000 --- a/spaces/antonovmaxim/text-generation-webui-space/extensions/silero_tts/tts_preprocessor.py +++ /dev/null @@ -1,200 +0,0 @@ -import re - -from num2words import num2words - -punctuation = r'[\s,.?!/)\'\]>]' -alphabet_map = { - "A": " Ei ", - "B": " Bee ", - "C": " See ", - "D": " Dee ", - "E": " Eee ", - "F": " Eff ", - "G": " Jee ", - "H": " Eich ", - "I": " Eye ", - "J": " Jay ", - "K": " Kay ", - "L": " El ", - "M": " Emm ", - "N": " Enn ", - "O": " Ohh ", - "P": " Pee ", - "Q": " Queue ", - "R": " Are ", - "S": " Ess ", - "T": " Tee ", - "U": " You ", - "V": " Vee ", - "W": " Double You ", - "X": " Ex ", - "Y": " Why ", - "Z": " Zed " # Zed is weird, as I (da3dsoul) am American, but most of the voice models sound British, so it matches -} - - -def preprocess(string): - # the order for some of these matter - # For example, you need to remove the commas in numbers before expanding them - string = remove_surrounded_chars(string) - string = string.replace('"', '') - string = string.replace('\u201D', '').replace('\u201C', '') # right and left quote - string = string.replace('\u201F', '') # italic looking quote - string = string.replace('\n', ' ') - string = convert_num_locale(string) - string = replace_negative(string) - string = replace_roman(string) - string = hyphen_range_to(string) - string = num_to_words(string) - - # TODO Try to use a ML predictor to expand abbreviations. It's hard, dependent on context, and whether to actually - # try to say the abbreviation or spell it out as I've done below is not agreed upon - - # For now, expand abbreviations to pronunciations - # replace_abbreviations adds a lot of unnecessary whitespace to ensure separation - string = replace_abbreviations(string) - string = replace_lowercase_abbreviations(string) - - # cleanup whitespaces - # remove whitespace before punctuation - string = re.sub(rf'\s+({punctuation})', r'\1', string) - string = string.strip() - # compact whitespace - string = ' '.join(string.split()) - - return string - - -def remove_surrounded_chars(string): - # first this expression will check if there is a string nested exclusively between a alt= - # and a style= string. 
This would correspond to only a the alt text of an embedded image - # If it matches it will only keep that part as the string, and rend it for further processing - # Afterwards this expression matches to 'as few symbols as possible (0 upwards) between any - # asterisks' OR' as few symbols as possible (0 upwards) between an asterisk and the end of the string' - if re.search(r'(?<=alt=)(.*)(?=style=)', string, re.DOTALL): - m = re.search(r'(?<=alt=)(.*)(?=style=)', string, re.DOTALL) - string = m.group(0) - return re.sub(r'\*[^*]*?(\*|$)', '', string) - - -def convert_num_locale(text): - # This detects locale and converts it to American without comma separators - pattern = re.compile(r'(?:\s|^)\d{1,3}(?:\.\d{3})+(,\d+)(?:\s|$)') - result = text - while True: - match = pattern.search(result) - if match is None: - break - - start = match.start() - end = match.end() - result = result[0:start] + result[start:end].replace('.', '').replace(',', '.') + result[end:len(result)] - - # removes comma separators from existing American numbers - pattern = re.compile(r'(\d),(\d)') - result = pattern.sub(r'\1\2', result) - - return result - - -def replace_negative(string): - # handles situations like -5. -5 would become negative 5, which would then be expanded to negative five - return re.sub(rf'(\s)(-)(\d+)({punctuation})', r'\1negative \3\4', string) - - -def replace_roman(string): - # find a string of roman numerals. - # Only 2 or more, to avoid capturing I and single character abbreviations, like names - pattern = re.compile(rf'\s[IVXLCDM]{{2,}}{punctuation}') - result = string - while True: - match = pattern.search(result) - if match is None: - break - - start = match.start() - end = match.end() - result = result[0:start + 1] + str(roman_to_int(result[start + 1:end - 1])) + result[end - 1:len(result)] - - return result - - -def roman_to_int(s): - rom_val = {'I': 1, 'V': 5, 'X': 10, 'L': 50, 'C': 100, 'D': 500, 'M': 1000} - int_val = 0 - for i in range(len(s)): - if i > 0 and rom_val[s[i]] > rom_val[s[i - 1]]: - int_val += rom_val[s[i]] - 2 * rom_val[s[i - 1]] - else: - int_val += rom_val[s[i]] - return int_val - - -def hyphen_range_to(text): - pattern = re.compile(r'(\d+)[-–](\d+)') - result = pattern.sub(lambda x: x.group(1) + ' to ' + x.group(2), text) - return result - - -def num_to_words(text): - # 1000 or 10.23 - pattern = re.compile(r'\d+\.\d+|\d+') - result = pattern.sub(lambda x: num2words(float(x.group())), text) - return result - - -def replace_abbreviations(string): - # abbreviations 1 to 4 characters long. It will get things like A and I, but those are pronounced with their letter - pattern = re.compile(rf'(^|[\s(.\'\[<])([A-Z]{{1,4}})({punctuation}|$)') - result = string - while True: - match = pattern.search(result) - if match is None: - break - - start = match.start() - end = match.end() - result = result[0:start] + replace_abbreviation(result[start:end]) + result[end:len(result)] - - return result - - -def replace_lowercase_abbreviations(string): - # abbreviations 1 to 4 characters long, separated by dots i.e. e.g. 
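    # Editorial example, not part of the original extension (behaviour read
    # from the code below): in "see e.g. the docs" the matched span is
    # upper-cased and each letter is spelled out via alphabet_map while the
    # dots are kept, so "e.g." comes back roughly as " Eee . Jee .".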
- pattern = re.compile(rf'(^|[\s(.\'\[<])(([a-z]\.){{1,4}})({punctuation}|$)') - result = string - while True: - match = pattern.search(result) - if match is None: - break - - start = match.start() - end = match.end() - result = result[0:start] + replace_abbreviation(result[start:end].upper()) + result[end:len(result)] - - return result - - -def replace_abbreviation(string): - result = "" - for char in string: - result += match_mapping(char) - - return result - - -def match_mapping(char): - for mapping in alphabet_map.keys(): - if char == mapping: - return alphabet_map[char] - - return char - - -def __main__(args): - print(preprocess(args[1])) - - -if __name__ == "__main__": - import sys - __main__(sys.argv) diff --git a/spaces/aodianyun/stable-diffusion-webui/modules/api/api.py b/spaces/aodianyun/stable-diffusion-webui/modules/api/api.py deleted file mode 100644 index 5a9ac5f1aa745e4dd8c9ed5a107dd840f05c0ba6..0000000000000000000000000000000000000000 --- a/spaces/aodianyun/stable-diffusion-webui/modules/api/api.py +++ /dev/null @@ -1,551 +0,0 @@ -import base64 -import io -import time -import datetime -import uvicorn -from threading import Lock -from io import BytesIO -from gradio.processing_utils import decode_base64_to_file -from fastapi import APIRouter, Depends, FastAPI, HTTPException, Request, Response -from fastapi.security import HTTPBasic, HTTPBasicCredentials -from secrets import compare_digest - -import modules.shared as shared -from modules import sd_samplers, deepbooru, sd_hijack, images, scripts, ui, postprocessing -from modules.api.models import * -from modules.processing import StableDiffusionProcessingTxt2Img, StableDiffusionProcessingImg2Img, process_images -from modules.textual_inversion.textual_inversion import create_embedding, train_embedding -from modules.textual_inversion.preprocess import preprocess -from modules.hypernetworks.hypernetwork import create_hypernetwork, train_hypernetwork -from PIL import PngImagePlugin,Image -from modules.sd_models import checkpoints_list -from modules.sd_models_config import find_checkpoint_config_near_filename -from modules.realesrgan_model import get_realesrgan_models -from modules import devices -from typing import List -import piexif -import piexif.helper - -def upscaler_to_index(name: str): - try: - return [x.name.lower() for x in shared.sd_upscalers].index(name.lower()) - except: - raise HTTPException(status_code=400, detail=f"Invalid upscaler, needs to be one of these: {' , '.join([x.name for x in sd_upscalers])}") - -def script_name_to_index(name, scripts): - try: - return [script.title().lower() for script in scripts].index(name.lower()) - except: - raise HTTPException(status_code=422, detail=f"Script '{name}' not found") - -def validate_sampler_name(name): - config = sd_samplers.all_samplers_map.get(name, None) - if config is None: - raise HTTPException(status_code=404, detail="Sampler not found") - - return name - -def setUpscalers(req: dict): - reqDict = vars(req) - reqDict['extras_upscaler_1'] = reqDict.pop('upscaler_1', None) - reqDict['extras_upscaler_2'] = reqDict.pop('upscaler_2', None) - return reqDict - -def decode_base64_to_image(encoding): - if encoding.startswith("data:image/"): - encoding = encoding.split(";")[1].split(",")[1] - try: - image = Image.open(BytesIO(base64.b64decode(encoding))) - return image - except Exception as err: - raise HTTPException(status_code=500, detail="Invalid encoded image") - -def encode_pil_to_base64(image): - with io.BytesIO() as output_bytes: - - if opts.samples_format.lower() == 
'png': - use_metadata = False - metadata = PngImagePlugin.PngInfo() - for key, value in image.info.items(): - if isinstance(key, str) and isinstance(value, str): - metadata.add_text(key, value) - use_metadata = True - image.save(output_bytes, format="PNG", pnginfo=(metadata if use_metadata else None), quality=opts.jpeg_quality) - - elif opts.samples_format.lower() in ("jpg", "jpeg", "webp"): - parameters = image.info.get('parameters', None) - exif_bytes = piexif.dump({ - "Exif": { piexif.ExifIFD.UserComment: piexif.helper.UserComment.dump(parameters or "", encoding="unicode") } - }) - if opts.samples_format.lower() in ("jpg", "jpeg"): - image.save(output_bytes, format="JPEG", exif = exif_bytes, quality=opts.jpeg_quality) - else: - image.save(output_bytes, format="WEBP", exif = exif_bytes, quality=opts.jpeg_quality) - - else: - raise HTTPException(status_code=500, detail="Invalid image format") - - bytes_data = output_bytes.getvalue() - - return base64.b64encode(bytes_data) - -def api_middleware(app: FastAPI): - @app.middleware("http") - async def log_and_time(req: Request, call_next): - ts = time.time() - res: Response = await call_next(req) - duration = str(round(time.time() - ts, 4)) - res.headers["X-Process-Time"] = duration - endpoint = req.scope.get('path', 'err') - if shared.cmd_opts.api_log and endpoint.startswith('/sdapi'): - print('API {t} {code} {prot}/{ver} {method} {endpoint} {cli} {duration}'.format( - t = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S.%f"), - code = res.status_code, - ver = req.scope.get('http_version', '0.0'), - cli = req.scope.get('client', ('0:0.0.0', 0))[0], - prot = req.scope.get('scheme', 'err'), - method = req.scope.get('method', 'err'), - endpoint = endpoint, - duration = duration, - )) - return res - - -class Api: - def __init__(self, app: FastAPI, queue_lock: Lock): - if shared.cmd_opts.api_auth: - self.credentials = dict() - for auth in shared.cmd_opts.api_auth.split(","): - user, password = auth.split(":") - self.credentials[user] = password - - self.router = APIRouter() - self.app = app - self.queue_lock = queue_lock - api_middleware(self.app) - self.add_api_route("/sdapi/v1/txt2img", self.text2imgapi, methods=["POST"], response_model=TextToImageResponse) - self.add_api_route("/sdapi/v1/img2img", self.img2imgapi, methods=["POST"], response_model=ImageToImageResponse) - self.add_api_route("/sdapi/v1/extra-single-image", self.extras_single_image_api, methods=["POST"], response_model=ExtrasSingleImageResponse) - self.add_api_route("/sdapi/v1/extra-batch-images", self.extras_batch_images_api, methods=["POST"], response_model=ExtrasBatchImagesResponse) - self.add_api_route("/sdapi/v1/png-info", self.pnginfoapi, methods=["POST"], response_model=PNGInfoResponse) - self.add_api_route("/sdapi/v1/progress", self.progressapi, methods=["GET"], response_model=ProgressResponse) - self.add_api_route("/sdapi/v1/interrogate", self.interrogateapi, methods=["POST"]) - self.add_api_route("/sdapi/v1/interrupt", self.interruptapi, methods=["POST"]) - self.add_api_route("/sdapi/v1/skip", self.skip, methods=["POST"]) - self.add_api_route("/sdapi/v1/options", self.get_config, methods=["GET"], response_model=OptionsModel) - self.add_api_route("/sdapi/v1/options", self.set_config, methods=["POST"]) - self.add_api_route("/sdapi/v1/cmd-flags", self.get_cmd_flags, methods=["GET"], response_model=FlagsModel) - self.add_api_route("/sdapi/v1/samplers", self.get_samplers, methods=["GET"], response_model=List[SamplerItem]) - self.add_api_route("/sdapi/v1/upscalers", 
self.get_upscalers, methods=["GET"], response_model=List[UpscalerItem]) - self.add_api_route("/sdapi/v1/sd-models", self.get_sd_models, methods=["GET"], response_model=List[SDModelItem]) - self.add_api_route("/sdapi/v1/hypernetworks", self.get_hypernetworks, methods=["GET"], response_model=List[HypernetworkItem]) - self.add_api_route("/sdapi/v1/face-restorers", self.get_face_restorers, methods=["GET"], response_model=List[FaceRestorerItem]) - self.add_api_route("/sdapi/v1/realesrgan-models", self.get_realesrgan_models, methods=["GET"], response_model=List[RealesrganItem]) - self.add_api_route("/sdapi/v1/prompt-styles", self.get_prompt_styles, methods=["GET"], response_model=List[PromptStyleItem]) - self.add_api_route("/sdapi/v1/embeddings", self.get_embeddings, methods=["GET"], response_model=EmbeddingsResponse) - self.add_api_route("/sdapi/v1/refresh-checkpoints", self.refresh_checkpoints, methods=["POST"]) - self.add_api_route("/sdapi/v1/create/embedding", self.create_embedding, methods=["POST"], response_model=CreateResponse) - self.add_api_route("/sdapi/v1/create/hypernetwork", self.create_hypernetwork, methods=["POST"], response_model=CreateResponse) - self.add_api_route("/sdapi/v1/preprocess", self.preprocess, methods=["POST"], response_model=PreprocessResponse) - self.add_api_route("/sdapi/v1/train/embedding", self.train_embedding, methods=["POST"], response_model=TrainResponse) - self.add_api_route("/sdapi/v1/train/hypernetwork", self.train_hypernetwork, methods=["POST"], response_model=TrainResponse) - self.add_api_route("/sdapi/v1/memory", self.get_memory, methods=["GET"], response_model=MemoryResponse) - - def add_api_route(self, path: str, endpoint, **kwargs): - if shared.cmd_opts.api_auth: - return self.app.add_api_route(path, endpoint, dependencies=[Depends(self.auth)], **kwargs) - return self.app.add_api_route(path, endpoint, **kwargs) - - def auth(self, credentials: HTTPBasicCredentials = Depends(HTTPBasic())): - if credentials.username in self.credentials: - if compare_digest(credentials.password, self.credentials[credentials.username]): - return True - - raise HTTPException(status_code=401, detail="Incorrect username or password", headers={"WWW-Authenticate": "Basic"}) - - def get_script(self, script_name, script_runner): - if script_name is None: - return None, None - - if not script_runner.scripts: - script_runner.initialize_scripts(False) - ui.create_ui() - - script_idx = script_name_to_index(script_name, script_runner.selectable_scripts) - script = script_runner.selectable_scripts[script_idx] - return script, script_idx - - def text2imgapi(self, txt2imgreq: StableDiffusionTxt2ImgProcessingAPI): - script, script_idx = self.get_script(txt2imgreq.script_name, scripts.scripts_txt2img) - - populate = txt2imgreq.copy(update={ # Override __init__ params - "sampler_name": validate_sampler_name(txt2imgreq.sampler_name or txt2imgreq.sampler_index), - "do_not_save_samples": True, - "do_not_save_grid": True - } - ) - if populate.sampler_name: - populate.sampler_index = None # prevent a warning later on - - args = vars(populate) - args.pop('script_name', None) - - with self.queue_lock: - p = StableDiffusionProcessingTxt2Img(sd_model=shared.sd_model, **args) - - shared.state.begin() - if script is not None: - p.outpath_grids = opts.outdir_txt2img_grids - p.outpath_samples = opts.outdir_txt2img_samples - p.script_args = [script_idx + 1] + [None] * (script.args_from - 1) + p.script_args - processed = scripts.scripts_txt2img.run(p, *p.script_args) - else: - processed = 
process_images(p) - shared.state.end() - - b64images = list(map(encode_pil_to_base64, processed.images)) - - return TextToImageResponse(images=b64images, parameters=vars(txt2imgreq), info=processed.js()) - - def img2imgapi(self, img2imgreq: StableDiffusionImg2ImgProcessingAPI): - init_images = img2imgreq.init_images - if init_images is None: - raise HTTPException(status_code=404, detail="Init image not found") - - script, script_idx = self.get_script(img2imgreq.script_name, scripts.scripts_img2img) - - mask = img2imgreq.mask - if mask: - mask = decode_base64_to_image(mask) - - populate = img2imgreq.copy(update={ # Override __init__ params - "sampler_name": validate_sampler_name(img2imgreq.sampler_name or img2imgreq.sampler_index), - "do_not_save_samples": True, - "do_not_save_grid": True, - "mask": mask - } - ) - if populate.sampler_name: - populate.sampler_index = None # prevent a warning later on - - args = vars(populate) - args.pop('include_init_images', None) # this is meant to be done by "exclude": True in model, but it's for a reason that I cannot determine. - args.pop('script_name', None) - - with self.queue_lock: - p = StableDiffusionProcessingImg2Img(sd_model=shared.sd_model, **args) - p.init_images = [decode_base64_to_image(x) for x in init_images] - - shared.state.begin() - if script is not None: - p.outpath_grids = opts.outdir_img2img_grids - p.outpath_samples = opts.outdir_img2img_samples - p.script_args = [script_idx + 1] + [None] * (script.args_from - 1) + p.script_args - processed = scripts.scripts_img2img.run(p, *p.script_args) - else: - processed = process_images(p) - shared.state.end() - - b64images = list(map(encode_pil_to_base64, processed.images)) - - if not img2imgreq.include_init_images: - img2imgreq.init_images = None - img2imgreq.mask = None - - return ImageToImageResponse(images=b64images, parameters=vars(img2imgreq), info=processed.js()) - - def extras_single_image_api(self, req: ExtrasSingleImageRequest): - reqDict = setUpscalers(req) - - reqDict['image'] = decode_base64_to_image(reqDict['image']) - - with self.queue_lock: - result = postprocessing.run_extras(extras_mode=0, image_folder="", input_dir="", output_dir="", save_output=False, **reqDict) - - return ExtrasSingleImageResponse(image=encode_pil_to_base64(result[0][0]), html_info=result[1]) - - def extras_batch_images_api(self, req: ExtrasBatchImagesRequest): - reqDict = setUpscalers(req) - - def prepareFiles(file): - file = decode_base64_to_file(file.data, file_path=file.name) - file.orig_name = file.name - return file - - reqDict['image_folder'] = list(map(prepareFiles, reqDict['imageList'])) - reqDict.pop('imageList') - - with self.queue_lock: - result = postprocessing.run_extras(extras_mode=1, image="", input_dir="", output_dir="", save_output=False, **reqDict) - - return ExtrasBatchImagesResponse(images=list(map(encode_pil_to_base64, result[0])), html_info=result[1]) - - def pnginfoapi(self, req: PNGInfoRequest): - if(not req.image.strip()): - return PNGInfoResponse(info="") - - image = decode_base64_to_image(req.image.strip()) - if image is None: - return PNGInfoResponse(info="") - - geninfo, items = images.read_info_from_image(image) - if geninfo is None: - geninfo = "" - - items = {**{'parameters': geninfo}, **items} - - return PNGInfoResponse(info=geninfo, items=items) - - def progressapi(self, req: ProgressRequest = Depends()): - # copy from check_progress_call of ui.py - - if shared.state.job_count == 0: - return ProgressResponse(progress=0, eta_relative=0, state=shared.state.dict(), 
textinfo=shared.state.textinfo) - - # avoid dividing zero - progress = 0.01 - - if shared.state.job_count > 0: - progress += shared.state.job_no / shared.state.job_count - if shared.state.sampling_steps > 0: - progress += 1 / shared.state.job_count * shared.state.sampling_step / shared.state.sampling_steps - - time_since_start = time.time() - shared.state.time_start - eta = (time_since_start/progress) - eta_relative = eta-time_since_start - - progress = min(progress, 1) - - shared.state.set_current_image() - - current_image = None - if shared.state.current_image and not req.skip_current_image: - current_image = encode_pil_to_base64(shared.state.current_image) - - return ProgressResponse(progress=progress, eta_relative=eta_relative, state=shared.state.dict(), current_image=current_image, textinfo=shared.state.textinfo) - - def interrogateapi(self, interrogatereq: InterrogateRequest): - image_b64 = interrogatereq.image - if image_b64 is None: - raise HTTPException(status_code=404, detail="Image not found") - - img = decode_base64_to_image(image_b64) - img = img.convert('RGB') - - # Override object param - with self.queue_lock: - if interrogatereq.model == "clip": - processed = shared.interrogator.interrogate(img) - elif interrogatereq.model == "deepdanbooru": - processed = deepbooru.model.tag(img) - else: - raise HTTPException(status_code=404, detail="Model not found") - - return InterrogateResponse(caption=processed) - - def interruptapi(self): - shared.state.interrupt() - - return {} - - def skip(self): - shared.state.skip() - - def get_config(self): - options = {} - for key in shared.opts.data.keys(): - metadata = shared.opts.data_labels.get(key) - if(metadata is not None): - options.update({key: shared.opts.data.get(key, shared.opts.data_labels.get(key).default)}) - else: - options.update({key: shared.opts.data.get(key, None)}) - - return options - - def set_config(self, req: Dict[str, Any]): - for k, v in req.items(): - shared.opts.set(k, v) - - shared.opts.save(shared.config_filename) - return - - def get_cmd_flags(self): - return vars(shared.cmd_opts) - - def get_samplers(self): - return [{"name": sampler[0], "aliases":sampler[2], "options":sampler[3]} for sampler in sd_samplers.all_samplers] - - def get_upscalers(self): - return [ - { - "name": upscaler.name, - "model_name": upscaler.scaler.model_name, - "model_path": upscaler.data_path, - "model_url": None, - "scale": upscaler.scale, - } - for upscaler in shared.sd_upscalers - ] - - def get_sd_models(self): - return [{"title": x.title, "model_name": x.model_name, "hash": x.shorthash, "sha256": x.sha256, "filename": x.filename, "config": find_checkpoint_config_near_filename(x)} for x in checkpoints_list.values()] - - def get_hypernetworks(self): - return [{"name": name, "path": shared.hypernetworks[name]} for name in shared.hypernetworks] - - def get_face_restorers(self): - return [{"name":x.name(), "cmd_dir": getattr(x, "cmd_dir", None)} for x in shared.face_restorers] - - def get_realesrgan_models(self): - return [{"name":x.name,"path":x.data_path, "scale":x.scale} for x in get_realesrgan_models(None)] - - def get_prompt_styles(self): - styleList = [] - for k in shared.prompt_styles.styles: - style = shared.prompt_styles.styles[k] - styleList.append({"name":style[0], "prompt": style[1], "negative_prompt": style[2]}) - - return styleList - - def get_embeddings(self): - db = sd_hijack.model_hijack.embedding_db - - def convert_embedding(embedding): - return { - "step": embedding.step, - "sd_checkpoint": embedding.sd_checkpoint, - 
"sd_checkpoint_name": embedding.sd_checkpoint_name, - "shape": embedding.shape, - "vectors": embedding.vectors, - } - - def convert_embeddings(embeddings): - return {embedding.name: convert_embedding(embedding) for embedding in embeddings.values()} - - return { - "loaded": convert_embeddings(db.word_embeddings), - "skipped": convert_embeddings(db.skipped_embeddings), - } - - def refresh_checkpoints(self): - shared.refresh_checkpoints() - - def create_embedding(self, args: dict): - try: - shared.state.begin() - filename = create_embedding(**args) # create empty embedding - sd_hijack.model_hijack.embedding_db.load_textual_inversion_embeddings() # reload embeddings so new one can be immediately used - shared.state.end() - return CreateResponse(info = "create embedding filename: {filename}".format(filename = filename)) - except AssertionError as e: - shared.state.end() - return TrainResponse(info = "create embedding error: {error}".format(error = e)) - - def create_hypernetwork(self, args: dict): - try: - shared.state.begin() - filename = create_hypernetwork(**args) # create empty embedding - shared.state.end() - return CreateResponse(info = "create hypernetwork filename: {filename}".format(filename = filename)) - except AssertionError as e: - shared.state.end() - return TrainResponse(info = "create hypernetwork error: {error}".format(error = e)) - - def preprocess(self, args: dict): - try: - shared.state.begin() - preprocess(**args) # quick operation unless blip/booru interrogation is enabled - shared.state.end() - return PreprocessResponse(info = 'preprocess complete') - except KeyError as e: - shared.state.end() - return PreprocessResponse(info = "preprocess error: invalid token: {error}".format(error = e)) - except AssertionError as e: - shared.state.end() - return PreprocessResponse(info = "preprocess error: {error}".format(error = e)) - except FileNotFoundError as e: - shared.state.end() - return PreprocessResponse(info = 'preprocess error: {error}'.format(error = e)) - - def train_embedding(self, args: dict): - try: - shared.state.begin() - apply_optimizations = shared.opts.training_xattention_optimizations - error = None - filename = '' - if not apply_optimizations: - sd_hijack.undo_optimizations() - try: - embedding, filename = train_embedding(**args) # can take a long time to complete - except Exception as e: - error = e - finally: - if not apply_optimizations: - sd_hijack.apply_optimizations() - shared.state.end() - return TrainResponse(info = "train embedding complete: filename: {filename} error: {error}".format(filename = filename, error = error)) - except AssertionError as msg: - shared.state.end() - return TrainResponse(info = "train embedding error: {msg}".format(msg = msg)) - - def train_hypernetwork(self, args: dict): - try: - shared.state.begin() - shared.loaded_hypernetworks = [] - apply_optimizations = shared.opts.training_xattention_optimizations - error = None - filename = '' - if not apply_optimizations: - sd_hijack.undo_optimizations() - try: - hypernetwork, filename = train_hypernetwork(**args) - except Exception as e: - error = e - finally: - shared.sd_model.cond_stage_model.to(devices.device) - shared.sd_model.first_stage_model.to(devices.device) - if not apply_optimizations: - sd_hijack.apply_optimizations() - shared.state.end() - return TrainResponse(info="train embedding complete: filename: {filename} error: {error}".format(filename=filename, error=error)) - except AssertionError as msg: - shared.state.end() - return TrainResponse(info="train embedding error: 
{error}".format(error=error)) - - def get_memory(self): - try: - import os, psutil - process = psutil.Process(os.getpid()) - res = process.memory_info() # only rss is cross-platform guaranteed so we dont rely on other values - ram_total = 100 * res.rss / process.memory_percent() # and total memory is calculated as actual value is not cross-platform safe - ram = { 'free': ram_total - res.rss, 'used': res.rss, 'total': ram_total } - except Exception as err: - ram = { 'error': f'{err}' } - try: - import torch - if torch.cuda.is_available(): - s = torch.cuda.mem_get_info() - system = { 'free': s[0], 'used': s[1] - s[0], 'total': s[1] } - s = dict(torch.cuda.memory_stats(shared.device)) - allocated = { 'current': s['allocated_bytes.all.current'], 'peak': s['allocated_bytes.all.peak'] } - reserved = { 'current': s['reserved_bytes.all.current'], 'peak': s['reserved_bytes.all.peak'] } - active = { 'current': s['active_bytes.all.current'], 'peak': s['active_bytes.all.peak'] } - inactive = { 'current': s['inactive_split_bytes.all.current'], 'peak': s['inactive_split_bytes.all.peak'] } - warnings = { 'retries': s['num_alloc_retries'], 'oom': s['num_ooms'] } - cuda = { - 'system': system, - 'active': active, - 'allocated': allocated, - 'reserved': reserved, - 'inactive': inactive, - 'events': warnings, - } - else: - cuda = { 'error': 'unavailable' } - except Exception as err: - cuda = { 'error': f'{err}' } - return MemoryResponse(ram = ram, cuda = cuda) - - def launch(self, server_name, port): - self.app.include_router(self.router) - uvicorn.run(self.app, host=server_name, port=port) diff --git a/spaces/aodianyun/stable-diffusion-webui/modules/prompt_parser.py b/spaces/aodianyun/stable-diffusion-webui/modules/prompt_parser.py deleted file mode 100644 index a7bbfa4ea73cbfcb6da0e1012ac166042b6fae08..0000000000000000000000000000000000000000 --- a/spaces/aodianyun/stable-diffusion-webui/modules/prompt_parser.py +++ /dev/null @@ -1,373 +0,0 @@ -import re -from collections import namedtuple -from typing import List -import lark - -# a prompt like this: "fantasy landscape with a [mountain:lake:0.25] and [an oak:a christmas tree:0.75][ in foreground::0.6][ in background:0.25] [shoddy:masterful:0.5]" -# will be represented with prompt_schedule like this (assuming steps=100): -# [25, 'fantasy landscape with a mountain and an oak in foreground shoddy'] -# [50, 'fantasy landscape with a lake and an oak in foreground in background shoddy'] -# [60, 'fantasy landscape with a lake and an oak in foreground in background masterful'] -# [75, 'fantasy landscape with a lake and an oak in background masterful'] -# [100, 'fantasy landscape with a lake and a christmas tree in background masterful'] - -schedule_parser = lark.Lark(r""" -!start: (prompt | /[][():]/+)* -prompt: (emphasized | scheduled | alternate | plain | WHITESPACE)* -!emphasized: "(" prompt ")" - | "(" prompt ":" prompt ")" - | "[" prompt "]" -scheduled: "[" [prompt ":"] prompt ":" [WHITESPACE] NUMBER "]" -alternate: "[" prompt ("|" prompt)+ "]" -WHITESPACE: /\s+/ -plain: /([^\\\[\]():|]|\\.)+/ -%import common.SIGNED_NUMBER -> NUMBER -""") - -def get_learned_conditioning_prompt_schedules(prompts, steps): - """ - >>> g = lambda p: get_learned_conditioning_prompt_schedules([p], 10)[0] - >>> g("test") - [[10, 'test']] - >>> g("a [b:3]") - [[3, 'a '], [10, 'a b']] - >>> g("a [b: 3]") - [[3, 'a '], [10, 'a b']] - >>> g("a [[[b]]:2]") - [[2, 'a '], [10, 'a [[b]]']] - >>> g("[(a:2):3]") - [[3, ''], [10, '(a:2)']] - >>> g("a [b : c : 1] d") - [[1, 'a b d'], [10, 'a c 
d']] - >>> g("a[b:[c:d:2]:1]e") - [[1, 'abe'], [2, 'ace'], [10, 'ade']] - >>> g("a [unbalanced") - [[10, 'a [unbalanced']] - >>> g("a [b:.5] c") - [[5, 'a c'], [10, 'a b c']] - >>> g("a [{b|d{:.5] c") # not handling this right now - [[5, 'a c'], [10, 'a {b|d{ c']] - >>> g("((a][:b:c [d:3]") - [[3, '((a][:b:c '], [10, '((a][:b:c d']] - >>> g("[a|(b:1.1)]") - [[1, 'a'], [2, '(b:1.1)'], [3, 'a'], [4, '(b:1.1)'], [5, 'a'], [6, '(b:1.1)'], [7, 'a'], [8, '(b:1.1)'], [9, 'a'], [10, '(b:1.1)']] - """ - - def collect_steps(steps, tree): - l = [steps] - class CollectSteps(lark.Visitor): - def scheduled(self, tree): - tree.children[-1] = float(tree.children[-1]) - if tree.children[-1] < 1: - tree.children[-1] *= steps - tree.children[-1] = min(steps, int(tree.children[-1])) - l.append(tree.children[-1]) - def alternate(self, tree): - l.extend(range(1, steps+1)) - CollectSteps().visit(tree) - return sorted(set(l)) - - def at_step(step, tree): - class AtStep(lark.Transformer): - def scheduled(self, args): - before, after, _, when = args - yield before or () if step <= when else after - def alternate(self, args): - yield next(args[(step - 1)%len(args)]) - def start(self, args): - def flatten(x): - if type(x) == str: - yield x - else: - for gen in x: - yield from flatten(gen) - return ''.join(flatten(args)) - def plain(self, args): - yield args[0].value - def __default__(self, data, children, meta): - for child in children: - yield child - return AtStep().transform(tree) - - def get_schedule(prompt): - try: - tree = schedule_parser.parse(prompt) - except lark.exceptions.LarkError as e: - if 0: - import traceback - traceback.print_exc() - return [[steps, prompt]] - return [[t, at_step(t, tree)] for t in collect_steps(steps, tree)] - - promptdict = {prompt: get_schedule(prompt) for prompt in set(prompts)} - return [promptdict[prompt] for prompt in prompts] - - -ScheduledPromptConditioning = namedtuple("ScheduledPromptConditioning", ["end_at_step", "cond"]) - - -def get_learned_conditioning(model, prompts, steps): - """converts a list of prompts into a list of prompt schedules - each schedule is a list of ScheduledPromptConditioning, specifying the comdition (cond), - and the sampling step at which this condition is to be replaced by the next one. 
- - Input: - (model, ['a red crown', 'a [blue:green:5] jeweled crown'], 20) - - Output: - [ - [ - ScheduledPromptConditioning(end_at_step=20, cond=tensor([[-0.3886, 0.0229, -0.0523, ..., -0.4901, -0.3066, 0.0674], ..., [ 0.3317, -0.5102, -0.4066, ..., 0.4119, -0.7647, -1.0160]], device='cuda:0')) - ], - [ - ScheduledPromptConditioning(end_at_step=5, cond=tensor([[-0.3886, 0.0229, -0.0522, ..., -0.4901, -0.3067, 0.0673], ..., [-0.0192, 0.3867, -0.4644, ..., 0.1135, -0.3696, -0.4625]], device='cuda:0')), - ScheduledPromptConditioning(end_at_step=20, cond=tensor([[-0.3886, 0.0229, -0.0522, ..., -0.4901, -0.3067, 0.0673], ..., [-0.7352, -0.4356, -0.7888, ..., 0.6994, -0.4312, -1.2593]], device='cuda:0')) - ] - ] - """ - res = [] - - prompt_schedules = get_learned_conditioning_prompt_schedules(prompts, steps) - cache = {} - - for prompt, prompt_schedule in zip(prompts, prompt_schedules): - - cached = cache.get(prompt, None) - if cached is not None: - res.append(cached) - continue - - texts = [x[1] for x in prompt_schedule] - conds = model.get_learned_conditioning(texts) - - cond_schedule = [] - for i, (end_at_step, text) in enumerate(prompt_schedule): - cond_schedule.append(ScheduledPromptConditioning(end_at_step, conds[i])) - - cache[prompt] = cond_schedule - res.append(cond_schedule) - - return res - - -re_AND = re.compile(r"\bAND\b") -re_weight = re.compile(r"^(.*?)(?:\s*:\s*([-+]?(?:\d+\.?|\d*\.\d+)))?\s*$") - -def get_multicond_prompt_list(prompts): - res_indexes = [] - - prompt_flat_list = [] - prompt_indexes = {} - - for prompt in prompts: - subprompts = re_AND.split(prompt) - - indexes = [] - for subprompt in subprompts: - match = re_weight.search(subprompt) - - text, weight = match.groups() if match is not None else (subprompt, 1.0) - - weight = float(weight) if weight is not None else 1.0 - - index = prompt_indexes.get(text, None) - if index is None: - index = len(prompt_flat_list) - prompt_flat_list.append(text) - prompt_indexes[text] = index - - indexes.append((index, weight)) - - res_indexes.append(indexes) - - return res_indexes, prompt_flat_list, prompt_indexes - - -class ComposableScheduledPromptConditioning: - def __init__(self, schedules, weight=1.0): - self.schedules: List[ScheduledPromptConditioning] = schedules - self.weight: float = weight - - -class MulticondLearnedConditioning: - def __init__(self, shape, batch): - self.shape: tuple = shape # the shape field is needed to send this object to DDIM/PLMS - self.batch: List[List[ComposableScheduledPromptConditioning]] = batch - -def get_multicond_learned_conditioning(model, prompts, steps) -> MulticondLearnedConditioning: - """same as get_learned_conditioning, but returns a list of ScheduledPromptConditioning along with the weight objects for each prompt. - For each prompt, the list is obtained by splitting the prompt using the AND separator. 
- - https://energy-based-model.github.io/Compositional-Visual-Generation-with-Composable-Diffusion-Models/ - """ - - res_indexes, prompt_flat_list, prompt_indexes = get_multicond_prompt_list(prompts) - - learned_conditioning = get_learned_conditioning(model, prompt_flat_list, steps) - - res = [] - for indexes in res_indexes: - res.append([ComposableScheduledPromptConditioning(learned_conditioning[i], weight) for i, weight in indexes]) - - return MulticondLearnedConditioning(shape=(len(prompts),), batch=res) - - -def reconstruct_cond_batch(c: List[List[ScheduledPromptConditioning]], current_step): - param = c[0][0].cond - res = torch.zeros((len(c),) + param.shape, device=param.device, dtype=param.dtype) - for i, cond_schedule in enumerate(c): - target_index = 0 - for current, (end_at, cond) in enumerate(cond_schedule): - if current_step <= end_at: - target_index = current - break - res[i] = cond_schedule[target_index].cond - - return res - - -def reconstruct_multicond_batch(c: MulticondLearnedConditioning, current_step): - param = c.batch[0][0].schedules[0].cond - - tensors = [] - conds_list = [] - - for batch_no, composable_prompts in enumerate(c.batch): - conds_for_batch = [] - - for cond_index, composable_prompt in enumerate(composable_prompts): - target_index = 0 - for current, (end_at, cond) in enumerate(composable_prompt.schedules): - if current_step <= end_at: - target_index = current - break - - conds_for_batch.append((len(tensors), composable_prompt.weight)) - tensors.append(composable_prompt.schedules[target_index].cond) - - conds_list.append(conds_for_batch) - - # if prompts have wildly different lengths above the limit we'll get tensors fo different shapes - # and won't be able to torch.stack them. So this fixes that. - token_count = max([x.shape[0] for x in tensors]) - for i in range(len(tensors)): - if tensors[i].shape[0] != token_count: - last_vector = tensors[i][-1:] - last_vector_repeated = last_vector.repeat([token_count - tensors[i].shape[0], 1]) - tensors[i] = torch.vstack([tensors[i], last_vector_repeated]) - - return conds_list, torch.stack(tensors).to(device=param.device, dtype=param.dtype) - - -re_attention = re.compile(r""" -\\\(| -\\\)| -\\\[| -\\]| -\\\\| -\\| -\(| -\[| -:([+-]?[.\d]+)\)| -\)| -]| -[^\\()\[\]:]+| -: -""", re.X) - -re_break = re.compile(r"\s*\bBREAK\b\s*", re.S) - -def parse_prompt_attention(text): - """ - Parses a string with attention tokens and returns a list of pairs: text and its associated weight. 
- Accepted tokens are: - (abc) - increases attention to abc by a multiplier of 1.1 - (abc:3.12) - increases attention to abc by a multiplier of 3.12 - [abc] - decreases attention to abc by a multiplier of 1.1 - \( - literal character '(' - \[ - literal character '[' - \) - literal character ')' - \] - literal character ']' - \\ - literal character '\' - anything else - just text - - >>> parse_prompt_attention('normal text') - [['normal text', 1.0]] - >>> parse_prompt_attention('an (important) word') - [['an ', 1.0], ['important', 1.1], [' word', 1.0]] - >>> parse_prompt_attention('(unbalanced') - [['unbalanced', 1.1]] - >>> parse_prompt_attention('\(literal\]') - [['(literal]', 1.0]] - >>> parse_prompt_attention('(unnecessary)(parens)') - [['unnecessaryparens', 1.1]] - >>> parse_prompt_attention('a (((house:1.3)) [on] a (hill:0.5), sun, (((sky))).') - [['a ', 1.0], - ['house', 1.5730000000000004], - [' ', 1.1], - ['on', 1.0], - [' a ', 1.1], - ['hill', 0.55], - [', sun, ', 1.1], - ['sky', 1.4641000000000006], - ['.', 1.1]] - """ - - res = [] - round_brackets = [] - square_brackets = [] - - round_bracket_multiplier = 1.1 - square_bracket_multiplier = 1 / 1.1 - - def multiply_range(start_position, multiplier): - for p in range(start_position, len(res)): - res[p][1] *= multiplier - - for m in re_attention.finditer(text): - text = m.group(0) - weight = m.group(1) - - if text.startswith('\\'): - res.append([text[1:], 1.0]) - elif text == '(': - round_brackets.append(len(res)) - elif text == '[': - square_brackets.append(len(res)) - elif weight is not None and len(round_brackets) > 0: - multiply_range(round_brackets.pop(), float(weight)) - elif text == ')' and len(round_brackets) > 0: - multiply_range(round_brackets.pop(), round_bracket_multiplier) - elif text == ']' and len(square_brackets) > 0: - multiply_range(square_brackets.pop(), square_bracket_multiplier) - else: - parts = re.split(re_break, text) - for i, part in enumerate(parts): - if i > 0: - res.append(["BREAK", -1]) - res.append([part, 1.0]) - - for pos in round_brackets: - multiply_range(pos, round_bracket_multiplier) - - for pos in square_brackets: - multiply_range(pos, square_bracket_multiplier) - - if len(res) == 0: - res = [["", 1.0]] - - # merge runs of identical weights - i = 0 - while i + 1 < len(res): - if res[i][1] == res[i + 1][1]: - res[i][0] += res[i + 1][0] - res.pop(i + 1) - else: - i += 1 - - return res - -if __name__ == "__main__": - import doctest - doctest.testmod(optionflags=doctest.NORMALIZE_WHITESPACE) -else: - import torch # doctest faster diff --git a/spaces/appl044/Chat-GPT-LangChain/azure_utils.py b/spaces/appl044/Chat-GPT-LangChain/azure_utils.py deleted file mode 100644 index 4173eaa689abe9b7b6b66ed3fcf1ede591655a53..0000000000000000000000000000000000000000 --- a/spaces/appl044/Chat-GPT-LangChain/azure_utils.py +++ /dev/null @@ -1,155 +0,0 @@ -# This class stores Azure voice data. Specifically, the class stores several records containing -# language, lang_code, gender, voice_id and engine. The class also has a method to return the -# voice_id, lang_code and engine given a language and gender. 
- -NEURAL_ENGINE = "neural" -STANDARD_ENGINE = "standard" - - -class AzureVoiceData: - def get_voice(self, language, gender): - for voice in self.voice_data: - if voice['language'] == language and voice['gender'] == gender: - return voice['azure_voice'] - return None - - def __init__(self): - self.voice_data = [ - {'language': 'Arabic', - 'azure_voice': 'ar-EG-ShakirNeural', - 'gender': 'Male'}, - {'language': 'Arabic (Gulf)', - 'azure_voice': 'ar-KW-FahedNeural', - 'gender': 'Male'}, - {'language': 'Catalan', - 'azure_voice': 'ca-ES-EnricNeural', - 'gender': 'Male'}, - {'language': 'Chinese (Cantonese)', - 'azure_voice': 'yue-CN-YunSongNeural', - 'gender': 'Male'}, - {'language': 'Chinese (Mandarin)', - 'azure_voice': 'zh-CN-YunxiNeural', - 'gender': 'Male'}, - {'language': 'Danish', - 'azure_voice': 'da-DK-JeppeNeural', - 'gender': 'Male'}, - {'language': 'Dutch', - 'azure_voice': 'nl-NL-MaartenNeural', - 'gender': 'Male'}, - {'language': 'English (Australian)', - 'azure_voice': 'en-AU-KenNeural', - 'gender': 'Male'}, - {'language': 'English (British)', - 'azure_voice': 'en-GB-RyanNeural', - 'gender': 'Male'}, - {'language': 'English (Indian)', - 'azure_voice': 'en-IN-PrabhatNeural', - 'gender': 'Male'}, - {'language': 'English (New Zealand)', - 'azure_voice': 'en-NZ-MitchellNeural', - 'gender': 'Male'}, - {'language': 'English (South African)', - 'azure_voice': 'en-ZA-LukeNeural', - 'gender': 'Male'}, - {'language': 'English (US)', - 'azure_voice': 'en-US-ChristopherNeural', - 'gender': 'Male'}, - {'language': 'English (Welsh)', - 'azure_voice': 'cy-GB-AledNeural', - 'gender': 'Male'}, - {'language': 'Finnish', - 'azure_voice': 'fi-FI-HarriNeural', - 'gender': 'Male'}, - {'language': 'French', - 'azure_voice': 'fr-FR-HenriNeural', - 'gender': 'Male'}, - {'language': 'French (Canadian)', - 'azure_voice': 'fr-CA-AntoineNeural', - 'gender': 'Male'}, - {'language': 'German', - 'azure_voice': 'de-DE-KlausNeural', - 'gender': 'Male'}, - {'language': 'German (Austrian)', - 'azure_voice': 'de-AT-JonasNeural', - 'gender': 'Male'}, - {'language': 'Hindi', - 'azure_voice': 'hi-IN-MadhurNeural', - 'gender': 'Male'}, - {'language': 'Icelandic', - 'azure_voice': 'is-IS-GunnarNeural', - 'gender': 'Male'}, - {'language': 'Italian', - 'azure_voice': 'it-IT-GianniNeural', - 'gender': 'Male'}, - {'language': 'Japanese', - 'azure_voice': 'ja-JP-KeitaNeural', - 'gender': 'Male'}, - {'language': 'Korean', - 'azure_voice': 'ko-KR-GookMinNeural', - 'gender': 'Male'}, - {'language': 'Norwegian', - 'azure_voice': 'nb-NO-FinnNeural', - 'gender': 'Male'}, - {'language': 'Polish', - 'azure_voice': 'pl-PL-MarekNeural', - 'gender': 'Male'}, - {'language': 'Portuguese (Brazilian)', - 'azure_voice': 'pt-BR-NicolauNeural', - 'gender': 'Male'}, - {'language': 'Portuguese (European)', - 'azure_voice': 'pt-PT-DuarteNeural', - 'gender': 'Male'}, - {'language': 'Romanian', - 'azure_voice': 'ro-RO-EmilNeural', - 'gender': 'Male'}, - {'language': 'Russian', - 'azure_voice': 'ru-RU-DmitryNeural', - 'gender': 'Male'}, - {'language': 'Spanish (European)', - 'azure_voice': 'es-ES-TeoNeural', - 'gender': 'Male'}, - {'language': 'Spanish (Mexican)', - 'azure_voice': 'es-MX-LibertoNeural', - 'gender': 'Male'}, - {'language': 'Spanish (US)', - 'azure_voice': 'es-US-AlonsoNeural"', - 'gender': 'Male'}, - {'language': 'Swedish', - 'azure_voice': 'sv-SE-MattiasNeural', - 'gender': 'Male'}, - {'language': 'Turkish', - 'azure_voice': 'tr-TR-AhmetNeural', - 'gender': 'Male'}, - {'language': 'Welsh', - 'azure_voice': 'cy-GB-AledNeural', - 
'gender': 'Male'}, - ] - - -# Run from the command-line -if __name__ == '__main__': - azure_voice_data = AzureVoiceData() - - azure_voice = azure_voice_data.get_voice('English (US)', 'Male') - print('English (US)', 'Male', azure_voice) - - azure_voice = azure_voice_data.get_voice('English (US)', 'Female') - print('English (US)', 'Female', azure_voice) - - azure_voice = azure_voice_data.get_voice('French', 'Female') - print('French', 'Female', azure_voice) - - azure_voice = azure_voice_data.get_voice('French', 'Male') - print('French', 'Male', azure_voice) - - azure_voice = azure_voice_data.get_voice('Japanese', 'Female') - print('Japanese', 'Female', azure_voice) - - azure_voice = azure_voice_data.get_voice('Japanese', 'Male') - print('Japanese', 'Male', azure_voice) - - azure_voice = azure_voice_data.get_voice('Hindi', 'Female') - print('Hindi', 'Female', azure_voice) - - azure_voice = azure_voice_data.get_voice('Hindi', 'Male') - print('Hindi', 'Male', azure_voice) diff --git a/spaces/arch-123/bingo/src/components/welcome-screen.tsx b/spaces/arch-123/bingo/src/components/welcome-screen.tsx deleted file mode 100644 index f7449fcbb6c621875e235db98f2790bf7894fb0a..0000000000000000000000000000000000000000 --- a/spaces/arch-123/bingo/src/components/welcome-screen.tsx +++ /dev/null @@ -1,34 +0,0 @@ -import { useBing } from '@/lib/hooks/use-bing' - -const exampleMessages = [ - { - heading: '🧐 提出复杂问题', - message: `我可以为我挑剔的只吃橙色食物的孩子做什么饭?` - }, - { - heading: '🙌 获取更好的答案', - message: '销量最高的 3 种宠物吸尘器有哪些优点和缺点?' - }, - { - heading: '🎨 获得创意灵感', - message: `以海盗的口吻写一首关于外太空鳄鱼的俳句` - } -] - -export function WelcomeScreen({ setInput }: Pick, 'setInput'>) { - return ( -
    - {exampleMessages.map(example => ( - - ))} -
    - ) -} diff --git a/spaces/arxify/RVC-beta-v2-0618/Changelog_CN.md b/spaces/arxify/RVC-beta-v2-0618/Changelog_CN.md deleted file mode 100644 index 42a71ee366a0c21afc0c8e05a42cd8508aa2db0a..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/Changelog_CN.md +++ /dev/null @@ -1,80 +0,0 @@
-### 20230618 update
-- Added two new v2 pretrained models at 32k and 48k
-- Fixed an inference error with non-f0 models
-- For training sets longer than one hour, the index-building step now automatically shrinks the features with kmeans to speed up index training, insertion and querying
-- Bundled a toy repository for converting vocals to guitar
-- Data preprocessing now removes outlier slices
-- Added an onnx export tab
-
-Failed experiments:
-- ~~Adding a temporal dimension to feature retrieval: dropped, no noticeable benefit~~
-- ~~Optional PCAR dimensionality reduction for feature retrieval: dropped; with large datasets kmeans already shrinks the data, and with small datasets the reduction takes longer than the matching time it saves~~
-- ~~onnx inference support (with an inference-only mini package): dropped, generating nsf still requires pytorch~~
-- ~~Randomly augmenting the training input in pitch, gender, eq, noise, etc.: dropped, no noticeable benefit~~
-
-todolist:
-- Investigate integrating a small vocoder
-- Support crepe for training-set pitch detection
-- Sync crepe precision with RVC-config
-- Hook up the F0 editor
-
-
-### 20230528 update
-- Added a v2 jupyter notebook and a Korean changelog, plus some environment dependencies
-- Added protection modes for breathing, voiceless consonants and sibilants
-- Support crepe-full inference
-- UVR5 vocal/accompaniment separation gains 3 de-delay models and an MDX-Net de-reverb model; added the HP3 vocal extraction model
-- Index names now include the version and experiment name
-- Added audio export format options for vocal/accompaniment separation and batch inference export
-- Deprecated training of 32k models
-
-### 20230513 update
-- Removed the infer_pack and uvr5_pack left over from old runtimes inside the one-click package
-- Fixed the fake multiprocessing bug in training-set preprocessing
-- Harvest pitch detection can optionally apply a median filter to reduce muted-sound artifacts, with an adjustable filter radius
-- Added post-processing resampling for exported audio
-- The training n_cpu setting now controls "data preprocessing and f0 extraction" instead of "f0 extraction only"
-- Automatically detect index paths under the logs folder and offer them in a dropdown list
-- Added an "FAQ" tab (see also the github rvc wiki)
-- Inference now caches the pitch for input audio at the same path (purpose: with harvest pitch extraction the whole pipeline goes through a long, repeated pitch-extraction step; without the cache, users experimenting with different voices, indexes or median-filter radii would face a very painful wait after the first test)
-
-### 20230514 update
-- Mix the output volume envelope with the input's (helps with the "silent input produces faint noise" problem; not recommended if the input has loud background noise; off by default, a value of 1 counts as off)
-- Support saving extracted small models at a chosen interval (handy if you want to test inference at different epochs without keeping every large checkpoint and manually extracting a small model from the ckpt each time)
-- Fixed browser connection errors caused by a system-wide proxy on the server by setting environment variables
-- Support v2 pretrained models (currently only the 40k version is public for testing; the other two sample rates are not fully trained yet)
-- Clamp volumes above 1 before inference
-- Fine-tuned data preprocessing parameters
-
-
-### 20230409 update
-- Corrected training parameters and raised average GPU utilization: A100 from 25% up to about 90%, V100 from 50% to about 90%, 2060S from 60% to about 85%, P40 from 25% to about 95%; training is significantly faster
-- Parameter change: total batch_size is now the per-GPU batch_size
-- total_epoch: maximum raised from 100 to 1000; default raised from 10 to 20
-- Fixed a ckpt-extraction bug that misdetected whether a model uses pitch, causing abnormal inference
-- Fixed distributed training saving a ckpt on every rank
-- Filter out nan features during feature extraction
-- Fixed silent input producing random consonants or noise (models trained on old versions need a rebuilt training set and retraining)
-
-### 20230416 update
-- Added a local real-time voice-changing mini GUI; launch it by double-clicking go-realtime-gui.bat
-- Both training and inference filter out frequencies below 50Hz
-- Lowered the pyworld minimum pitch for training and inference from the default 80 to 50, so male voices between 50 and 80hz are no longer muted
-- The WebUI switches language based on the system locale (currently en_US, ja_JP, zh_CN, zh_HK, zh_SG, zh_TW; unsupported locales default to en_US)
-- Fixed recognition of some GPUs (e.g. V100-16G and P4 were not detected)
-
-### 20230428 update
-- Upgraded the faiss index settings for higher speed and quality
-- Removed the total_npy dependency; sharing a model no longer requires a total_npy file
-- Unlocked the 16-series restriction; GPUs with 4GB of VRAM get 4GB inference settings
-- Fixed a UVR5 vocal/accompaniment separation bug with some audio formats
-- The real-time mini gui now supports non-40k models and models trained without pitch guidance
-
-### Future plans:
-Features:
-- Support a multi-speaker training tab (up to 4 speakers)
-
-Base models:
-- Collect breathing wav files to add to the training set, to fix breathing turning into electronic noise
-- We are training base models with an added singing-voice training set; they will be released in the future
-
 diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Cython/Compiler/Errors.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Cython/Compiler/Errors.py deleted file mode 100644 index 9761b52c32fd14c30784654db79fd5e406a73c7b..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Cython/Compiler/Errors.py +++ /dev/null @@ -1,265 +0,0 @@ -# -# Errors -# - -from __future__ import absolute_import - -try: - from __builtin__ import basestring as any_string_type -except ImportError: - any_string_type = (bytes, str) - -import sys -from contextlib import contextmanager - -from ..Utils import open_new_file -from . import DebugFlags -from . 
import Options - - -class PyrexError(Exception): - pass - - -class PyrexWarning(Exception): - pass - - -def context(position): - source = position[0] - assert not (isinstance(source, any_string_type)), ( - "Please replace filename strings with Scanning.FileSourceDescriptor instances %r" % source) - try: - F = source.get_lines() - except UnicodeDecodeError: - # file has an encoding problem - s = u"[unprintable code]\n" - else: - s = u''.join(F[max(0, position[1]-6):position[1]]) - s = u'...\n%s%s^\n' % (s, u' '*(position[2]-1)) - s = u'%s\n%s%s\n' % (u'-'*60, s, u'-'*60) - return s - -def format_position(position): - if position: - return u"%s:%d:%d: " % (position[0].get_error_description(), - position[1], position[2]) - return u'' - -def format_error(message, position): - if position: - pos_str = format_position(position) - cont = context(position) - message = u'\nError compiling Cython file:\n%s\n%s%s' % (cont, pos_str, message or u'') - return message - -class CompileError(PyrexError): - - def __init__(self, position = None, message = u""): - self.position = position - self.message_only = message - self.formatted_message = format_error(message, position) - self.reported = False - # Deprecated and withdrawn in 2.6: - # self.message = message - Exception.__init__(self, self.formatted_message) - # Python Exception subclass pickling is broken, - # see http://bugs.python.org/issue1692335 - self.args = (position, message) - - def __str__(self): - return self.formatted_message - -class CompileWarning(PyrexWarning): - - def __init__(self, position = None, message = ""): - self.position = position - # Deprecated and withdrawn in 2.6: - # self.message = message - Exception.__init__(self, format_position(position) + message) - -class InternalError(Exception): - # If this is ever raised, there is a bug in the compiler. - - def __init__(self, message): - self.message_only = message - Exception.__init__(self, u"Internal compiler error: %s" - % message) - -class AbortError(Exception): - # Throw this to stop the compilation immediately. - - def __init__(self, message): - self.message_only = message - Exception.__init__(self, u"Abort error: %s" % message) - -class CompilerCrash(CompileError): - # raised when an unexpected exception occurs in a transform - def __init__(self, pos, context, message, cause, stacktrace=None): - if message: - message = u'\n' + message - else: - message = u'\n' - self.message_only = message - if context: - message = u"Compiler crash in %s%s" % (context, message) - if stacktrace: - import traceback - message += ( - u'\n\nCompiler crash traceback from this point on:\n' + - u''.join(traceback.format_tb(stacktrace))) - if cause: - if not stacktrace: - message += u'\n' - message += u'%s: %s' % (cause.__class__.__name__, cause) - CompileError.__init__(self, pos, message) - # Python Exception subclass pickling is broken, - # see http://bugs.python.org/issue1692335 - self.args = (pos, context, message, cause, stacktrace) - -class NoElementTreeInstalledException(PyrexError): - """raised when the user enabled options.gdb_debug but no ElementTree - implementation was found - """ - -listing_file = None -num_errors = 0 -echo_file = None - -def open_listing_file(path, echo_to_stderr = 1): - # Begin a new error listing. If path is None, no file - # is opened, the error counter is just reset. 
- global listing_file, num_errors, echo_file - if path is not None: - listing_file = open_new_file(path) - else: - listing_file = None - if echo_to_stderr: - echo_file = sys.stderr - else: - echo_file = None - num_errors = 0 - -def close_listing_file(): - global listing_file - if listing_file: - listing_file.close() - listing_file = None - -def report_error(err, use_stack=True): - if error_stack and use_stack: - error_stack[-1].append(err) - else: - global num_errors - # See Main.py for why dual reporting occurs. Quick fix for now. - if err.reported: return - err.reported = True - try: line = u"%s\n" % err - except UnicodeEncodeError: - # Python <= 2.5 does this for non-ASCII Unicode exceptions - line = format_error(getattr(err, 'message_only', "[unprintable exception message]"), - getattr(err, 'position', None)) + u'\n' - if listing_file: - try: listing_file.write(line) - except UnicodeEncodeError: - listing_file.write(line.encode('ASCII', 'replace')) - if echo_file: - try: echo_file.write(line) - except UnicodeEncodeError: - echo_file.write(line.encode('ASCII', 'replace')) - num_errors += 1 - if Options.fast_fail: - raise AbortError("fatal errors") - - -def error(position, message): - #print("Errors.error:", repr(position), repr(message)) ### - if position is None: - raise InternalError(message) - err = CompileError(position, message) - if DebugFlags.debug_exception_on_error: raise Exception(err) # debug - report_error(err) - return err - - -LEVEL = 1 # warn about all errors level 1 or higher - - -def message(position, message, level=1): - if level < LEVEL: - return - warn = CompileWarning(position, message) - line = "note: %s\n" % warn - if listing_file: - listing_file.write(line) - if echo_file: - echo_file.write(line) - return warn - - -def warning(position, message, level=0): - if level < LEVEL: - return - if Options.warning_errors and position: - return error(position, message) - warn = CompileWarning(position, message) - line = "warning: %s\n" % warn - if listing_file: - listing_file.write(line) - if echo_file: - echo_file.write(line) - return warn - - -_warn_once_seen = {} -def warn_once(position, message, level=0): - if level < LEVEL or message in _warn_once_seen: - return - warn = CompileWarning(position, message) - line = "warning: %s\n" % warn - if listing_file: - listing_file.write(line) - if echo_file: - echo_file.write(line) - _warn_once_seen[message] = True - return warn - - -# These functions can be used to momentarily suppress errors. 
- -error_stack = [] - - -def hold_errors(): - error_stack.append([]) - - -def release_errors(ignore=False): - held_errors = error_stack.pop() - if not ignore: - for err in held_errors: - report_error(err) - - -def held_errors(): - return error_stack[-1] - - -# same as context manager: - -@contextmanager -def local_errors(ignore=False): - errors = [] - error_stack.append(errors) - try: - yield errors - finally: - release_errors(ignore=ignore) - - -# this module needs a redesign to support parallel cythonisation, but -# for now, the following works at least in sequential compiler runs - -def reset(): - _warn_once_seen.clear() - del error_stack[:] diff --git a/spaces/awacke1/VoiceChatGPT-13/app.py b/spaces/awacke1/VoiceChatGPT-13/app.py deleted file mode 100644 index 6e8f5891c60cca7c440adb4f8fc7a1e85915972c..0000000000000000000000000000000000000000 --- a/spaces/awacke1/VoiceChatGPT-13/app.py +++ /dev/null @@ -1,434 +0,0 @@ -import streamlit as st -import openai -import os -import base64 -import glob -import json -import mistune -import pytz -import math -import requests -import time - -from datetime import datetime -from openai import ChatCompletion -from xml.etree import ElementTree as ET -from bs4 import BeautifulSoup -from collections import deque -from audio_recorder_streamlit import audio_recorder - -def generate_filename(prompt, file_type): - central = pytz.timezone('US/Central') - safe_date_time = datetime.now(central).strftime("%m%d_%I%M") - safe_prompt = "".join(x for x in prompt if x.isalnum())[:45] - return f"{safe_date_time}_{safe_prompt}.{file_type}" - -def transcribe_audio(openai_key, file_path, model): - OPENAI_API_URL = "https://api.openai.com/v1/audio/transcriptions" - headers = { - "Authorization": f"Bearer {openai_key}", - } - with open(file_path, 'rb') as f: - data = {'file': f} - response = requests.post(OPENAI_API_URL, headers=headers, files=data, data={'model': model}) - if response.status_code == 200: - st.write(response.json()) - - response2 = chat_with_model(response.json().get('text'), '') # ************************************* - st.write('Responses:') - #st.write(response) - st.write(response2) - return response.json().get('text') - else: - st.write(response.json()) - st.error("Error in API call.") - return None - -def save_and_play_audio(audio_recorder): - audio_bytes = audio_recorder() - if audio_bytes: - filename = generate_filename("Recording", "wav") - with open(filename, 'wb') as f: - f.write(audio_bytes) - st.audio(audio_bytes, format="audio/wav") - return filename - return None - -def create_file(filename, prompt, response): - if filename.endswith(".txt"): - with open(filename, 'w') as file: - file.write(f"{prompt}\n{response}") - elif filename.endswith(".htm"): - with open(filename, 'w') as file: - file.write(f"{prompt} {response}") - elif filename.endswith(".md"): - with open(filename, 'w') as file: - file.write(f"{prompt}\n\n{response}") - -def truncate_document(document, length): - return document[:length] -def divide_document(document, max_length): - return [document[i:i+max_length] for i in range(0, len(document), max_length)] - -def get_table_download_link(file_path): - with open(file_path, 'r') as file: - data = file.read() - b64 = base64.b64encode(data.encode()).decode() - file_name = os.path.basename(file_path) - ext = os.path.splitext(file_name)[1] # get the file extension - if ext == '.txt': - mime_type = 'text/plain' - elif ext == '.py': - mime_type = 'text/plain' - elif ext == '.xlsx': - mime_type = 'text/plain' - elif ext == '.csv': - 
mime_type = 'text/plain' - elif ext == '.htm': - mime_type = 'text/html' - elif ext == '.md': - mime_type = 'text/markdown' - else: - mime_type = 'application/octet-stream' # general binary data type - href = f'{file_name}' - return href - -def CompressXML(xml_text): - root = ET.fromstring(xml_text) - for elem in list(root.iter()): - if isinstance(elem.tag, str) and 'Comment' in elem.tag: - elem.parent.remove(elem) - return ET.tostring(root, encoding='unicode', method="xml") - -def read_file_content(file,max_length): - if file.type == "application/json": - content = json.load(file) - return str(content) - elif file.type == "text/html" or file.type == "text/htm": - content = BeautifulSoup(file, "html.parser") - return content.text - elif file.type == "application/xml" or file.type == "text/xml": - tree = ET.parse(file) - root = tree.getroot() - xml = CompressXML(ET.tostring(root, encoding='unicode')) - return xml - elif file.type == "text/markdown" or file.type == "text/md": - md = mistune.create_markdown() - content = md(file.read().decode()) - return content - elif file.type == "audio/wav": - #0628 - if file is not None: - transcription = transcribe_audio(openai.api_key, file, "whisper-1") - st.write(transcription) - gptOutput = chat_with_model(transcription, '', model_choice) # ************************************* - filename = generate_filename(transcription, choice) - create_file(filename, transcription, gptOutput) - st.sidebar.markdown(get_table_download_link(filename), unsafe_allow_html=True) - return transcription - elif file.type == "text/plain": - return file.getvalue().decode() - else: - return "" - -def chat_with_model(prompt, document_section, model_choice='gpt-3.5-turbo'): - model = model_choice - conversation = [{'role': 'system', 'content': 'You are a helpful assistant.'}] - conversation.append({'role': 'user', 'content': prompt}) - if len(document_section)>0: - conversation.append({'role': 'assistant', 'content': document_section}) - - # iterate through the stream of events - start_time = time.time() - - - report = [] - res_box = st.empty() - - collected_chunks = [] - collected_messages = [] - - for chunk in openai.ChatCompletion.create( - model='gpt-3.5-turbo', - messages=conversation, - temperature=0.5, - stream=True - ): - - collected_chunks.append(chunk) # save the event response - chunk_message = chunk['choices'][0]['delta'] # extract the message - collected_messages.append(chunk_message) # save the message - - content=chunk["choices"][0].get("delta",{}).get("content") - - try: - report.append(content) - if len(content) > 0: - result = "".join(report).strip() - #result = result.replace("\n", "") - res_box.markdown(f'*{result}*') - except: - st.write('.') - - full_reply_content = ''.join([m.get('content', '') for m in collected_messages]) - #st.write(f"Full conversation received: {full_reply_content}") - st.write("Elapsed time:") - st.write(time.time() - start_time) - return full_reply_content - -def chat_with_file_contents(prompt, file_content, model_choice='gpt-3.5-turbo'): - conversation = [{'role': 'system', 'content': 'You are a helpful assistant.'}] - conversation.append({'role': 'user', 'content': prompt}) - if len(file_content)>0: - conversation.append({'role': 'assistant', 'content': file_content}) - response = openai.ChatCompletion.create(model=model_choice, messages=conversation) - return response['choices'][0]['message']['content'] - - -def main(): - # Sidebar and global - openai.api_key = os.getenv('OPENAI_KEY') - #st.set_page_config(page_title="GPT 
Streamlit Document Reasoner",layout="wide") - menu = ["htm", "txt", "xlsx", "csv", "md", "py"] #619 - choice = st.sidebar.selectbox("Output File Type:", menu) - model_choice = st.sidebar.radio("Select Model:", ('gpt-3.5-turbo', 'gpt-3.5-turbo-0301')) - - # Audio, transcribe, GPT: - filename = save_and_play_audio(audio_recorder) - if filename is not None: - transcription = transcribe_audio(openai.api_key, filename, "whisper-1") - st.write(transcription) - gptOutput = chat_with_model(transcription, '', model_choice) # ************************************* - filename = generate_filename(transcription, choice) - create_file(filename, transcription, gptOutput) - st.sidebar.markdown(get_table_download_link(filename), unsafe_allow_html=True) - - - user_prompt = st.text_area("Enter prompts, instructions & questions:", '', height=100) - - collength, colupload = st.columns([2,3]) # adjust the ratio as needed - with collength: - #max_length = 12000 - optimal for gpt35 turbo. 2x=24000 for gpt4. 8x=96000 for gpt4-32k. - max_length = st.slider("File section length for large files", min_value=1000, max_value=128000, value=12000, step=1000) - with colupload: - uploaded_file = st.file_uploader("Add a file for context:", type=["xml", "json", "xlsx","csv","html", "htm", "md", "txt", "wav"]) - - document_sections = deque() - document_responses = {} - - if uploaded_file is not None: - file_content = read_file_content(uploaded_file, max_length) - document_sections.extend(divide_document(file_content, max_length)) - - if len(document_sections) > 0: - - if st.button("👁️ View Upload"): - st.markdown("**Sections of the uploaded file:**") - for i, section in enumerate(list(document_sections)): - st.markdown(f"**Section {i+1}**\n{section}") - - st.markdown("**Chat with the model:**") - for i, section in enumerate(list(document_sections)): - if i in document_responses: - st.markdown(f"**Section {i+1}**\n{document_responses[i]}") - else: - if st.button(f"Chat about Section {i+1}"): - st.write('Reasoning with your inputs...') - response = chat_with_model(user_prompt, section, model_choice) # ************************************* - st.write('Response:') - st.write(response) - document_responses[i] = response - filename = generate_filename(f"{user_prompt}_section_{i+1}", choice) - create_file(filename, user_prompt, response) - st.sidebar.markdown(get_table_download_link(filename), unsafe_allow_html=True) - - if st.button('💬 Chat'): - st.write('Reasoning with your inputs...') - response = chat_with_model(user_prompt, ''.join(list(document_sections,)), model_choice) # ************************************* - st.write('Response:') - st.write(response) - - filename = generate_filename(user_prompt, choice) - create_file(filename, user_prompt, response) - st.sidebar.markdown(get_table_download_link(filename), unsafe_allow_html=True) - - all_files = glob.glob("*.*") - all_files = [file for file in all_files if len(os.path.splitext(file)[0]) >= 20] # exclude files with short names - all_files.sort(key=lambda x: (os.path.splitext(x)[1], x), reverse=True) # sort by file type and file name in descending order - - # sidebar of files - file_contents='' - next_action='' - for file in all_files: - col1, col2, col3, col4, col5 = st.sidebar.columns([1,6,1,1,1]) # adjust the ratio as needed - with col1: - if st.button("🌐", key="md_"+file): # md emoji button - with open(file, 'r') as f: - file_contents = f.read() - next_action='md' - with col2: - st.markdown(get_table_download_link(file), unsafe_allow_html=True) - with col3: - if 
st.button("📂", key="open_"+file): # open emoji button - with open(file, 'r') as f: - file_contents = f.read() - next_action='open' - with col4: - if st.button("🔍", key="read_"+file): # search emoji button - with open(file, 'r') as f: - file_contents = f.read() - next_action='search' - with col5: - if st.button("🗑", key="delete_"+file): - os.remove(file) - st.experimental_rerun() - - if len(file_contents) > 0: - if next_action=='open': - file_content_area = st.text_area("File Contents:", file_contents, height=500) - if next_action=='md': - st.markdown(file_contents) - if next_action=='search': - file_content_area = st.text_area("File Contents:", file_contents, height=500) - st.write('Reasoning with your inputs...') - #response = chat_with_file_contents(user_prompt, file_contents) - response = chat_with_model(user_prompt, file_contents, model_choice) - st.write('Response:') - st.write(response) - filename = generate_filename(file_content_area, choice) - create_file(filename, file_content_area, response) - st.sidebar.markdown(get_table_download_link(filename), unsafe_allow_html=True) - - - - - -from langchain.chains import ConversationChain -from langchain.chains.conversation.memory import ConversationEntityMemory -from langchain.chains.conversation.prompt import ENTITY_MEMORY_CONVERSATION_TEMPLATE -from langchain.llms import OpenAI - -if "generated" not in st.session_state: - st.session_state["generated"] = [] - -if "past" not in st.session_state: - st.session_state["past"] = [] - -if "input" not in st.session_state: - st.session_state["input"] = "" - -if "stored_session" not in st.session_state: - st.session_state["stored_session"] = [] - - -# Define function to get user input -def get_text(): - """ - Get the user input text. - - Returns: - (str): The text entered by the user - """ - input_text = st.text_input("You: ", st.session_state["input"], key="input", - placeholder="Your AI assistant here! Ask me anything ...", - label_visibility='hidden') - return input_text - -# Define function to start a new chat -def new_chat(): - """ - Clears session state and starts a new chat. 
- """ - save = [] - for i in range(len(st.session_state['generated'])-1, -1, -1): - save.append("User:" + st.session_state["past"][i]) - save.append("Bot:" + st.session_state["generated"][i]) - st.session_state["stored_session"].append(save) - st.session_state["generated"] = [] - st.session_state["past"] = [] - st.session_state["input"] = "" - st.session_state.entity_memory.entity_store = {} - st.session_state.entity_memory.buffer.clear() - -# Set up sidebar with various options -with st.sidebar.expander("🛠️ ", expanded=False): - # Option to preview memory store - if st.checkbox("Preview memory store"): - with st.expander("Memory-Store", expanded=False): - st.session_state.entity_memory.store - # Option to preview memory buffer - if st.checkbox("Preview memory buffer"): - with st.expander("Bufffer-Store", expanded=False): - st.session_state.entity_memory.buffer - MODEL = st.selectbox(label='Model', options=['gpt-3.5-turbo','text-davinci-003','text-davinci-002','code-davinci-002']) - K = st.number_input(' (#)Summary of prompts to consider',min_value=3,max_value=1000) - -# Set up the Streamlit app layout -#st.title("🤖 Chat Bot with 🧠") -#st.subheader(" Powered by 🦜 LangChain + OpenAI + Streamlit") - -# Ask the user to enter their OpenAI API key -#API_O = st.sidebar.text_input("API-KEY", type="password") -API_O = os.getenv('OPENAI_KEY') - -# Session state storage would be ideal -if API_O: - # Create an OpenAI instance - llm = OpenAI(temperature=0, - openai_api_key=API_O, - model_name=MODEL, - verbose=False) - - # Create a ConversationEntityMemory object if not already created - if 'entity_memory' not in st.session_state: - st.session_state.entity_memory = ConversationEntityMemory(llm=llm, k=K ) - - # Create the ConversationChain object with the specified configuration - Conversation = ConversationChain( - llm=llm, - prompt=ENTITY_MEMORY_CONVERSATION_TEMPLATE, - memory=st.session_state.entity_memory - ) - - -# Add a button to start a new chat -st.sidebar.button("Embedding Memory Chat", on_click = new_chat, type='primary') - -# Get the user input -user_input = get_text() - -# Generate the output using the ConversationChain object and the user input, and add the input/output to the session -if user_input: - output = Conversation.run(input=user_input) - st.session_state.past.append(user_input) - st.session_state.generated.append(output) - -# Allow to download as well -download_str = [] -# Display the conversation history using an expander, and allow the user to download it -with st.expander("Conversation", expanded=True): - for i in range(len(st.session_state['generated'])-1, -1, -1): - st.info(st.session_state["past"][i],icon="🧐") - st.success(st.session_state["generated"][i], icon="🤖") - download_str.append(st.session_state["past"][i]) - download_str.append(st.session_state["generated"][i]) - - # Can throw error - requires fix - download_str = '\n'.join(download_str) - if download_str: - st.download_button('Download',download_str) - -# Display stored conversation sessions in the sidebar -for i, sublist in enumerate(st.session_state.stored_session): - with st.sidebar.expander(label= f"Conversation-Session:{i}"): - st.write(sublist) - -# Allow the user to clear all stored conversation sessions -if st.session_state.stored_session: - if st.sidebar.checkbox("Clear-all"): - del st.session_state.stored_session - - - -if __name__ == "__main__": - main() \ No newline at end of file diff --git a/spaces/awsaf49/gcvit-tf/gcvit/layers/embedding.py b/spaces/awsaf49/gcvit-tf/gcvit/layers/embedding.py deleted 
file mode 100644 index f194fc090eb4d1f385f23a50f3c4b020e31439c9..0000000000000000000000000000000000000000 --- a/spaces/awsaf49/gcvit-tf/gcvit/layers/embedding.py +++ /dev/null @@ -1,27 +0,0 @@ -import tensorflow as tf - -from .feature import ReduceSize - - -@tf.keras.utils.register_keras_serializable(package="gcvit") -class Stem(tf.keras.layers.Layer): - def __init__(self, dim, **kwargs): - super().__init__(**kwargs) - self.dim = dim - - def build(self, input_shape): - self.pad = tf.keras.layers.ZeroPadding2D(1, name='pad') - self.proj = tf.keras.layers.Conv2D(self.dim, kernel_size=3, strides=2, name='proj') - self.conv_down = ReduceSize(keep_dim=True, name='conv_down') - super().build(input_shape) - - def call(self, inputs, **kwargs): - x = self.pad(inputs) - x = self.proj(x) - x = self.conv_down(x) - return x - - def get_config(self): - config = super().get_config() - config.update({'dim': self.dim}) - return config \ No newline at end of file diff --git a/spaces/banana-projects/coref/deploy.sh b/spaces/banana-projects/coref/deploy.sh deleted file mode 100644 index ca3a6fb40ced87bcca68b1b16f1af1bf9582e652..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/coref/deploy.sh +++ /dev/null @@ -1,9 +0,0 @@ -#!/bin/bash -git pull - -# Front -bower install -npm install -grunt -tsc - diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/loaders/AssimpJSONLoader.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/loaders/AssimpJSONLoader.js deleted file mode 100644 index 7cfb78293b3e34a17b4dee53d3a6ce20e48b29f8..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/js/loaders/AssimpJSONLoader.js +++ /dev/null @@ -1,293 +0,0 @@ -/** - * @author Alexander Gessler / http://www.greentoken.de/ - * https://github.com/acgessler - * - * Loader for models imported with Open Asset Import Library (http://assimp.sf.net) - * through assimp2json (https://github.com/acgessler/assimp2json). - * - * Supports any input format that assimp supports, including 3ds, obj, dae, blend, - * fbx, x, ms3d, lwo (and many more). - * - * See webgl_loader_assimp2json example. - */ - -THREE.AssimpJSONLoader = function ( manager ) { - - this.manager = ( manager !== undefined ) ? manager : THREE.DefaultLoadingManager; - -}; - -THREE.AssimpJSONLoader.prototype = { - - constructor: THREE.AssimpJSONLoader, - - crossOrigin: 'anonymous', - - load: function ( url, onLoad, onProgress, onError ) { - - var scope = this; - - var path = ( scope.path === undefined ) ? THREE.LoaderUtils.extractUrlBase( url ) : scope.path; - - var loader = new THREE.FileLoader( this.manager ); - loader.setPath( scope.path ); - loader.load( url, function ( text ) { - - var json = JSON.parse( text ); - var metadata = json.__metadata__; - - // check if __metadata__ meta header is present - // this header is used to disambiguate between different JSON-based file formats - - if ( typeof metadata !== 'undefined' ) { - - // check if assimp2json at all - - if ( metadata.format !== 'assimp2json' ) { - - onError( 'THREE.AssimpJSONLoader: Not an assimp2json scene.' ); - return; - - // check major format version - - } else if ( metadata.version < 100 && metadata.version >= 200 ) { - - onError( 'THREE.AssimpJSONLoader: Unsupported assimp2json file format version.' 
); - return; - - } - - } - - onLoad( scope.parse( json, path ) ); - - }, onProgress, onError ); - - }, - - setPath: function ( value ) { - - this.path = value; - return this; - - }, - - setResourcePath: function ( value ) { - - this.resourcePath = value; - return this; - - }, - - setCrossOrigin: function ( value ) { - - this.crossOrigin = value; - return this; - - }, - - parse: function ( json, path ) { - - function parseList( json, handler ) { - - var meshes = new Array( json.length ); - - for ( var i = 0; i < json.length; ++ i ) { - - meshes[ i ] = handler.call( this, json[ i ] ); - - } - - return meshes; - - } - - function parseMesh( json ) { - - var geometry = new THREE.BufferGeometry(); - - var i, l, face; - - var indices = []; - - var vertices = json.vertices || []; - var normals = json.normals || []; - var uvs = json.texturecoords || []; - var colors = json.colors || []; - - uvs = uvs[ 0 ] || []; // only support for a single set of uvs - - for ( i = 0, l = json.faces.length; i < l; i ++ ) { - - face = json.faces[ i ]; - indices.push( face[ 0 ], face[ 1 ], face[ 2 ] ); - - } - - geometry.setIndex( indices ); - geometry.addAttribute( 'position', new THREE.Float32BufferAttribute( vertices, 3 ) ); - - if ( normals.length > 0 ) { - - geometry.addAttribute( 'normal', new THREE.Float32BufferAttribute( normals, 3 ) ); - - } - - if ( uvs.length > 0 ) { - - geometry.addAttribute( 'uv', new THREE.Float32BufferAttribute( uvs, 2 ) ); - - } - - if ( colors.length > 0 ) { - - geometry.addAttribute( 'color', new THREE.Float32BufferAttribute( colors, 3 ) ); - - } - - geometry.computeBoundingSphere(); - - return geometry; - - } - - function parseMaterial( json ) { - - var material = new THREE.MeshPhongMaterial(); - - for ( var i in json.properties ) { - - var property = json.properties[ i ]; - var key = property.key; - var value = property.value; - - switch ( key ) { - - case '$tex.file': { - - var semantic = property.semantic; - - // prop.semantic gives the type of the texture - // 1: diffuse - // 2: specular map - // 4: emissive map - // 5: height map (bumps) - // 6: normal map - // more values (i.e. environment, etc) are known by assimp and may be relevant - - if ( semantic === 1 || semantic === 2 || semantic === 4 || semantic === 5 || semantic === 6 ) { - - var keyname; - - switch ( semantic ) { - - case 1: - keyname = 'map'; - break; - case 2: - keyname = 'specularMap'; - break; - case 4: - keyname = 'emissiveMap'; - break; - case 5: - keyname = 'bumpMap'; - break; - case 6: - keyname = 'normalMap'; - break; - - } - - var texture = textureLoader.load( value ); - - // TODO: read texture settings from assimp. - // Wrapping is the default, though. - - texture.wrapS = texture.wrapT = THREE.RepeatWrapping; - - material[ keyname ] = texture; - - } - - break; - - } - - case '?mat.name': - material.name = value; - break; - - case '$clr.diffuse': - material.color.fromArray( value ); - break; - - case '$clr.specular': - material.specular.fromArray( value ); - break; - - case '$clr.emissive': - material.emissive.fromArray( value ); - break; - - case '$mat.shininess': - material.shininess = value; - break; - - case '$mat.shadingm': - // aiShadingMode_Flat - material.flatShading = ( value === 1 ) ? 
true : false; - break; - - case '$mat.opacity': - if ( value < 1 ) { - - material.opacity = value; - material.transparent = true; - - } - break; - - } - - } - - return material; - - } - - function parseObject( json, node, meshes, materials ) { - - var obj = new THREE.Object3D(), i, idx; - - obj.name = node.name || ''; - obj.matrix = new THREE.Matrix4().fromArray( node.transformation ).transpose(); - obj.matrix.decompose( obj.position, obj.quaternion, obj.scale ); - - for ( i = 0; node.meshes && i < node.meshes.length; i ++ ) { - - idx = node.meshes[ i ]; - obj.add( new THREE.Mesh( meshes[ idx ], materials[ json.meshes[ idx ].materialindex ] ) ); - - } - - for ( i = 0; node.children && i < node.children.length; i ++ ) { - - obj.add( parseObject( json, node.children[ i ], meshes, materials ) ); - - } - - return obj; - - } - - var textureLoader = new THREE.TextureLoader( this.manager ); - textureLoader.setPath( this.resourcePath || path ).setCrossOrigin( this.crossOrigin ); - - var meshes = parseList( json.meshes, parseMesh ); - var materials = parseList( json.materials, parseMaterial ); - return parseObject( json, json.rootnode, meshes, materials ); - - } - -}; diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/nodes/effects/BlurNode.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/nodes/effects/BlurNode.js deleted file mode 100644 index 80c8d4b5769998e3fc631ce1326a0fe6eda1e49d..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/js/nodes/effects/BlurNode.js +++ /dev/null @@ -1,168 +0,0 @@ -/** - * @author sunag / http://www.sunag.com.br/ - */ - -import { TempNode } from '../core/TempNode.js'; -import { FunctionNode } from '../core/FunctionNode.js'; -import { FloatNode } from '../inputs/FloatNode.js'; -import { Vector2Node } from '../inputs/Vector2Node.js'; -import { UVNode } from '../accessors/UVNode.js'; - -function BlurNode( value, uv, radius, size ) { - - TempNode.call( this, 'v4' ); - - this.value = value; - this.uv = uv || new UVNode(); - this.radius = new Vector2Node( 1, 1 ); - - this.size = size; - - this.blurX = true; - this.blurY = true; - - this.horizontal = new FloatNode( 1 / 64 ); - this.vertical = new FloatNode( 1 / 64 ); - -} - -BlurNode.Nodes = ( function () { - - var blurX = new FunctionNode( [ - "vec4 blurX( sampler2D texture, vec2 uv, float s ) {", - " vec4 sum = vec4( 0.0 );", - " sum += texture2D( texture, vec2( uv.x - 4.0 * s, uv.y ) ) * 0.051;", - " sum += texture2D( texture, vec2( uv.x - 3.0 * s, uv.y ) ) * 0.0918;", - " sum += texture2D( texture, vec2( uv.x - 2.0 * s, uv.y ) ) * 0.12245;", - " sum += texture2D( texture, vec2( uv.x - 1.0 * s, uv.y ) ) * 0.1531;", - " sum += texture2D( texture, vec2( uv.x, uv.y ) ) * 0.1633;", - " sum += texture2D( texture, vec2( uv.x + 1.0 * s, uv.y ) ) * 0.1531;", - " sum += texture2D( texture, vec2( uv.x + 2.0 * s, uv.y ) ) * 0.12245;", - " sum += texture2D( texture, vec2( uv.x + 3.0 * s, uv.y ) ) * 0.0918;", - " sum += texture2D( texture, vec2( uv.x + 4.0 * s, uv.y ) ) * 0.051;", - " return sum * .667;", - "}" - ].join( "\n" ) ); - - var blurY = new FunctionNode( [ - "vec4 blurY( sampler2D texture, vec2 uv, float s ) {", - " vec4 sum = vec4( 0.0 );", - " sum += texture2D( texture, vec2( uv.x, uv.y - 4.0 * s ) ) * 0.051;", - " sum += texture2D( texture, vec2( uv.x, uv.y - 3.0 * s ) ) * 0.0918;", - " sum += texture2D( texture, vec2( uv.x, uv.y - 2.0 * s ) ) * 0.12245;", - " sum += texture2D( texture, vec2( uv.x, uv.y - 1.0 * s ) ) * 0.1531;", - 
" sum += texture2D( texture, vec2( uv.x, uv.y ) ) * 0.1633;", - " sum += texture2D( texture, vec2( uv.x, uv.y + 1.0 * s ) ) * 0.1531;", - " sum += texture2D( texture, vec2( uv.x, uv.y + 2.0 * s ) ) * 0.12245;", - " sum += texture2D( texture, vec2( uv.x, uv.y + 3.0 * s ) ) * 0.0918;", - " sum += texture2D( texture, vec2( uv.x, uv.y + 4.0 * s ) ) * 0.051;", - " return sum * .667;", - "}" - ].join( "\n" ) ); - - return { - blurX: blurX, - blurY: blurY - }; - -} )(); - - -BlurNode.prototype = Object.create( TempNode.prototype ); -BlurNode.prototype.constructor = BlurNode; -BlurNode.prototype.nodeType = "Blur"; - -BlurNode.prototype.updateFrame = function ( frame ) { - - if ( this.size ) { - - this.horizontal.value = this.radius.x / this.size.x; - this.vertical.value = this.radius.y / this.size.y; - - } else if ( this.value.value && this.value.value.image ) { - - var image = this.value.value.image; - - this.horizontal.value = this.radius.x / image.width; - this.vertical.value = this.radius.y / image.height; - - } - -}; - -BlurNode.prototype.generate = function ( builder, output ) { - - if ( builder.isShader( 'fragment' ) ) { - - var blurCode = [], code; - - var blurX = builder.include( BlurNode.Nodes.blurX ), - blurY = builder.include( BlurNode.Nodes.blurY ); - - if ( this.blurX ) { - - blurCode.push( blurX + '( ' + this.value.build( builder, 'sampler2D' ) + ', ' + this.uv.build( builder, 'v2' ) + ', ' + this.horizontal.build( builder, 'f' ) + ' )' ); - - } - - if ( this.blurY ) { - - blurCode.push( blurY + '( ' + this.value.build( builder, 'sampler2D' ) + ', ' + this.uv.build( builder, 'v2' ) + ', ' + this.vertical.build( builder, 'f' ) + ' )' ); - - } - - if ( blurCode.length == 2 ) code = '( ' + blurCode.join( ' + ' ) + ' / 2.0 )'; - else if ( blurCode.length ) code = '( ' + blurCode[ 0 ] + ' )'; - else code = 'vec4( 0.0 )'; - - return builder.format( code, this.getType( builder ), output ); - - } else { - - console.warn( "THREE.BlurNode is not compatible with " + builder.shader + " shader." ); - - return builder.format( 'vec4( 0.0 )', this.getType( builder ), output ); - - } - -}; - -BlurNode.prototype.copy = function ( source ) { - - TempNode.prototype.copy.call( this, source ); - - this.value = source.value; - this.uv = source.uv; - this.radius = source.radius; - - if ( source.size !== undefined ) this.size = new THREE.Vector2( source.size.x, source.size.y ); - - this.blurX = source.blurX; - this.blurY = source.blurY; - -}; - -BlurNode.prototype.toJSON = function ( meta ) { - - var data = this.getJSONNode( meta ); - - if ( ! 
data ) { - - data = this.createJSONNode( meta ); - - data.value = this.value.toJSON( meta ).uuid; - data.uv = this.uv.toJSON( meta ).uuid; - data.radius = this.radius.toJSON( meta ).uuid; - - if ( this.size ) data.size = { x: this.size.x, y: this.size.y }; - - data.blurX = this.blurX; - data.blurY = this.blurY; - - } - - return data; - -}; - -export { BlurNode }; diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/postprocessing/TexturePass.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/postprocessing/TexturePass.js deleted file mode 100644 index 1ae56ac249618c81dde6a488a7e4b225b84868d0..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/js/postprocessing/TexturePass.js +++ /dev/null @@ -1,57 +0,0 @@ -/** - * @author alteredq / http://alteredqualia.com/ - */ - -THREE.TexturePass = function ( map, opacity ) { - - THREE.Pass.call( this ); - - if ( THREE.CopyShader === undefined ) - console.error( "THREE.TexturePass relies on THREE.CopyShader" ); - - var shader = THREE.CopyShader; - - this.map = map; - this.opacity = ( opacity !== undefined ) ? opacity : 1.0; - - this.uniforms = THREE.UniformsUtils.clone( shader.uniforms ); - - this.material = new THREE.ShaderMaterial( { - - uniforms: this.uniforms, - vertexShader: shader.vertexShader, - fragmentShader: shader.fragmentShader, - depthTest: false, - depthWrite: false - - } ); - - this.needsSwap = false; - - this.fsQuad = new THREE.Pass.FullScreenQuad( null ); - -}; - -THREE.TexturePass.prototype = Object.assign( Object.create( THREE.Pass.prototype ), { - - constructor: THREE.TexturePass, - - render: function ( renderer, writeBuffer, readBuffer, deltaTime, maskActive ) { - - var oldAutoClear = renderer.autoClear; - renderer.autoClear = false; - - this.fsQuad.material = this.material; - - this.uniforms[ "opacity" ].value = this.opacity; - this.uniforms[ "tDiffuse" ].value = this.map; - this.material.transparent = ( this.opacity < 1.0 ); - - renderer.setRenderTarget( this.renderToScreen ? 
null : readBuffer ); - if ( this.clear ) renderer.clear(); - this.fsQuad.render( renderer ); - - renderer.autoClear = oldAutoClear; - } - -} ); diff --git a/spaces/banana-projects/web3d/node_modules/three/src/cameras/ArrayCamera.js b/spaces/banana-projects/web3d/node_modules/three/src/cameras/ArrayCamera.js deleted file mode 100644 index fc98224bf79fd89c9956dccc5e084d4008aef669..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/cameras/ArrayCamera.js +++ /dev/null @@ -1,24 +0,0 @@ -/** - * @author mrdoob / http://mrdoob.com/ - */ - -import { PerspectiveCamera } from './PerspectiveCamera.js'; - -function ArrayCamera( array ) { - - PerspectiveCamera.call( this ); - - this.cameras = array || []; - -} - -ArrayCamera.prototype = Object.assign( Object.create( PerspectiveCamera.prototype ), { - - constructor: ArrayCamera, - - isArrayCamera: true - -} ); - - -export { ArrayCamera }; diff --git a/spaces/banana-projects/web3d/node_modules/three/src/helpers/PointLightHelper.js b/spaces/banana-projects/web3d/node_modules/three/src/helpers/PointLightHelper.js deleted file mode 100644 index 14d1291c4129c9cdc35829129483f7b56561dc6f..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/helpers/PointLightHelper.js +++ /dev/null @@ -1,92 +0,0 @@ -/** - * @author alteredq / http://alteredqualia.com/ - * @author mrdoob / http://mrdoob.com/ - */ - -import { Mesh } from '../objects/Mesh.js'; -import { MeshBasicMaterial } from '../materials/MeshBasicMaterial.js'; -import { SphereBufferGeometry } from '../geometries/SphereGeometry.js'; - -function PointLightHelper( light, sphereSize, color ) { - - this.light = light; - this.light.updateMatrixWorld(); - - this.color = color; - - var geometry = new SphereBufferGeometry( sphereSize, 4, 2 ); - var material = new MeshBasicMaterial( { wireframe: true, fog: false } ); - - Mesh.call( this, geometry, material ); - - this.matrix = this.light.matrixWorld; - this.matrixAutoUpdate = false; - - this.update(); - - - /* - var distanceGeometry = new THREE.IcosahedronBufferGeometry( 1, 2 ); - var distanceMaterial = new THREE.MeshBasicMaterial( { color: hexColor, fog: false, wireframe: true, opacity: 0.1, transparent: true } ); - - this.lightSphere = new THREE.Mesh( bulbGeometry, bulbMaterial ); - this.lightDistance = new THREE.Mesh( distanceGeometry, distanceMaterial ); - - var d = light.distance; - - if ( d === 0.0 ) { - - this.lightDistance.visible = false; - - } else { - - this.lightDistance.scale.set( d, d, d ); - - } - - this.add( this.lightDistance ); - */ - -} - -PointLightHelper.prototype = Object.create( Mesh.prototype ); -PointLightHelper.prototype.constructor = PointLightHelper; - -PointLightHelper.prototype.dispose = function () { - - this.geometry.dispose(); - this.material.dispose(); - -}; - -PointLightHelper.prototype.update = function () { - - if ( this.color !== undefined ) { - - this.material.color.set( this.color ); - - } else { - - this.material.color.copy( this.light.color ); - - } - - /* - var d = this.light.distance; - - if ( d === 0.0 ) { - - this.lightDistance.visible = false; - - } else { - - this.lightDistance.visible = true; - this.lightDistance.scale.set( d, d, d ); - - } - */ - -}; - - -export { PointLightHelper }; diff --git a/spaces/banana-projects/web3d/node_modules/three/src/renderers/webgl/WebGLGeometries.js b/spaces/banana-projects/web3d/node_modules/three/src/renderers/webgl/WebGLGeometries.js deleted file mode 100644 index 
8ef204e526c913d0627b1efdd9935114602d60ae..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/renderers/webgl/WebGLGeometries.js +++ /dev/null @@ -1,184 +0,0 @@ -/** - * @author mrdoob / http://mrdoob.com/ - */ - -import { Uint16BufferAttribute, Uint32BufferAttribute } from '../../core/BufferAttribute.js'; -import { BufferGeometry } from '../../core/BufferGeometry.js'; -import { arrayMax } from '../../utils.js'; - -function WebGLGeometries( gl, attributes, info ) { - - var geometries = {}; - var wireframeAttributes = {}; - - function onGeometryDispose( event ) { - - var geometry = event.target; - var buffergeometry = geometries[ geometry.id ]; - - if ( buffergeometry.index !== null ) { - - attributes.remove( buffergeometry.index ); - - } - - for ( var name in buffergeometry.attributes ) { - - attributes.remove( buffergeometry.attributes[ name ] ); - - } - - geometry.removeEventListener( 'dispose', onGeometryDispose ); - - delete geometries[ geometry.id ]; - - var attribute = wireframeAttributes[ buffergeometry.id ]; - - if ( attribute ) { - - attributes.remove( attribute ); - delete wireframeAttributes[ buffergeometry.id ]; - - } - - // - - info.memory.geometries --; - - } - - function get( object, geometry ) { - - var buffergeometry = geometries[ geometry.id ]; - - if ( buffergeometry ) return buffergeometry; - - geometry.addEventListener( 'dispose', onGeometryDispose ); - - if ( geometry.isBufferGeometry ) { - - buffergeometry = geometry; - - } else if ( geometry.isGeometry ) { - - if ( geometry._bufferGeometry === undefined ) { - - geometry._bufferGeometry = new BufferGeometry().setFromObject( object ); - - } - - buffergeometry = geometry._bufferGeometry; - - } - - geometries[ geometry.id ] = buffergeometry; - - info.memory.geometries ++; - - return buffergeometry; - - } - - function update( geometry ) { - - var index = geometry.index; - var geometryAttributes = geometry.attributes; - - if ( index !== null ) { - - attributes.update( index, gl.ELEMENT_ARRAY_BUFFER ); - - } - - for ( var name in geometryAttributes ) { - - attributes.update( geometryAttributes[ name ], gl.ARRAY_BUFFER ); - - } - - // morph targets - - var morphAttributes = geometry.morphAttributes; - - for ( var name in morphAttributes ) { - - var array = morphAttributes[ name ]; - - for ( var i = 0, l = array.length; i < l; i ++ ) { - - attributes.update( array[ i ], gl.ARRAY_BUFFER ); - - } - - } - - } - - function getWireframeAttribute( geometry ) { - - var attribute = wireframeAttributes[ geometry.id ]; - - if ( attribute ) return attribute; - - var indices = []; - - var geometryIndex = geometry.index; - var geometryAttributes = geometry.attributes; - - // console.time( 'wireframe' ); - - if ( geometryIndex !== null ) { - - var array = geometryIndex.array; - - for ( var i = 0, l = array.length; i < l; i += 3 ) { - - var a = array[ i + 0 ]; - var b = array[ i + 1 ]; - var c = array[ i + 2 ]; - - indices.push( a, b, b, c, c, a ); - - } - - } else { - - var array = geometryAttributes.position.array; - - for ( var i = 0, l = ( array.length / 3 ) - 1; i < l; i += 3 ) { - - var a = i + 0; - var b = i + 1; - var c = i + 2; - - indices.push( a, b, b, c, c, a ); - - } - - } - - // console.timeEnd( 'wireframe' ); - - attribute = new ( arrayMax( indices ) > 65535 ? 
Uint32BufferAttribute : Uint16BufferAttribute )( indices, 1 ); - - attributes.update( attribute, gl.ELEMENT_ARRAY_BUFFER ); - - wireframeAttributes[ geometry.id ] = attribute; - - return attribute; - - } - - return { - - get: get, - update: update, - - getWireframeAttribute: getWireframeAttribute - - }; - -} - - -export { WebGLGeometries }; diff --git a/spaces/barabum/image-duplicate-finder/README.md b/spaces/barabum/image-duplicate-finder/README.md deleted file mode 100644 index 9d845cb4da71f5a9ccfdfb6045883132cd1bf43f..0000000000000000000000000000000000000000 --- a/spaces/barabum/image-duplicate-finder/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Image Duplicate Comparer -emoji: 🖼️ -colorFrom: purple -colorTo: indigo -sdk: gradio -sdk_version: 3.38.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327004134.py b/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327004134.py deleted file mode 100644 index 43c7248137807e6458b0e62c42481571795ea9de..0000000000000000000000000000000000000000 --- a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327004134.py +++ /dev/null @@ -1,65 +0,0 @@ -import os -#os.system("pip install gfpgan") - -#os.system("pip freeze") -#os.system("wget https://github.com/TencentARC/GFPGAN/releases/download/v0.2.0/GFPGANCleanv1-NoCE-C2.pth -P .") -import random -import gradio as gr -from PIL import Image -import torch -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/ab/Abraham_Lincoln_O-77_matte_collodion_print.jpg/1024px-Abraham_Lincoln_O-77_matte_collodion_print.jpg', 'lincoln.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/5/50/Albert_Einstein_%28Nobel%29.png', 'einstein.png') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/9/9d/Thomas_Edison2.jpg/1024px-Thomas_Edison2.jpg', 'edison.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/a9/Henry_Ford_1888.jpg/1024px-Henry_Ford_1888.jpg', 'Henry.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/0/06/Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg/800px-Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg', 'Frida.jpg') - - - - -import cv2 -import glob -import numpy as np -from basicsr.utils import imwrite -from gfpgan import GFPGANer - -import warnings -warnings.warn('The unoptimized RealESRGAN is very slow on CPU. We do not use it. ' - 'If you really want to use it, please modify the corresponding codes.') -bg_upsampler = None - - - -# set up GFPGAN restorer -restorer = GFPGANer( - model_path='experiments/pretrained_models/GFPGANv1.3.pth', - upscale=2, - arch='clean', - channel_multiplier=2, - bg_upsampler=bg_upsampler) - - -def inference(img): - input_img = cv2.imread(img, cv2.IMREAD_COLOR) - cropped_faces, restored_faces, restored_img = restorer.enhance( - input_img, has_aligned=False, only_center_face=False, paste_back=True) - - return Image.fromarray(restored_img[0][:,:,::-1]) - -title = "GFP-GAN" -description = "Gradio demo for GFP-GAN: Towards Real-World Blind Face Restoration with Generative Facial Prior. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below. Please click submit only once" -article = "

    Towards Real-World Blind Face Restoration with Generative Facial Prior | Github Repo

    visitor badge
    " -gr.Interface( - inference, - [gr.inputs.Image(type="filepath", label="Input")], - gr.outputs.Image(type="pil", label="Output"), - title=title, - description=description, - article=article, - examples=[ - ['lincoln.jpg'], - ['einstein.png'], - ['edison.jpg'], - ['Henry.jpg'], - ['Frida.jpg'] - ] - ).launch(enable_queue=True,cache_examples=True) \ No newline at end of file diff --git a/spaces/beihai/GFPGAN-V1.3-whole-image/tests/test_gfpgan_model.py b/spaces/beihai/GFPGAN-V1.3-whole-image/tests/test_gfpgan_model.py deleted file mode 100644 index 1408ddd7c909c7257fbcea79f8576231a40f9211..0000000000000000000000000000000000000000 --- a/spaces/beihai/GFPGAN-V1.3-whole-image/tests/test_gfpgan_model.py +++ /dev/null @@ -1,132 +0,0 @@ -import tempfile -import torch -import yaml -from basicsr.archs.stylegan2_arch import StyleGAN2Discriminator -from basicsr.data.paired_image_dataset import PairedImageDataset -from basicsr.losses.losses import GANLoss, L1Loss, PerceptualLoss - -from gfpgan.archs.arcface_arch import ResNetArcFace -from gfpgan.archs.gfpganv1_arch import FacialComponentDiscriminator, GFPGANv1 -from gfpgan.models.gfpgan_model import GFPGANModel - - -def test_gfpgan_model(): - with open('tests/data/test_gfpgan_model.yml', mode='r') as f: - opt = yaml.load(f, Loader=yaml.FullLoader) - - # build model - model = GFPGANModel(opt) - # test attributes - assert model.__class__.__name__ == 'GFPGANModel' - assert isinstance(model.net_g, GFPGANv1) # generator - assert isinstance(model.net_d, StyleGAN2Discriminator) # discriminator - # facial component discriminators - assert isinstance(model.net_d_left_eye, FacialComponentDiscriminator) - assert isinstance(model.net_d_right_eye, FacialComponentDiscriminator) - assert isinstance(model.net_d_mouth, FacialComponentDiscriminator) - # identity network - assert isinstance(model.network_identity, ResNetArcFace) - # losses - assert isinstance(model.cri_pix, L1Loss) - assert isinstance(model.cri_perceptual, PerceptualLoss) - assert isinstance(model.cri_gan, GANLoss) - assert isinstance(model.cri_l1, L1Loss) - # optimizer - assert isinstance(model.optimizers[0], torch.optim.Adam) - assert isinstance(model.optimizers[1], torch.optim.Adam) - - # prepare data - gt = torch.rand((1, 3, 512, 512), dtype=torch.float32) - lq = torch.rand((1, 3, 512, 512), dtype=torch.float32) - loc_left_eye = torch.rand((1, 4), dtype=torch.float32) - loc_right_eye = torch.rand((1, 4), dtype=torch.float32) - loc_mouth = torch.rand((1, 4), dtype=torch.float32) - data = dict(gt=gt, lq=lq, loc_left_eye=loc_left_eye, loc_right_eye=loc_right_eye, loc_mouth=loc_mouth) - model.feed_data(data) - # check data shape - assert model.lq.shape == (1, 3, 512, 512) - assert model.gt.shape == (1, 3, 512, 512) - assert model.loc_left_eyes.shape == (1, 4) - assert model.loc_right_eyes.shape == (1, 4) - assert model.loc_mouths.shape == (1, 4) - - # ----------------- test optimize_parameters -------------------- # - model.feed_data(data) - model.optimize_parameters(1) - assert model.output.shape == (1, 3, 512, 512) - assert isinstance(model.log_dict, dict) - # check returned keys - expected_keys = [ - 'l_g_pix', 'l_g_percep', 'l_g_style', 'l_g_gan', 'l_g_gan_left_eye', 'l_g_gan_right_eye', 'l_g_gan_mouth', - 'l_g_comp_style_loss', 'l_identity', 'l_d', 'real_score', 'fake_score', 'l_d_r1', 'l_d_left_eye', - 'l_d_right_eye', 'l_d_mouth' - ] - assert set(expected_keys).issubset(set(model.log_dict.keys())) - - # ----------------- remove pyramid_loss_weight-------------------- # - model.feed_data(data) 
- model.optimize_parameters(100000) # large than remove_pyramid_loss = 50000 - assert model.output.shape == (1, 3, 512, 512) - assert isinstance(model.log_dict, dict) - # check returned keys - expected_keys = [ - 'l_g_pix', 'l_g_percep', 'l_g_style', 'l_g_gan', 'l_g_gan_left_eye', 'l_g_gan_right_eye', 'l_g_gan_mouth', - 'l_g_comp_style_loss', 'l_identity', 'l_d', 'real_score', 'fake_score', 'l_d_r1', 'l_d_left_eye', - 'l_d_right_eye', 'l_d_mouth' - ] - assert set(expected_keys).issubset(set(model.log_dict.keys())) - - # ----------------- test save -------------------- # - with tempfile.TemporaryDirectory() as tmpdir: - model.opt['path']['models'] = tmpdir - model.opt['path']['training_states'] = tmpdir - model.save(0, 1) - - # ----------------- test the test function -------------------- # - model.test() - assert model.output.shape == (1, 3, 512, 512) - # delete net_g_ema - model.__delattr__('net_g_ema') - model.test() - assert model.output.shape == (1, 3, 512, 512) - assert model.net_g.training is True # should back to training mode after testing - - # ----------------- test nondist_validation -------------------- # - # construct dataloader - dataset_opt = dict( - name='Demo', - dataroot_gt='tests/data/gt', - dataroot_lq='tests/data/gt', - io_backend=dict(type='disk'), - scale=4, - phase='val') - dataset = PairedImageDataset(dataset_opt) - dataloader = torch.utils.data.DataLoader(dataset=dataset, batch_size=1, shuffle=False, num_workers=0) - assert model.is_train is True - with tempfile.TemporaryDirectory() as tmpdir: - model.opt['path']['visualization'] = tmpdir - model.nondist_validation(dataloader, 1, None, save_img=True) - assert model.is_train is True - # check metric_results - assert 'psnr' in model.metric_results - assert isinstance(model.metric_results['psnr'], float) - - # validation - with tempfile.TemporaryDirectory() as tmpdir: - model.opt['is_train'] = False - model.opt['val']['suffix'] = 'test' - model.opt['path']['visualization'] = tmpdir - model.opt['val']['pbar'] = True - model.nondist_validation(dataloader, 1, None, save_img=True) - # check metric_results - assert 'psnr' in model.metric_results - assert isinstance(model.metric_results['psnr'], float) - - # if opt['val']['suffix'] is None - model.opt['val']['suffix'] = None - model.opt['name'] = 'demo' - model.opt['path']['visualization'] = tmpdir - model.nondist_validation(dataloader, 1, None, save_img=True) - # check metric_results - assert 'psnr' in model.metric_results - assert isinstance(model.metric_results['psnr'], float) diff --git a/spaces/bigjoker/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/deforum_controlnet.py b/spaces/bigjoker/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/deforum_controlnet.py deleted file mode 100644 index a6b72c8d4723a32721ce3c1242d6b8b33a7b21b2..0000000000000000000000000000000000000000 --- a/spaces/bigjoker/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/deforum_controlnet.py +++ /dev/null @@ -1,462 +0,0 @@ -# This helper script is responsible for ControlNet/Deforum integration -# https://github.com/Mikubill/sd-webui-controlnet — controlnet repo - -import os, sys -import gradio as gr -import scripts -import modules.scripts as scrpts -from PIL import Image -import numpy as np -from modules.processing import process_images -from .rich import console -from rich.table import Table -from rich import box - -has_controlnet = None - -def find_controlnet(): - global has_controlnet - if has_controlnet is not None: - return has_controlnet - 
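# What the guard above and the try/except below amount to: the module-level has_controlnet
# flag memoizes the probe, so the import of the sd-webui-controlnet extension's
# scripts.controlnet module is attempted at most once per session; if that import fails,
# Deforum prints a warning and keeps running with its ControlNet integration disabled
# instead of raising.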
- try: - from scripts import controlnet - except Exception as e: - print(f'\033[33mFailed to import controlnet! The exact error is {e}. Deforum support for ControlNet will not be activated\033[0m') - has_controlnet = False - return False - has_controlnet = True - print(f"\033[0;32m*Deforum ControlNet support: enabled*\033[0m") - return True - -# The most parts below are plainly copied from controlnet.py -# TODO: come up with a cleaner way - -gradio_compat = True -try: - from distutils.version import LooseVersion - from importlib_metadata import version - if LooseVersion(version("gradio")) < LooseVersion("3.10"): - gradio_compat = False -except ImportError: - pass - -# svgsupports -svgsupport = False -try: - import io - import base64 - from svglib.svglib import svg2rlg - from reportlab.graphics import renderPM - svgsupport = True -except ImportError: - pass - -def ControlnetArgs(): - controlnet_enabled = False - controlnet_scribble_mode = False - controlnet_rgbbgr_mode = False - controlnet_lowvram = False - controlnet_module = "none" - controlnet_model = "None" - controlnet_weight = 1.0 - controlnet_guidance_strength = 1.0 - blendFactorMax = "0:(0.35)" - blendFactorSlope = "0:(0.25)" - tweening_frames_schedule = "0:(20)" - color_correction_factor = "0:(0.075)" - return locals() - -def setup_controlnet_ui_raw(): - # Already under an accordion - from scripts import controlnet - from scripts.controlnet import update_cn_models, cn_models, cn_models_names - - refresh_symbol = '\U0001f504' # 🔄 - switch_values_symbol = '\U000021C5' # ⇅ - model_dropdowns = [] - infotext_fields = [] - # Main part - class ToolButton(gr.Button, gr.components.FormComponent): - """Small button with single emoji as text, fits inside gradio forms""" - - def __init__(self, **kwargs): - super().__init__(variant="tool", **kwargs) - - def get_block_name(self): - return "button" - - from scripts.processor import canny, midas, midas_normal, leres, hed, mlsd, openpose, pidinet, simple_scribble, fake_scribble, uniformer - - preprocessor = { - "none": lambda x, *args, **kwargs: x, - "canny": canny, - "depth": midas, - "depth_leres": leres, - "hed": hed, - "mlsd": mlsd, - "normal_map": midas_normal, - "openpose": openpose, - # "openpose_hand": openpose_hand, - "pidinet": pidinet, - # "scribble": simple_scribble, - "fake_scribble": fake_scribble, - "segmentation": uniformer, - } - - # Copying the main ControlNet widgets while getting rid of static elements such as the scribble pad - with gr.Row(): - controlnet_enabled = gr.Checkbox(label='Enable', value=False) - controlnet_scribble_mode = gr.Checkbox(label='Scribble Mode (Invert colors)', value=False, visible=False) - controlnet_rgbbgr_mode = gr.Checkbox(label='RGB to BGR', value=False, visible=False) - controlnet_lowvram = gr.Checkbox(label='Low VRAM', value=False, visible=False) - - def refresh_all_models(*inputs): - update_cn_models() - - dd = inputs[0] - selected = dd if dd in cn_models else "None" - return gr.Dropdown.update(value=selected, choices=list(cn_models.keys())) - - with gr.Row(visible=False) as cn_mod_row: - controlnet_module = gr.Dropdown(list(preprocessor.keys()), label=f"Preprocessor", value="none") - controlnet_model = gr.Dropdown(list(cn_models.keys()), label=f"Model", value="None") - refresh_models = ToolButton(value=refresh_symbol) - refresh_models.click(refresh_all_models, controlnet_model, controlnet_model) - # ctrls += (refresh_models, ) - with gr.Row(visible=False) as cn_weight_row: - controlnet_weight = gr.Slider(label=f"Weight", value=1.0, minimum=0.0, 
maximum=2.0, step=.05) - controlnet_guidance_strength = gr.Slider(label="Guidance strength (T)", value=1.0, minimum=0.0, maximum=1.0, interactive=True) - # ctrls += (module, model, weight,) - # model_dropdowns.append(model) - - # advanced options - controlnet_advanced = gr.Column(visible=False) - with controlnet_advanced: - controlnet_processor_res = gr.Slider(label="Annotator resolution", value=64, minimum=64, maximum=2048, interactive=False) - controlnet_threshold_a = gr.Slider(label="Threshold A", value=64, minimum=64, maximum=1024, interactive=False) - controlnet_threshold_b = gr.Slider(label="Threshold B", value=64, minimum=64, maximum=1024, interactive=False) - - if gradio_compat: - controlnet_module.change(build_sliders, inputs=[controlnet_module], outputs=[controlnet_processor_res, controlnet_threshold_a, controlnet_threshold_b, controlnet_advanced]) - - infotext_fields.extend([ - (controlnet_module, f"ControlNet Preprocessor"), - (controlnet_model, f"ControlNet Model"), - (controlnet_weight, f"ControlNet Weight"), - ]) - - with gr.Row(visible=False) as cn_env_row: - controlnet_resize_mode = gr.Radio(choices=["Envelope (Outer Fit)", "Scale to Fit (Inner Fit)", "Just Resize"], value="Scale to Fit (Inner Fit)", label="Resize Mode") - - # Video input to be fed into ControlNet - #input_video_url = gr.Textbox(source='upload', type='numpy', tool='sketch') # TODO - controlnet_input_video_chosen_file = gr.File(label="ControlNet Video Input", interactive=True, file_count="single", file_types=["video"], elem_id="controlnet_input_video_chosen_file", visible=False) - controlnet_input_video_mask_chosen_file = gr.File(label="ControlNet Video Mask Input", interactive=True, file_count="single", file_types=["video"], elem_id="controlnet_input_video_mask_chosen_file", visible=False) - - cn_hide_output_list = [controlnet_scribble_mode,controlnet_rgbbgr_mode,controlnet_lowvram,cn_mod_row,cn_weight_row,cn_env_row,controlnet_input_video_chosen_file,controlnet_input_video_mask_chosen_file] - for cn_output in cn_hide_output_list: - controlnet_enabled.change(fn=hide_ui_by_cn_status, inputs=controlnet_enabled,outputs=cn_output) - - return locals() - - -def setup_controlnet_ui(): - if not find_controlnet(): - gr.HTML(""" - ControlNet not found. 
Please install it :) - """, elem_id='controlnet_not_found_html_msg') - return {} - - return setup_controlnet_ui_raw() - -def controlnet_component_names(): - if not find_controlnet(): - return [] - - controlnet_args_names = str(r'''controlnet_input_video_chosen_file, controlnet_input_video_mask_chosen_file, -controlnet_enabled, controlnet_scribble_mode, controlnet_rgbbgr_mode, controlnet_lowvram, -controlnet_module, controlnet_model, -controlnet_weight, controlnet_guidance_strength, -controlnet_processor_res, -controlnet_threshold_a, controlnet_threshold_b, controlnet_resize_mode''' - ).replace("\n", "").replace("\r", "").replace(" ", "").split(',') - - return controlnet_args_names - -def is_controlnet_enabled(controlnet_args): - return 'controlnet_enabled' in vars(controlnet_args) and controlnet_args.controlnet_enabled - -def process_txt2img_with_controlnet(p, args, anim_args, loop_args, controlnet_args, root, frame_idx = 1): - # TODO: use init image and mask here - p.control_net_enabled = False # we don't want to cause concurrence - p.init_images = [] - controlnet_frame_path = os.path.join(args.outdir, 'controlnet_inputframes', f"{frame_idx:05}.jpg") - controlnet_mask_frame_path = os.path.join(args.outdir, 'controlnet_maskframes', f"{frame_idx:05}.jpg") - cn_mask_np = None - cn_image_np = None - - if not os.path.exists(controlnet_frame_path) and not os.path.exists(controlnet_mask_frame_path): - print(f'\033[33mNeither the base nor the masking frames for ControlNet were found. Using the regular pipeline\033[0m') - from .deforum_controlnet_hardcode import restore_networks - unet = p.sd_model.model.diffusion_model - restore_networks(unet) - return process_images(p) - - if os.path.exists(controlnet_frame_path): - cn_image_np = Image.open(controlnet_frame_path).convert("RGB") - - if os.path.exists(controlnet_mask_frame_path): - cn_mask_np = Image.open(controlnet_mask_frame_path).convert("RGB") - - cn_args = { - "enabled": True, - "module": controlnet_args.controlnet_module, - "model": controlnet_args.controlnet_model, - "weight": controlnet_args.controlnet_weight, - "input_image": {'image': cn_image_np, 'mask': cn_mask_np}, - "scribble_mode": controlnet_args.controlnet_scribble_mode, - "resize_mode": controlnet_args.controlnet_resize_mode, - "rgbbgr_mode": controlnet_args.controlnet_rgbbgr_mode, - "lowvram": controlnet_args.controlnet_lowvram, - "processor_res": controlnet_args.controlnet_processor_res, - "threshold_a": controlnet_args.controlnet_threshold_a, - "threshold_b": controlnet_args.controlnet_threshold_b, - "guidance_strength": controlnet_args.controlnet_guidance_strength,"guidance_strength": controlnet_args.controlnet_guidance_strength, - } - - from .deforum_controlnet_hardcode import process - p.script_args = ( - cn_args["enabled"], - cn_args["module"], - cn_args["model"], - cn_args["weight"], - cn_args["input_image"], - cn_args["scribble_mode"], - cn_args["resize_mode"], - cn_args["rgbbgr_mode"], - cn_args["lowvram"], - cn_args["processor_res"], - cn_args["threshold_a"], - cn_args["threshold_b"], - cn_args["guidance_strength"], - ) - - table = Table(title="ControlNet params",padding=0, box=box.ROUNDED) - - field_names = [] - field_names += ["module", "model", "weight", "guidance", "scribble", "resize", "rgb->bgr", "proc res", "thr a", "thr b"] - for field_name in field_names: - table.add_column(field_name, justify="center") - - rows = [] - rows += [cn_args["module"], cn_args["model"], cn_args["weight"], cn_args["guidance_strength"], cn_args["scribble_mode"], 
cn_args["resize_mode"], cn_args["rgbbgr_mode"], cn_args["processor_res"], cn_args["threshold_a"], cn_args["threshold_b"]] - rows = [str(x) for x in rows] - - table.add_row(*rows) - - console.print(table) - - processed = process(p, *(p.script_args)) - - if processed is None: # the script just swaps the pipeline, so failing is OK for the first time - processed = process_images(p) - - if processed is None: # now it's definitely not OK - raise Exception("\033[31mFailed to process a frame with ControlNet enabled!\033[0m") - - p.close() - - return processed - -def process_img2img_with_controlnet(p, args, anim_args, loop_args, controlnet_args, root, frame_idx = 0): - p.control_net_enabled = False # we don't want to cause concurrence - controlnet_frame_path = os.path.join(args.outdir, 'controlnet_inputframes', f"{frame_idx:05}.jpg") - controlnet_mask_frame_path = os.path.join(args.outdir, 'controlnet_maskframes', f"{frame_idx:05}.jpg") - - print(f'Reading ControlNet base frame {frame_idx} at {controlnet_frame_path}') - print(f'Reading ControlNet mask frame {frame_idx} at {controlnet_mask_frame_path}') - - cn_mask_np = None - cn_image_np = None - - if not os.path.exists(controlnet_frame_path) and not os.path.exists(controlnet_mask_frame_path): - print(f'\033[33mNeither the base nor the masking frames for ControlNet were found. Using the regular pipeline\033[0m') - return process_images(p) - - if os.path.exists(controlnet_frame_path): - cn_image_np = np.array(Image.open(controlnet_frame_path).convert("RGB")).astype('uint8') - - if os.path.exists(controlnet_mask_frame_path): - cn_mask_np = np.array(Image.open(controlnet_mask_frame_path).convert("RGB")).astype('uint8') - - cn_args = { - "enabled": True, - "module": controlnet_args.controlnet_module, - "model": controlnet_args.controlnet_model, - "weight": controlnet_args.controlnet_weight, - "input_image": {'image': cn_image_np, 'mask': cn_mask_np}, - "scribble_mode": controlnet_args.controlnet_scribble_mode, - "resize_mode": controlnet_args.controlnet_resize_mode, - "rgbbgr_mode": controlnet_args.controlnet_rgbbgr_mode, - "lowvram": controlnet_args.controlnet_lowvram, - "processor_res": controlnet_args.controlnet_processor_res, - "threshold_a": controlnet_args.controlnet_threshold_a, - "threshold_b": controlnet_args.controlnet_threshold_b, - "guidance_strength": controlnet_args.controlnet_guidance_strength, - } - - from .deforum_controlnet_hardcode import process - p.script_args = ( - cn_args["enabled"], - cn_args["module"], - cn_args["model"], - cn_args["weight"], - cn_args["input_image"], - cn_args["scribble_mode"], - cn_args["resize_mode"], - cn_args["rgbbgr_mode"], - cn_args["lowvram"], - cn_args["processor_res"], - cn_args["threshold_a"], - cn_args["threshold_b"], - cn_args["guidance_strength"], - ) - - table = Table(title="ControlNet params",padding=0, box=box.ROUNDED) - - field_names = [] - field_names += ["module", "model", "weight", "guidance", "scribble", "resize", "rgb->bgr", "proc res", "thr a", "thr b"] - for field_name in field_names: - table.add_column(field_name, justify="center") - - rows = [] - rows += [cn_args["module"], cn_args["model"], cn_args["weight"], cn_args["guidance_strength"], cn_args["scribble_mode"], cn_args["resize_mode"], cn_args["rgbbgr_mode"], cn_args["processor_res"], cn_args["threshold_a"], cn_args["threshold_b"]] - rows = [str(x) for x in rows] - - table.add_row(*rows) - - console.print(table) - - processed = process(p, *(p.script_args)) - - if processed is None: # the script just swaps the pipeline, so failing 
is OK for the first time - processed = process_images(p) - - if processed is None: # now it's definitely not OK - raise Exception("\033[31mFailed to process a frame with ControlNet enabled!\033[0m") - - p.close() - - return processed - -import pathlib -from .video_audio_utilities import vid2frames - -def unpack_controlnet_vids(args, anim_args, video_args, parseq_args, loop_args, controlnet_args, animation_prompts, root): - if controlnet_args.controlnet_input_video_chosen_file is not None and len(controlnet_args.controlnet_input_video_chosen_file.name) > 0: - print(f'Unpacking ControlNet base video') - # create a folder for the video input frames to live in - mask_in_frame_path = os.path.join(args.outdir, 'controlnet_inputframes') - os.makedirs(mask_in_frame_path, exist_ok=True) - - # save the video frames from mask video - print(f"Exporting Video Frames (1 every {anim_args.extract_nth_frame}) frames to {mask_in_frame_path}...") - vid2frames(video_path=controlnet_args.controlnet_input_video_chosen_file.name, video_in_frame_path=mask_in_frame_path, n=anim_args.extract_nth_frame, overwrite=anim_args.overwrite_extracted_frames, extract_from_frame=anim_args.extract_from_frame, extract_to_frame=anim_args.extract_to_frame, numeric_files_output=True) - - print(f"Loading {anim_args.max_frames} input frames from {mask_in_frame_path} and saving video frames to {args.outdir}") - print(f'ControlNet base video unpacked!') - - if controlnet_args.controlnet_input_video_mask_chosen_file is not None and len(controlnet_args.controlnet_input_video_mask_chosen_file.name) > 0: - print(f'Unpacking ControlNet video mask') - # create a folder for the video input frames to live in - mask_in_frame_path = os.path.join(args.outdir, 'controlnet_maskframes') - os.makedirs(mask_in_frame_path, exist_ok=True) - - # save the video frames from mask video - print(f"Exporting Video Frames (1 every {anim_args.extract_nth_frame}) frames to {mask_in_frame_path}...") - vid2frames(video_path=controlnet_args.controlnet_input_video_mask_chosen_file.name, video_in_frame_path=mask_in_frame_path, n=anim_args.extract_nth_frame, overwrite=anim_args.overwrite_extracted_frames, extract_from_frame=anim_args.extract_from_frame, extract_to_frame=anim_args.extract_to_frame, numeric_files_output=True) - - print(f"Loading {anim_args.max_frames} input frames from {mask_in_frame_path} and saving video frames to {args.outdir}") - print(f'ControlNet video mask unpacked!') - -def hide_ui_by_cn_status(choice): - return gr.update(visible=True) if choice else gr.update(visible=False) - -def build_sliders(cn_model): - if cn_model == "canny": - return [ - gr.update(label="Annotator resolution", value=512, minimum=64, maximum=2048, step=1, interactive=True), - gr.update(label="Canny low threshold", minimum=1, maximum=255, value=100, step=1, interactive=True), - gr.update(label="Canny high threshold", minimum=1, maximum=255, value=200, step=1, interactive=True), - gr.update(visible=True) - ] - elif cn_model == "mlsd": #Hough - return [ - gr.update(label="Hough Resolution", minimum=64, maximum=2048, value=512, step=1, interactive=True), - gr.update(label="Hough value threshold (MLSD)", minimum=0.01, maximum=2.0, value=0.1, step=0.01, interactive=True), - gr.update(label="Hough distance threshold (MLSD)", minimum=0.01, maximum=20.0, value=0.1, step=0.01, interactive=True), - gr.update(visible=True) - ] - elif cn_model in ["hed", "fake_scribble"]: - return [ - gr.update(label="HED Resolution", minimum=64, maximum=2048, value=512, step=1, interactive=True), - 
gr.update(label="Threshold A", value=64, minimum=64, maximum=1024, interactive=False), - gr.update(label="Threshold B", value=64, minimum=64, maximum=1024, interactive=False), - gr.update(visible=True) - ] - elif cn_model in ["openpose", "openpose_hand", "segmentation"]: - return [ - gr.update(label="Annotator Resolution", minimum=64, maximum=2048, value=512, step=1, interactive=True), - gr.update(label="Threshold A", value=64, minimum=64, maximum=1024, interactive=False), - gr.update(label="Threshold B", value=64, minimum=64, maximum=1024, interactive=False), - gr.update(visible=True) - ] - elif cn_model == "depth": - return [ - gr.update(label="Midas Resolution", minimum=64, maximum=2048, value=384, step=1, interactive=True), - gr.update(label="Threshold A", value=64, minimum=64, maximum=1024, interactive=False), - gr.update(label="Threshold B", value=64, minimum=64, maximum=1024, interactive=False), - gr.update(visible=True) - ] - elif cn_model == "depth_leres": - return [ - gr.update(label="LeReS Resolution", minimum=64, maximum=2048, value=512, step=1, interactive=True), - gr.update(label="Remove Near %", value=0, minimum=0, maximum=100, step=0.1, interactive=True), - gr.update(label="Remove Background %", value=0, minimum=0, maximum=100, step=0.1, interactive=True), - gr.update(visible=True) - ] - elif cn_model == "normal_map": - return [ - gr.update(label="Normal Resolution", minimum=64, maximum=2048, value=512, step=1, interactive=True), - gr.update(label="Normal background threshold", minimum=0.0, maximum=1.0, value=0.4, step=0.01, interactive=True), - gr.update(label="Threshold B", value=64, minimum=64, maximum=1024, interactive=False), - gr.update(visible=True) - ] - elif cn_model == "none": - return [ - gr.update(label="Normal Resolution", value=64, minimum=64, maximum=2048, interactive=False), - gr.update(label="Threshold A", value=64, minimum=64, maximum=1024, interactive=False), - gr.update(label="Threshold B", value=64, minimum=64, maximum=1024, interactive=False), - gr.update(visible=False) - ] - else: - return [ - gr.update(label="Annotator resolution", value=512, minimum=64, maximum=2048, step=1, interactive=True), - gr.update(label="Threshold A", value=64, minimum=64, maximum=1024, interactive=False), - gr.update(label="Threshold B", value=64, minimum=64, maximum=1024, interactive=False), - gr.update(visible=True) - ] - - # def svgPreprocess(inputs): - # if (inputs): - # if (inputs['image'].startswith("data:image/svg+xml;base64,") and svgsupport): - # svg_data = base64.b64decode(inputs['image'].replace('data:image/svg+xml;base64,','')) - # drawing = svg2rlg(io.BytesIO(svg_data)) - # png_data = renderPM.drawToString(drawing, fmt='PNG') - # encoded_string = base64.b64encode(png_data) - # base64_str = str(encoded_string, "utf-8") - # base64_str = "data:image/png;base64,"+ base64_str - # inputs['image'] = base64_str - # return input_image.orgpreprocess(inputs) - # return None \ No newline at end of file diff --git a/spaces/bigjoker/stable-diffusion-webui/modules/sd_samplers_compvis.py b/spaces/bigjoker/stable-diffusion-webui/modules/sd_samplers_compvis.py deleted file mode 100644 index 46a7054ebdedc5a3281d6d5928c3e4d6746fbc27..0000000000000000000000000000000000000000 --- a/spaces/bigjoker/stable-diffusion-webui/modules/sd_samplers_compvis.py +++ /dev/null @@ -1,160 +0,0 @@ -import math -import ldm.models.diffusion.ddim -import ldm.models.diffusion.plms - -import numpy as np -import torch - -from modules.shared import state -from modules import sd_samplers_common, 
prompt_parser, shared - - -samplers_data_compvis = [ - sd_samplers_common.SamplerData('DDIM', lambda model: VanillaStableDiffusionSampler(ldm.models.diffusion.ddim.DDIMSampler, model), [], {}), - sd_samplers_common.SamplerData('PLMS', lambda model: VanillaStableDiffusionSampler(ldm.models.diffusion.plms.PLMSSampler, model), [], {}), -] - - -class VanillaStableDiffusionSampler: - def __init__(self, constructor, sd_model): - self.sampler = constructor(sd_model) - self.is_plms = hasattr(self.sampler, 'p_sample_plms') - self.orig_p_sample_ddim = self.sampler.p_sample_plms if self.is_plms else self.sampler.p_sample_ddim - self.mask = None - self.nmask = None - self.init_latent = None - self.sampler_noises = None - self.step = 0 - self.stop_at = None - self.eta = None - self.config = None - self.last_latent = None - - self.conditioning_key = sd_model.model.conditioning_key - - def number_of_needed_noises(self, p): - return 0 - - def launch_sampling(self, steps, func): - state.sampling_steps = steps - state.sampling_step = 0 - - try: - return func() - except sd_samplers_common.InterruptedException: - return self.last_latent - - def p_sample_ddim_hook(self, x_dec, cond, ts, unconditional_conditioning, *args, **kwargs): - if state.interrupted or state.skipped: - raise sd_samplers_common.InterruptedException - - if self.stop_at is not None and self.step > self.stop_at: - raise sd_samplers_common.InterruptedException - - # Have to unwrap the inpainting conditioning here to perform pre-processing - image_conditioning = None - if isinstance(cond, dict): - image_conditioning = cond["c_concat"][0] - cond = cond["c_crossattn"][0] - unconditional_conditioning = unconditional_conditioning["c_crossattn"][0] - - conds_list, tensor = prompt_parser.reconstruct_multicond_batch(cond, self.step) - unconditional_conditioning = prompt_parser.reconstruct_cond_batch(unconditional_conditioning, self.step) - - assert all([len(conds) == 1 for conds in conds_list]), 'composition via AND is not supported for DDIM/PLMS samplers' - cond = tensor - - # for DDIM, shapes must match, we can't just process cond and uncond independently; - # filling unconditional_conditioning with repeats of the last vector to match length is - # not 100% correct but should work well enough - if unconditional_conditioning.shape[1] < cond.shape[1]: - last_vector = unconditional_conditioning[:, -1:] - last_vector_repeated = last_vector.repeat([1, cond.shape[1] - unconditional_conditioning.shape[1], 1]) - unconditional_conditioning = torch.hstack([unconditional_conditioning, last_vector_repeated]) - elif unconditional_conditioning.shape[1] > cond.shape[1]: - unconditional_conditioning = unconditional_conditioning[:, :cond.shape[1]] - - if self.mask is not None: - img_orig = self.sampler.model.q_sample(self.init_latent, ts) - x_dec = img_orig * self.mask + self.nmask * x_dec - - # Wrap the image conditioning back up since the DDIM code can accept the dict directly. - # Note that they need to be lists because it just concatenates them later. 
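# In other words (a sketch of the intent, not authoritative): image_conditioning was unpacked
# from the cond dict at the top of this hook so the prompt tensors could be padded to matching
# token lengths; here both cond and unconditional_conditioning are re-wrapped as
# {"c_concat": [image_conditioning], "c_crossattn": [cond]}, since that is the dict form the
# wrapped DDIM/PLMS p_sample function accepts for the inpainting model.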
- if image_conditioning is not None: - cond = {"c_concat": [image_conditioning], "c_crossattn": [cond]} - unconditional_conditioning = {"c_concat": [image_conditioning], "c_crossattn": [unconditional_conditioning]} - - res = self.orig_p_sample_ddim(x_dec, cond, ts, unconditional_conditioning=unconditional_conditioning, *args, **kwargs) - - if self.mask is not None: - self.last_latent = self.init_latent * self.mask + self.nmask * res[1] - else: - self.last_latent = res[1] - - sd_samplers_common.store_latent(self.last_latent) - - self.step += 1 - state.sampling_step = self.step - shared.total_tqdm.update() - - return res - - def initialize(self, p): - self.eta = p.eta if p.eta is not None else shared.opts.eta_ddim - if self.eta != 0.0: - p.extra_generation_params["Eta DDIM"] = self.eta - - for fieldname in ['p_sample_ddim', 'p_sample_plms']: - if hasattr(self.sampler, fieldname): - setattr(self.sampler, fieldname, self.p_sample_ddim_hook) - - self.mask = p.mask if hasattr(p, 'mask') else None - self.nmask = p.nmask if hasattr(p, 'nmask') else None - - def adjust_steps_if_invalid(self, p, num_steps): - if (self.config.name == 'DDIM' and p.ddim_discretize == 'uniform') or (self.config.name == 'PLMS'): - valid_step = 999 / (1000 // num_steps) - if valid_step == math.floor(valid_step): - return int(valid_step) + 1 - - return num_steps - - def sample_img2img(self, p, x, noise, conditioning, unconditional_conditioning, steps=None, image_conditioning=None): - steps, t_enc = sd_samplers_common.setup_img2img_steps(p, steps) - steps = self.adjust_steps_if_invalid(p, steps) - self.initialize(p) - - self.sampler.make_schedule(ddim_num_steps=steps, ddim_eta=self.eta, ddim_discretize=p.ddim_discretize, verbose=False) - x1 = self.sampler.stochastic_encode(x, torch.tensor([t_enc] * int(x.shape[0])).to(shared.device), noise=noise) - - self.init_latent = x - self.last_latent = x - self.step = 0 - - # Wrap the conditioning models with additional image conditioning for inpainting model - if image_conditioning is not None: - conditioning = {"c_concat": [image_conditioning], "c_crossattn": [conditioning]} - unconditional_conditioning = {"c_concat": [image_conditioning], "c_crossattn": [unconditional_conditioning]} - - samples = self.launch_sampling(t_enc + 1, lambda: self.sampler.decode(x1, conditioning, t_enc, unconditional_guidance_scale=p.cfg_scale, unconditional_conditioning=unconditional_conditioning)) - - return samples - - def sample(self, p, x, conditioning, unconditional_conditioning, steps=None, image_conditioning=None): - self.initialize(p) - - self.init_latent = None - self.last_latent = x - self.step = 0 - - steps = self.adjust_steps_if_invalid(p, steps or p.steps) - - # Wrap the conditioning models with additional image conditioning for inpainting model - # dummy_for_plms is needed because PLMS code checks the first item in the dict to have the right shape - if image_conditioning is not None: - conditioning = {"dummy_for_plms": np.zeros((conditioning.shape[0],)), "c_crossattn": [conditioning], "c_concat": [image_conditioning]} - unconditional_conditioning = {"c_crossattn": [unconditional_conditioning], "c_concat": [image_conditioning]} - - samples_ddim = self.launch_sampling(steps, lambda: self.sampler.sample(S=steps, conditioning=conditioning, batch_size=int(x.shape[0]), shape=x[0].shape, verbose=False, unconditional_guidance_scale=p.cfg_scale, unconditional_conditioning=unconditional_conditioning, x_T=x, eta=self.eta)[0]) - - return samples_ddim diff --git 
a/spaces/bioriAsaeru/text-to-voice/Awolnation - Megalithic Symphony 320kbps Mp3.md b/spaces/bioriAsaeru/text-to-voice/Awolnation - Megalithic Symphony 320kbps Mp3.md deleted file mode 100644 index 0a3f4c4d3bbc5a555f07a638c3bf0c96a1c66d45..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Awolnation - Megalithic Symphony 320kbps Mp3.md +++ /dev/null @@ -1,5 +0,0 @@ -
    -

    Category: Pop. Release date: 26 May 2003. Format: MP3. Genre: Pop. Awolnation - Sail. Awolnation - Megalithic Symphony. Awolnation - Megalithic Symphony (10th Anniversary Deluxe). 2020. Format: MP3 / FLAC. Quality: 320 Kbps / Lossless. Size: [RAR or ...]

    -

    Awolnation - Megalithic Symphony 320kbps mp3


    Download https://urloso.com/2uyP8g



    899543212b
    -
    -
    \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/Calendar 366 II 2.4.2 Crack Mac Osx The Ultimate Calendar App for Your Mac.md b/spaces/bioriAsaeru/text-to-voice/Calendar 366 II 2.4.2 Crack Mac Osx The Ultimate Calendar App for Your Mac.md deleted file mode 100644 index 9a0179f420b5e27c20eb3742c4303d11acc55b39..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Calendar 366 II 2.4.2 Crack Mac Osx The Ultimate Calendar App for Your Mac.md +++ /dev/null @@ -1,6 +0,0 @@ - -

    1. All X11 libraries upgraded from 6.9.2 to 7.2.0
    2. GNU gettext upgraded from 0.14.5 to 0.16.1
    3. JASper JPEG-2000 upgraded from 1.701.0 to 1.900.1
    4. TIFF upgraded from 3.7.4 to 3.8.2
    5. libpng upgraded from 1.2.8 to 1.2.18
    6. libmng upgraded from 1.0.9 to 1.0.10
    7. FreeType 2 upgraded from 2.1.10 to 2.3.5
    8. libgd2 upgraded from 2.0.33 to 2.0.35
    9. libgif upgraded from 4.1.0 to 4.1.4
    10. NetPBM upgraded from 10.26.14 to 10.34
    11. GTK 2 upgraded from 2.8.9 to 2.10.14
    12. Pango upgraded from 1.10.2 to 1.17.3
    13. Cairo upgraded from 1.0.2 to 1.4.10
    14. GNU readline upgraded from 5.1 to 5.2
    15. BerkeleyDB upgraded from 4.3.28 to 4.6.18
    16. Expat upgraded from 1.95.8 to 2.0.1
    17. libxml2 upgraded from 2.6.22 to 2.6.29
    18. libxslt upgraded from 1.1.15 to 1.1.21
    19. XMLSEC 1 upgraded from 1.2.9 to 1.2.10
    20. OpenSSL 0.9.8e added
    21. OpenSSL 0.9.7 upgraded from 0.9.7i to 0.9.7m
    22. OpenSSL 0.9.6m has been deprecated and no longer provided
    23. OpenLDAP upgraded from 2.2.30 to 2.3.37
    24. Cyrus SASL upgraded from 2.1.20 to 2.1.22
    25. MM upgraded from 1.4.0 to 1.4.2
    26. PCRE upgraded from 6.4 to 7.2
    27. LCMS upgraded from 1.15 to 1.16
    28. libIDL upgraded from 0.8.6 to 0.8.8
    29. cURL upgraded from 7.15.1 to 7.16.4
    30. Sablotron upgraded from 1.0.2 to 1.0.3
    31. ICU upgraded from 3.4 to 3.6
    32. FontConfig upgraded from 2.2.2 to 2.4.2
    33. trio upgraded from 1.10 to 1.12
    34. libart LGPL upgraded from 2.3.17 to 2.3.19
    35. GNOME Options Library (libpopt) upgraded from 1.7 to 1.10.4
    36. GNOME Structured File library (libgsf) upgraded from 1.13.3 to 1.14.5
    37. GNOME CSS library (libcroco) upgraded from 0.6.0 to 0.6.1
    38. librsvg upgraded from 2.13.3 to 2.18.0
    39. libexif upgraded from 0.6.12 to 0.6.16
    40. GnuPG upgraded from 1.4.0 to 1.4.7
    41. libgcrypt upgraded from 1.2.2 to 1.2.4
    42. libgpg-error upgraded from 1.0.0 to 1.5
    43. Tcl 8.4 upgraded from 8.4.10 to 8.4.15
    44. Tk 8.4 upgraded from 8.4.10 to 8.4.14
    For more details please see Appendix: Graphics libraries, Perl modules, and PHP PEAR modules.

    Java 2 SE 1.4.2

    OpenServer 6.0.0 can have both J2SE 1.4.2 and J2SE 5.0 installed and functional at the same time. J2SE 1.4.2 is used specifically by various OpenServer tools and by default is updated to version 1.4.2_16 when you install OpenServer 6.0.0 MP3 CD #1.

    -

    | Software Components and Packages | Abbreviation | FCS Version | MP2 Version | MP3/MP4 Version |
    | --- | --- | --- | --- | --- |
    | 3D Athena Widget Set for X11 | xaw3d | 1.5E | 1.5E | 1.5E |
    | Apache Portable Runtime Utility Library | aprutil | n/a | n/a | 1.2.8 |
    | Apache Portable Runtime | apr | n/a | n/a | 1.2.9 |
    | Accessibility Toolkit | atk | 1.8.0 | 1.10.3 | 1.10.3 |
    | bzip2 compression library and utilities | bzip2 | 1.0.3 | 1.0.3 | 1.0.3 |
    | Cairo Graphics Library | cairo | n/a | 1.0.2 | 1.4.10 |
    | compface Image Manipulation Library | compface | 1.0.0 | 1.0.0 | 1.5.2 |
    | cURL URL Library | curl | 7.13.2 | 7.15.1 | 7.16.4 |
    | Berkeley-DB Database Library | bdb | 4.3.27 | 4.3.28 | 4.6.18 |
    | Expat XML Parser | expat | 1.95.8 | 1.95.8 | 2.0.1 |
    | Expect TCL Extension | expect | 5.42 | 5.43 | 5.43 |
    | FontConfig | fontcfg | 2.2.2 | 2.2.2 | 2.4.2 |
    | FreeType Font Engine Version 1 | freetype1 | 1.3.1 | 1.3.1 | 1.3.1 |
    | FreeType Font Engine | freetype2 | 2.1.9 | 2.1.10 | 2.3.5 |
    | GD Graphics Library | gd1 | 1.8.4 | 1.8.4 | 1.8.4 |
    | GD Graphics Library | gd2 | 2.0.33 | 2.0.33 | 2.0.35 |
    | GNU dbm Library | gdbm | 1.8.0 | 1.8.0 | 1.8.0 |
    | Gnome DOM Library | gdome2 | 0.8.1 | 0.8.1 | 0.8.1 |
    | GNU gettext | gettext | 0.14.1 | 0.14.5 | 0.16.1 |
    | GIF Image Manipulation Library | giflib | 4.1.0 | 4.1.0 | 4.1.4 |
    | GIMP Portability Library | glib1 | 1.2.10 | 1.2.10 | 1.2.10 |
    | GIMP Portability Library | glib2 | 2.4.8 | 2.8.4 | 2.12.13 |
    | GNU Privacy Guard (gnupg) | gnupg | 1.4.0 | 1.4.0 | 1.4.7 |
    | GIMP Toolkit | gtk1 | 1.2.10 | 1.2.10 | 1.2.10 |
    | GIMP Toolkit | gtk2 | 2.4.14 | 2.8.9 | 2.10.14 |
    | GWXLIBS Base Support Tools | gwxlibs | 2.0.0 | 2.1.0 | 3.0.0 |
    | International Components for Unicode (ICU) | icu | 3.2 | 3.4 | 3.6 |
    | Enlightenment Imaging Library | imlib | 1.10.0 | 1.10.0 | 1.9.15 |
    | JASper JPEG2000 library | jasper | 1.701.0 | 1.701.0 | 1.900.1 |
    | ISO/IEC 11544:1993 JBIG kit | jbig | 1.6 | 1.6 | 1.6 |
    | IJG JPEG library | jpeg | 6b | 6b | 6b |
    | JavaScript Embedded C Library | js | 1.5rc5 | 1.5rc5 | 1.5 |
    | Little Color Management System (LCMS) | lcms | 1.14 | 1.15 | 1.16 |
    | Gnome IDL Library | libIDL | 0.85 | 0.8.6 | 0.8.8 |
    | Gnome ART library | libart | 2.3.17 | 2.3.17 | 2.3.19 |
    | Gnome CSS2 Parsing Toolkit (libcroco) | libcroco | 0.6.0 | 0.6.0 | 0.6.1 |
    | Gnome EXIF Widget for GTK | exifgtk | 0.3.5 | 0.3.5 | 0.3.5 |
    | EXIF Processing Library | libexif | 0.6.10 | 0.6.12 | 0.6.16 |
    | GNU Cryptographic Library | libgcrypt | 1.2.1 | 1.2.2 | 1.2.4 |
    | Gnome HTTP Client Library | libghttp | 1.0.9 | 1.0.9 | 1.0.9 |
    | GNU Privacy Guard Error Library | libgpg-err | 1.0.0 | 1.0.0 | 1.5 |
    | Gnome Structured File Library | libgsf | 1.11.1 | 1.13.3 | 1.14.5 |
    | Gnome HTML Widget for GTK | gtkhtml | 2.6.3 | 2.11.0 | 2.11.0 |
    | Multi-image Network Graphics (MNG) Library | libmng | 1.0.9 | 1.0.9 | 1.0.10 |
    | Portable Network Graphics (PNG) Library | libpng | 1.2.8 | 1.2.8 | 1.2.18 |
    | Gnome SVG Rendering Library | librsvg | 2.9.5 | 2.13.3 | 2.18.0 |
    | WMF Conversion Library | libwmf | n/a | 0.2.8.4 | 0.2.8.4 |
    | W3C Consortium Library (libwww) | libwww | 5.40 | 5.40 | 5.40 |
    | libxml2 XML C Parser and Toolkit | libxml2 | 2.6.19 | 2.6.22 | 2.6.29 |
    | libxslt XSLT C Parser and Toolkit | libxslt | 1.1.14 | 1.1.15 | 1.1.21 |
    | Libtool Dynamic Loading | ltdl | 1.5.22 | 1.5.22 | 1.5.22 |
    | MD5 Hash Library | md5 | 1.0.0 | 1.0.0 | 1.0.0 |
    | mktemp | mktemp | 1.5 | 1.5 | 1.5 |
    | OSSP mm Shared Memory Allocation Library | mm | 1.3.1 | 1.4.0 | 1.4.2 |
    | MPEG Encoder/Decoder Library | mpeglib | 1.2.1 | 1.2.1 | 1.3.1 |
    | Portable Bitmap Utilities and Libraries | netpbm | 10.26.1 | 10.26.14 | 10.34 |
    | OpenLDAP | openldap | 2.2.24 | 2.2.30 | 2.3.37 |
    | OpenSLP (Service Location Protocol) | openslp | 1.2.1 | 1.2.1 | 1.2.1 |
    | OpenSSL | openssl | 0.9.7g | 0.9.7i/0.9.6m | 0.9.7m/0.9.8e* |
    | Pango Layout and Text Rendering Library | pango | 1.4.1 | 1.10.2 | 1.17.3 |
    | Perl Compatible Regular Expressions | pcre | 5.0 | 6.4 | 7.2 |
    | pkg-config | pkgconfig | pre 0.19 | 0.19 | 0.22 |
    | Gnome Option Processing Library | popt | 1.7 | 1.7 | 1.10.4 |
    | True Random Library | rand | 1.0.0 | 1.0.0 | 1.0.0 |
    | GNU readline | readline | 5.0 | 5.1 | 5.2 |
    | Sablotron XML, DOM and XPath Processor | sablot | 1.0.1 | 1.0.2 | 1.0.3 |
    | Cyrus SASL | sasl | 2.1.20 | 2.1.20** | 2.1.22 |
    | S-lang Interpreter and Library | slang | 1.4.9 | 1.4.9 | 1.4.9 |
    | Tcl 8.4 | tcl84 | 8.4.9 | 8.4.10 | 8.4.15 |
    | Extended Tcl | tclx84 | 8.3.5 | 8.3.5 | 8.4 |
    | TIFF library and utilities | tiff | 3.7.2 | 3.7.4 | 3.8.2 |
    | Tk 8.4 | tk84 | 8.4.9 | 8.4.10 | 8.4.14 |
    | trio printf library | trio | 1.10 | 1.10 | 1.12 |
    | Xalan XSLT Processor | xalan | 1.9.0 | 1.10.0 | 1.10.0 |
    | Xerces Validating XML C++ Parser | xerces | 2.6.0 | 2.7.0 | 2.7.0 |
    | XML Security Library | xmlsec1 | 1.2.8 | 1.2.9 | 1.2.10 |
    | X.org Fonts | XORGFonts | 6.8.2 | 6.9.0 | 7.2.0 |
    | X.org Runtime | XORGRT | 6.8.2 | 6.9.0 | 7.2.0 |
    | zlib compression library | zlib | 1.2.2 | 1.2.3 | 1.2.3 |

    *For OpenServer 6.0.0 MP3, OpenSSL 0.9.8e has been added; OpenSSL 0.9.7 is upgraded from 0.9.7i to 0.9.7m; and OpenSSL 0.9.6m is deprecated and no longer provided.

    **With respect to Cyrus-SASL: the version did not change in OpenServer 6.0.0 MP2, but the way it was compiled changed significantly. In previous releases (prior to OpenServer 6.0.0 MP2), all of the backends were static. All the backends are now dynamic.

    -

    Calendar 366 II 2.4.2 Crack Mac Osx


    Download ✶✶✶ https://urloso.com/2uyQnD



    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/Crash Bandicoot N Sane Trilogy (2018) V1.0 [MULTi6-ENG] ELAMIGOS Hack Tool Free Download.md b/spaces/bioriAsaeru/text-to-voice/Crash Bandicoot N Sane Trilogy (2018) V1.0 [MULTi6-ENG] ELAMIGOS Hack Tool Free Download.md deleted file mode 100644 index 368333a9bf13dfec5e95a649ae48fbdd91bbf57f..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Crash Bandicoot N Sane Trilogy (2018) V1.0 [MULTi6-ENG] ELAMIGOS Hack Tool Free Download.md +++ /dev/null @@ -1,29 +0,0 @@ - -

    Crash Bandicoot N Sane Trilogy (2018) V1.0 [MULTi6-ENG] ELAMIGOS Hack Tool Free Download

    -

    Are you looking for a way to play the Crash Bandicoot N Sane Trilogy on your PC without spending any money or going through any online activation process? If so, you may be interested in the ElAmigos hack tool, which allows you to download and install the game for free and enjoy all its features and content. In this article, we will tell you what the Crash Bandicoot N Sane Trilogy is, what the ElAmigos hack tool is, and how to use it.

    -

    Crash Bandicoot N Sane Trilogy (2018) V1.0 [MULTi6-ENG] ELAMIGOS Hack Tool Free Download


    Download 🗹 https://urloso.com/2uyS26



    -

    What is the Crash Bandicoot N Sane Trilogy?

    -

    The Crash Bandicoot N Sane Trilogy is a collection of three remastered games that were originally released for the PlayStation in the late 1990s: Crash Bandicoot, Crash Bandicoot 2: Cortex Strikes Back, and Crash Bandicoot 3: Warped. These games are platformers that feature the titular marsupial as he battles the evil Dr. Neo Cortex and his minions. The games are known for their colorful graphics, catchy music, and challenging levels.

    -

    The Crash Bandicoot N Sane Trilogy was released for the PlayStation 4 in 2017, and for the PC, Xbox One, and Nintendo Switch in 2018. The remastered version was developed by Vicarious Visions and Iron Galaxy, and published by Activision. The remastered version features updated graphics, sound, and gameplay, as well as new content such as two bonus levels: Stormy Ascent and Future Tense.

    -

    What is the ElAmigos hack tool for the Crash Bandicoot N Sane Trilogy?

    -

    The ElAmigos hack tool for the Crash Bandicoot N Sane Trilogy is a program that allows you to play the PC version of the game without having to buy it or activate it online. The hack tool is based on the crack by Codex, and it includes all the updates and DLCs that were released for the game until July 2018. The hack tool also allows you to choose from six languages: English, French, Italian, German, Spanish, and Japanese.

    -

    The ElAmigos hack tool for the Crash Bandicoot N Sane Trilogy is easy to use and install. You just need to download it from a reliable source, such as ElAmigos-Games.com, extract it to your desired location, and run the setup.exe file. The installation process will take only a few minutes, depending on your CPU speed and hard drive space. After that, you can launch the game from the desktop shortcut or the start menu.

    -

    -

    How to use the ElAmigos hack tool for the Crash Bandicoot N Sane Trilogy?

    -

    To use the ElAmigos hack tool for the Crash Bandicoot N Sane Trilogy, you need to follow these simple steps:

    -
      -
    1. Visit ElAmigos-Games.com and search for Crash Bandicoot N Sane Trilogy.
    2. Click on the download link and choose a server from the list.
    3. Wait for the download to finish and extract the RAR file with WinRAR or 7-Zip.
    4. Run the setup.exe file and follow the instructions on the screen.
    5. Select your preferred language and destination folder.
    6. Wait for the installation to complete and close the setup.
    7. Launch the game from the desktop shortcut or the start menu.
    8. Enjoy playing Crash Bandicoot N Sane Trilogy with the ElAmigos hack tool.
    -

    Conclusion

    -

    The ElAmigos hack tool for the Crash Bandicoot N Sane Trilogy is a great way to play one of the best platformer games of all time on your PC without spending any money or going through any online activation process. The hack tool allows you to enjoy all the features and content of the game without any limitations or restrictions. The hack tool is also easy to use and install, and it supports six languages. If you are a fan of Crash Bandicoot or platformer games in general, you should definitely try out the ElAmigos hack tool for the Crash Bandicoot N Sane Trilogy.

    -

    3cee63e6c2
    -
    -
    \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/Doce Pilares Jim Rohn PDF Aprende las leyes del liderazgo y la prosperidad.md b/spaces/bioriAsaeru/text-to-voice/Doce Pilares Jim Rohn PDF Aprende las leyes del liderazgo y la prosperidad.md deleted file mode 100644 index cb9e2901447a543e768d8a6d04eb3b16d257dbe0..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Doce Pilares Jim Rohn PDF Aprende las leyes del liderazgo y la prosperidad.md +++ /dev/null @@ -1,6 +0,0 @@ -

    doce pilares jim rohn pdf download


    Download Zip 🗸🗸🗸 https://urloso.com/2uyQ6r



    - - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/configs/new_baselines/mask_rcnn_R_101_FPN_100ep_LSJ.py b/spaces/brjathu/HMR2.0/vendor/detectron2/configs/new_baselines/mask_rcnn_R_101_FPN_100ep_LSJ.py deleted file mode 100644 index 3740e9bb08c5f168a9ab3a6d94561678bad1775c..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/configs/new_baselines/mask_rcnn_R_101_FPN_100ep_LSJ.py +++ /dev/null @@ -1,9 +0,0 @@ -from .mask_rcnn_R_50_FPN_100ep_LSJ import ( - dataloader, - lr_multiplier, - model, - optimizer, - train, -) - -model.backbone.bottom_up.stages.depth = 101 diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/export/README.md b/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/export/README.md deleted file mode 100644 index c86ff62516f4e8e4b1a6c1f33f11192933cf3861..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/export/README.md +++ /dev/null @@ -1,15 +0,0 @@ - -This directory contains code to prepare a detectron2 model for deployment. -Currently it supports exporting a detectron2 model to TorchScript, ONNX, or (deprecated) Caffe2 format. - -Please see [documentation](https://detectron2.readthedocs.io/tutorials/deployment.html) for its usage. - - -### Acknowledgements - -Thanks to Mobile Vision team at Facebook for developing the Caffe2 conversion tools. - -Thanks to Computing Platform Department - PAI team at Alibaba Group (@bddpqq, @chenbohua3) who -help export Detectron2 models to TorchScript. - -Thanks to ONNX Converter team at Microsoft who help export Detectron2 models to ONNX. diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/dev/packaging/pkg_helpers.bash b/spaces/brjathu/HMR2.0/vendor/detectron2/dev/packaging/pkg_helpers.bash deleted file mode 100644 index 550bb6e5756d43da3d30c8cd9b602b3bd30a7e4a..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/dev/packaging/pkg_helpers.bash +++ /dev/null @@ -1,75 +0,0 @@ -#!/bin/bash -e -# Copyright (c) Facebook, Inc. and its affiliates. - -# Function to retry functions that sometimes timeout or have flaky failures -retry () { - $* || (sleep 1 && $*) || (sleep 2 && $*) || (sleep 4 && $*) || (sleep 8 && $*) -} -# Install with pip a bit more robustly than the default -pip_install() { - retry pip install --progress-bar off "$@" -} - - -setup_cuda() { - # Now work out the CUDA settings - # Like other torch domain libraries, we choose common GPU architectures only. - # See https://github.com/pytorch/pytorch/blob/master/torch/utils/cpp_extension.py - # and https://github.com/pytorch/vision/blob/main/packaging/pkg_helpers.bash for reference. 
- export FORCE_CUDA=1 - case "$CU_VERSION" in - cu113) - export CUDA_HOME=/usr/local/cuda-11.3/ - export TORCH_CUDA_ARCH_LIST="3.7;5.0;5.2;6.0;6.1+PTX;7.0;7.5+PTX;8.0;8.6+PTX" - ;; - cu112) - export CUDA_HOME=/usr/local/cuda-11.2/ - export TORCH_CUDA_ARCH_LIST="3.7;5.0;5.2;6.0;6.1+PTX;7.0;7.5+PTX;8.0;8.6+PTX" - ;; - cu111) - export CUDA_HOME=/usr/local/cuda-11.1/ - export TORCH_CUDA_ARCH_LIST="3.7;5.0;5.2;6.0;6.1+PTX;7.0;7.5+PTX;8.0;8.6+PTX" - ;; - cu110) - export CUDA_HOME=/usr/local/cuda-11.0/ - export TORCH_CUDA_ARCH_LIST="3.7;5.0;5.2;6.0;6.1+PTX;7.0;7.5+PTX;8.0+PTX" - ;; - cu102) - export CUDA_HOME=/usr/local/cuda-10.2/ - export TORCH_CUDA_ARCH_LIST="3.7;5.0;5.2;6.0;6.1+PTX;7.0;7.5+PTX" - ;; - cu101) - export CUDA_HOME=/usr/local/cuda-10.1/ - export TORCH_CUDA_ARCH_LIST="3.7;5.0;5.2;6.0;6.1+PTX;7.0;7.5+PTX" - ;; - cu100) - export CUDA_HOME=/usr/local/cuda-10.0/ - export TORCH_CUDA_ARCH_LIST="3.7;5.0;5.2;6.0;6.1+PTX;7.0;7.5+PTX" - ;; - cu92) - export CUDA_HOME=/usr/local/cuda-9.2/ - export TORCH_CUDA_ARCH_LIST="3.7;5.0;5.2;6.0;6.1+PTX;7.0+PTX" - ;; - cpu) - unset FORCE_CUDA - export CUDA_VISIBLE_DEVICES= - ;; - *) - echo "Unrecognized CU_VERSION=$CU_VERSION" - exit 1 - ;; - esac -} - -setup_wheel_python() { - case "$PYTHON_VERSION" in - 3.7) python_abi=cp37-cp37m ;; - 3.8) python_abi=cp38-cp38 ;; - 3.9) python_abi=cp39-cp39 ;; - *) - echo "Unrecognized PYTHON_VERSION=$PYTHON_VERSION" - exit 1 - ;; - esac - export PATH="/opt/python/$python_abi/bin:$PATH" -} diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/structures/cse_confidence.py b/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/structures/cse_confidence.py deleted file mode 100644 index ee5166f82d45ecb4ea829ec2ecab248161c19421..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/structures/cse_confidence.py +++ /dev/null @@ -1,78 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -from dataclasses import make_dataclass -from functools import lru_cache -from typing import Any, Optional -import torch - - -@lru_cache(maxsize=None) -def decorate_cse_predictor_output_class_with_confidences(BasePredictorOutput: type) -> type: - """ - Create a new output class from an existing one by adding new attributes - related to confidence estimation: - - coarse_segm_confidence (tensor) - - Details on confidence estimation parameters can be found in: - N. Neverova, D. Novotny, A. Vedaldi "Correlated Uncertainty for Learning - Dense Correspondences from Noisy Labels", p. 918--926, in Proc. NIPS 2019 - A. Sanakoyeu et al., Transferring Dense Pose to Proximal Animal Classes, CVPR 2020 - - The new class inherits the provided `BasePredictorOutput` class, - it's name is composed of the name of the provided class and - "WithConfidences" suffix. 
- - Args: - BasePredictorOutput (type): output type to which confidence data - is to be added, assumed to be a dataclass - Return: - New dataclass derived from the provided one that has attributes - for confidence estimation - """ - - PredictorOutput = make_dataclass( - BasePredictorOutput.__name__ + "WithConfidences", - fields=[ - ("coarse_segm_confidence", Optional[torch.Tensor], None), - ], - bases=(BasePredictorOutput,), - ) - - # add possibility to index PredictorOutput - - def slice_if_not_none(data, item): - if data is None: - return None - if isinstance(item, int): - return data[item].unsqueeze(0) - return data[item] - - def PredictorOutput_getitem(self, item): - PredictorOutput = type(self) - base_predictor_output_sliced = super(PredictorOutput, self).__getitem__(item) - return PredictorOutput( - **base_predictor_output_sliced.__dict__, - coarse_segm_confidence=slice_if_not_none(self.coarse_segm_confidence, item), - ) - - PredictorOutput.__getitem__ = PredictorOutput_getitem - - def PredictorOutput_to(self, device: torch.device): - """ - Transfers all tensors to the given device - """ - PredictorOutput = type(self) - base_predictor_output_to = super(PredictorOutput, self).to(device) # pyre-ignore[16] - - def to_device_if_tensor(var: Any): - if isinstance(var, torch.Tensor): - return var.to(device) - return var - - return PredictorOutput( - **base_predictor_output_to.__dict__, - coarse_segm_confidence=to_device_if_tensor(self.coarse_segm_confidence), - ) - - PredictorOutput.to = PredictorOutput_to - return PredictorOutput diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/Rethinking-BatchNorm/configs/retinanet_SyncBNhead.py b/spaces/brjathu/HMR2.0/vendor/detectron2/projects/Rethinking-BatchNorm/configs/retinanet_SyncBNhead.py deleted file mode 100644 index 222dfddffb1f9bedf87f4c345534045b29e2d8ee..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/Rethinking-BatchNorm/configs/retinanet_SyncBNhead.py +++ /dev/null @@ -1,19 +0,0 @@ -from detectron2.model_zoo import get_config -from torch import nn - -model = get_config("common/models/retinanet.py").model -model.backbone.bottom_up.freeze_at = 2 - -# The head will overwrite string "SyncBN" to use domain-specific BN, so we -# provide a class here to use shared BN in training. 
-model.head.norm = nn.SyncBatchNorm2d - -dataloader = get_config("common/data/coco.py").dataloader -lr_multiplier = get_config("common/coco_schedule.py").lr_multiplier_3x -optimizer = get_config("common/optim.py").SGD -train = get_config("common/train.py").train - -optimizer.lr = 0.01 - -train.init_checkpoint = "detectron2://ImageNetPretrained/MSRA/R-50.pkl" -train.max_iter = 270000 # 3x for batchsize = 16 diff --git a/spaces/bzd4576/sovits-sin/monotonic_align/setup.py b/spaces/bzd4576/sovits-sin/monotonic_align/setup.py deleted file mode 100644 index 30c224807a70faa9df9c9eb75f8e80c8c867b16b..0000000000000000000000000000000000000000 --- a/spaces/bzd4576/sovits-sin/monotonic_align/setup.py +++ /dev/null @@ -1,9 +0,0 @@ -from distutils.core import setup -from Cython.Build import cythonize -import numpy - -setup( - name = 'monotonic_align', - ext_modules = cythonize("core.pyx"), - include_dirs=[numpy.get_include()] -) diff --git a/spaces/changlisheng/shangChat/assets/Kelpy-Codos.js b/spaces/changlisheng/shangChat/assets/Kelpy-Codos.js deleted file mode 100644 index cfbaeedb4f371dfb5fe157db545b364046fca3e1..0000000000000000000000000000000000000000 --- a/spaces/changlisheng/shangChat/assets/Kelpy-Codos.js +++ /dev/null @@ -1,76 +0,0 @@ -// ==UserScript== -// @name Kelpy Codos -// @namespace https://github.com/Keldos-Li/Kelpy-Codos -// @version 1.0.5 -// @author Keldos; https://keldos.me/ -// @description Add copy button to PRE tags before CODE tag, for Chuanhu ChatGPT especially. -// Based on Chuanhu ChatGPT version: ac04408 (2023-3-22) -// @license GPL-3.0 -// @grant none -// ==/UserScript== - -(function () { - 'use strict'; - - function addCopyButton(pre) { - var code = pre.querySelector('code'); - if (!code) { - return; // 如果没有找到 元素,则不添加按钮 - } - var firstChild = code.firstChild; - if (!firstChild) { - return; // 如果 元素没有子节点,则不添加按钮 - } - var button = document.createElement('button'); - button.textContent = '\uD83D\uDCCE'; // 使用 📎 符号作为“复制”按钮的文本 - button.style.position = 'relative'; - button.style.float = 'right'; - button.style.fontSize = '1em'; // 可选:调整按钮大小 - button.style.background = 'none'; // 可选:去掉背景颜色 - button.style.border = 'none'; // 可选:去掉边框 - button.style.cursor = 'pointer'; // 可选:显示指针样式 - button.addEventListener('click', function () { - var range = document.createRange(); - range.selectNodeContents(code); - range.setStartBefore(firstChild); // 将范围设置为第一个子节点之前 - var selection = window.getSelection(); - selection.removeAllRanges(); - selection.addRange(range); - - try { - var success = document.execCommand('copy'); - if (success) { - button.textContent = '\u2714'; - setTimeout(function () { - button.textContent = '\uD83D\uDCCE'; // 恢复按钮为“复制” - }, 2000); - } else { - button.textContent = '\u2716'; - } - } catch (e) { - console.error(e); - button.textContent = '\u2716'; - } - - selection.removeAllRanges(); - }); - code.insertBefore(button, firstChild); // 将按钮插入到第一个子元素之前 - } - - function handleNewElements(mutationsList, observer) { - for (var mutation of mutationsList) { - if (mutation.type === 'childList') { - for (var node of mutation.addedNodes) { - if (node.nodeName === 'PRE') { - addCopyButton(node); - } - } - } - } - } - - var observer = new MutationObserver(handleNewElements); - observer.observe(document.documentElement, { childList: true, subtree: true }); - - document.querySelectorAll('pre').forEach(addCopyButton); -})(); diff --git a/spaces/cheetah003/HMMC_t2v_search/main_task_retrieval.py b/spaces/cheetah003/HMMC_t2v_search/main_task_retrieval.py deleted file mode 100644 index 
b3e49ff0694be22f9b8f23f426fdb52007e0d8d7..0000000000000000000000000000000000000000 --- a/spaces/cheetah003/HMMC_t2v_search/main_task_retrieval.py +++ /dev/null @@ -1,639 +0,0 @@ -from __future__ import absolute_import -from __future__ import division -from __future__ import unicode_literals -from __future__ import print_function -import os - -import torch -from torch.utils.data import (SequentialSampler) -import numpy as np -import random -from thop import profile - -from metrics import logging_rank -import time -import argparse -from sklearn import preprocessing -from transformers import BertTokenizer, AutoTokenizer, AutoModel -from tensorboardX import SummaryWriter -from modules.file_utils import PYTORCH_PRETRAINED_BERT_CACHE -from modules.tokenization_clip import SimpleTokenizer as ClipTokenizer -from modules.modeling import BirdModel_VT, BirdPreTrainedModel, BirdModel -from modules.optimization import BertAdam -from dataloaders.dataloader import DATALOADER_DICT -from modules.until_module import get_dual_matrix -from util import parallel_apply, get_logger -from torch.cuda.amp import autocast, GradScaler - -torch.distributed.init_process_group(backend="nccl") - -global logger - - -def get_args(description='CLIP4Clip on Retrieval Task'): - parser = argparse.ArgumentParser(description=description) - parser.add_argument("--do_pretrain", action='store_true', help="Whether to run training.") - parser.add_argument("--do_train", action='store_true', help="Whether to run training.") - parser.add_argument("--do_eval", action='store_true', help="Whether to run eval on the dev set.") - parser.add_argument("--do_params", action='store_true', help="text the params of the model.") - parser.add_argument("--use_frame_fea", action='store_true', help="whether use frame feature matching text") - parser.add_argument('--task', type=str, default="retrieval", choices=["retrieval_VT", "retrieval"], - help="choose downstream task.") - parser.add_argument('--dataset', type=str, default="bird", choices=["bird", "msrvtt", "vatex", "msvd"], - help="choose dataset.") - parser.add_argument('--num_thread_reader', type=int, default=1, help='') - parser.add_argument('--lr', type=float, default=0.0001, help='initial learning rate') - parser.add_argument('--text_lr', type=float, default=0.00001, help='text encoder learning rate') - parser.add_argument('--epochs', type=int, default=20, help='upper epoch limit') - parser.add_argument('--batch_size', type=int, default=256, help='batch size') - parser.add_argument('--batch_size_val', type=int, default=3500, help='batch size eval') - parser.add_argument('--lr_decay', type=float, default=0.9, help='Learning rate exp epoch decay') - parser.add_argument('--weight_decay', type=float, default=0.2, help='Learning rate exp epoch decay') - parser.add_argument('--n_display', type=int, default=100, help='Information display frequence') - parser.add_argument('--seed', type=int, default=42, help='random seed') - parser.add_argument('--max_words', type=int, default=32, help='') - parser.add_argument('--max_frames', type=int, default=12, help='') - parser.add_argument('--top_frames', type=int, default=3, help='') - parser.add_argument('--frame_sample', type=str, default="uniform", choices=["uniform", "random", "uniform_random"], - help='frame sample strategy') - parser.add_argument('--frame_sample_len', type=str, default="fix", choices=["dynamic", "fix"], - help='use dynamic frame length of fix frame length') - parser.add_argument('--language', type=str, default="chinese", 
choices=["chinese", "english"], - help='language for text encoder') - parser.add_argument('--use_temp', action='store_true', help='whether to use temporal transformer') - - parser.add_argument("--logdir", default=None, type=str, required=False, help="log dir for tensorboardX writer") - parser.add_argument("--output_dir", default=None, type=str, required=True, - help="The output directory where the model predictions and checkpoints will be written.") - parser.add_argument("--cross_model", default="cross-base", type=str, required=False, help="Cross module") - parser.add_argument("--init_model", default=None, type=str, required=False, help="Initial model.") - parser.add_argument("--warmup_proportion", default=0.1, type=float, - help="Proportion of training to perform linear learning rate warmup for. E.g., 0.1 = 10%% of training.") - parser.add_argument('--gradient_accumulation_steps', type=int, default=1, - help="Number of updates steps to accumulate before performing a backward/update pass.") - parser.add_argument('--n_gpu', type=int, default=1, help="Changed in the execute process.") - - parser.add_argument("--cache_dir", default="", type=str, - help="Where do you want to store the pre-trained models downloaded from s3") - - parser.add_argument('--enable_amp', action='store_true', help="whether to use pytorch amp") - - parser.add_argument("--world_size", default=0, type=int, help="distribted training") - parser.add_argument("--local_rank", default=0, type=int, help="distribted training") - parser.add_argument("--rank", default=0, type=int, help="distribted training") - parser.add_argument('--coef_lr', type=float, default=1., help='coefficient for bert branch.') - - args = parser.parse_args() - - # Check paramenters - if args.gradient_accumulation_steps < 1: - raise ValueError("Invalid gradient_accumulation_steps parameter: {}, should be >= 1".format( - args.gradient_accumulation_steps)) - if not args.do_train and not args.do_eval and not args.do_params: - raise ValueError("At least one of `do_train` or `do_eval` or 'do_params' must be True.") - - args.batch_size = int(args.batch_size / args.gradient_accumulation_steps) - - return args - - -def set_seed_logger(args): - global logger - # predefining random initial seeds - random.seed(args.seed) - os.environ['PYTHONHASHSEED'] = str(args.seed) - np.random.seed(args.seed) - torch.manual_seed(args.seed) - torch.cuda.manual_seed(args.seed) - torch.cuda.manual_seed_all(args.seed) # if you are using multi-GPU. 
- torch.backends.cudnn.benchmark = False - torch.backends.cudnn.deterministic = True - - world_size = torch.distributed.get_world_size() - torch.cuda.set_device(args.local_rank) - args.world_size = world_size - rank = torch.distributed.get_rank() - args.rank = rank - - if not os.path.exists(args.output_dir): - os.makedirs(args.output_dir, exist_ok=True) - - logger = get_logger(os.path.join(args.output_dir, "log.txt")) - if args.local_rank == 0: - if args.logdir: - args.writer = SummaryWriter(args.logdir) - logger.info("Effective parameters:") - for key in sorted(args.__dict__): - logger.info(" <<< {}: {}".format(key, args.__dict__[key])) - - return args - - -def init_device(args, local_rank): - global logger - - device = torch.device("cuda" if torch.cuda.is_available() else "cpu", local_rank) - - n_gpu = torch.cuda.device_count() - logger.info("device: {} n_gpu: {}".format(device, n_gpu)) - args.n_gpu = n_gpu - - if args.batch_size % args.n_gpu != 0 or args.batch_size_val % args.n_gpu != 0: - raise ValueError( - "Invalid batch_size/batch_size_val and n_gpu parameter: {}%{} and {}%{}, should be == 0".format( - args.batch_size, args.n_gpu, args.batch_size_val, args.n_gpu)) - - return device, n_gpu - - -def init_model(args, device, n_gpu, local_rank): - if args.init_model: - model_state_dict = torch.load(args.init_model, map_location='cpu') - else: - model_state_dict = None - - # Prepare model - cache_dir = args.cache_dir if args.cache_dir else os.path.join(str(PYTORCH_PRETRAINED_BERT_CACHE), 'distributed') - if args.task == "retrieval_VT": - model = BirdModel_VT.from_pretrained(args.cross_model, cache_dir=cache_dir, state_dict=model_state_dict, - task_config=args) - elif args.task == "retrieval": - model = BirdModel.from_pretrained(args.cross_model, cache_dir=cache_dir, state_dict=model_state_dict, - task_config=args) - else: - raise Exception('wrong task! task should in [retrieve_VT, retrieve]') - # args.writer.add_graph(model) - model.to(device) - - return model - - -def prep_optimizer(args, model, num_train_optimization_steps, device, n_gpu, local_rank, coef_lr=1.): - if hasattr(model, 'module'): - model = model.module - - param_optimizer = list(model.named_parameters()) - no_decay = ['bias', 'LayerNorm.bias', 'LayerNorm.weight'] - - decay_param_tp = [(n, p) for n, p in param_optimizer if not any(nd in n for nd in no_decay)] - no_decay_param_tp = [(n, p) for n, p in param_optimizer if any(nd in n for nd in no_decay)] - - decay_clip_param_tp = [(n, p) for n, p in decay_param_tp if "visual_encoder.visual." in n] - decay_chinesebert_param_tp = [(n, p) for n, p in decay_param_tp if "text_encoder." in n] - decay_noclip_param_tp = [(n, p) for n, p in decay_param_tp if - ("visual_encoder.visual." not in n) and ("text_encoder." not in n)] - - no_decay_clip_param_tp = [(n, p) for n, p in no_decay_param_tp if "visual_encoder.visual." in n] - no_decay_text_param_tp = [(n, p) for n, p in no_decay_param_tp if "text_encoder." in n] - no_decay_noclip_param_tp = [(n, p) for n, p in no_decay_param_tp if - ("visual_encoder.visual." not in n) and ("text_encoder." 
not in n)] - - weight_decay = args.weight_decay - optimizer_grouped_parameters = [ - {'params': [p for n, p in decay_clip_param_tp], 'weight_decay': weight_decay, 'lr': args.lr * coef_lr}, - {'params': [p for n, p in decay_chinesebert_param_tp], 'weight_decay': weight_decay, 'lr': args.text_lr}, - {'params': [p for n, p in decay_noclip_param_tp], 'weight_decay': weight_decay}, - {'params': [p for n, p in no_decay_clip_param_tp], 'weight_decay': 0.0, 'lr': args.lr * coef_lr}, - {'params': [p for n, p in no_decay_text_param_tp], 'weight_decay': 0.0, 'lr': args.text_lr}, - {'params': [p for n, p in no_decay_noclip_param_tp], 'weight_decay': 0.0} - ] - - scheduler = None - optimizer = BertAdam(optimizer_grouped_parameters, lr=args.lr, warmup=args.warmup_proportion, - schedule='warmup_cosine', b1=0.9, b2=0.98, e=1e-6, - t_total=num_train_optimization_steps, weight_decay=weight_decay, - max_grad_norm=1.0) - - model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[local_rank], - output_device=local_rank, find_unused_parameters=True) - # if args.local_rank == 0: - # for name, parameters in model.named_parameters(): - # logger.info("name:{} requires_grad:{} size:{}".format(name, parameters.requires_grad, parameters.size())) - return optimizer, scheduler, model - - -def save_model(epoch, args, model, type_name=""): - # Only save the model it-self - model_to_save = model.module if hasattr(model, 'module') else model - output_model_file = os.path.join( - args.output_dir, "pytorch_model.bin.{}{}".format("" if type_name == "" else type_name + ".", epoch)) - torch.save(model_to_save.state_dict(), output_model_file) - logger.info("Model saved to %s", output_model_file) - return output_model_file - - -def load_model(epoch, args, n_gpu, device, model_file=None): - if model_file is None or len(model_file) == 0: - model_file = os.path.join(args.output_dir, "pytorch_model.bin.{}".format(epoch)) - if os.path.exists(model_file): - model_state_dict = torch.load(model_file, map_location='cpu') - if args.local_rank == 0: - logger.info("Model loaded from %s", model_file) - # Prepare model - cache_dir = args.cache_dir if args.cache_dir else os.path.join(str(PYTORCH_PRETRAINED_BERT_CACHE), - 'distributed') - if args.task == "retrieval": - model = BirdModel.from_pretrained(args.cross_model, cache_dir=cache_dir, state_dict=model_state_dict, - task_config=args) - elif args.task == "retrieval_VT": - model = BirdModel_VT.from_pretrained(args.cross_model, cache_dir=cache_dir, state_dict=model_state_dict, - task_config=args) - else: - model = None - - model.to(device) - else: - model = None - return model - - -def train_epoch(epoch, args, model, train_dataloader, device, n_gpu, optimizer, scheduler, scaler, global_step, local_rank=0): - global logger - torch.cuda.empty_cache() - model.train() - log_step = args.n_display - start_time = time.time() - total_loss = 0 - load_start_time = time.time() - for step, batch in enumerate(train_dataloader): - load_finish_time = time.time() - if global_step % log_step == 0 and local_rank == 0: - logger.info("data loader time:{}".format(load_finish_time - load_start_time)) - global_step += 1 - if n_gpu == 1: - # multi-gpu does scattering it-self - batch = tuple(t.to(device=device, non_blocking=True) for t in batch) - - with autocast(enabled=args.enable_amp): - if args.task == "retrieval_VT": - query_ids, query_mask, video_data, video_frame, title_ids, title_mask, idx = batch - loss = model(query_ids, query_mask, video_data, video_frame, title_ids, title_mask, idx, global_step) 
- elif args.task == "retrieval": - query_ids, query_mask, video_data, video_frame, idx = batch - loss = model(query_ids, query_mask, video_data, video_frame, idx, global_step) - else: - raise ValueError("wrong task type:{}".format(args.task)) - if n_gpu > 1: - loss = loss.mean() # mean() to average on multi-gpu. - if args.gradient_accumulation_steps > 1: - loss = loss / args.gradient_accumulation_steps - forward_time = time.time() - if args.enable_amp: - scaler.scale(loss).backward() - else: - loss.backward() - total_loss += float(loss) - backward_time = time.time() - if global_step % log_step == 0 and local_rank == 0: - logger.info("forward_time:{},backward_time:{}".format(forward_time - load_finish_time, backward_time - forward_time)) - - if (step + 1) % args.gradient_accumulation_steps == 0: - torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0) - - if scheduler is not None: - scheduler.step() # Update learning rate schedule - - if args.enable_amp: - scaler.step(optimizer) - scaler.update() - else: - optimizer.step() - - optimizer.zero_grad() - - if global_step % log_step == 0 and local_rank == 0: - logger.info("Epoch: %d/%s, Step: %d/%d, Lr: %s, Loss: %f, Time/step: %f", epoch + 1, - args.epochs, step + 1, - len(train_dataloader), - "-".join([str('%.9f' % itm) for itm in sorted(list(set(optimizer.get_lr())))]), - float(loss), - (time.time() - start_time) / (log_step * args.gradient_accumulation_steps)) - if args.logdir: - # args.writer.add_scalar('loss', loss.item(), global_step=global_step) - args.writer.add_scalars('lr', {"lr%d" % i: itm for i, itm in enumerate(sorted(list(set(optimizer.get_lr()))))}, - global_step=global_step) - start_time = time.time() - load_start_time = time.time() - total_loss = total_loss / len(train_dataloader) - return total_loss, global_step - - -def _run_on_single_gpu(model, batch_query_output_list, batch_visual_output_list, batch_title_output_list, - batch_frame_output_list): - sim_matrix = [] - sim_matrix_title = [] - sim_matrix_frame = [] - for idx1, query_output in enumerate(batch_query_output_list): - each_row = [] - title_each_row = [] - frame_each_row = [] - for idx2, (visual_output, title_output, frame_output) in enumerate(zip(batch_visual_output_list, - batch_title_output_list, batch_frame_output_list)): - b1b2_logits = model.loose_similarity(query_output, visual_output) - title_logits = model.loose_similarity(query_output, title_output) - frame_logits = model.loose_similarity(query_output, frame_output) - frame_logits = torch.topk(frame_logits, k=model.top_frames, dim=2)[0] - frame_logits = torch.mean(frame_logits, dim=2) - b1b2_logits = b1b2_logits.cpu().detach().numpy() - title_logits = title_logits.cpu().detach().numpy() - frame_logits = frame_logits.cpu().detach().numpy() - each_row.append(b1b2_logits) - title_each_row.append(title_logits) - frame_each_row.append(frame_logits) - # logger.info("b1b2_logits:{}".format(b1b2_logits.shape)) - # logger.info("frame_logits:{}".format(frame_logits.shape)) - - each_row = np.concatenate(tuple(each_row), axis=-1) - # logger.info("each_row:{}".format(each_row.shape)) - title_each_row = np.concatenate(tuple(title_each_row), axis=-1) - # frame_each_row = np.concatenate(tuple(frame_each_row), axis=-1) - frame_each_row = np.concatenate(tuple(frame_each_row), axis=1) - # logger.info("frame_each_row:{}".format(frame_each_row.shape)) - # sim_matrix.append(preprocessing.scale(each_row, axis=1)) - sim_matrix.append(each_row) - sim_matrix_title.append(title_each_row) - sim_matrix_frame.append(frame_each_row) - # 
logger.info("sim_matrix:{}".format(sim_matrix)) - return sim_matrix, sim_matrix_title, sim_matrix_frame - - -def eval_epoch(args, model, test_dataloader, device, n_gpu): - torch.cuda.empty_cache() - if hasattr(model, 'module'): - model = model.module.to(device) - else: - model = model.to(device) - - model.eval() - logger.info("args.task:{}".format(args.task)) - - # if multi_sentence_ == True: compute the similarity with multi-sentences retrieval - multi_sentence_ = False - - cut_off_points_, sentence_num_, video_num_ = [], -1, -1 - if hasattr(test_dataloader.dataset, 'multi_sentence_per_video') \ - and test_dataloader.dataset.multi_sentence_per_video: - multi_sentence_ = True - cut_off_points_ = test_dataloader.dataset.cut_off_points # used to tag the label when calculate the metric - sentence_num_ = test_dataloader.dataset.sentence_num # used to cut the sentence representation - video_num_ = test_dataloader.dataset.video_num # used to cut the video representation - cut_off_points_ = [itm - 1 for itm in cut_off_points_] - logger.info("multi_sentence_:{}".format(multi_sentence_)) - - with torch.no_grad(): - batch_query_output_list, batch_visual_output_list = [], [] - batch_title_output_list = [] - batch_frame_output_list = [] - total_video_num = 0 - # ---------------------------- - # 1. cache the features - # ---------------------------- - for bid, batch in enumerate(test_dataloader): - batch = tuple(t.to(device) for t in batch) - if args.task == "retrieval_VT": - query_ids, query_mask, video, video_frame, title_ids, title_mask = batch - elif args.task == "retrieval": - query_ids, query_mask, video, video_frame = batch - else: - raise ValueError("wrong task type:{}".format(args.task)) - - print("bid:{}/{}".format(bid, len(test_dataloader)), end="\r") - if multi_sentence_: - # multi-sentences retrieval means: one frame clip has two or more descriptions. - b, *_t = video.shape - # logger.info("query_ids.shape:{}".format(query_ids.shape)) - # logger.info("video.shape:{}".format(video.shape)) - query_output = model.text_encoder(query_ids, query_mask) - batch_query_output_list.append(query_output) - title_output = torch.zeros_like(query_output) - batch_title_output_list.append(title_output) - s_, e_ = total_video_num, total_video_num + b - filter_inds = [itm - s_ for itm in cut_off_points_ if s_ <= itm < e_] - - if len(filter_inds) > 0: - video = video[filter_inds, ...] 
- visual_output, frame_output = model.visual_encoder(video, video_frame) - # frame_output = torch.mean(frame_output, dim=1) - batch_visual_output_list.append(visual_output) - batch_frame_output_list.append(frame_output) - total_video_num += b - else: - query_output = model.text_encoder(query_ids, query_mask) - visual_output, frame_output = model.visual_encoder(video, video_frame) - # frame_output = torch.mean(frame_output, dim=1) - if args.task == "retrieval_VT": - title_output = model.text_encoder(title_ids, title_mask) - logger.info("title_output.shape:{}".format(title_output.shape)) - elif args.task == "retrieval": - title_output = torch.zeros_like(query_output) - else: - raise ValueError("wrong task type:{}".format(args.task)) - - # logger.info("query_output.shape:{}".format(query_output.shape)) - # logger.info("weight_VTM:{},weight_FTM:{},exp:{}".format(model.weight_VTM, model.weight_FTM, - # model.text_encoder.logit_scale.exp())) - logger.info("visual_output.shape:{}".format(visual_output.shape)) - logger.info("frame_output.shape:{}".format(frame_output.shape)) - - batch_query_output_list.append(query_output) - batch_visual_output_list.append(visual_output) - batch_title_output_list.append(title_output) - batch_frame_output_list.append(frame_output) - - # ---------------------------------- - # 2. calculate the similarity - # ---------------------------------- - logger.info("n_gpu:{}".format(n_gpu)) - # logger.info("model.weight_sum:{}".format(model.weight_sum)) - if n_gpu > 1: - device_ids = list(range(n_gpu)) - batch_t_output_splits = [] - batch_v_output_splits = [] - batch_title_output_splits = [] - batch_frame_output_splits = [] - bacth_len = len(batch_query_output_list) - split_len = (bacth_len + n_gpu - 1) // n_gpu - for dev_id in device_ids: - s_, e_ = dev_id * split_len, (dev_id + 1) * split_len - if dev_id == 0: - batch_t_output_splits.append(batch_query_output_list[s_:e_]) - batch_v_output_splits.append(batch_visual_output_list) - batch_title_output_splits.append(batch_title_output_list) - batch_frame_output_splits.append(batch_frame_output_list) - else: - devc = torch.device('cuda:{}'.format(str(dev_id))) - - devc_batch_list = [b.to(devc) for b in batch_query_output_list[s_:e_]] - batch_t_output_splits.append(devc_batch_list) - devc_batch_list = [b.to(devc) for b in batch_visual_output_list] - batch_v_output_splits.append(devc_batch_list) - devc_batch_list = [b.to(devc) for b in batch_title_output_list] - batch_title_output_splits.append(devc_batch_list) - devc_batch_list = [b.to(devc) for b in batch_frame_output_list] - batch_frame_output_splits.append(devc_batch_list) - - parameters_tuple_list = [(batch_t_output_splits[dev_id], batch_v_output_splits[dev_id], - batch_title_output_splits[dev_id], batch_frame_output_splits[dev_id]) for dev_id in device_ids] - parallel_outputs_tuple = parallel_apply(_run_on_single_gpu, model, parameters_tuple_list, device_ids) - sim_matrix = [] - sim_matrix_title = [] - sim_matrix_frame = [] - for idx in range(len(parallel_outputs_tuple)): - parallel_outputs, parallel_outputs_title, parallel_outputs_frame = parallel_outputs_tuple[idx] - sim_matrix += parallel_outputs - sim_matrix_title += parallel_outputs_title - sim_matrix_frame += parallel_outputs_frame - sim_matrix = np.concatenate(tuple(sim_matrix), axis=0) - sim_matrix_title = np.concatenate(tuple(sim_matrix_title), axis=0) - sim_matrix_frame = np.concatenate(tuple(sim_matrix_frame), axis=0) - else: - sim_matrix_tuple = _run_on_single_gpu(model, batch_query_output_list, 
batch_visual_output_list, - batch_title_output_list, batch_frame_output_list) - sim_matrix, sim_matrix_title, sim_matrix_frame = sim_matrix_tuple - sim_matrix = np.concatenate(tuple(sim_matrix), axis=0) - sim_matrix_title = np.concatenate(tuple(sim_matrix_title), axis=0) - sim_matrix_frame = np.concatenate(tuple(sim_matrix_frame), axis=0) - - batch_visual_output_list = torch.cat(batch_visual_output_list, dim=0) - batch_frame_output_list = torch.cat(batch_frame_output_list, dim=0) - batch_visual_output_list = batch_visual_output_list.cpu().detach().numpy() - batch_frame_output_list = batch_frame_output_list.cpu().detach().numpy() - # np.save("/ai/swxdisk/data/vatex/features/Chinese_batch_visual_output_list", batch_visual_output_list) - # np.save("/ai/swxdisk/data/vatex/features/Chinese_batch_frame_output_list", batch_frame_output_list) - np.save("/ai/swxdisk/data/vatex/features/English_batch_visual_output_list", batch_visual_output_list) - np.save("/ai/swxdisk/data/vatex/features/English_batch_frame_output_list", batch_frame_output_list) - - # logger.info("sim_matrix:{}".format(sim_matrix.shape)) - # logger.info("sim_matrix_frame:{}".format(sim_matrix_frame.shape)) - # np.save("/ai/swxdisk/data/msrvtt/visualize/sim_matrix", sim_matrix) - # np.save("/ai/swxdisk/data/msrvtt/visualize/sim_matrix_frame_top2", sim_matrix_frame) - # sim_matrix_frame = np.topk(sim_matrix_frame, k=model.top_frames, dim=2)[0] - # sim_matrix_frame = np.mean(sim_matrix_frame, dim=2) - if args.use_frame_fea: - sim_matrix += sim_matrix_frame - - if args.task == "retrieval_VT": - # logger.info("sim_matrix_title:{}".format(sim_matrix_title)) - weight_title = model.weight_title - sim_matrix += weight_title * sim_matrix_title - # sim_matrix = weight_title * sim_matrix_title - - logger.info("sim matrix size: {}".format(np.array(sim_matrix).shape)) - # sim_matrix = get_dual_matrix(sim_matrix) - - tv_metrics = logging_rank(sim_matrix, multi_sentence_, cut_off_points_, logger) - return tv_metrics - - -def main(): - global logger - args = get_args() - args = set_seed_logger(args) - device, n_gpu = init_device(args, args.local_rank) - - # get text pretrained path - pretrained_text = "hfl/chinese-roberta-wwm-ext" - args.pretrained_text = pretrained_text - if args.language == "chinese": - tokenizer = BertTokenizer.from_pretrained(pretrained_text) - else: - tokenizer = ClipTokenizer() - - model = init_model(args, device, n_gpu, args.local_rank) - ## #################################### - # freeze testing - ## #################################### - ''' - assert args.freeze_layer_num <= 12 and args.freeze_layer_num >= -1 - if hasattr(model, "visual_encoder") and args.freeze_layer_num > -1: - for name, param in model.visual_encoder.named_parameters(): - # top layers always need to train - if name.find("ln_final.") == 0 or name.find("text_projection") == 0 or name.find("logit_scale") == 0 \ - or name.find("visual.ln_post.") == 0 or name.find("visual.proj") == 0: - continue # need to train - elif name.find("visual.transformer.resblocks.") == 0 or name.find("transformer.resblocks.") == 0: - layer_num = int(name.split(".resblocks.")[1].split(".")[0]) - if layer_num >= args.freeze_layer_num: - continue # need to train - - if args.linear_patch == "3d" and name.find("conv2."): - continue - else: - # paramenters which < freeze_layer_num will be freezed - param.requires_grad = False - ''' - assert args.dataset in DATALOADER_DICT - test_dataloader, test_length = DATALOADER_DICT[args.dataset]["test"](args, tokenizer) - - if args.local_rank == 0: 
- logger.info("***** Running test *****") - logger.info(" Num examples = %d", test_length) - logger.info(" Batch size = %d", args.batch_size_val) - logger.info(" Num steps = %d", len(test_dataloader)) - - if args.do_train: - train_dataloader, train_length, train_sampler = DATALOADER_DICT[args.dataset]["train"](args, tokenizer) - - num_train_optimization_steps = (int(len(train_dataloader) + args.gradient_accumulation_steps - 1) - / args.gradient_accumulation_steps) * args.epochs - # logger.info("train_dataloader len = {}".format(len(train_dataloader))) - # logger.info("gradient_accumulation_steps = {}".format(args.gradient_accumulation_steps)) - coef_lr = args.coef_lr - optimizer, scheduler, model = prep_optimizer(args, model, num_train_optimization_steps, device, n_gpu, - args.local_rank, coef_lr=coef_lr) - - if args.local_rank == 0: - logger.info("***** Running training *****") - logger.info(" Num examples = %d", train_length) - logger.info(" Batch size = %d", args.batch_size) - logger.info(" Num steps = %d", num_train_optimization_steps * args.gradient_accumulation_steps) - - best_score = 0.00001 - best_output_model_file = "None" - global_step = 0 - if args.enable_amp: - scaler = GradScaler() - else: - scaler = None - for epoch in range(args.epochs): - train_sampler.set_epoch(epoch) - tr_loss, global_step = train_epoch(epoch, args, model, train_dataloader, device, n_gpu, optimizer, - scheduler, scaler, global_step, local_rank=args.local_rank) - if args.local_rank == 0: - logger.info("Epoch %d/%s Finished, Train Loss: %f", epoch + 1, args.epochs, tr_loss) - # for name, param in model.named_parameters(): - # args.writer.add_histogram(name, param.clone().cpu().data.numpy(), epoch) - # writer.add_histogram(name + '/grad', param.requires_grad_().clone().cpu().data.numpy(), epoch) - if epoch % 1 == 0: - ## Uncomment if want to save checkpoint - output_model_file = save_model(epoch, args, model, type_name="") - # if epoch == 100: - metrics = eval_epoch(args, model, test_dataloader, device, n_gpu) - if args.logdir: - args.writer.add_scalars('metrics', {'R1': metrics["R1"], 'R5': metrics["R5"], - 'R10': metrics["R10"]}, global_step=epoch) - if best_score < metrics["R1"]: - best_score = metrics["R1"] - best_output_model_file = output_model_file - logger.info("The best model is: {}, the R1 is: {:.4f}".format(best_output_model_file, best_score)) - - elif args.do_eval: - if args.local_rank == 0: - eval_epoch(args, model, test_dataloader, device, n_gpu) - elif args.do_params: - logger.info("do_params begin!") - # total = sum([param.nelement() for param in model.parameters()]) - total = sum(p.numel() for p in model.parameters()) - logger.info("Number of parameter: %.2fM" % (total / 1e6)) - for bid, batch in enumerate(test_dataloader): - batch = tuple(t.to(device) for t in batch) - query_ids, query_mask, pos_video_data, pos_title_ids, pos_title_mask, = batch - flops, params = profile(model, (query_ids, query_mask, pos_video_data, pos_title_ids, pos_title_mask,)) - print('flops: %.2f G, params: %.2f M' % (flops / 1e9, params / 1e6)) - break - if args.local_rank == 0 and args.logdir: - args.writer.close() - - -if __name__ == "__main__": - main() diff --git a/spaces/chongjie/PoseDiffusion_MVP/models/denoiser.py b/spaces/chongjie/PoseDiffusion_MVP/models/denoiser.py deleted file mode 100644 index bdd20e7b4f0d20c21995be1d4aadfb97a4400ed1..0000000000000000000000000000000000000000 --- a/spaces/chongjie/PoseDiffusion_MVP/models/denoiser.py +++ /dev/null @@ -1,179 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. 
and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import logging -from collections import defaultdict -from dataclasses import field, dataclass -from typing import Any, Dict, List, Optional, Tuple, Union, Callable -from util.embedding import TimeStepEmbedding, PoseEmbedding - -import torch -import torch.nn as nn - -from hydra.utils import instantiate - - -logger = logging.getLogger(__name__) - - -class Denoiser(nn.Module): - def __init__( - self, - TRANSFORMER: Dict, - target_dim: int = 9, # TODO: reduce fl dim from 2 to 1 - pivot_cam_onehot: bool = True, - z_dim: int = 384, - mlp_hidden_dim: bool = 128, - ): - super().__init__() - - self.pivot_cam_onehot = pivot_cam_onehot - self.target_dim = target_dim - - self.time_embed = TimeStepEmbedding() - self.pose_embed = PoseEmbedding(target_dim=self.target_dim) - - first_dim = ( - self.time_embed.out_dim - + self.pose_embed.out_dim - + z_dim - + int(self.pivot_cam_onehot) - ) - - d_model = TRANSFORMER.d_model - self._first = nn.Linear(first_dim, d_model) - - # slightly different from the paper that - # we use 2 encoder layers and 6 decoder layers - # here we use a transformer with 8 encoder layers - # call TransformerEncoderWrapper() to build a encoder-only transformer - self._trunk = instantiate(TRANSFORMER, _recursive_=False) - - # TODO: change the implementation of MLP to a more mature one - self._last = MLP( - d_model, - [mlp_hidden_dim, self.target_dim], - norm_layer=nn.LayerNorm, - ) - - def forward( - self, - x: torch.Tensor, # B x N x dim - t: torch.Tensor, # B - z: torch.Tensor, # B x N x dim_z - ): - B, N, _ = x.shape - - t_emb = self.time_embed(t) - # expand t from B x C to B x N x C - t_emb = t_emb.view(B, 1, t_emb.shape[-1]).expand(-1, N, -1) - - x_emb = self.pose_embed(x) - - if self.pivot_cam_onehot: - # add the one hot vector identifying the first camera as pivot - cam_pivot_id = torch.zeros_like(z[..., :1]) - cam_pivot_id[:, 0, ...] = 1.0 - z = torch.cat([z, cam_pivot_id], dim=-1) - - feed_feats = torch.cat([x_emb, t_emb, z], dim=-1) - - input_ = self._first(feed_feats) - - feats_ = self._trunk(input_) - - output = self._last(feats_) - - return output - - -def TransformerEncoderWrapper( - d_model: int, - nhead: int, - num_encoder_layers: int, - dim_feedforward: int = 2048, - dropout: float = 0.1, - norm_first: bool = True, - batch_first: bool = True, -): - encoder_layer = torch.nn.TransformerEncoderLayer( - d_model=d_model, - nhead=nhead, - dim_feedforward=dim_feedforward, - dropout=dropout, - batch_first=batch_first, - norm_first=norm_first, - ) - - _trunk = torch.nn.TransformerEncoder(encoder_layer, num_encoder_layers) - return _trunk - - -class MLP(torch.nn.Sequential): - """This block implements the multi-layer perceptron (MLP) module. - - Args: - in_channels (int): Number of channels of the input - hidden_channels (List[int]): List of the hidden channel dimensions - norm_layer (Callable[..., torch.nn.Module], optional): - Norm layer that will be stacked on top of the convolution layer. - If ``None`` this layer wont be used. Default: ``None`` - activation_layer (Callable[..., torch.nn.Module], optional): - Activation function which will be stacked on top of the - normalization layer (if not None), otherwise on top of the - conv layer. If ``None`` this layer wont be used. - Default: ``torch.nn.ReLU`` - inplace (bool): Parameter for the activation layer, which can - optionally do the operation in-place. 
Default ``True`` - bias (bool): Whether to use bias in the linear layer. Default ``True`` - dropout (float): The probability for the dropout layer. Default: 0.0 - """ - - def __init__( - self, - in_channels: int, - hidden_channels: List[int], - norm_layer: Optional[Callable[..., torch.nn.Module]] = None, - activation_layer: Optional[ - Callable[..., torch.nn.Module] - ] = torch.nn.ReLU, - inplace: Optional[bool] = True, - bias: bool = True, - norm_first: bool = False, - dropout: float = 0.0, - ): - # The addition of `norm_layer` is inspired from - # the implementation of TorchMultimodal: - # https://github.com/facebookresearch/multimodal/blob/5dec8a/torchmultimodal/modules/layers/mlp.py - params = {} if inplace is None else {"inplace": inplace} - - layers = [] - in_dim = in_channels - - for hidden_dim in hidden_channels[:-1]: - if norm_first and norm_layer is not None: - layers.append(norm_layer(in_dim)) - - layers.append(torch.nn.Linear(in_dim, hidden_dim, bias=bias)) - - if not norm_first and norm_layer is not None: - layers.append(norm_layer(hidden_dim)) - - layers.append(activation_layer(**params)) - - if dropout > 0: - layers.append(torch.nn.Dropout(dropout, **params)) - - in_dim = hidden_dim - - if norm_first and norm_layer is not None: - layers.append(norm_layer(in_dim)) - - layers.append(torch.nn.Linear(in_dim, hidden_channels[-1], bias=bias)) - if dropout > 0: - layers.append(torch.nn.Dropout(dropout, **params)) - - super().__init__(*layers) diff --git a/spaces/chongjie/PoseDiffusion_MVP/models/image_feature_extractor.py b/spaces/chongjie/PoseDiffusion_MVP/models/image_feature_extractor.py deleted file mode 100644 index 05325c864012286d4e0d283d84de47b4fdfdef0b..0000000000000000000000000000000000000000 --- a/spaces/chongjie/PoseDiffusion_MVP/models/image_feature_extractor.py +++ /dev/null @@ -1,108 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -import logging -import math -import warnings -from collections import defaultdict -from dataclasses import field, dataclass -from typing import Any, Dict, List, Optional, Tuple, Union, Callable - - -import torch -import torch.nn as nn -import torchvision - -import io -from PIL import Image -import numpy as np - -logger = logging.getLogger(__name__) - -_RESNET_MEAN = [0.485, 0.456, 0.406] -_RESNET_STD = [0.229, 0.224, 0.225] - - -class MultiScaleImageFeatureExtractor(nn.Module): - def __init__( - self, - modelname: str = "dino_vits16", - freeze: bool = False, - scale_factors: list = [1, 1 / 2, 1 / 3], - ): - super().__init__() - self.freeze = freeze - self.scale_factors = scale_factors - - if "res" in modelname: - self._net = getattr(torchvision.models, modelname)(pretrained=True) - self._output_dim = self._net.fc.weight.shape[1] - self._net.fc = nn.Identity() - elif "dino" in modelname: - self._net = torch.hub.load("facebookresearch/dino:main", modelname) - self._output_dim = self._net.norm.weight.shape[0] - else: - raise ValueError(f"Unknown model name {modelname}") - - for name, value in ( - ("_resnet_mean", _RESNET_MEAN), - ("_resnet_std", _RESNET_STD), - ): - self.register_buffer( - name, - torch.FloatTensor(value).view(1, 3, 1, 1), - persistent=False, - ) - - if self.freeze: - for param in self.parameters(): - param.requires_grad = False - - def get_output_dim(self): - return self._output_dim - - def forward(self, image_rgb: torch.Tensor) -> torch.Tensor: - img_normed = self._resnet_normalize_image(image_rgb) - - features = self._compute_multiscale_features(img_normed) - - return features - - def _resnet_normalize_image(self, img: torch.Tensor) -> torch.Tensor: - return (img - self._resnet_mean) / self._resnet_std - - def _compute_multiscale_features( - self, img_normed: torch.Tensor - ) -> torch.Tensor: - multiscale_features = None - - if len(self.scale_factors) <= 0: - raise ValueError( - f"Wrong format of self.scale_factors: {self.scale_factors}" - ) - - for scale_factor in self.scale_factors: - if scale_factor == 1: - inp = img_normed - else: - inp = self._resize_image(img_normed, scale_factor) - - if multiscale_features is None: - multiscale_features = self._net(inp) - else: - multiscale_features += self._net(inp) - - averaged_features = multiscale_features / len(self.scale_factors) - return averaged_features - - @staticmethod - def _resize_image(image: torch.Tensor, scale_factor: float) -> torch.Tensor: - return nn.functional.interpolate( - image, - scale_factor=scale_factor, - mode="bilinear", - align_corners=False, - ) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/confection/tests/test_config.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/confection/tests/test_config.py deleted file mode 100644 index b0020b873973a6665d6f6db26db67a7adfbd8af2..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/confection/tests/test_config.py +++ /dev/null @@ -1,1405 +0,0 @@ -import inspect -import platform - -import catalogue -import pytest -from typing import Dict, Optional, Iterable, Callable, Any, Union, List, Tuple -from types import GeneratorType -import pickle - -from pydantic import BaseModel, StrictFloat, PositiveInt, constr -from pydantic.types import StrictBool - -from confection import ConfigValidationError, Config -from confection.util import Generator, partial -from confection.tests.util import Cat, my_registry, make_tempdir - - 
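A minimal usage sketch for the MultiScaleImageFeatureExtractor removed above. It is not part of either deleted file; the import path simply mirrors the deleted module's location, and the input resolution and torch.hub availability of dino_vits16 are assumptions.

import torch
# Assumed import path, following the deleted models/image_feature_extractor.py layout.
from models.image_feature_extractor import MultiScaleImageFeatureExtractor

# dino_vits16 weights are assumed to be fetchable via torch.hub.
extractor = MultiScaleImageFeatureExtractor(modelname="dino_vits16", freeze=True)
images = torch.rand(2, 3, 224, 224)   # a batch of RGB images in [0, 1]
features = extractor(images)          # features averaged over the configured scale factors
assert features.shape == (2, extractor.get_output_dim())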
-EXAMPLE_CONFIG = """ -[optimizer] -@optimizers = "Adam.v1" -beta1 = 0.9 -beta2 = 0.999 -use_averages = true - -[optimizer.learn_rate] -@schedules = "warmup_linear.v1" -initial_rate = 0.1 -warmup_steps = 10000 -total_steps = 100000 - -[pipeline] - -[pipeline.classifier] -name = "classifier" -factory = "classifier" - -[pipeline.classifier.model] -@layers = "ClassifierModel.v1" -hidden_depth = 1 -hidden_width = 64 -token_vector_width = 128 - -[pipeline.classifier.model.embedding] -@layers = "Embedding.v1" -width = ${pipeline.classifier.model:token_vector_width} - -""" - -OPTIMIZER_CFG = """ -[optimizer] -@optimizers = "Adam.v1" -beta1 = 0.9 -beta2 = 0.999 -use_averages = true - -[optimizer.learn_rate] -@schedules = "warmup_linear.v1" -initial_rate = 0.1 -warmup_steps = 10000 -total_steps = 100000 -""" - - -class HelloIntsSchema(BaseModel): - hello: int - world: int - - class Config: - extra = "forbid" - - -class DefaultsSchema(BaseModel): - required: int - optional: str = "default value" - - class Config: - extra = "forbid" - - -class ComplexSchema(BaseModel): - outer_req: int - outer_opt: str = "default value" - - level2_req: HelloIntsSchema - level2_opt: DefaultsSchema = DefaultsSchema(required=1) - - -good_catsie = {"@cats": "catsie.v1", "evil": False, "cute": True} -ok_catsie = {"@cats": "catsie.v1", "evil": False, "cute": False} -bad_catsie = {"@cats": "catsie.v1", "evil": True, "cute": True} -worst_catsie = {"@cats": "catsie.v1", "evil": True, "cute": False} - - -def test_validate_simple_config(): - simple_config = {"hello": 1, "world": 2} - f, _, v = my_registry._fill(simple_config, HelloIntsSchema) - assert f == simple_config - assert v == simple_config - - -def test_invalidate_simple_config(): - invalid_config = {"hello": 1, "world": "hi!"} - with pytest.raises(ConfigValidationError) as exc_info: - my_registry._fill(invalid_config, HelloIntsSchema) - error = exc_info.value - assert len(error.errors) == 1 - assert "type_error.integer" in error.error_types - - -def test_invalidate_extra_args(): - invalid_config = {"hello": 1, "world": 2, "extra": 3} - with pytest.raises(ConfigValidationError): - my_registry._fill(invalid_config, HelloIntsSchema) - - -def test_fill_defaults_simple_config(): - valid_config = {"required": 1} - filled, _, v = my_registry._fill(valid_config, DefaultsSchema) - assert filled["required"] == 1 - assert filled["optional"] == "default value" - invalid_config = {"optional": "some value"} - with pytest.raises(ConfigValidationError): - my_registry._fill(invalid_config, DefaultsSchema) - - -def test_fill_recursive_config(): - valid_config = {"outer_req": 1, "level2_req": {"hello": 4, "world": 7}} - filled, _, validation = my_registry._fill(valid_config, ComplexSchema) - assert filled["outer_req"] == 1 - assert filled["outer_opt"] == "default value" - assert filled["level2_req"]["hello"] == 4 - assert filled["level2_req"]["world"] == 7 - assert filled["level2_opt"]["required"] == 1 - assert filled["level2_opt"]["optional"] == "default value" - - -def test_is_promise(): - assert my_registry.is_promise(good_catsie) - assert not my_registry.is_promise({"hello": "world"}) - assert not my_registry.is_promise(1) - invalid = {"@complex": "complex.v1", "rate": 1.0, "@cats": "catsie.v1"} - assert my_registry.is_promise(invalid) - - -def test_get_constructor(): - assert my_registry.get_constructor(good_catsie) == ("cats", "catsie.v1") - - -def test_parse_args(): - args, kwargs = my_registry.parse_args(bad_catsie) - assert args == [] - assert kwargs == {"evil": True, "cute": 
True} - - -def test_make_promise_schema(): - schema = my_registry.make_promise_schema(good_catsie) - assert "evil" in schema.__fields__ - assert "cute" in schema.__fields__ - - -def test_validate_promise(): - config = {"required": 1, "optional": good_catsie} - filled, _, validated = my_registry._fill(config, DefaultsSchema) - assert filled == config - assert validated == {"required": 1, "optional": "meow"} - - -def test_fill_validate_promise(): - config = {"required": 1, "optional": {"@cats": "catsie.v1", "evil": False}} - filled, _, validated = my_registry._fill(config, DefaultsSchema) - assert filled["optional"]["cute"] is True - - -def test_fill_invalidate_promise(): - config = {"required": 1, "optional": {"@cats": "catsie.v1", "evil": False}} - with pytest.raises(ConfigValidationError): - my_registry._fill(config, HelloIntsSchema) - config["optional"]["whiskers"] = True - with pytest.raises(ConfigValidationError): - my_registry._fill(config, DefaultsSchema) - - -def test_create_registry(): - my_registry.dogs = catalogue.create( - my_registry.namespace, "dogs", entry_points=False - ) - assert hasattr(my_registry, "dogs") - assert len(my_registry.dogs.get_all()) == 0 - my_registry.dogs.register("good_boy.v1", func=lambda x: x) - assert len(my_registry.dogs.get_all()) == 1 - - -def test_registry_methods(): - with pytest.raises(ValueError): - my_registry.get("dfkoofkds", "catsie.v1") - my_registry.cats.register("catsie.v123")(None) - with pytest.raises(ValueError): - my_registry.get("cats", "catsie.v123") - - -def test_resolve_no_schema(): - config = {"one": 1, "two": {"three": {"@cats": "catsie.v1", "evil": True}}} - result = my_registry.resolve({"cfg": config})["cfg"] - assert result["one"] == 1 - assert result["two"] == {"three": "scratch!"} - with pytest.raises(ConfigValidationError): - config = {"two": {"three": {"@cats": "catsie.v1", "evil": "true"}}} - my_registry.resolve(config) - - -def test_resolve_schema(): - class TestBaseSubSchema(BaseModel): - three: str - - class TestBaseSchema(BaseModel): - one: PositiveInt - two: TestBaseSubSchema - - class Config: - extra = "forbid" - - class TestSchema(BaseModel): - cfg: TestBaseSchema - - config = {"one": 1, "two": {"three": {"@cats": "catsie.v1", "evil": True}}} - my_registry.resolve({"cfg": config}, schema=TestSchema) - config = {"one": -1, "two": {"three": {"@cats": "catsie.v1", "evil": True}}} - with pytest.raises(ConfigValidationError): - # "one" is not a positive int - my_registry.resolve({"cfg": config}, schema=TestSchema) - config = {"one": 1, "two": {"four": {"@cats": "catsie.v1", "evil": True}}} - with pytest.raises(ConfigValidationError): - # "three" is required in subschema - my_registry.resolve({"cfg": config}, schema=TestSchema) - - -def test_resolve_schema_coerced(): - class TestBaseSchema(BaseModel): - test1: str - test2: bool - test3: float - - class TestSchema(BaseModel): - cfg: TestBaseSchema - - config = {"test1": 123, "test2": 1, "test3": 5} - filled = my_registry.fill({"cfg": config}, schema=TestSchema) - result = my_registry.resolve({"cfg": config}, schema=TestSchema) - assert result["cfg"] == {"test1": "123", "test2": True, "test3": 5.0} - # This only affects the resolved config, not the filled config - assert filled["cfg"] == config - - -def test_read_config(): - byte_string = EXAMPLE_CONFIG.encode("utf8") - cfg = Config().from_bytes(byte_string) - - assert cfg["optimizer"]["beta1"] == 0.9 - assert cfg["optimizer"]["learn_rate"]["initial_rate"] == 0.1 - assert cfg["pipeline"]["classifier"]["factory"] == 
"classifier" - assert cfg["pipeline"]["classifier"]["model"]["embedding"]["width"] == 128 - - -def test_optimizer_config(): - cfg = Config().from_str(OPTIMIZER_CFG) - optimizer = my_registry.resolve(cfg, validate=True)["optimizer"] - assert optimizer.beta1 == 0.9 - - -def test_config_to_str(): - cfg = Config().from_str(OPTIMIZER_CFG) - assert cfg.to_str().strip() == OPTIMIZER_CFG.strip() - cfg = Config({"optimizer": {"foo": "bar"}}).from_str(OPTIMIZER_CFG) - assert cfg.to_str().strip() == OPTIMIZER_CFG.strip() - - -def test_config_to_str_creates_intermediate_blocks(): - cfg = Config({"optimizer": {"foo": {"bar": 1}}}) - assert ( - cfg.to_str().strip() - == """ -[optimizer] - -[optimizer.foo] -bar = 1 - """.strip() - ) - - -def test_config_roundtrip_bytes(): - cfg = Config().from_str(OPTIMIZER_CFG) - cfg_bytes = cfg.to_bytes() - new_cfg = Config().from_bytes(cfg_bytes) - assert new_cfg.to_str().strip() == OPTIMIZER_CFG.strip() - - -def test_config_roundtrip_disk(): - cfg = Config().from_str(OPTIMIZER_CFG) - with make_tempdir() as path: - cfg_path = path / "config.cfg" - cfg.to_disk(cfg_path) - new_cfg = Config().from_disk(cfg_path) - assert new_cfg.to_str().strip() == OPTIMIZER_CFG.strip() - - -def test_config_roundtrip_disk_respects_path_subclasses(pathy_fixture): - cfg = Config().from_str(OPTIMIZER_CFG) - cfg_path = pathy_fixture / "config.cfg" - cfg.to_disk(cfg_path) - new_cfg = Config().from_disk(cfg_path) - assert new_cfg.to_str().strip() == OPTIMIZER_CFG.strip() - - -def test_config_to_str_invalid_defaults(): - """Test that an error is raised if a config contains top-level keys without - a section that would otherwise be interpreted as [DEFAULT] (which causes - the values to be included in *all* other sections). - """ - cfg = {"one": 1, "two": {"@cats": "catsie.v1", "evil": "hello"}} - with pytest.raises(ConfigValidationError): - Config(cfg).to_str() - config_str = "[DEFAULT]\none = 1" - with pytest.raises(ConfigValidationError): - Config().from_str(config_str) - - -def test_validation_custom_types(): - def complex_args( - rate: StrictFloat, - steps: PositiveInt = 10, # type: ignore - log_level: constr(regex="(DEBUG|INFO|WARNING|ERROR)") = "ERROR", - ): - return None - - my_registry.complex = catalogue.create( - my_registry.namespace, "complex", entry_points=False - ) - my_registry.complex("complex.v1")(complex_args) - cfg = {"@complex": "complex.v1", "rate": 1.0, "steps": 20, "log_level": "INFO"} - my_registry.resolve({"config": cfg}) - cfg = {"@complex": "complex.v1", "rate": 1.0, "steps": -1, "log_level": "INFO"} - with pytest.raises(ConfigValidationError): - # steps is not a positive int - my_registry.resolve({"config": cfg}) - cfg = {"@complex": "complex.v1", "rate": 1.0, "steps": 20, "log_level": "none"} - with pytest.raises(ConfigValidationError): - # log_level is not a string matching the regex - my_registry.resolve({"config": cfg}) - cfg = {"@complex": "complex.v1", "rate": 1.0, "steps": 20, "log_level": "INFO"} - with pytest.raises(ConfigValidationError): - # top-level object is promise - my_registry.resolve(cfg) - with pytest.raises(ConfigValidationError): - # top-level object is promise - my_registry.fill(cfg) - cfg = {"@complex": "complex.v1", "rate": 1.0, "@cats": "catsie.v1"} - with pytest.raises(ConfigValidationError): - # two constructors - my_registry.resolve({"config": cfg}) - - -def test_validation_no_validate(): - config = {"one": 1, "two": {"three": {"@cats": "catsie.v1", "evil": "false"}}} - result = my_registry.resolve({"cfg": config}, validate=False) - filled 
= my_registry.fill({"cfg": config}, validate=False) - assert result["cfg"]["one"] == 1 - assert result["cfg"]["two"] == {"three": "scratch!"} - assert filled["cfg"]["two"]["three"]["evil"] == "false" - assert filled["cfg"]["two"]["three"]["cute"] is True - - -def test_validation_fill_defaults(): - config = {"cfg": {"one": 1, "two": {"@cats": "catsie.v1", "evil": "hello"}}} - result = my_registry.fill(config, validate=False) - assert len(result["cfg"]["two"]) == 3 - with pytest.raises(ConfigValidationError): - # Required arg "evil" is not defined - my_registry.fill(config) - config = {"cfg": {"one": 1, "two": {"@cats": "catsie.v2", "evil": False}}} - # Fill in with new defaults - result = my_registry.fill(config) - assert len(result["cfg"]["two"]) == 4 - assert result["cfg"]["two"]["evil"] is False - assert result["cfg"]["two"]["cute"] is True - assert result["cfg"]["two"]["cute_level"] == 1 - - -def test_make_config_positional_args(): - @my_registry.cats("catsie.v567") - def catsie_567(*args: Optional[str], foo: str = "bar"): - assert args[0] == "^_^" - assert args[1] == "^(*.*)^" - assert foo == "baz" - return args[0] - - args = ["^_^", "^(*.*)^"] - cfg = {"config": {"@cats": "catsie.v567", "foo": "baz", "*": args}} - assert my_registry.resolve(cfg)["config"] == "^_^" - - -def test_make_config_positional_args_complex(): - @my_registry.cats("catsie.v890") - def catsie_890(*args: Optional[Union[StrictBool, PositiveInt]]): - assert args[0] == 123 - return args[0] - - cfg = {"config": {"@cats": "catsie.v890", "*": [123, True, 1, False]}} - assert my_registry.resolve(cfg)["config"] == 123 - cfg = {"config": {"@cats": "catsie.v890", "*": [123, "True"]}} - with pytest.raises(ConfigValidationError): - # "True" is not a valid boolean or positive int - my_registry.resolve(cfg) - - -def test_positional_args_to_from_string(): - cfg = """[a]\nb = 1\n* = ["foo","bar"]""" - assert Config().from_str(cfg).to_str() == cfg - cfg = """[a]\nb = 1\n\n[a.*.bar]\ntest = 2\n\n[a.*.foo]\ntest = 1""" - assert Config().from_str(cfg).to_str() == cfg - - @my_registry.cats("catsie.v666") - def catsie_666(*args, meow=False): - return args - - cfg = """[a]\n@cats = "catsie.v666"\n* = ["foo","bar"]""" - filled = my_registry.fill(Config().from_str(cfg)).to_str() - assert filled == """[a]\n@cats = "catsie.v666"\n* = ["foo","bar"]\nmeow = false""" - resolved = my_registry.resolve(Config().from_str(cfg)) - assert resolved == {"a": ("foo", "bar")} - cfg = """[a]\n@cats = "catsie.v666"\n\n[a.*.foo]\nx = 1""" - filled = my_registry.fill(Config().from_str(cfg)).to_str() - assert filled == """[a]\n@cats = "catsie.v666"\nmeow = false\n\n[a.*.foo]\nx = 1""" - resolved = my_registry.resolve(Config().from_str(cfg)) - assert resolved == {"a": ({"x": 1},)} - - @my_registry.cats("catsie.v777") - def catsie_777(y: int = 1): - return "meow" * y - - cfg = """[a]\n@cats = "catsie.v666"\n\n[a.*.foo]\n@cats = "catsie.v777\"""" - filled = my_registry.fill(Config().from_str(cfg)).to_str() - expected = """[a]\n@cats = "catsie.v666"\nmeow = false\n\n[a.*.foo]\n@cats = "catsie.v777"\ny = 1""" - assert filled == expected - cfg = """[a]\n@cats = "catsie.v666"\n\n[a.*.foo]\n@cats = "catsie.v777"\ny = 3""" - result = my_registry.resolve(Config().from_str(cfg)) - assert result == {"a": ("meowmeowmeow",)} - - -def test_validation_generators_iterable(): - @my_registry.optimizers("test_optimizer.v1") - def test_optimizer_v1(rate: float) -> None: - return None - - @my_registry.schedules("test_schedule.v1") - def test_schedule_v1(some_value: float = 1.0) -> 
Iterable[float]: - while True: - yield some_value - - config = {"optimizer": {"@optimizers": "test_optimizer.v1", "rate": 0.1}} - my_registry.resolve(config) - - -def test_validation_unset_type_hints(): - """Test that unset type hints are handled correctly (and treated as Any).""" - - @my_registry.optimizers("test_optimizer.v2") - def test_optimizer_v2(rate, steps: int = 10) -> None: - return None - - config = {"test": {"@optimizers": "test_optimizer.v2", "rate": 0.1, "steps": 20}} - my_registry.resolve(config) - - -def test_validation_bad_function(): - @my_registry.optimizers("bad.v1") - def bad() -> None: - raise ValueError("This is an error in the function") - return None - - @my_registry.optimizers("good.v1") - def good() -> None: - return None - - # Bad function - config = {"test": {"@optimizers": "bad.v1"}} - with pytest.raises(ValueError): - my_registry.resolve(config) - # Bad function call - config = {"test": {"@optimizers": "good.v1", "invalid_arg": 1}} - with pytest.raises(ConfigValidationError): - my_registry.resolve(config) - - -def test_objects_from_config(): - config = { - "optimizer": { - "@optimizers": "my_cool_optimizer.v1", - "beta1": 0.2, - "learn_rate": { - "@schedules": "my_cool_repetitive_schedule.v1", - "base_rate": 0.001, - "repeat": 4, - }, - } - } - - optimizer = my_registry.resolve(config)["optimizer"] - assert optimizer.beta1 == 0.2 - assert optimizer.learn_rate == [0.001] * 4 - - -def test_partials_from_config(): - """Test that functions registered with partial applications are handled - correctly (e.g. initializers).""" - numpy = pytest.importorskip("numpy") - - def uniform_init( - shape: Tuple[int, ...], *, lo: float = -0.1, hi: float = 0.1 - ) -> List[float]: - return numpy.random.uniform(lo, hi, shape).tolist() - - @my_registry.initializers("uniform_init.v1") - def configure_uniform_init( - *, lo: float = -0.1, hi: float = 0.1 - ) -> Callable[[List[float]], List[float]]: - return partial(uniform_init, lo=lo, hi=hi) - - name = "uniform_init.v1" - cfg = {"test": {"@initializers": name, "lo": -0.2}} - func = my_registry.resolve(cfg)["test"] - assert hasattr(func, "__call__") - # The partial will still have lo as an arg, just with default - assert len(inspect.signature(func).parameters) == 3 - # Make sure returned partial function has correct value set - assert inspect.signature(func).parameters["lo"].default == -0.2 - # Actually call the function and verify - assert numpy.asarray(func((2, 3))).shape == (2, 3) - # Make sure validation still works - bad_cfg = {"test": {"@initializers": name, "lo": [0.5]}} - with pytest.raises(ConfigValidationError): - my_registry.resolve(bad_cfg) - bad_cfg = {"test": {"@initializers": name, "lo": -0.2, "other": 10}} - with pytest.raises(ConfigValidationError): - my_registry.resolve(bad_cfg) - - -def test_partials_from_config_nested(): - """Test that partial functions are passed correctly to other registered - functions that consume them (e.g. 
initializers -> layers).""" - - def test_initializer(a: int, b: int = 1) -> int: - return a * b - - @my_registry.initializers("test_initializer.v1") - def configure_test_initializer(b: int = 1) -> Callable[[int], int]: - return partial(test_initializer, b=b) - - @my_registry.layers("test_layer.v1") - def test_layer(init: Callable[[int], int], c: int = 1) -> Callable[[int], int]: - return lambda x: x + init(c) - - cfg = { - "@layers": "test_layer.v1", - "c": 5, - "init": {"@initializers": "test_initializer.v1", "b": 10}, - } - func = my_registry.resolve({"test": cfg})["test"] - assert func(1) == 51 - assert func(100) == 150 - - -def test_validate_generator(): - """Test that generator replacement for validation in config doesn't - actually replace the returned value.""" - - @my_registry.schedules("test_schedule.v2") - def test_schedule(): - while True: - yield 10 - - cfg = {"@schedules": "test_schedule.v2"} - result = my_registry.resolve({"test": cfg})["test"] - assert isinstance(result, GeneratorType) - - @my_registry.optimizers("test_optimizer.v2") - def test_optimizer2(rate: Generator) -> Generator: - return rate - - cfg = { - "@optimizers": "test_optimizer.v2", - "rate": {"@schedules": "test_schedule.v2"}, - } - result = my_registry.resolve({"test": cfg})["test"] - assert isinstance(result, GeneratorType) - - @my_registry.optimizers("test_optimizer.v3") - def test_optimizer3(schedules: Dict[str, Generator]) -> Generator: - return schedules["rate"] - - cfg = { - "@optimizers": "test_optimizer.v3", - "schedules": {"rate": {"@schedules": "test_schedule.v2"}}, - } - result = my_registry.resolve({"test": cfg})["test"] - assert isinstance(result, GeneratorType) - - @my_registry.optimizers("test_optimizer.v4") - def test_optimizer4(*schedules: Generator) -> Generator: - return schedules[0] - - -def test_handle_generic_type(): - """Test that validation can handle checks against arbitrary generic - types in function argument annotations.""" - - cfg = {"@cats": "generic_cat.v1", "cat": {"@cats": "int_cat.v1", "value_in": 3}} - cat = my_registry.resolve({"test": cfg})["test"] - assert isinstance(cat, Cat) - assert cat.value_in == 3 - assert cat.value_out is None - assert cat.name == "generic_cat" - - -@pytest.mark.parametrize( - "cfg", - [ - "[a]\nb = 1\nc = 2\n\n[a.c]\nd = 3", - "[a]\nb = 1\n\n[a.c]\nd = 2\n\n[a.c.d]\ne = 3", - ], -) -def test_handle_error_duplicate_keys(cfg): - """This would cause very cryptic error when interpreting config. - (TypeError: 'X' object does not support item assignment) - """ - with pytest.raises(ConfigValidationError): - Config().from_str(cfg) - - -@pytest.mark.parametrize( - "cfg,is_valid", - [("[a]\nb = 1\n\n[a.c]\nd = 3", True), ("[a]\nb = 1\n\n[A.c]\nd = 2", False)], -) -def test_cant_expand_undefined_block(cfg, is_valid): - """Test that you can't expand a block that hasn't been created yet. This - comes up when you typo a name, and if we allow expansion of undefined blocks, - it's very hard to create good errors for those typos. 
- """ - if is_valid: - Config().from_str(cfg) - else: - with pytest.raises(ConfigValidationError): - Config().from_str(cfg) - - -def test_fill_config_overrides(): - config = { - "cfg": { - "one": 1, - "two": {"three": {"@cats": "catsie.v1", "evil": True, "cute": False}}, - } - } - overrides = {"cfg.two.three.evil": False} - result = my_registry.fill(config, overrides=overrides, validate=True) - assert result["cfg"]["two"]["three"]["evil"] is False - # Test that promises can be overwritten as well - overrides = {"cfg.two.three": 3} - result = my_registry.fill(config, overrides=overrides, validate=True) - assert result["cfg"]["two"]["three"] == 3 - # Test that value can be overwritten with promises and that the result is - # interpreted and filled correctly - overrides = {"cfg": {"one": {"@cats": "catsie.v1", "evil": False}, "two": None}} - result = my_registry.fill(config, overrides=overrides) - assert result["cfg"]["two"] is None - assert result["cfg"]["one"]["@cats"] == "catsie.v1" - assert result["cfg"]["one"]["evil"] is False - assert result["cfg"]["one"]["cute"] is True - # Overwriting with wrong types should cause validation error - with pytest.raises(ConfigValidationError): - overrides = {"cfg.two.three.evil": 20} - my_registry.fill(config, overrides=overrides, validate=True) - # Overwriting with incomplete promises should cause validation error - with pytest.raises(ConfigValidationError): - overrides = {"cfg": {"one": {"@cats": "catsie.v1"}, "two": None}} - my_registry.fill(config, overrides=overrides) - # Overrides that don't match config should raise error - with pytest.raises(ConfigValidationError): - overrides = {"cfg.two.three.evil": False, "two.four": True} - my_registry.fill(config, overrides=overrides, validate=True) - with pytest.raises(ConfigValidationError): - overrides = {"cfg.five": False} - my_registry.fill(config, overrides=overrides, validate=True) - - -def test_resolve_overrides(): - config = { - "cfg": { - "one": 1, - "two": {"three": {"@cats": "catsie.v1", "evil": True, "cute": False}}, - } - } - overrides = {"cfg.two.three.evil": False} - result = my_registry.resolve(config, overrides=overrides, validate=True) - assert result["cfg"]["two"]["three"] == "meow" - # Test that promises can be overwritten as well - overrides = {"cfg.two.three": 3} - result = my_registry.resolve(config, overrides=overrides, validate=True) - assert result["cfg"]["two"]["three"] == 3 - # Test that value can be overwritten with promises - overrides = {"cfg": {"one": {"@cats": "catsie.v1", "evil": False}, "two": None}} - result = my_registry.resolve(config, overrides=overrides) - assert result["cfg"]["one"] == "meow" - assert result["cfg"]["two"] is None - # Overwriting with wrong types should cause validation error - with pytest.raises(ConfigValidationError): - overrides = {"cfg.two.three.evil": 20} - my_registry.resolve(config, overrides=overrides, validate=True) - # Overwriting with incomplete promises should cause validation error - with pytest.raises(ConfigValidationError): - overrides = {"cfg": {"one": {"@cats": "catsie.v1"}, "two": None}} - my_registry.resolve(config, overrides=overrides) - # Overrides that don't match config should raise error - with pytest.raises(ConfigValidationError): - overrides = {"cfg.two.three.evil": False, "cfg.two.four": True} - my_registry.resolve(config, overrides=overrides, validate=True) - with pytest.raises(ConfigValidationError): - overrides = {"cfg.five": False} - my_registry.resolve(config, overrides=overrides, validate=True) - - 
-@pytest.mark.parametrize( - "prop,expected", - [("a.b.c", True), ("a.b", True), ("a", True), ("a.e", True), ("a.b.c.d", False)], -) -def test_is_in_config(prop, expected): - config = {"a": {"b": {"c": 5, "d": 6}, "e": [1, 2]}} - assert my_registry._is_in_config(prop, config) is expected - - -def test_resolve_prefilled_values(): - class Language(object): - def __init__(self): - ... - - @my_registry.optimizers("prefilled.v1") - def prefilled(nlp: Language, value: int = 10): - return (nlp, value) - - # Passing an instance of Language here via the config is bad, since it - # won't serialize to a string, but we still test for it - config = {"test": {"@optimizers": "prefilled.v1", "nlp": Language(), "value": 50}} - resolved = my_registry.resolve(config, validate=True) - result = resolved["test"] - assert isinstance(result[0], Language) - assert result[1] == 50 - - -def test_fill_config_dict_return_type(): - """Test that a registered function returning a dict is handled correctly.""" - - @my_registry.cats.register("catsie_with_dict.v1") - def catsie_with_dict(evil: StrictBool) -> Dict[str, bool]: - return {"not_evil": not evil} - - config = {"test": {"@cats": "catsie_with_dict.v1", "evil": False}, "foo": 10} - result = my_registry.fill({"cfg": config}, validate=True)["cfg"]["test"] - assert result["evil"] is False - assert "not_evil" not in result - result = my_registry.resolve({"cfg": config}, validate=True)["cfg"]["test"] - assert result["not_evil"] is True - - -def test_deepcopy_config(): - numpy = pytest.importorskip("numpy") - config = Config({"a": 1, "b": {"c": 2, "d": 3}}) - copied = config.copy() - # Same values but not same object - assert config == copied - assert config is not copied - - -@pytest.mark.skipif( - platform.python_implementation() == "PyPy", reason="copy does not fail for pypy" -) -def test_deepcopy_config_pickle(): - numpy = pytest.importorskip("numpy") - # Check for error if value can't be pickled/deepcopied - config = Config({"a": 1, "b": numpy}) - with pytest.raises(ValueError): - config.copy() - - -def test_config_to_str_simple_promises(): - """Test that references to function registries without arguments are - serialized inline as dict.""" - config_str = """[section]\nsubsection = {"@registry":"value"}""" - config = Config().from_str(config_str) - assert config["section"]["subsection"]["@registry"] == "value" - assert config.to_str() == config_str - - -def test_config_from_str_invalid_section(): - config_str = """[a]\nb = null\n\n[a.b]\nc = 1""" - with pytest.raises(ConfigValidationError): - Config().from_str(config_str) - - config_str = """[a]\nb = null\n\n[a.b.c]\nd = 1""" - with pytest.raises(ConfigValidationError): - Config().from_str(config_str) - - -def test_config_to_str_order(): - """Test that Config.to_str orders the sections.""" - config = {"a": {"b": {"c": 1, "d": 2}, "e": 3}, "f": {"g": {"h": {"i": 4, "j": 5}}}} - expected = ( - "[a]\ne = 3\n\n[a.b]\nc = 1\nd = 2\n\n[f]\n\n[f.g]\n\n[f.g.h]\ni = 4\nj = 5" - ) - config = Config(config) - assert config.to_str() == expected - - -@pytest.mark.parametrize("d", [".", ":"]) -def test_config_interpolation(d): - """Test that config values are interpolated correctly. The parametrized - value is the final divider (${a.b} vs. ${a:b}). Both should now work and be - valid. The double {{ }} in the config strings are required to prevent the - references from being interpreted as an actual f-string variable. 
- """ - c_str = """[a]\nfoo = "hello"\n\n[b]\nbar = ${foo}""" - with pytest.raises(ConfigValidationError): - Config().from_str(c_str) - c_str = f"""[a]\nfoo = "hello"\n\n[b]\nbar = ${{a{d}foo}}""" - assert Config().from_str(c_str)["b"]["bar"] == "hello" - c_str = f"""[a]\nfoo = "hello"\n\n[b]\nbar = ${{a{d}foo}}!""" - assert Config().from_str(c_str)["b"]["bar"] == "hello!" - c_str = f"""[a]\nfoo = "hello"\n\n[b]\nbar = "${{a{d}foo}}!\"""" - assert Config().from_str(c_str)["b"]["bar"] == "hello!" - c_str = f"""[a]\nfoo = 15\n\n[b]\nbar = ${{a{d}foo}}!""" - assert Config().from_str(c_str)["b"]["bar"] == "15!" - c_str = f"""[a]\nfoo = ["x", "y"]\n\n[b]\nbar = ${{a{d}foo}}""" - assert Config().from_str(c_str)["b"]["bar"] == ["x", "y"] - # Interpolation within the same section - c_str = f"""[a]\nfoo = "x"\nbar = ${{a{d}foo}}\nbaz = "${{a{d}foo}}y\"""" - assert Config().from_str(c_str)["a"]["bar"] == "x" - assert Config().from_str(c_str)["a"]["baz"] == "xy" - - -def test_config_interpolation_lists(): - # Test that lists are preserved correctly - c_str = """[a]\nb = 1\n\n[c]\nd = ["hello ${a.b}", "world"]""" - config = Config().from_str(c_str, interpolate=False) - assert config["c"]["d"] == ["hello ${a.b}", "world"] - config = config.interpolate() - assert config["c"]["d"] == ["hello 1", "world"] - c_str = """[a]\nb = 1\n\n[c]\nd = [${a.b}, "hello ${a.b}", "world"]""" - config = Config().from_str(c_str) - assert config["c"]["d"] == [1, "hello 1", "world"] - config = Config().from_str(c_str, interpolate=False) - # NOTE: This currently doesn't work, because we can't know how to JSON-load - # the uninterpolated list [${a.b}]. - # assert config["c"]["d"] == ["${a.b}", "hello ${a.b}", "world"] - # config = config.interpolate() - # assert config["c"]["d"] == [1, "hello 1", "world"] - c_str = """[a]\nb = 1\n\n[c]\nd = ["hello", ${a}]""" - config = Config().from_str(c_str) - assert config["c"]["d"] == ["hello", {"b": 1}] - c_str = """[a]\nb = 1\n\n[c]\nd = ["hello", "hello ${a}"]""" - with pytest.raises(ConfigValidationError): - Config().from_str(c_str) - config_str = """[a]\nb = 1\n\n[c]\nd = ["hello", {"x": ["hello ${a.b}"], "y": 2}]""" - config = Config().from_str(config_str) - assert config["c"]["d"] == ["hello", {"x": ["hello 1"], "y": 2}] - config_str = """[a]\nb = 1\n\n[c]\nd = ["hello", {"x": [${a.b}], "y": 2}]""" - with pytest.raises(ConfigValidationError): - Config().from_str(c_str) - - -@pytest.mark.parametrize("d", [".", ":"]) -def test_config_interpolation_sections(d): - """Test that config sections are interpolated correctly. The parametrized - value is the final divider (${a.b} vs. ${a:b}). Both should now work and be - valid. The double {{ }} in the config strings are required to prevent the - references from being interpreted as an actual f-string variable. 
- """ - # Simple block references - c_str = """[a]\nfoo = "hello"\nbar = "world"\n\n[b]\nc = ${a}""" - config = Config().from_str(c_str) - assert config["b"]["c"] == config["a"] - # References with non-string values - c_str = f"""[a]\nfoo = "hello"\n\n[a.x]\ny = ${{a{d}b}}\n\n[a.b]\nc = 1\nd = [10]""" - config = Config().from_str(c_str) - assert config["a"]["x"]["y"] == config["a"]["b"] - # Multiple references in the same string - c_str = f"""[a]\nx = "string"\ny = 10\n\n[b]\nz = "${{a{d}x}}/${{a{d}y}}\"""" - config = Config().from_str(c_str) - assert config["b"]["z"] == "string/10" - # Non-string references in string (converted to string) - c_str = f"""[a]\nx = ["hello", "world"]\n\n[b]\ny = "result: ${{a{d}x}}\"""" - config = Config().from_str(c_str) - assert config["b"]["y"] == 'result: ["hello", "world"]' - # References to sections referencing sections - c_str = """[a]\nfoo = "x"\n\n[b]\nbar = ${a}\n\n[c]\nbaz = ${b}""" - config = Config().from_str(c_str) - assert config["b"]["bar"] == config["a"] - assert config["c"]["baz"] == config["b"] - # References to section values referencing other sections - c_str = f"""[a]\nfoo = "x"\n\n[b]\nbar = ${{a}}\n\n[c]\nbaz = ${{b{d}bar}}""" - config = Config().from_str(c_str) - assert config["c"]["baz"] == config["b"]["bar"] - # References to sections with subsections - c_str = """[a]\nfoo = "x"\n\n[a.b]\nbar = 100\n\n[c]\nbaz = ${a}""" - config = Config().from_str(c_str) - assert config["c"]["baz"] == config["a"] - # Infinite recursion - c_str = """[a]\nfoo ="x"\n\n[a.b]\nbar = ${a}""" - config = Config().from_str(c_str) - assert config["a"]["b"]["bar"] == config["a"] - c_str = f"""[a]\nfoo = "x"\n\n[b]\nbar = ${{a}}\n\n[c]\nbaz = ${{b.bar{d}foo}}""" - # We can't reference not-yet interpolated subsections - with pytest.raises(ConfigValidationError): - Config().from_str(c_str) - # Generally invalid references - c_str = f"""[a]\nfoo = ${{b{d}bar}}""" - with pytest.raises(ConfigValidationError): - Config().from_str(c_str) - # We can't reference sections or promises within strings - c_str = """[a]\n\n[a.b]\nfoo = "x: ${c}"\n\n[c]\nbar = 1\nbaz = 2""" - with pytest.raises(ConfigValidationError): - Config().from_str(c_str) - - -def test_config_from_str_overrides(): - config_str = """[a]\nb = 1\n\n[a.c]\nd = 2\ne = 3\n\n[f]\ng = {"x": "y"}""" - # Basic value substitution - overrides = {"a.b": 10, "a.c.d": 20} - config = Config().from_str(config_str, overrides=overrides) - assert config["a"]["b"] == 10 - assert config["a"]["c"]["d"] == 20 - assert config["a"]["c"]["e"] == 3 - # Valid values that previously weren't in config - config = Config().from_str(config_str, overrides={"a.c.f": 100}) - assert config["a"]["c"]["d"] == 2 - assert config["a"]["c"]["e"] == 3 - assert config["a"]["c"]["f"] == 100 - # Invalid keys and sections - with pytest.raises(ConfigValidationError): - Config().from_str(config_str, overrides={"f": 10}) - # This currently isn't expected to work, because the dict in f.g is not - # interpreted as a section while the config is still just the configparser - with pytest.raises(ConfigValidationError): - Config().from_str(config_str, overrides={"f.g.x": "z"}) - # With variables (values) - config_str = """[a]\nb = 1\n\n[a.c]\nd = 2\ne = ${a:b}""" - config = Config().from_str(config_str, overrides={"a.b": 10}) - assert config["a"]["b"] == 10 - assert config["a"]["c"]["e"] == 10 - # With variables (sections) - config_str = """[a]\nb = 1\n\n[a.c]\nd = 2\n[e]\nf = ${a.c}""" - config = Config().from_str(config_str, overrides={"a.c.d": 20}) - assert 
config["a"]["c"]["d"] == 20 - assert config["e"]["f"] == {"d": 20} - - -def test_config_reserved_aliases(): - """Test that the auto-generated pydantic schemas auto-alias reserved - attributes like "validate" that would otherwise cause NameError.""" - - @my_registry.cats("catsie.with_alias") - def catsie_with_alias(validate: StrictBool = False): - return validate - - cfg = {"@cats": "catsie.with_alias", "validate": True} - resolved = my_registry.resolve({"test": cfg}) - filled = my_registry.fill({"test": cfg}) - assert resolved["test"] is True - assert filled["test"] == cfg - cfg = {"@cats": "catsie.with_alias", "validate": 20} - with pytest.raises(ConfigValidationError): - my_registry.resolve({"test": cfg}) - - -@pytest.mark.parametrize("d", [".", ":"]) -def test_config_no_interpolation(d): - """Test that interpolation is correctly preserved. The parametrized - value is the final divider (${a.b} vs. ${a:b}). Both should now work and be - valid. The double {{ }} in the config strings are required to prevent the - references from being interpreted as an actual f-string variable. - """ - numpy = pytest.importorskip("numpy") - c_str = f"""[a]\nb = 1\n\n[c]\nd = ${{a{d}b}}\ne = \"hello${{a{d}b}}"\nf = ${{a}}""" - config = Config().from_str(c_str, interpolate=False) - assert not config.is_interpolated - assert config["c"]["d"] == f"${{a{d}b}}" - assert config["c"]["e"] == f'"hello${{a{d}b}}"' - assert config["c"]["f"] == "${a}" - config2 = Config().from_str(config.to_str(), interpolate=True) - assert config2.is_interpolated - assert config2["c"]["d"] == 1 - assert config2["c"]["e"] == "hello1" - assert config2["c"]["f"] == {"b": 1} - config3 = config.interpolate() - assert config3.is_interpolated - assert config3["c"]["d"] == 1 - assert config3["c"]["e"] == "hello1" - assert config3["c"]["f"] == {"b": 1} - # Bad non-serializable value - cfg = {"x": {"y": numpy.asarray([[1, 2], [4, 5]], dtype="f"), "z": f"${{x{d}y}}"}} - with pytest.raises(ConfigValidationError): - Config(cfg).interpolate() - - -def test_config_no_interpolation_registry(): - config_str = """[a]\nbad = true\n[b]\n@cats = "catsie.v1"\nevil = ${a:bad}\n\n[c]\n d = ${b}""" - config = Config().from_str(config_str, interpolate=False) - assert not config.is_interpolated - assert config["b"]["evil"] == "${a:bad}" - assert config["c"]["d"] == "${b}" - filled = my_registry.fill(config) - resolved = my_registry.resolve(config) - assert resolved["b"] == "scratch!" - assert resolved["c"]["d"] == "scratch!" - assert filled["b"]["evil"] == "${a:bad}" - assert filled["b"]["cute"] is True - assert filled["c"]["d"] == "${b}" - interpolated = filled.interpolate() - assert interpolated.is_interpolated - assert interpolated["b"]["evil"] is True - assert interpolated["c"]["d"] == interpolated["b"] - config = Config().from_str(config_str, interpolate=True) - assert config.is_interpolated - filled = my_registry.fill(config) - resolved = my_registry.resolve(config) - assert resolved["b"] == "scratch!" - assert resolved["c"]["d"] == "scratch!" - assert filled["b"]["evil"] is True - assert filled["c"]["d"] == filled["b"] - # Resolving a non-interpolated filled config - config = Config().from_str(config_str, interpolate=False) - assert not config.is_interpolated - filled = my_registry.fill(config) - assert not filled.is_interpolated - assert filled["c"]["d"] == "${b}" - resolved = my_registry.resolve(filled) - assert resolved["c"]["d"] == "scratch!" 
- - -def test_config_deep_merge(): - config = {"a": "hello", "b": {"c": "d"}} - defaults = {"a": "world", "b": {"c": "e", "f": "g"}} - merged = Config(defaults).merge(config) - assert len(merged) == 2 - assert merged["a"] == "hello" - assert merged["b"] == {"c": "d", "f": "g"} - config = {"a": "hello", "b": {"@test": "x", "foo": 1}} - defaults = {"a": "world", "b": {"@test": "x", "foo": 100, "bar": 2}, "c": 100} - merged = Config(defaults).merge(config) - assert len(merged) == 3 - assert merged["a"] == "hello" - assert merged["b"] == {"@test": "x", "foo": 1, "bar": 2} - assert merged["c"] == 100 - config = {"a": "hello", "b": {"@test": "x", "foo": 1}, "c": 100} - defaults = {"a": "world", "b": {"@test": "y", "foo": 100, "bar": 2}} - merged = Config(defaults).merge(config) - assert len(merged) == 3 - assert merged["a"] == "hello" - assert merged["b"] == {"@test": "x", "foo": 1} - assert merged["c"] == 100 - # Test that leaving out the factory just adds to existing - config = {"a": "hello", "b": {"foo": 1}, "c": 100} - defaults = {"a": "world", "b": {"@test": "y", "foo": 100, "bar": 2}} - merged = Config(defaults).merge(config) - assert len(merged) == 3 - assert merged["a"] == "hello" - assert merged["b"] == {"@test": "y", "foo": 1, "bar": 2} - assert merged["c"] == 100 - # Test that switching to a different factory prevents the default from being added - config = {"a": "hello", "b": {"@foo": 1}, "c": 100} - defaults = {"a": "world", "b": {"@bar": "y"}} - merged = Config(defaults).merge(config) - assert len(merged) == 3 - assert merged["a"] == "hello" - assert merged["b"] == {"@foo": 1} - assert merged["c"] == 100 - config = {"a": "hello", "b": {"@foo": 1}, "c": 100} - defaults = {"a": "world", "b": "y"} - merged = Config(defaults).merge(config) - assert len(merged) == 3 - assert merged["a"] == "hello" - assert merged["b"] == {"@foo": 1} - assert merged["c"] == 100 - - -def test_config_deep_merge_variables(): - config_str = """[a]\nb= 1\nc = 2\n\n[d]\ne = ${a:b}""" - defaults_str = """[a]\nx = 100\n\n[d]\ny = 500""" - config = Config().from_str(config_str, interpolate=False) - defaults = Config().from_str(defaults_str) - merged = defaults.merge(config) - assert merged["a"] == {"b": 1, "c": 2, "x": 100} - assert merged["d"] == {"e": "${a:b}", "y": 500} - assert merged.interpolate()["d"] == {"e": 1, "y": 500} - # With variable in defaults: overwritten by new value - config = Config().from_str("""[a]\nb= 1\nc = 2""") - defaults = Config().from_str("""[a]\nb = 100\nc = ${a:b}""", interpolate=False) - merged = defaults.merge(config) - assert merged["a"]["c"] == 2 - - -def test_config_to_str_roundtrip(): - numpy = pytest.importorskip("numpy") - cfg = {"cfg": {"foo": False}} - config_str = Config(cfg).to_str() - assert config_str == "[cfg]\nfoo = false" - config = Config().from_str(config_str) - assert dict(config) == cfg - cfg = {"cfg": {"foo": "false"}} - config_str = Config(cfg).to_str() - assert config_str == '[cfg]\nfoo = "false"' - config = Config().from_str(config_str) - assert dict(config) == cfg - # Bad non-serializable value - cfg = {"cfg": {"x": numpy.asarray([[1, 2, 3, 4], [4, 5, 3, 4]], dtype="f")}} - config = Config(cfg) - with pytest.raises(ConfigValidationError): - config.to_str() - # Roundtrip with variables: preserve variables correctly (quoted/unquoted) - config_str = """[a]\nb = 1\n\n[c]\nd = ${a:b}\ne = \"hello${a:b}"\nf = "${a:b}\"""" - config = Config().from_str(config_str, interpolate=False) - assert config.to_str() == config_str - - -def test_config_is_interpolated(): - 
"""Test that a config object correctly reports whether it's interpolated.""" - config_str = """[a]\nb = 1\n\n[c]\nd = ${a:b}\ne = \"hello${a:b}"\nf = ${a}""" - config = Config().from_str(config_str, interpolate=False) - assert not config.is_interpolated - config = config.merge(Config({"x": {"y": "z"}})) - assert not config.is_interpolated - config = Config(config) - assert not config.is_interpolated - config = config.interpolate() - assert config.is_interpolated - config = config.merge(Config().from_str(config_str, interpolate=False)) - assert not config.is_interpolated - - -@pytest.mark.parametrize( - "section_order,expected_str,expected_keys", - [ - # fmt: off - ([], "[a]\nb = 1\nc = 2\n\n[a.d]\ne = 3\n\n[a.f]\ng = 4\n\n[h]\ni = 5\n\n[j]\nk = 6", ["a", "h", "j"]), - (["j", "h", "a"], "[j]\nk = 6\n\n[h]\ni = 5\n\n[a]\nb = 1\nc = 2\n\n[a.d]\ne = 3\n\n[a.f]\ng = 4", ["j", "h", "a"]), - (["h"], "[h]\ni = 5\n\n[a]\nb = 1\nc = 2\n\n[a.d]\ne = 3\n\n[a.f]\ng = 4\n\n[j]\nk = 6", ["h", "a", "j"]) - # fmt: on - ], -) -def test_config_serialize_custom_sort(section_order, expected_str, expected_keys): - cfg = { - "j": {"k": 6}, - "a": {"b": 1, "d": {"e": 3}, "c": 2, "f": {"g": 4}}, - "h": {"i": 5}, - } - cfg_str = Config(cfg).to_str() - assert Config(cfg, section_order=section_order).to_str() == expected_str - keys = list(Config(section_order=section_order).from_str(cfg_str).keys()) - assert keys == expected_keys - keys = list(Config(cfg, section_order=section_order).keys()) - assert keys == expected_keys - - -def test_config_custom_sort_preserve(): - """Test that sort order is preserved when merging and copying configs, - or when configs are filled and resolved.""" - cfg = {"x": {}, "y": {}, "z": {}} - section_order = ["y", "z", "x"] - expected = "[y]\n\n[z]\n\n[x]" - config = Config(cfg, section_order=section_order) - assert config.to_str() == expected - config2 = config.copy() - assert config2.to_str() == expected - config3 = config.merge({"a": {}}) - assert config3.to_str() == f"{expected}\n\n[a]" - config4 = Config(config) - assert config4.to_str() == expected - config_str = """[a]\nb = 1\n[c]\n@cats = "catsie.v1"\nevil = true\n\n[t]\n x = 2""" - section_order = ["c", "a", "t"] - config5 = Config(section_order=section_order).from_str(config_str) - assert list(config5.keys()) == section_order - filled = my_registry.fill(config5) - assert filled.section_order == section_order - - -def test_config_pickle(): - config = Config({"foo": "bar"}, section_order=["foo", "bar", "baz"]) - data = pickle.dumps(config) - config_new = pickle.loads(data) - assert config_new == {"foo": "bar"} - assert config_new.section_order == ["foo", "bar", "baz"] - - -def test_config_fill_extra_fields(): - """Test that filling a config from a schema removes extra fields.""" - - class TestSchemaContent(BaseModel): - a: str - b: int - - class Config: - extra = "forbid" - - class TestSchema(BaseModel): - cfg: TestSchemaContent - - config = Config({"cfg": {"a": "1", "b": 2, "c": True}}) - with pytest.raises(ConfigValidationError): - my_registry.fill(config, schema=TestSchema) - filled = my_registry.fill(config, schema=TestSchema, validate=False)["cfg"] - assert filled == {"a": "1", "b": 2} - config2 = config.interpolate() - filled = my_registry.fill(config2, schema=TestSchema, validate=False)["cfg"] - assert filled == {"a": "1", "b": 2} - config3 = Config({"cfg": {"a": "1", "b": 2, "c": True}}, is_interpolated=False) - filled = my_registry.fill(config3, schema=TestSchema, validate=False)["cfg"] - assert filled == {"a": "1", "b": 
2} - - class TestSchemaContent2(BaseModel): - a: str - b: int - - class Config: - extra = "allow" - - class TestSchema2(BaseModel): - cfg: TestSchemaContent2 - - filled = my_registry.fill(config, schema=TestSchema2, validate=False)["cfg"] - assert filled == {"a": "1", "b": 2, "c": True} - - -def test_config_validation_error_custom(): - class Schema(BaseModel): - hello: int - world: int - - config = {"hello": 1, "world": "hi!"} - with pytest.raises(ConfigValidationError) as exc_info: - my_registry._fill(config, Schema) - e1 = exc_info.value - assert e1.title == "Config validation error" - assert e1.desc is None - assert not e1.parent - assert e1.show_config is True - assert len(e1.errors) == 1 - assert e1.errors[0]["loc"] == ("world",) - assert e1.errors[0]["msg"] == "value is not a valid integer" - assert e1.errors[0]["type"] == "type_error.integer" - assert e1.error_types == set(["type_error.integer"]) - # Create a new error with overrides - title = "Custom error" - desc = "Some error description here" - e2 = ConfigValidationError.from_error(e1, title=title, desc=desc, show_config=False) - assert e2.errors == e1.errors - assert e2.error_types == e1.error_types - assert e2.title == title - assert e2.desc == desc - assert e2.show_config is False - assert e1.text != e2.text - - -def test_config_parsing_error(): - config_str = "[a]\nb c" - with pytest.raises(ConfigValidationError): - Config().from_str(config_str) - - -def test_config_fill_without_resolve(): - class BaseSchema(BaseModel): - catsie: int - - config = {"catsie": {"@cats": "catsie.v1", "evil": False}} - filled = my_registry.fill(config) - resolved = my_registry.resolve(config) - assert resolved["catsie"] == "meow" - assert filled["catsie"]["cute"] is True - with pytest.raises(ConfigValidationError): - my_registry.resolve(config, schema=BaseSchema) - filled2 = my_registry.fill(config, schema=BaseSchema) - assert filled2["catsie"]["cute"] is True - resolved = my_registry.resolve(filled2) - assert resolved["catsie"] == "meow" - # With unavailable function - class BaseSchema2(BaseModel): - catsie: Any - other: int = 12 - - config = {"catsie": {"@cats": "dog", "evil": False}} - filled3 = my_registry.fill(config, schema=BaseSchema2) - assert filled3["catsie"] == config["catsie"] - assert filled3["other"] == 12 - - -def test_config_dataclasses(): - cat = Cat("testcat", value_in=1, value_out=2) - config = {"cfg": {"@cats": "catsie.v3", "arg": cat}} - result = my_registry.resolve(config)["cfg"] - assert isinstance(result, Cat) - assert result.name == cat.name - assert result.value_in == cat.value_in - assert result.value_out == cat.value_out - - -@pytest.mark.parametrize( - "greeting,value,expected", - [ - # simple substitution should go fine - [342, "${vars.a}", int], - ["342", "${vars.a}", str], - ["everyone", "${vars.a}", str], - ], -) -def test_config_interpolates(greeting, value, expected): - str_cfg = f""" - [project] - my_par = {value} - - [vars] - a = "something" - """ - overrides = {"vars.a": greeting} - cfg = Config().from_str(str_cfg, overrides=overrides) - assert type(cfg["project"]["my_par"]) == expected - - -@pytest.mark.parametrize( - "greeting,value,expected", - [ - # fmt: off - # simple substitution should go fine - ["hello 342", "${vars.a}", "hello 342"], - ["hello everyone", "${vars.a}", "hello everyone"], - ["hello tout le monde", "${vars.a}", "hello tout le monde"], - ["hello 42", "${vars.a}", "hello 42"], - # substituting an element in a list - ["hello 342", "[1, ${vars.a}, 3]", "hello 342"], - ["hello everyone", "[1, 
${vars.a}, 3]", "hello everyone"], - ["hello tout le monde", "[1, ${vars.a}, 3]", "hello tout le monde"], - ["hello 42", "[1, ${vars.a}, 3]", "hello 42"], - # substituting part of a string - [342, "hello ${vars.a}", "hello 342"], - ["everyone", "hello ${vars.a}", "hello everyone"], - ["tout le monde", "hello ${vars.a}", "hello tout le monde"], - pytest.param("42", "hello ${vars.a}", "hello 42", marks=pytest.mark.xfail), - # substituting part of a implicit string inside a list - [342, "[1, hello ${vars.a}, 3]", "hello 342"], - ["everyone", "[1, hello ${vars.a}, 3]", "hello everyone"], - ["tout le monde", "[1, hello ${vars.a}, 3]", "hello tout le monde"], - pytest.param("42", "[1, hello ${vars.a}, 3]", "hello 42", marks=pytest.mark.xfail), - # substituting part of a explicit string inside a list - [342, "[1, 'hello ${vars.a}', '3']", "hello 342"], - ["everyone", "[1, 'hello ${vars.a}', '3']", "hello everyone"], - ["tout le monde", "[1, 'hello ${vars.a}', '3']", "hello tout le monde"], - pytest.param("42", "[1, 'hello ${vars.a}', '3']", "hello 42", marks=pytest.mark.xfail), - # more complicated example - [342, "[{'name':'x','script':['hello ${vars.a}']}]", "hello 342"], - ["everyone", "[{'name':'x','script':['hello ${vars.a}']}]", "hello everyone"], - ["tout le monde", "[{'name':'x','script':['hello ${vars.a}']}]", "hello tout le monde"], - pytest.param("42", "[{'name':'x','script':['hello ${vars.a}']}]", "hello 42", marks=pytest.mark.xfail), - # fmt: on - ], -) -def test_config_overrides(greeting, value, expected): - str_cfg = f""" - [project] - commands = {value} - - [vars] - a = "world" - """ - overrides = {"vars.a": greeting} - assert "${vars.a}" in str_cfg - cfg = Config().from_str(str_cfg, overrides=overrides) - assert expected in str(cfg) - - -def test_warn_single_quotes(): - str_cfg = f""" - [project] - commands = 'do stuff' - """ - - with pytest.warns(UserWarning, match="single-quoted"): - cfg = Config().from_str(str_cfg) - - # should not warn if single quotes are in the middle - str_cfg = f""" - [project] - commands = some'thing - """ - cfg = Config().from_str(str_cfg) - - -def test_parse_strings_interpretable_as_ints(): - """Test whether strings interpretable as integers are parsed correctly (i. e. 
as strings).""" - cfg = Config().from_str(f"""[a]\nfoo = [${{b.bar}}, "00${{b.bar}}", "y"]\n\n[b]\nbar = 3""") - assert cfg["a"]["foo"] == [3, "003", "y"] - assert cfg["b"]["bar"] == 3 diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fastapi/security/oauth2.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fastapi/security/oauth2.py deleted file mode 100644 index e4c4357e7303aad2bb7e4b86fb08ac34d37dbad2..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fastapi/security/oauth2.py +++ /dev/null @@ -1,231 +0,0 @@ -from typing import Any, Dict, List, Optional, Union, cast - -from fastapi.exceptions import HTTPException -from fastapi.openapi.models import OAuth2 as OAuth2Model -from fastapi.openapi.models import OAuthFlows as OAuthFlowsModel -from fastapi.param_functions import Form -from fastapi.security.base import SecurityBase -from fastapi.security.utils import get_authorization_scheme_param -from starlette.requests import Request -from starlette.status import HTTP_401_UNAUTHORIZED, HTTP_403_FORBIDDEN - -# TODO: import from typing when deprecating Python 3.9 -from typing_extensions import Annotated - - -class OAuth2PasswordRequestForm: - """ - This is a dependency class, use it like: - - @app.post("/login") - def login(form_data: OAuth2PasswordRequestForm = Depends()): - data = form_data.parse() - print(data.username) - print(data.password) - for scope in data.scopes: - print(scope) - if data.client_id: - print(data.client_id) - if data.client_secret: - print(data.client_secret) - return data - - - It creates the following Form request parameters in your endpoint: - - grant_type: the OAuth2 spec says it is required and MUST be the fixed string "password". - Nevertheless, this dependency class is permissive and allows not passing it. If you want to enforce it, - use instead the OAuth2PasswordRequestFormStrict dependency. - username: username string. The OAuth2 spec requires the exact field name "username". - password: password string. The OAuth2 spec requires the exact field name "password". - scope: Optional string. Several scopes (each one a string) separated by spaces. E.g. - "items:read items:write users:read profile openid" - client_id: optional string. OAuth2 recommends sending the client_id and client_secret (if any) - using HTTP Basic auth, as: client_id:client_secret - client_secret: optional string. 
OAuth2 recommends sending the client_id and client_secret (if any) - using HTTP Basic auth, as: client_id:client_secret - """ - - def __init__( - self, - *, - grant_type: Annotated[Union[str, None], Form(pattern="password")] = None, - username: Annotated[str, Form()], - password: Annotated[str, Form()], - scope: Annotated[str, Form()] = "", - client_id: Annotated[Union[str, None], Form()] = None, - client_secret: Annotated[Union[str, None], Form()] = None, - ): - self.grant_type = grant_type - self.username = username - self.password = password - self.scopes = scope.split() - self.client_id = client_id - self.client_secret = client_secret - - -class OAuth2PasswordRequestFormStrict(OAuth2PasswordRequestForm): - """ - This is a dependency class, use it like: - - @app.post("/login") - def login(form_data: OAuth2PasswordRequestFormStrict = Depends()): - data = form_data.parse() - print(data.username) - print(data.password) - for scope in data.scopes: - print(scope) - if data.client_id: - print(data.client_id) - if data.client_secret: - print(data.client_secret) - return data - - - It creates the following Form request parameters in your endpoint: - - grant_type: the OAuth2 spec says it is required and MUST be the fixed string "password". - This dependency is strict about it. If you want to be permissive, use instead the - OAuth2PasswordRequestForm dependency class. - username: username string. The OAuth2 spec requires the exact field name "username". - password: password string. The OAuth2 spec requires the exact field name "password". - scope: Optional string. Several scopes (each one a string) separated by spaces. E.g. - "items:read items:write users:read profile openid" - client_id: optional string. OAuth2 recommends sending the client_id and client_secret (if any) - using HTTP Basic auth, as: client_id:client_secret - client_secret: optional string. 
OAuth2 recommends sending the client_id and client_secret (if any) - using HTTP Basic auth, as: client_id:client_secret - """ - - def __init__( - self, - grant_type: Annotated[str, Form(pattern="password")], - username: Annotated[str, Form()], - password: Annotated[str, Form()], - scope: Annotated[str, Form()] = "", - client_id: Annotated[Union[str, None], Form()] = None, - client_secret: Annotated[Union[str, None], Form()] = None, - ): - super().__init__( - grant_type=grant_type, - username=username, - password=password, - scope=scope, - client_id=client_id, - client_secret=client_secret, - ) - - -class OAuth2(SecurityBase): - def __init__( - self, - *, - flows: Union[OAuthFlowsModel, Dict[str, Dict[str, Any]]] = OAuthFlowsModel(), - scheme_name: Optional[str] = None, - description: Optional[str] = None, - auto_error: bool = True, - ): - self.model = OAuth2Model( - flows=cast(OAuthFlowsModel, flows), description=description - ) - self.scheme_name = scheme_name or self.__class__.__name__ - self.auto_error = auto_error - - async def __call__(self, request: Request) -> Optional[str]: - authorization = request.headers.get("Authorization") - if not authorization: - if self.auto_error: - raise HTTPException( - status_code=HTTP_403_FORBIDDEN, detail="Not authenticated" - ) - else: - return None - return authorization - - -class OAuth2PasswordBearer(OAuth2): - def __init__( - self, - tokenUrl: str, - scheme_name: Optional[str] = None, - scopes: Optional[Dict[str, str]] = None, - description: Optional[str] = None, - auto_error: bool = True, - ): - if not scopes: - scopes = {} - flows = OAuthFlowsModel( - password=cast(Any, {"tokenUrl": tokenUrl, "scopes": scopes}) - ) - super().__init__( - flows=flows, - scheme_name=scheme_name, - description=description, - auto_error=auto_error, - ) - - async def __call__(self, request: Request) -> Optional[str]: - authorization = request.headers.get("Authorization") - scheme, param = get_authorization_scheme_param(authorization) - if not authorization or scheme.lower() != "bearer": - if self.auto_error: - raise HTTPException( - status_code=HTTP_401_UNAUTHORIZED, - detail="Not authenticated", - headers={"WWW-Authenticate": "Bearer"}, - ) - else: - return None - return param - - -class OAuth2AuthorizationCodeBearer(OAuth2): - def __init__( - self, - authorizationUrl: str, - tokenUrl: str, - refreshUrl: Optional[str] = None, - scheme_name: Optional[str] = None, - scopes: Optional[Dict[str, str]] = None, - description: Optional[str] = None, - auto_error: bool = True, - ): - if not scopes: - scopes = {} - flows = OAuthFlowsModel( - authorizationCode=cast( - Any, - { - "authorizationUrl": authorizationUrl, - "tokenUrl": tokenUrl, - "refreshUrl": refreshUrl, - "scopes": scopes, - }, - ) - ) - super().__init__( - flows=flows, - scheme_name=scheme_name, - description=description, - auto_error=auto_error, - ) - - async def __call__(self, request: Request) -> Optional[str]: - authorization = request.headers.get("Authorization") - scheme, param = get_authorization_scheme_param(authorization) - if not authorization or scheme.lower() != "bearer": - if self.auto_error: - raise HTTPException( - status_code=HTTP_401_UNAUTHORIZED, - detail="Not authenticated", - headers={"WWW-Authenticate": "Bearer"}, - ) - else: - return None # pragma: nocover - return param - - -class SecurityScopes: - def __init__(self, scopes: Optional[List[str]] = None): - self.scopes = scopes or [] - self.scope_str = " ".join(self.scopes) diff --git 
a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/cu2qu/cu2qu.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/cu2qu/cu2qu.py deleted file mode 100644 index e620b48a55bd0ce720a34c309d295839edabe5aa..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/cu2qu/cu2qu.py +++ /dev/null @@ -1,534 +0,0 @@ -# cython: language_level=3 -# distutils: define_macros=CYTHON_TRACE_NOGIL=1 - -# Copyright 2015 Google Inc. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -try: - import cython - - COMPILED = cython.compiled -except (AttributeError, ImportError): - # if cython not installed, use mock module with no-op decorators and types - from fontTools.misc import cython - - COMPILED = False - -import math - -from .errors import Error as Cu2QuError, ApproxNotFoundError - - -__all__ = ["curve_to_quadratic", "curves_to_quadratic"] - -MAX_N = 100 - -NAN = float("NaN") - - -@cython.cfunc -@cython.inline -@cython.returns(cython.double) -@cython.locals(v1=cython.complex, v2=cython.complex) -def dot(v1, v2): - """Return the dot product of two vectors. - - Args: - v1 (complex): First vector. - v2 (complex): Second vector. - - Returns: - double: Dot product. - """ - return (v1 * v2.conjugate()).real - - -@cython.cfunc -@cython.inline -@cython.locals(a=cython.complex, b=cython.complex, c=cython.complex, d=cython.complex) -@cython.locals( - _1=cython.complex, _2=cython.complex, _3=cython.complex, _4=cython.complex -) -def calc_cubic_points(a, b, c, d): - _1 = d - _2 = (c / 3.0) + d - _3 = (b + c) / 3.0 + _2 - _4 = a + d + c + b - return _1, _2, _3, _4 - - -@cython.cfunc -@cython.inline -@cython.locals( - p0=cython.complex, p1=cython.complex, p2=cython.complex, p3=cython.complex -) -@cython.locals(a=cython.complex, b=cython.complex, c=cython.complex, d=cython.complex) -def calc_cubic_parameters(p0, p1, p2, p3): - c = (p1 - p0) * 3.0 - b = (p2 - p1) * 3.0 - c - d = p0 - a = p3 - d - c - b - return a, b, c, d - - -@cython.cfunc -@cython.inline -@cython.locals( - p0=cython.complex, p1=cython.complex, p2=cython.complex, p3=cython.complex -) -def split_cubic_into_n_iter(p0, p1, p2, p3, n): - """Split a cubic Bezier into n equal parts. - - Splits the curve into `n` equal parts by curve time. - (t=0..1/n, t=1/n..2/n, ...) - - Args: - p0 (complex): Start point of curve. - p1 (complex): First handle of curve. - p2 (complex): Second handle of curve. - p3 (complex): End point of curve. - - Returns: - An iterator yielding the control points (four complex values) of the - subcurves. 
- """ - # Hand-coded special-cases - if n == 2: - return iter(split_cubic_into_two(p0, p1, p2, p3)) - if n == 3: - return iter(split_cubic_into_three(p0, p1, p2, p3)) - if n == 4: - a, b = split_cubic_into_two(p0, p1, p2, p3) - return iter( - split_cubic_into_two(a[0], a[1], a[2], a[3]) - + split_cubic_into_two(b[0], b[1], b[2], b[3]) - ) - if n == 6: - a, b = split_cubic_into_two(p0, p1, p2, p3) - return iter( - split_cubic_into_three(a[0], a[1], a[2], a[3]) - + split_cubic_into_three(b[0], b[1], b[2], b[3]) - ) - - return _split_cubic_into_n_gen(p0, p1, p2, p3, n) - - -@cython.locals( - p0=cython.complex, - p1=cython.complex, - p2=cython.complex, - p3=cython.complex, - n=cython.int, -) -@cython.locals(a=cython.complex, b=cython.complex, c=cython.complex, d=cython.complex) -@cython.locals( - dt=cython.double, delta_2=cython.double, delta_3=cython.double, i=cython.int -) -@cython.locals( - a1=cython.complex, b1=cython.complex, c1=cython.complex, d1=cython.complex -) -def _split_cubic_into_n_gen(p0, p1, p2, p3, n): - a, b, c, d = calc_cubic_parameters(p0, p1, p2, p3) - dt = 1 / n - delta_2 = dt * dt - delta_3 = dt * delta_2 - for i in range(n): - t1 = i * dt - t1_2 = t1 * t1 - # calc new a, b, c and d - a1 = a * delta_3 - b1 = (3 * a * t1 + b) * delta_2 - c1 = (2 * b * t1 + c + 3 * a * t1_2) * dt - d1 = a * t1 * t1_2 + b * t1_2 + c * t1 + d - yield calc_cubic_points(a1, b1, c1, d1) - - -@cython.cfunc -@cython.inline -@cython.locals( - p0=cython.complex, p1=cython.complex, p2=cython.complex, p3=cython.complex -) -@cython.locals(mid=cython.complex, deriv3=cython.complex) -def split_cubic_into_two(p0, p1, p2, p3): - """Split a cubic Bezier into two equal parts. - - Splits the curve into two equal parts at t = 0.5 - - Args: - p0 (complex): Start point of curve. - p1 (complex): First handle of curve. - p2 (complex): Second handle of curve. - p3 (complex): End point of curve. - - Returns: - tuple: Two cubic Beziers (each expressed as a tuple of four complex - values). - """ - mid = (p0 + 3 * (p1 + p2) + p3) * 0.125 - deriv3 = (p3 + p2 - p1 - p0) * 0.125 - return ( - (p0, (p0 + p1) * 0.5, mid - deriv3, mid), - (mid, mid + deriv3, (p2 + p3) * 0.5, p3), - ) - - -@cython.cfunc -@cython.inline -@cython.locals( - p0=cython.complex, - p1=cython.complex, - p2=cython.complex, - p3=cython.complex, -) -@cython.locals( - mid1=cython.complex, - deriv1=cython.complex, - mid2=cython.complex, - deriv2=cython.complex, -) -def split_cubic_into_three(p0, p1, p2, p3): - """Split a cubic Bezier into three equal parts. - - Splits the curve into three equal parts at t = 1/3 and t = 2/3 - - Args: - p0 (complex): Start point of curve. - p1 (complex): First handle of curve. - p2 (complex): Second handle of curve. - p3 (complex): End point of curve. - - Returns: - tuple: Three cubic Beziers (each expressed as a tuple of four complex - values). 
- """ - mid1 = (8 * p0 + 12 * p1 + 6 * p2 + p3) * (1 / 27) - deriv1 = (p3 + 3 * p2 - 4 * p0) * (1 / 27) - mid2 = (p0 + 6 * p1 + 12 * p2 + 8 * p3) * (1 / 27) - deriv2 = (4 * p3 - 3 * p1 - p0) * (1 / 27) - return ( - (p0, (2 * p0 + p1) / 3.0, mid1 - deriv1, mid1), - (mid1, mid1 + deriv1, mid2 - deriv2, mid2), - (mid2, mid2 + deriv2, (p2 + 2 * p3) / 3.0, p3), - ) - - -@cython.cfunc -@cython.inline -@cython.returns(cython.complex) -@cython.locals( - t=cython.double, - p0=cython.complex, - p1=cython.complex, - p2=cython.complex, - p3=cython.complex, -) -@cython.locals(_p1=cython.complex, _p2=cython.complex) -def cubic_approx_control(t, p0, p1, p2, p3): - """Approximate a cubic Bezier using a quadratic one. - - Args: - t (double): Position of control point. - p0 (complex): Start point of curve. - p1 (complex): First handle of curve. - p2 (complex): Second handle of curve. - p3 (complex): End point of curve. - - Returns: - complex: Location of candidate control point on quadratic curve. - """ - _p1 = p0 + (p1 - p0) * 1.5 - _p2 = p3 + (p2 - p3) * 1.5 - return _p1 + (_p2 - _p1) * t - - -@cython.cfunc -@cython.inline -@cython.returns(cython.complex) -@cython.locals(a=cython.complex, b=cython.complex, c=cython.complex, d=cython.complex) -@cython.locals(ab=cython.complex, cd=cython.complex, p=cython.complex, h=cython.double) -def calc_intersect(a, b, c, d): - """Calculate the intersection of two lines. - - Args: - a (complex): Start point of first line. - b (complex): End point of first line. - c (complex): Start point of second line. - d (complex): End point of second line. - - Returns: - complex: Location of intersection if one present, ``complex(NaN,NaN)`` - if no intersection was found. - """ - ab = b - a - cd = d - c - p = ab * 1j - try: - h = dot(p, a - c) / dot(p, cd) - except ZeroDivisionError: - return complex(NAN, NAN) - return c + cd * h - - -@cython.cfunc -@cython.returns(cython.int) -@cython.locals( - tolerance=cython.double, - p0=cython.complex, - p1=cython.complex, - p2=cython.complex, - p3=cython.complex, -) -@cython.locals(mid=cython.complex, deriv3=cython.complex) -def cubic_farthest_fit_inside(p0, p1, p2, p3, tolerance): - """Check if a cubic Bezier lies within a given distance of the origin. - - "Origin" means *the* origin (0,0), not the start of the curve. Note that no - checks are made on the start and end positions of the curve; this function - only checks the inside of the curve. - - Args: - p0 (complex): Start point of curve. - p1 (complex): First handle of curve. - p2 (complex): Second handle of curve. - p3 (complex): End point of curve. - tolerance (double): Distance from origin. - - Returns: - bool: True if the cubic Bezier ``p`` entirely lies within a distance - ``tolerance`` of the origin, False otherwise. - """ - # First check p2 then p1, as p2 has higher error early on. - if abs(p2) <= tolerance and abs(p1) <= tolerance: - return True - - # Split. - mid = (p0 + 3 * (p1 + p2) + p3) * 0.125 - if abs(mid) > tolerance: - return False - deriv3 = (p3 + p2 - p1 - p0) * 0.125 - return cubic_farthest_fit_inside( - p0, (p0 + p1) * 0.5, mid - deriv3, mid, tolerance - ) and cubic_farthest_fit_inside(mid, mid + deriv3, (p2 + p3) * 0.5, p3, tolerance) - - -@cython.cfunc -@cython.inline -@cython.locals(tolerance=cython.double) -@cython.locals( - q1=cython.complex, - c0=cython.complex, - c1=cython.complex, - c2=cython.complex, - c3=cython.complex, -) -def cubic_approx_quadratic(cubic, tolerance): - """Approximate a cubic Bezier with a single quadratic within a given tolerance. 
- - Args: - cubic (sequence): Four complex numbers representing control points of - the cubic Bezier curve. - tolerance (double): Permitted deviation from the original curve. - - Returns: - Three complex numbers representing control points of the quadratic - curve if it fits within the given tolerance, or ``None`` if no suitable - curve could be calculated. - """ - - q1 = calc_intersect(cubic[0], cubic[1], cubic[2], cubic[3]) - if math.isnan(q1.imag): - return None - c0 = cubic[0] - c3 = cubic[3] - c1 = c0 + (q1 - c0) * (2 / 3) - c2 = c3 + (q1 - c3) * (2 / 3) - if not cubic_farthest_fit_inside(0, c1 - cubic[1], c2 - cubic[2], 0, tolerance): - return None - return c0, q1, c3 - - -@cython.cfunc -@cython.locals(n=cython.int, tolerance=cython.double) -@cython.locals(i=cython.int) -@cython.locals(all_quadratic=cython.int) -@cython.locals( - c0=cython.complex, c1=cython.complex, c2=cython.complex, c3=cython.complex -) -@cython.locals( - q0=cython.complex, - q1=cython.complex, - next_q1=cython.complex, - q2=cython.complex, - d1=cython.complex, -) -def cubic_approx_spline(cubic, n, tolerance, all_quadratic): - """Approximate a cubic Bezier curve with a spline of n quadratics. - - Args: - cubic (sequence): Four complex numbers representing control points of - the cubic Bezier curve. - n (int): Number of quadratic Bezier curves in the spline. - tolerance (double): Permitted deviation from the original curve. - - Returns: - A list of ``n+2`` complex numbers, representing control points of the - quadratic spline if it fits within the given tolerance, or ``None`` if - no suitable spline could be calculated. - """ - - if n == 1: - return cubic_approx_quadratic(cubic, tolerance) - if n == 2 and all_quadratic == False: - return cubic - - cubics = split_cubic_into_n_iter(cubic[0], cubic[1], cubic[2], cubic[3], n) - - # calculate the spline of quadratics and check errors at the same time. - next_cubic = next(cubics) - next_q1 = cubic_approx_control( - 0, next_cubic[0], next_cubic[1], next_cubic[2], next_cubic[3] - ) - q2 = cubic[0] - d1 = 0j - spline = [cubic[0], next_q1] - for i in range(1, n + 1): - # Current cubic to convert - c0, c1, c2, c3 = next_cubic - - # Current quadratic approximation of current cubic - q0 = q2 - q1 = next_q1 - if i < n: - next_cubic = next(cubics) - next_q1 = cubic_approx_control( - i / (n - 1), next_cubic[0], next_cubic[1], next_cubic[2], next_cubic[3] - ) - spline.append(next_q1) - q2 = (q1 + next_q1) * 0.5 - else: - q2 = c3 - - # End-point deltas - d0 = d1 - d1 = q2 - c3 - - if abs(d1) > tolerance or not cubic_farthest_fit_inside( - d0, - q0 + (q1 - q0) * (2 / 3) - c1, - q2 + (q1 - q2) * (2 / 3) - c2, - d1, - tolerance, - ): - return None - spline.append(cubic[3]) - - return spline - - -@cython.locals(max_err=cython.double) -@cython.locals(n=cython.int) -@cython.locals(all_quadratic=cython.int) -def curve_to_quadratic(curve, max_err, all_quadratic=True): - """Approximate a cubic Bezier curve with a spline of n quadratics. - - Args: - cubic (sequence): Four 2D tuples representing control points of - the cubic Bezier curve. - max_err (double): Permitted deviation from the original curve. - all_quadratic (bool): If True (default) returned value is a - quadratic spline. If False, it's either a single quadratic - curve or a single cubic curve. - - Returns: - If all_quadratic is True: A list of 2D tuples, representing - control points of the quadratic spline if it fits within the - given tolerance, or ``None`` if no suitable spline could be - calculated. 
- - If all_quadratic is False: Either a quadratic curve (if length - of output is 3), or a cubic curve (if length of output is 4). - """ - - curve = [complex(*p) for p in curve] - - for n in range(1, MAX_N + 1): - spline = cubic_approx_spline(curve, n, max_err, all_quadratic) - if spline is not None: - # done. go home - return [(s.real, s.imag) for s in spline] - - raise ApproxNotFoundError(curve) - - -@cython.locals(l=cython.int, last_i=cython.int, i=cython.int) -@cython.locals(all_quadratic=cython.int) -def curves_to_quadratic(curves, max_errors, all_quadratic=True): - """Return quadratic Bezier splines approximating the input cubic Beziers. - - Args: - curves: A sequence of *n* curves, each curve being a sequence of four - 2D tuples. - max_errors: A sequence of *n* floats representing the maximum permissible - deviation from each of the cubic Bezier curves. - all_quadratic (bool): If True (default) returned values are a - quadratic spline. If False, they are either a single quadratic - curve or a single cubic curve. - - Example:: - - >>> curves_to_quadratic( [ - ... [ (50,50), (100,100), (150,100), (200,50) ], - ... [ (75,50), (120,100), (150,75), (200,60) ] - ... ], [1,1] ) - [[(50.0, 50.0), (75.0, 75.0), (125.0, 91.66666666666666), (175.0, 75.0), (200.0, 50.0)], [(75.0, 50.0), (97.5, 75.0), (135.41666666666666, 82.08333333333333), (175.0, 67.5), (200.0, 60.0)]] - - The returned splines have "implied oncurve points" suitable for use in - TrueType ``glif`` outlines - i.e. in the first spline returned above, - the first quadratic segment runs from (50,50) to - ( (75 + 125)/2 , (120 + 91.666..)/2 ) = (100, 83.333...). - - Returns: - If all_quadratic is True, a list of splines, each spline being a list - of 2D tuples. - - If all_quadratic is False, a list of curves, each curve being a quadratic - (length 3), or cubic (length 4). - - Raises: - fontTools.cu2qu.Errors.ApproxNotFoundError: if no suitable approximation - can be found for all curves with the given parameters. - """ - - curves = [[complex(*p) for p in curve] for curve in curves] - assert len(max_errors) == len(curves) - - l = len(curves) - splines = [None] * l - last_i = i = 0 - n = 1 - while True: - spline = cubic_approx_spline(curves[i], n, max_errors[i], all_quadratic) - if spline is None: - if n == MAX_N: - break - n += 1 - last_i = i - continue - splines[i] = spline - i = (i + 1) % l - if i == last_i: - # done. go home - return [[(s.real, s.imag) for s in spline] for spline in splines] - - raise ApproxNotFoundError(curves) diff --git a/spaces/cihyFjudo/fairness-paper-search/Dawn Sandlmodels 30 Sets See Why Dawn is One of the Top Models at SandLModels.md b/spaces/cihyFjudo/fairness-paper-search/Dawn Sandlmodels 30 Sets See Why Dawn is One of the Top Models at SandLModels.md deleted file mode 100644 index 581621492e77b45a4ae6360b453df8d9bed5f88d..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Dawn Sandlmodels 30 Sets See Why Dawn is One of the Top Models at SandLModels.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Dawn Sandlmodels 30 Sets


    Download Zip ✯✯✯ https://tinurli.com/2uwi4z



    - - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/cihyFjudo/fairness-paper-search/Hamsterball Gold A Cute and Addictive Racing Game - Free Download Full Version.md b/spaces/cihyFjudo/fairness-paper-search/Hamsterball Gold A Cute and Addictive Racing Game - Free Download Full Version.md deleted file mode 100644 index 211ebc45e2692b71f793bc4362763bb3fd89a5a5..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Hamsterball Gold A Cute and Addictive Racing Game - Free Download Full Version.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Free Download Game Hamsterball Gold Full Version


    Download File 🗹 https://tinurli.com/2uwjck



    - - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/cihyFjudo/fairness-paper-search/Mouse Recorder Pro 2 2.0.7.4 18 acceleratori gthing How to Record and Replay Your Mouse and Keyboard Movements.md b/spaces/cihyFjudo/fairness-paper-search/Mouse Recorder Pro 2 2.0.7.4 18 acceleratori gthing How to Record and Replay Your Mouse and Keyboard Movements.md deleted file mode 100644 index b6611e9c2139f4f2c37b0492dd46553cf23a2edf..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Mouse Recorder Pro 2 2.0.7.4 18 acceleratori gthing How to Record and Replay Your Mouse and Keyboard Movements.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Mouse Recorder Pro 2 2.0.7.4 18 acceleratori gthing


    Download Filehttps://tinurli.com/2uwkSg



    - - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dfpwmenc.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dfpwmenc.c deleted file mode 100644 index 5318b04a390deee40612c5aa2242c60c132d7630..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dfpwmenc.c +++ /dev/null @@ -1,121 +0,0 @@ -/* - * DFPWM encoder - * Copyright (c) 2022 Jack Bruienne - * Copyright (c) 2012, 2016 Ben "GreaseMonkey" Russell - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * DFPWM1a encoder - */ - -#include "libavutil/internal.h" -#include "avcodec.h" -#include "codec_id.h" -#include "codec_internal.h" -#include "encode.h" - -typedef struct { - int fq, q, s, lt; -} DFPWMState; - -// DFPWM codec from https://github.com/ChenThread/dfpwm/blob/master/1a/ -// Licensed in the public domain - -// note, len denotes how many compressed bytes there are (uncompressed bytes / 8). -static void au_compress(DFPWMState *state, int len, uint8_t *outbuf, const uint8_t *inbuf) -{ - unsigned d = 0; - for (int i = 0; i < len; i++) { - for (int j = 0; j < 8; j++) { - int nq, st, ns; - // get sample - int v = *(inbuf++) - 128; - // set bit / target - int t = (v > state->q || (v == state->q && v == 127) ? 127 : -128); - d >>= 1; - if(t > 0) - d |= 0x80; - - // adjust charge - nq = state->q + ((state->s * (t-state->q) + 512)>>10); - if(nq == state->q && nq != t) - nq += (t == 127 ? 1 : -1); - state->q = nq; - - // adjust strength - st = (t != state->lt ? 0 : 1023); - ns = state->s; - if(ns != st) - ns += (st != 0 ? 1 : -1); - if(ns < 8) ns = 8; - state->s = ns; - - state->lt = t; - } - - // output bits - *(outbuf++) = d; - } -} - -static av_cold int dfpwm_enc_init(struct AVCodecContext *ctx) -{ - DFPWMState *state = ctx->priv_data; - - state->fq = 0; - state->q = 0; - state->s = 0; - state->lt = -128; - - ctx->bits_per_coded_sample = 1; - - return 0; -} - -static int dfpwm_enc_frame(struct AVCodecContext *ctx, struct AVPacket *packet, - const struct AVFrame *frame, int *got_packet) -{ - DFPWMState *state = ctx->priv_data; - int size = frame->nb_samples * frame->ch_layout.nb_channels / 8 + (frame->nb_samples % 8 > 0 ? 
1 : 0); - int ret = ff_get_encode_buffer(ctx, packet, size, 0); - - if (ret) { - *got_packet = 0; - return ret; - } - - au_compress(state, size, packet->data, frame->data[0]); - - *got_packet = 1; - return 0; -} - -const FFCodec ff_dfpwm_encoder = { - .p.name = "dfpwm", - CODEC_LONG_NAME("DFPWM1a audio"), - .p.type = AVMEDIA_TYPE_AUDIO, - .p.id = AV_CODEC_ID_DFPWM, - .priv_data_size = sizeof(DFPWMState), - .init = dfpwm_enc_init, - FF_CODEC_ENCODE_CB(dfpwm_enc_frame), - .p.sample_fmts = (const enum AVSampleFormat[]){AV_SAMPLE_FMT_U8, AV_SAMPLE_FMT_NONE}, - .p.capabilities = AV_CODEC_CAP_DR1 | AV_CODEC_CAP_VARIABLE_FRAME_SIZE | - AV_CODEC_CAP_ENCODER_REORDERED_OPAQUE, -}; diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dirac.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dirac.h deleted file mode 100644 index e6d9d346d9cc13c23092f5e1f4501d20157e1def..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dirac.h +++ /dev/null @@ -1,131 +0,0 @@ -/* - * Copyright (C) 2007 Marco Gerards - * Copyright (C) 2009 David Conrad - * Copyright (C) 2011 Jordi Ortiz - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#ifndef AVCODEC_DIRAC_H -#define AVCODEC_DIRAC_H - -/** - * @file - * Interface to Dirac Decoder/Encoder - * @author Marco Gerards - * @author David Conrad - * @author Jordi Ortiz - */ - -#include "avcodec.h" - -/** - * The spec limits the number of wavelet decompositions to 4 for both - * level 1 (VC-2) and 128 (long-gop default). - * 5 decompositions is the maximum before >16-bit buffers are needed. - * Schroedinger allows this for DD 9,7 and 13,7 wavelets only, limiting - * the others to 4 decompositions (or 3 for the fidelity filter). - * - * We use this instead of MAX_DECOMPOSITIONS to save some memory. 
- */ -#define MAX_DWT_LEVELS 5 - -/** - * Parse code values: - * - * Dirac Specification -> - * 9.6.1 Table 9.1 - * - * VC-2 Specification -> - * 10.4.1 Table 10.1 - */ - -enum DiracParseCodes { - DIRAC_PCODE_SEQ_HEADER = 0x00, - DIRAC_PCODE_END_SEQ = 0x10, - DIRAC_PCODE_AUX = 0x20, - DIRAC_PCODE_PAD = 0x30, - DIRAC_PCODE_PICTURE_CODED = 0x08, - DIRAC_PCODE_PICTURE_RAW = 0x48, - DIRAC_PCODE_PICTURE_LOW_DEL = 0xC8, - DIRAC_PCODE_PICTURE_HQ = 0xE8, - DIRAC_PCODE_INTER_NOREF_CO1 = 0x0A, - DIRAC_PCODE_INTER_NOREF_CO2 = 0x09, - DIRAC_PCODE_INTER_REF_CO1 = 0x0D, - DIRAC_PCODE_INTER_REF_CO2 = 0x0E, - DIRAC_PCODE_INTRA_REF_CO = 0x0C, - DIRAC_PCODE_INTRA_REF_RAW = 0x4C, - DIRAC_PCODE_INTRA_REF_PICT = 0xCC, - DIRAC_PCODE_MAGIC = 0x42424344, -}; - -typedef struct DiracVersionInfo { - int major; - int minor; -} DiracVersionInfo; - -typedef struct AVDiracSeqHeader { - unsigned width; - unsigned height; - uint8_t chroma_format; ///< 0: 444 1: 422 2: 420 - - uint8_t interlaced; - uint8_t top_field_first; - - uint8_t frame_rate_index; ///< index into dirac_frame_rate[] - uint8_t aspect_ratio_index; ///< index into dirac_aspect_ratio[] - - uint16_t clean_width; - uint16_t clean_height; - uint16_t clean_left_offset; - uint16_t clean_right_offset; - - uint8_t pixel_range_index; ///< index into dirac_pixel_range_presets[] - uint8_t color_spec_index; ///< index into dirac_color_spec_presets[] - - int profile; - int level; - - AVRational framerate; - AVRational sample_aspect_ratio; - - enum AVPixelFormat pix_fmt; - enum AVColorRange color_range; - enum AVColorPrimaries color_primaries; - enum AVColorTransferCharacteristic color_trc; - enum AVColorSpace colorspace; - - DiracVersionInfo version; - int bit_depth; -} AVDiracSeqHeader; - -/** - * Parse a Dirac sequence header. - * - * @param dsh this function will allocate and fill an AVDiracSeqHeader struct - * and write it into this pointer. The caller must free it with - * av_free(). - * @param buf the data buffer - * @param buf_size the size of the data buffer in bytes - * @param log_ctx if non-NULL, this function will log errors here - * @return 0 on success, a negative AVERROR code on failure - */ -int av_dirac_parse_sequence_header(AVDiracSeqHeader **dsh, - const uint8_t *buf, size_t buf_size, - void *log_ctx); - -#endif /* AVCODEC_DIRAC_H */ diff --git a/spaces/congsaPfin/Manga-OCR/logs/AetherSX2 The ultimate PS2 emulator for Android devices.md b/spaces/congsaPfin/Manga-OCR/logs/AetherSX2 The ultimate PS2 emulator for Android devices.md deleted file mode 100644 index 5665a8a837ffbbac73463ec4cc12b214796fbfb9..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/AetherSX2 The ultimate PS2 emulator for Android devices.md +++ /dev/null @@ -1,178 +0,0 @@ -
    -

    AetherSX2 Pro APK: The Ultimate PS2 Emulator for Android

    -

    Do you miss playing your favorite PS2 games on your Android device? Do you want to enjoy the nostalgia of classic titles like God of War, Final Fantasy, Grand Theft Auto, and more? If yes, then you need to try AetherSX2 Pro APK, the best PS2 emulator for Android.

    -

    aethersx2 pro apk


    DOWNLOADhttps://urlca.com/2uOasI



    -

    AetherSX2 Pro APK is a modified version of AetherSX2, a popular PS2 emulator for Android. It offers many features and enhancements that make it superior to the original app. In this article, we will tell you everything you need to know about AetherSX2 Pro APK, including its features, how to download and install it, how to use it, and its pros and cons.

    -

    What is AetherSX2 Pro APK?

    -

    AetherSX2 Pro APK is a PS2 emulator for Android that allows you to play PS2 games on your smartphone or tablet. It is based on the open-source project PCSX2, which is a well-known PS2 emulator for PC. AetherSX2 Pro APK is not available on the Google Play Store, but you can download it from third-party sources like [Apkmody](^1^).

    -

    AetherSX2 Pro APK is the PRO version of AetherSX2 APK. With it, tasks and requirements that would normally take a lot of time or money to complete can be finished in a fraction of the time.

    -

    Features of AetherSX2 Pro APK

    -

    AetherSX2 Pro APK has many features that make it stand out from other PS2 emulators for Android. Here are some of them:

    -

    aethersx2 pro apk download
    -aethersx2 pro apk free
    -aethersx2 pro apk latest version
    -aethersx2 pro apk no ads
    -aethersx2 pro apk android 11
    -aethersx2 pro apk ps2 emulator
    -aethersx2 pro apk best settings
    -aethersx2 pro apk compatible games
    -aethersx2 pro apk reddit
    -aethersx2 pro apk mod
    -aethersx2 pro apk bios
    -aethersx2 pro apk play store
    -aethersx2 pro apk requirements
    -aethersx2 pro apk vs damonps2
    -aethersx2 pro apk how to use
    -aethersx2 pro apk cheats
    -aethersx2 pro apk update
    -aethersx2 pro apk review
    -aethersx2 pro apk offline
    -aethersx2 pro apk 2023
    -aethersx2 pro apk 60fps
    -aethersx2 pro apk vulkan
    -aethersx2 pro apk opengl
    -aethersx2 pro apk pcsx2
    -aethersx2 pro apk guide
    -aethersx2 pro apk tutorial
    -aethersx2 pro apk performance
    -aethersx2 pro apk optimization
    -aethersx2 pro apk controller support
    -aethersx2 pro apk multiplayer
    -aethersx2 pro apk widescreen
    -aethersx2 pro apk iso files
    -aethersx2 pro apk roms download
    -aethersx2 pro apk god of war 2
    -aethersx2 pro apk shadow of the colossus
    -aethersx2 pro apk gta san andreas
    -aethersx2 pro apk resident evil 4
    -aethersx2 pro apk final fantasy xii
    -aethersx2 pro apk metal gear solid 3
    -aethersx2 pro apk kingdom hearts 2
    -aethersx2 pro apk dragon ball z budokai tenkaichi 3
    -aethersx2 pro apk devil may cry 3
    -aethersx2 pro apk silent hill 3
    -aethersx2 pro apk gran turismo 4
    -aethersx2 pro apk tekken 5
    -aethersx2 pro apk persona 4

    -

    High compatibility with PS2 games

    -

    AetherSX2 Pro APK supports a large number of PS2 games, including popular titles such as Metal Gear Solid, Resident Evil, Kingdom Hearts, and Tekken. You can also play games from different regions, such as Japan, Europe, and North America. You can check the compatibility list on the official PCSX2 website.

    -

    Enhanced graphics and sound quality

    -

    AetherSX2 Pro APK improves the graphics and sound quality of PS2 games by using various plugins and settings. You can adjust the resolution, frame rate, anti-aliasing, texture filtering, and more. You can also enable HD rendering, which makes the games look sharper and smoother. The sound quality is also improved by using Dolby Surround Sound and other audio enhancements.

    -

    Customizable controls and settings

    -

    AetherSX2 Pro APK allows you to customize the controls and settings according to your preference. You can use the virtual buttons on the screen or connect an external controller via Bluetooth or USB. You can also map the buttons to different functions and adjust the sensitivity and vibration. You can also change the language, theme, orientation, and other options in the settings menu.

    -

    Save and load states

    -

    AetherSX2 Pro APK lets you save and load states anytime you want. This means you can save your progress in any game and resume it later without losing anything. You can also load states from different slots and switch between them easily. This feature is very useful for games that have long or difficult levels, or for games that do not have a save function.

    -

    Multiplayer mode and online support

    -

    AetherSX2 Pro APK enables you to play multiplayer games with your friends or other players online. You can use the local multiplayer mode, which allows you to connect two devices via Wi-Fi or Bluetooth and play on the same screen. You can also use the online multiplayer mode, which allows you to join or host online rooms and play with other players around the world. You can also chat with other players and send them messages.

    -

    How to download and install AetherSX2 Pro APK?

    -

    If you want to download and install AetherSX2 Pro APK on your Android device, you need to follow these steps:

    -

    Requirements for AetherSX2 Pro APK

    -

    Before you download and install AetherSX2 Pro APK, you need to make sure that your device meets the following requirements:

    -
      -
    • Your device must have Android 5.0 or higher.
    • -
    • Your device must have at least 2 GB of RAM and 4 GB of free storage space.
    • -
    • Your device must support OpenGL ES 3.0 or higher.
    • -
    • You must enable the installation of apps from unknown sources in your device settings.
    • -
    • You must have a stable internet connection to download the app and the PS2 games.
    • -
    -

    Steps to download and install AetherSX2 Pro APK

    -

    After you have checked the requirements, you can follow these steps to download and install AetherSX2 Pro APK:

    -
      -
    1. Go to [Apkmody] and search for AetherSX2 Pro APK. You will see a download button on the page. Click on it and wait for the download to finish.
    2. -
    3. Once the download is complete, go to your file manager and locate the downloaded file. Tap on it and select install. Wait for the installation to finish.
    4. -
    5. After the installation is done, you will see an icon of AetherSX2 Pro APK on your home screen or app drawer. Tap on it and launch the app.
    6. -
    7. You will see a welcome screen with some instructions and tips. Read them carefully and tap on next.
    8. -
    9. You will see a screen where you can grant some permissions to the app. These permissions are necessary for the app to function properly. Tap on allow for each permission.
    10. -
    11. You will see a screen where you can choose the language and theme of the app. Select your preferred options and tap on next.
    12. -
    13. You will see a screen where you can scan your device for PS2 games. Tap on scan and wait for the app to find any PS2 games that you have stored on your device. If you do not have any PS2 games, you can skip this step and download them later from the internet.
    14. -
    15. You will see a screen where you can select a game to play. Tap on any game that you want to play and enjoy!
    16. -
    -
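    If you have a computer available, you can also sideload the downloaded APK with adb instead of opening it in the phone's file manager. The snippet below is only a minimal sketch: it assumes adb is installed and on your PATH, USB debugging is enabled on the phone, and the file was saved as aethersx2-pro.apk (that file name is an assumption, not the real download name).

    ```python
    import subprocess

    APK_PATH = "aethersx2-pro.apk"  # hypothetical path to the downloaded APK

    # List connected devices so you can confirm the phone is visible to adb.
    subprocess.run(["adb", "devices"], check=True)

    # Install the APK; -r replaces an existing installation if one is present.
    subprocess.run(["adb", "install", "-r", APK_PATH], check=True)
    ```

    After a successful install you can continue from step 5 above.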

    How to use AetherSX2 Pro APK?

    -

    Now that you have downloaded and installed AetherSX2 Pro APK, you might be wondering how to use it. Here are some tips and tricks that will help you use AetherSX2 Pro APK effectively:

    -

    How to load PS2 games on AetherSX2 Pro APK

    -

    If you want to load PS2 games on AetherSX2 Pro APK, you have two options: You can either use the games that you have scanned from your device, or you can download them from the internet. To use the games that you have scanned from your device, simply tap on them from the game list and start playing. To download games from the internet, follow these steps:

    -
      -
    1. Go to any website that offers PS2 games for download, such as [CoolROM] or [Emuparadise]. Search for the game that you want to download and click on it.
    2. -
    3. You will see a page with some information about the game, such as its genre, rating, size, etc. You will also see a download link or button. Click on it and wait for the download to start.
    4. -
    5. Once the download is complete, go to your file manager and locate the downloaded file. It will be in a compressed format, such as ZIP or RAR. You need to extract it using an app like [ZArchiver] or [RAR].
    6. -
    7. After extracting the file, you will see a folder with the name of the game. Inside it, you will find a file with the extension .iso, .bin, .img, or .mdf. This is the game file that you need to load on AetherSX2 Pro APK.
    8. -
    9. Copy or move the game file to a folder on your device where you want to store your PS2 games. You can create a new folder or use an existing one.
    10. -
    11. Launch AetherSX2 Pro APK and tap on the menu icon on the top left corner. Tap on settings and then tap on paths. Tap on the folder icon next to PS2 games and select the folder where you have stored your PS2 games. Tap on OK and then tap on back.
    12. -
    13. Tap on the refresh icon on the top right corner and wait for the app to scan your PS2 games. You will see the game that you have downloaded appear on the game list. Tap on it and start playing.
    14. -
    -
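    If you are not sure whether your games folder is laid out correctly before pointing the app at it, a small script can list the files an emulator is likely to recognise. This is a minimal sketch, not part of the app itself; the extension list simply mirrors the formats mentioned in step 7, and the folder path is an assumption.

    ```python
    from pathlib import Path

    GAMES_DIR = Path("/sdcard/PS2Games")  # hypothetical folder where you store your games
    EXTENSIONS = {".iso", ".bin", ".img", ".mdf"}  # formats mentioned in step 7

    games = sorted(p for p in GAMES_DIR.rglob("*") if p.suffix.lower() in EXTENSIONS)
    for game in games:
        # Print the name and size in MB so obviously truncated downloads stand out.
        print(f"{game.name}  ({game.stat().st_size / 1_000_000:.0f} MB)")
    ```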

    How to adjust the settings on AetherSX2 Pro APK

    -

    If you want to adjust the settings on AetherSX2 Pro APK, you can do so by tapping on the menu icon on the top left corner and tapping on settings. You will see various options that you can change, such as:

    -
      -
    • Graphics: Here you can change the resolution, frame rate, aspect ratio, anti-aliasing, texture filtering, and more. You can also enable HD rendering and FPS counter.
    • -
    • Sound: Here you can change the volume, sound quality, audio latency, and more. You can also enable Dolby Surround Sound and audio enhancements.
    • -
    • Controls: Here you can change the layout, size, opacity, and position of the virtual buttons. You can also map the buttons to different functions and adjust the sensitivity and vibration. You can also connect an external controller via Bluetooth or USB.
    • -
    • System: Here you can change the language, theme, orientation, and other options. You can also enable cheats, speed hacks, and skip BIOS.
    • -
    -

    You can also access some of these settings while playing a game by tapping on the pause icon on the top right corner and tapping on settings. You can also save and load states from this menu.

    -

    How to play multiplayer games on AetherSX2 Pro APK

    -

    If you want to play multiplayer games on AetherSX2 Pro APK, you have two options: You can either use the local multiplayer mode or the online multiplayer mode. To use the local multiplayer mode, follow these steps:

    -
      -
    1. Make sure that both devices have AetherSX2 Pro APK installed and have the same PS2 game file stored in their devices.
    2. -
    3. Connect both devices via Wi-Fi or Bluetooth. Make sure that they are on the same network or paired with each other.
    4. -
    5. Launch AetherSX2 Pro APK on both devices and select the same PS2 game from the game list.
    6. -
    7. On one device, tap on the menu icon on the top left corner and tap on multiplayer. Tap on host and wait for the other device to join.
    8. -
    9. On the other device, tap on the menu icon on the top left corner and tap on multiplayer. Tap on join and select the device that is hosting from the list.
    10. -
    11. Once both devices are connected, you will see a split screen with each device showing half of the game. You can now play the game together on the same screen.
    12. -
    -

    To use the online multiplayer mode, follow these steps:

    -
      -
    1. Make sure that both devices have AetherSX2 Pro APK installed and have the same PS2 game file stored in their devices.
    2. -
    3. Connect both devices to the internet. Make sure that they have a stable and fast connection.
    4. -
    5. Launch AetherSX2 Pro APK on both devices and select the same PS2 game from the game list.
    6. -
    7. On one device, tap on the menu icon on the top left corner and tap on multiplayer. Tap on online and then tap on create room. Enter a name and a password for your room and tap on OK.
    8. -
    9. On the other device, tap on the menu icon on the top left corner and tap on multiplayer. Tap on online and then tap on join room. Enter the name and the password of the room that you want to join and tap on OK.
    10. -
    11. Once both devices are connected, you will see a screen with each device showing the full game. You can now play the game together online.
    12. -
    -

    Pros and cons of AetherSX2 Pro APK

    -

    AetherSX2 Pro APK is a great app for PS2 lovers, but it also has some pros and cons that you should be aware of. Here are some of them:

    -

    Pros

    -
      -
    • It allows you to play PS2 games on your Android device without any hassle.
    • -
    • It supports a large number of PS2 games from different regions and genres.
    • -
    • It improves the graphics and sound quality of PS2 games by using various plugins and settings.
    • -
    • It lets you customize the controls and settings according to your preference.
    • -
    • It enables you to save and load states anytime you want.
    • -
    • It allows you to play multiplayer games with your friends or other players online.
    • -
    -

    Cons

    -
      -
    • It is not available on the Google Play Store, so you need to download it from third-party sources.
    • -
    • It may not work well on some devices or with some games due to compatibility issues or bugs.
    • -
    • It may consume a lot of battery and CPU power while running PS2 games.
    • -
    • It may require a lot of storage space for PS2 games and app data.
    • -
    -

    Conclusion

    -

    AetherSX2 Pro APK is a must-have app for PS2 fans who want to play their favorite games on their Android devices. It offers many features and enhancements that make it superior to other PS2 emulators for Android. It is easy to download, install, and use, and it supports a large number of PS2 games. It also allows you to play multiplayer games with your friends or other players online. However, it also has some drawbacks, such as compatibility issues, battery consumption, storage space, and security risks. Therefore, you should use it at your own risk and discretion.

    -

    Frequently Asked Questions

    -

    Here are some frequently asked questions about AetherSX2 Pro APK:

    -

    Is AetherSX2 Pro APK safe to use?

    -

    AetherSX2 Pro APK is not an official app from Sony or PCSX2, so it may not be safe to use. It may contain viruses, malware, spyware, or other harmful elements that may damage your device or compromise your privacy. Therefore, you should only download it from trusted sources like [Apkmody] and scan it with an antivirus app before installing it. You should also backup your data before using it and avoid using it for illegal purposes.

    -

    Is AetherSX2 Pro APK legal to use?

    -

    AetherSX2 Pro APK is not legal to use in some countries or regions where PS2 emulation is prohibited or restricted by law. It may also infringe the intellectual property rights of Sony or other game developers who own the PS2 games. Therefore, you should only use it for personal or educational purposes and not for commercial or profit-making purposes. You should also only use it with PS2 games that you own legally or have permission to use.

    -

    How can I get more PS2 games for AetherSX2 Pro APK?

    -

    You can get more PS2 games for AetherSX2 Pro APK by downloading them from the internet or by ripping them from your own PS2 discs. To download them from the internet, you can use websites like [CoolROM] or [Emuparadise] that offer PS2 games for download. To rip them from your own PS2 discs, you can use software like [ImgBurn] or [ DVD Decrypter] that can create ISO files from your PS2 discs. You can then transfer the ISO files to your device and load them on AetherSX2 Pro APK.

    -

    How can I improve the performance of AetherSX2 Pro APK?

    -

    You can improve the performance of AetherSX2 Pro APK by following these tips:

    -
      -
    • Use a device that has a powerful processor, enough RAM, and sufficient storage space.
    • -
    • Close any background apps or processes that may slow down your device or consume resources.
    • -
    • Update your device software and AetherSX2 Pro APK to the latest version.
    • -
    • Adjust the graphics and sound settings to lower values if you experience lag or stuttering.
    • -
    • Use a stable and fast internet connection if you play online multiplayer games.
    • -
    -

    How can I contact the developer of AetherSX2 Pro APK?

    -

    You can contact the developer of AetherSX2 Pro APK by visiting their official website or social media pages. You can also send them an email or leave a comment on their blog. Here are some of their contact details:

    -
      -
    • Website: [AetherSX2]
    • -
    • Email: [aethersx2@gmail.com]
    • -
    • Facebook: [AetherSX2]
    • -
    • Twitter: [@AetherSX2]
    • -
    • YouTube: [AetherSX2]
    • -

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Angry Birds 1 APK - Relive the Epic Adventure on Your Android Device.md b/spaces/congsaPfin/Manga-OCR/logs/Angry Birds 1 APK - Relive the Epic Adventure on Your Android Device.md deleted file mode 100644 index 272ee5f0b6f891dc1ad70a414f61419c76d3790c..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Angry Birds 1 APK - Relive the Epic Adventure on Your Android Device.md +++ /dev/null @@ -1,132 +0,0 @@ -
    -

    Angry Birds 1 APK: How to Download and Play the Classic Game on Your Android Device

    -

    Introduction

    -

    Do you remember the game that started it all? The game that launched a global phenomenon and spawned countless sequels, spin-offs, movies, and merchandise? Yes, we are talking about Angry Birds, the original game that made us fall in love with slingshotting colorful birds at green pigs. If you want to relive the nostalgia and enjoy the classic gameplay, you can download Angry Birds 1 APK on your Android device. In this article, we will show you how to do that and how to play the game like a pro.

    -

    What is Angry Birds 1?

    -

    Angry Birds 1 is the first game in the Angry Birds series, developed by Rovio Entertainment and released in 2009. The game is based on a simple but addictive premise: you have to use a slingshot to launch birds at structures made of various materials, such as wood, stone, glass, and ice, where pigs are hiding. Your goal is to destroy all the pigs in each level using as few birds as possible. The game features hundreds of levels across different episodes, each with its own theme and challenges.

    -

    angry birds 1 apk


    Download Zip ····· https://urlca.com/2uOdNr



    -

    Why download Angry Birds 1 APK?

    -

    You might be wondering why you should download Angry Birds 1 APK when you can just get the game from the Google Play Store. Well, there are a few reasons why you might prefer the APK version over the official one. First of all, the APK version is free and has no in-app purchases or ads. Secondly, it has all the episodes unlocked from the start, so you don't have to wait or pay to access them. Thirdly, it is compatible with older devices and operating systems that might not support the latest updates of the official version. Finally, it lets you play offline, with no internet connection required.

    -

    How to download and install Angry Birds 1 APK

    -

    Step 1: Find a reliable source for the APK file

    -

    The first thing you need to do is to find a trustworthy website that offers the Angry Birds 1 APK file for download. You can use a search engine like Google or Bing to look for one, or you can use one of these links:

    - -

    Make sure you check the reviews and ratings of the website before downloading anything from it. Also, avoid clicking on any suspicious ads or pop-ups that might appear on the website.
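    One extra precaution: if the download page publishes a checksum for the file, you can verify it after downloading and before installing anything. The snippet below is a minimal sketch; the file name and the expected hash are placeholders, not real values for any particular release.

    ```python
    import hashlib

    APK_PATH = "angry-birds-1.apk"   # placeholder file name
    EXPECTED_SHA256 = "0123...abcd"  # placeholder: copy the value published by the site

    sha256 = hashlib.sha256()
    with open(APK_PATH, "rb") as f:
        # Read in chunks so large files do not have to fit in memory at once.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            sha256.update(chunk)

    digest = sha256.hexdigest()
    print("OK" if digest == EXPECTED_SHA256 else f"Mismatch: got {digest}")
    ```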

    -

    Step 2: Enable unknown sources on your device

    -

    The next thing you need to do is to enable unknown sources on your device. This will allow you to install apps from sources other than Google Play Store. To do this, follow these steps:

    -
      -
    1. Go to Settings > Security > Unknown sources.
    2. -
    3. Toggle on the switch or check the box next to Unknown sources.
    4. -
    5. A warning message will appear. Tap OK or Yes to confirm.
    6. -
    -

    You can disable unknown sources after installing Angry Birds 1 APK if you want.

    -

    Step 3: Download and install the APK file

    -

    The final step is to download and install the APK file on your device. To do that, follow these steps:

      -
    1. Open the website where you found the Angry Birds 1 APK file and tap on the download button or link.
    2. -
    3. Wait for the download to finish. You can check the progress in the notification bar or the download manager of your device.
    4. -
    5. Once the download is complete, tap on the APK file to open it. You might see a prompt asking you to choose an app to open the file. Choose Package Installer or Install.
    6. -
    7. A screen will appear showing the permissions required by the app. Tap on Install or Next to continue.
    8. -
    9. Wait for the installation to finish. You can see the progress on the screen.
    10. -
    11. Once the installation is done, tap on Open or Done to launch or exit the app.
    12. -
    -

    Congratulations! You have successfully downloaded and installed Angry Birds 1 APK on your Android device. You can now enjoy playing the classic game anytime, anywhere.

    -

    How to play Angry Birds 1

    -

    The basic gameplay

    -

    The basic gameplay of Angry Birds 1 is very simple and intuitive. You just have to drag your finger on the screen to aim and release to launch a bird from the slingshot. The farther you pull back, the more power and speed you will give to the bird. You can also adjust the angle of your shot by moving your finger up or down. The goal is to hit and destroy all the pigs in each level using as few birds as possible. You will earn stars based on how well you perform in each level. You can replay any level as many times as you want to improve your score and get more stars.
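    To get a feel for why pulling back farther (more launch speed) and changing the launch angle matter so much, here is a toy calculation of how far a projectile travels for a few speeds and angles. It is only an illustration using ideal, drag-free physics with made-up numbers; the game uses its own physics engine, so real trajectories will differ.

    ```python
    import math

    GRAVITY = 9.8  # arbitrary units; only the relative distances matter here

    def flight_range(speed, angle_degrees):
        """Horizontal distance travelled by an ideal projectile launched from the ground."""
        angle = math.radians(angle_degrees)
        return speed ** 2 * math.sin(2 * angle) / GRAVITY

    for speed in (10, 15, 20):      # "how far you pull back"
        for angle in (30, 45, 60):  # "how high you aim"
            print(f"speed {speed:>2}, angle {angle:>2} deg -> range {flight_range(speed, angle):5.1f}")
    ```

    In this idealized model a 45-degree shot gives the longest range on flat ground at any speed, which is a reasonable starting point whenever you need distance.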

    -

    The different types of birds

    -

    One of the fun aspects of Angry Birds 1 is that you can use different types of birds with different abilities and characteristics. Here are some of them:

    -

    angry birds classic apk download
    -angry birds game free download for android
    -angry birds original apk mod
    -angry birds 1.0 apk
    -angry birds apk offline
    -angry birds apk old version
    -angry birds apk full version
    -angry birds apk unlimited money
    -angry birds apk for pc
    -angry birds apk android 2.3
    -angry birds apk latest version
    -angry birds apk hack
    -angry birds apk no ads
    -angry birds apk revdl
    -angry birds apk pure
    -angry birds apk mirror
    -angry birds apk uptodown
    -angry birds apk rexdl
    -angry birds apk mob.org
    -angry birds apk apkpure
    -angry birds apk mod unlimited everything
    -angry birds apk mod all unlocked
    -angry birds apk mod unlimited powerups
    -angry birds apk mod all levels unlocked
    -angry birds apk mod unlimited coins and gems
    -angry birds 1.6.3 apk download
    -angry birds 1.5.2 apk download
    -angry birds 1.4.4 apk download
    -angry birds 1.3.5 apk download
    -angry birds 1.2.2 apk download
    -angry birds 1.1.0 apk download
    -how to install angry birds 1 apk
    -how to play angry birds 1 apk
    -how to update angry birds 1 apk
    -how to download angry birds 1 apk for free
    -how to get unlimited powerups in angry birds 1 apk
    -how to unlock all levels in angry birds 1 apk
    -how to remove ads in angry birds 1 apk
    -how to hack angry birds 1 apk with lucky patcher
    -how to backup and restore angry birds 1 apk data

    -
      -
    • Red: The most common and basic bird. It does not have any special ability, but it is reliable and versatile.
    • Blue: A small bird that can split into three smaller birds when you tap on the screen. It is good for breaking glass and hitting multiple targets.
    • Yellow: A fast bird that can speed up when you tap on the screen. It is good for breaking wood and hitting hard-to-reach places.
    • Black: A heavy bird that can explode when you tap on the screen or after a few seconds of impact. It is good for breaking stone and causing massive damage.
    • White: A light bird that can drop an egg bomb when you tap on the screen. It is good for hitting targets below or behind obstacles.
    • Green: A boomerang bird that can change direction when you tap on the screen. It is good for hitting targets that are out of sight or behind walls.
    • Big Red: A giant version of the red bird that has more power and weight. It is good for breaking anything in its way.
    -

    The different types of pigs

    -

    The pigs are your enemies in Angry Birds 1. They come in different sizes, shapes, and colors, and they have different levels of durability and intelligence. Here are some of them:

    -
      -
    • Small Pig: The smallest and weakest pig. It can be easily destroyed by any bird or debris.
    • Medium Pig: A slightly bigger and stronger pig. It can withstand some hits, but not too many.
    • Large Pig: A big and tough pig. It can take a lot of hits before being destroyed.
    • Helmet Pig: A medium pig with a helmet that protects its head. It can resist more damage than a normal medium pig.
    • Moustache Pig: A large pig with a moustache that makes it look more menacing. It has the same durability as a normal large pig.
    • King Pig: The leader and boss of all the pigs. He is usually hidden behind layers of protection and requires a lot of hits to be destroyed.
    -

    The different types of levels

    -

    The levels in Angry Birds 1 are divided into episodes, each with its own theme and setting. Some of the episodes are:

    -
      -
    • Poached Eggs: The first episode, where you are introduced to the basic gameplay and characters.
    • Mighty Hoax: The second episode, where you face fake cardboard pigs and a mysterious big pig.
    • Danger Above: The third episode, where you fly above the clouds and encounter new types of birds and pigs.
    • The Big Setup: The fourth episode, where you face the construction workers who built the pig structures.
    • Ham 'Em High: The fifth episode, where you travel to the Wild West and face cowboy pigs and TNT barrels.
    • Mine and Dine: The sixth episode, where you explore the underground mines and face miner pigs and stalactites.
    • Birdday Party: The seventh episode, where you celebrate the birthday of the birds and face cake-themed levels.
    • Bad Piggies: The eighth episode, where you play from the perspective of the pigs and try to steal the eggs from the birds.
    -

    Conclusion

    -

    Summary of the main points

    -

    In conclusion, Angry Birds 1 is a classic game that you can download and play on your Android device using the APK file. You can enjoy the original gameplay, characters, and levels that made this game a global hit. You can also benefit from the advantages of the APK version, such as being free, unlocked, compatible, and offline. All you need to do is to find a reliable source for the APK file, enable unknown sources on your device, and download and install the APK file. Then, you can launch the game and start slinging birds at pigs.

    -

    Call to action

    -

    If you are ready to experience the fun and excitement of Angry Birds 1, don't wait any longer. Download Angry Birds 1 APK today and join millions of fans around the world. You won't regret it!

    -

    FAQs

    -

    Here are some frequently asked questions about Angry Birds 1 APK:

    -
      -
    • Q: Is Angry Birds 1 APK safe to download and install?
    • A: Yes, as long as you download it from a reputable website that does not contain any malware or viruses. You should also scan the APK file with an antivirus app before installing it.
    • Q: Is Angry Birds 1 APK legal to use?
    • A: Yes, as long as you do not distribute or sell it without permission from Rovio Entertainment. You should also respect their intellectual property rights and trademarks.
    • Q: Is Angry Birds 1 APK compatible with my device?
    • A: Yes, as long as your device meets the minimum requirements for running the game: an Android device running version 4.1 or higher, 100 MB of free storage space, and 512 MB of RAM.
    • Q: How can I update Angry Birds 1 APK?
    • A: You can update Angry Birds 1 APK by downloading and installing the latest version from the same website where you got the previous one. You should also check for updates regularly to enjoy new features and bug fixes.
    • Q: How can I contact Rovio Entertainment for support or feedback?
    • A: You can contact Rovio Entertainment by visiting their official website at https://www.rovio.com/, or by following them on social media platforms such as Facebook, Twitter, Instagram, YouTube, and LinkedIn.

    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download and Install Call of Duty Warzone Mobile APK - The Most Popular FPS Game on Android.md b/spaces/congsaPfin/Manga-OCR/logs/Download and Install Call of Duty Warzone Mobile APK - The Most Popular FPS Game on Android.md deleted file mode 100644 index 8572d8752ca4150ce042dc3585463a62ee22658f..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download and Install Call of Duty Warzone Mobile APK - The Most Popular FPS Game on Android.md +++ /dev/null @@ -1,100 +0,0 @@ - -

    Call of Duty Warzone Mobile: How to Download and Play the Next-Gen Battle Royale on Your Phone

    -

    If you are a fan of Call of Duty and battle royale games, you might be wondering how to download and play Call of Duty Warzone Mobile, the latest addition to the COD franchise. Call of Duty Warzone Mobile is a mobile adaptation of the wildly popular PC and console game, Call of Duty Warzone, which has over 100 million players worldwide. In this article, we will tell you everything you need to know about Call of Duty Warzone Mobile, including what it is, how to download it, how to play it, and some tips and tricks to help you win.

    -

    What is Call of Duty Warzone Mobile?

    -

    Call of Duty Warzone Mobile is a mobile battle royale game that features authentic COD gameplay, shared progression, and up to 120 player count matches on mobile devices. The game is powered by unified Call of Duty technology, which means that your Battle Pass and friends list sync across Call of Duty Modern Warfare II and Call of Duty Warzone. You can also enjoy social features like chat channels and in-game events.

    -

    call of duty warzone mobile download apkpure


    Download File > https://urlca.com/2uOf6r



    -

    A mobile adaptation of the popular PC and console game

    -

    Call of Duty Warzone Mobile is based on Call of Duty Warzone, which is a free-to-play battle royale game that was released in March 2020. The game takes place in Verdansk, a fictional city inspired by Donetsk in Ukraine. The game mode involves up to 150 players dropping into the map and fighting for survival until only one team or solo player remains. The game also features a unique mechanic called the Gulag, where eliminated players can fight for a chance to respawn.

    -

    Features authentic COD gameplay, shared progression, and up to 120 player count matches

    -

    Call of Duty Warzone Mobile delivers authentic COD gameplay on mobile devices, with first-class graphics and intuitive controls. Everything from movement, aiming, weapon handling, physics, animations, and sound have been optimized for mobile gamers. The game also features up to 120 player count matches, which means more competition and more action. You can also enjoy shared progression with Call of Duty Modern Warfare II and Call of Duty Warzone, which means that your Battle Pass progress, weapon unlocks, skins, operators, and more are synced across platforms.

    -

    Pre-register for a chance to unlock rewards at launch

    -

    Call of Duty Warzone Mobile is expected to launch worldwide in Fall 2023 on Android and iOS devices. However, you can pre-register for the game now and earn rewards if global milestones are hit. These rewards include exclusive vinyls, emblems, weapons, operators, and even a new map called Shoot House. You can pre-register for Call of Duty Warzone Mobile through the App Store or Google Play Store or via the Call of Duty Warzone Mobile webpage.

    -

    Killstreaks are special abilities that you can use to gain an edge in combat. You can find killstreaks from loot boxes, buy stations, or loadout drops. There are different types of killstreaks such as UAV, cluster strike, precision airstrike, shield turret, sentry gun, and more. You can activate killstreaks by tapping on the killstreak icon on the right side of the screen.

    -

    Vehicles are modes of transportation that you can use to traverse Verdansk faster and safer. You can find vehicles scattered around the map or call them in from buy stations. There are different types of vehicles such as ATV, SUV, cargo truck, helicopter, and more. You can drive or ride vehicles by tapping on the vehicle icon on the left side of the screen.

    -

    Weapons are your primary means of offense and defense in Call of Duty Warzone Mobile. You can find weapons from loot boxes, enemies, or loadout drops. There are different types of weapons such as assault rifles, submachine guns, shotguns, sniper rifles, pistols, and more. You can equip two primary weapons and one secondary weapon at a time. You can also customize your weapons with attachments, camos, charms, stickers, and more.

    -

    Win a duel in the Gulag to get a second chance

    -

    One of the most unique features of Call of Duty Warzone Mobile is the Gulag. The Gulag is a prison where eliminated players can fight for a chance to respawn. When you die for the first time in a match, you will be taken to the Gulag and wait for your turn to face another player in a 1v1 duel. The winner of the duel will be redeployed back into Verdansk. The loser will be eliminated for good unless their teammates buy them back from a buy station.

    -

    The Gulag is a small map with different layouts and weapons each time. You will have a few seconds to prepare before the duel starts. You will have a pistol or a shotgun as your weapon and a lethal or tactical equipment as your gadget. You will also have a health bar that regenerates over time. The objective is to kill your opponent or capture the flag in the middle of the map before the time runs out.

    -

    -

    Tips and Tricks for Call of Duty Warzone Mobile

    -

    Now that you know how to play Call of Duty Warzone Mobile, you might be looking for some tips and tricks to improve your skills and win more matches. Here are some of the best tips and tricks for Call of Duty Warzone Mobile:

    -

    Use headphones and pings to communicate with your squad

    -

    Communication is key in Call of Duty Warzone Mobile, especially if you are playing with a squad. You can use headphones and voice chat to communicate with your teammates and coordinate your strategies. You can also use pings to mark enemies, locations, items, or dangers on the map. You can ping by tapping on the ping icon on the left side of the screen and selecting the option you want.

    -

    Mount your weapon and aim for the head

    -

    Shooting is one of the most important skills in Call of Duty Warzone Mobile. You need to be accurate and fast to take down your enemies before they take you down. One way to improve your shooting is to mount your weapon on walls, windows, or cover. This will reduce your recoil and increase your stability. You can mount your weapon by tapping on the mount icon on the right side of the screen when you are near a suitable surface.

    -

    Another way to improve your shooting is to aim for the head. Headshots deal more damage than body shots and can often result in instant kills. You can aim for the head by using the aim assist feature or by adjusting your crosshair manually. You can also use attachments like scopes or lasers to enhance your aiming.

    -

    Always pick up bounty and scavenger contracts

    -

    Contracts are optional missions that you can find and activate throughout Verdansk. They offer rewards such as cash, loot, intel, or loadouts. There are different types of contracts such as bounty, scavenger, recon, most wanted, and supply run. However, the best contracts to pick up are bounty and scavenger contracts.

    -

    Bounty contracts are contracts that assign you a target to hunt down and kill within a time limit. You can find bounty contracts from yellow loot boxes or buy stations. When you activate a bounty contract, you will see a yellow circle on the map that indicates the general location of your target. You will also see a bar that indicates how close or far they are from you. If you kill your target or someone else does, you will earn a cash reward. If the time runs out or your target escapes, you will earn a smaller reward.

    -

    Scavenger contracts are contracts that require you to find and open three loot boxes within a time limit. You can find scavenger contracts from blue loot boxes or buy stations. When you activate a scavenger contract, you will see a yellow magnifying glass on the map that indicates the location of the first loot box. When you open it, you will see the location of the next one, and so on. If you open all three loot boxes, you will earn a cash reward and a rare loot item such as armor satchel, gas mask, or self-revive kit.

    -

    Go for loadouts and customize your weapons

    -

    Loadouts are custom sets of weapons and equipment that you can create and use in Call of Duty Warzone Mobile. You can create up to 10 loadouts in the loadout menu on the main screen. You can choose your primary weapon, secondary weapon, lethal equipment, tactical equipment, perks, and operator skin for each loadout. You can also customize your weapons with attachments, camos, charms, stickers, and more.

    -

    You can access your loadouts in two ways in Call of Duty Warzone Mobile. One way is to buy a loadout drop from a buy station for $10,000. A loadout drop is a red smoke marker that drops a crate containing your loadouts. You can use it to change your weapons and equipment in the middle of the match. However, be careful as other players can also see and use your loadout drop.

    -

    Another way is to wait for a free loadout drop that occurs twice per match. A free loadout drop is a green smoke marker that drops a crate containing your loadouts near your location. You can use it to change your weapons and equipment without spending any cash. However, be quick as other players can also see and use your free loadout drop.

    -

    Keep track of the redeployment flares and the gas circle

    -

    Two of the most important things to keep track of in Call of Duty Warzone Mobile are the redeployment flares and the gas circle. Redeployment flares are red flares that indicate when a player has been redeployed back into Verdansk. This can happen when they win a duel in the Gulag or when their teammates buy them back from a buy station. You can use redeployment flares to locate and ambush enemies who have just returned to the game.

    -

    The gas circle is the green circle that indicates the safe zone on the map. The gas circle shrinks over time and forces players into a smaller area. Anyone who is outside the gas circle will take damage over time and eventually die. You can use the gas circle to plan your movements and avoid getting caught in the gas.

    -

    Conclusion

    -

    Call of Duty Warzone Mobile is an exciting mobile battle royale game that offers authentic COD gameplay, shared progression, and up to 120 player count matches on mobile devices. The game is expected to launch worldwide in Fall 2023 on Android and iOS devices, but you can pre-register for it now and earn rewards if global milestones are hit. If you want to download and play Call of Duty Warzone Mobile on your phone, you need to check the system requirements for your device, sign up for Call of Duty Warzone Mobile through the App Store or Google Play Store, and wait for the game to be available in your region. If you want to win more matches in Call of Duty Warzone Mobile, you need to choose the best controls and settings for your device, drop into Verdansk and fight for survival, use contracts, killstreaks, vehicles, and weapons to gain an advantage, and win a duel in the Gulag to get a second chance. You also need to use headphones and pings to communicate with your squad, mount your weapon and aim for the head, always pick up bounty and scavenger contracts, go for loadouts and customize your weapons, and keep track of the redeployment flares and the gas circle. We hope this article has helped you learn more about Call of Duty Warzone Mobile and how to download and play it on your phone. Happy gaming!

    -

    FAQs

    -

    Here are some of the frequently asked questions about Call of Duty Warzone Mobile:

    -

    Is Call of Duty Warzone Mobile free to play?

    -

    Yes, Call of Duty Warzone Mobile is free to play. You do not need to pay anything to download or play the game. However, you can purchase in-game items such as Battle Pass, COD Points, bundles, and crates with real money if you want to enhance your gaming experience.

    -

    Is Call of Duty Warzone Mobile cross-platform?

    -

    Yes, Call of Duty Warzone Mobile is cross-platform. You can play with or against players who are using Android or iOS devices. However, you cannot play with or against players who are using PC or console devices.

    -

    How can I link my Call of Duty account to Call of Duty Warzone Mobile?

    -

    You can link your Call of Duty account to Call of Duty Warzone Mobile by tapping on the settings icon on the main menu and selecting the account tab. You will see an option to link your Call of Duty account or create a new one. By linking your Call of Duty account, you can enjoy shared progression, social features, and rewards across Call of Duty Modern Warfare II and Call of Duty Warzone.

    -

    How can I report a bug or a cheater in Call of Duty Warzone Mobile?

    -

    You can report a bug or a cheater in Call of Duty Warzone Mobile by tapping on the settings icon on the main menu and selecting the feedback tab. You will see an option to report a bug or a player. You will need to provide details such as your username, device model, game mode, map, time, description, and screenshot or video evidence if possible. Your report will be sent to the developers for review and action.

    -

    How can I get more information about Call of Duty Warzone Mobile?

    -

    You can get more information about Call of Duty Warzone Mobile by visiting the Call of Duty Warzone Mobile webpage or following the official social media channels such as Facebook, Twitter, Instagram, YouTube, and Discord. You can also join the community forums and chat with other players and developers.

    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Experience Westeros in CK2 with the Game of Thrones Mod Heres How to Download It.md b/spaces/congsaPfin/Manga-OCR/logs/Experience Westeros in CK2 with the Game of Thrones Mod Heres How to Download It.md deleted file mode 100644 index fa03d6ba50ca38f7aee11a709aba1856e18fe74f..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Experience Westeros in CK2 with the Game of Thrones Mod Heres How to Download It.md +++ /dev/null @@ -1,124 +0,0 @@ - -

    How to Download the Game of Thrones Mod for CK2

    -

    If you are a fan of both Crusader Kings II and A Song of Ice and Fire, you might have heard of a mod that combines them into one immersive experience. The Game of Thrones mod for CK2 is a full-conversion mod that transforms the medieval strategy game into the world of George R. R. Martin's fantasy saga. You can play as any of the major or minor characters from the books, and experience the events of the story or create your own alternative scenarios.

    -

    In this article, we will show you how to download and install this amazing mod, and give you some tips and tricks on how to play it. Whether you want to conquer Westeros as Aegon the Conqueror, defend it as Robert Baratheon, or break it as Daenerys Targaryen, this mod will let you live your fantasy.

    -

    how to download the game of thrones mod for ck2


    Download File ::: https://urlca.com/2uO9bC



    -

    What is the Game of Thrones Mod for CK2?

    -

    The Game of Thrones mod for CK2 is a total conversion mod that changes every aspect of the game to match the setting and lore of A Song of Ice and Fire. The mod was first released in 2012 by a team led by Cabezaestufa, and has since been updated regularly with new features and content.

    -

    Some of the main features of the mod are:

    -
      -
    • A new map that covers Westeros, Essos, and parts of Sothoryos and Ulthos.
    • Thousands of new characters from the books, each with their own traits, skills, relationships, claims, and ambitions.
    • New events that follow or diverge from the plot of the books, such as wars, rebellions, weddings, assassinations, prophecies, dreams, visions, duels, trials, tournaments, feasts, plagues, invasions, etc.
    • New mechanics that reflect the culture and politics of the world, such as feudal contracts, crown authority, vassal management, council power, succession laws, religions, cultures, bloodlines, dynasties, cadet branches, knightly orders, mercenary companies, holy orders, pirates, slavers, nomads, etc.
    • New graphics that enhance the visual appeal of the game, such as portraits, flags, coats of arms, icons, interface elements, etc.
    -

    How to Install the Game of Thrones Mod for CK2?

    -

    There are two ways to install the Game of Thrones mod for CK2: manually or through Steam Workshop. Both methods require that you have the latest version of Crusader Kings II and all the necessary DLCs. The mod is compatible with CK2 version 3.3.3 and requires the following DLCs: Sword of Islam, Legacy of Rome, The Republic, The Old Gods, Sons of Abraham, Charlemagne, Way of Life, Horse Lords, Conclave, The Reaper's Due, Monks and Mystics, Jade Dragon, Holy Fury, and Iron Century. Here are the steps for each method:

    Manual Installation

    -
      -
    1. Download the latest version of the mod from the official forum or the moddb page. You will need to register an account and link it to your Steam profile to access the forum.
    2. Extract the downloaded zip file to your CK2 mod folder. The default location is C:\Users\YourName\Documents\Paradox Interactive\Crusader Kings II\mod.
    3. Make sure that you have two entries in your mod folder: the A Game of Thrones.mod file and the A Game of Thrones folder (a quick way to double-check this is sketched just after these steps).
    4. Launch CK2 and select A Game of Thrones from the mod list in the launcher.
    5. Enjoy the game!
    -
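    If you want to double-check step 3 without opening the folder by hand, the short sketch below looks for the two expected entries in the default mod directory mentioned in step 2. It is only an illustration: the path assumes the standard Windows Documents location, so adjust MOD_DIR if your setup differs.

```python
from pathlib import Path

# Default Windows location from step 2 above; adjust if your Documents folder lives elsewhere.
MOD_DIR = Path.home() / "Documents" / "Paradox Interactive" / "Crusader Kings II" / "mod"

def check_agot_install(mod_dir: Path = MOD_DIR) -> bool:
    """Return True if the descriptor file and the mod folder from step 3 are both present."""
    descriptor = mod_dir / "A Game of Thrones.mod"
    content = mod_dir / "A Game of Thrones"
    print(f"{descriptor}: {'found' if descriptor.is_file() else 'missing'}")
    print(f"{content}: {'found' if content.is_dir() else 'missing'}")
    return descriptor.is_file() and content.is_dir()

if __name__ == "__main__":
    check_agot_install()
```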

    Steam Workshop Installation

    -
      -
    1. Subscribe to the mod on Steam Workshop. You can find it by searching for A Game of Thrones in the workshop browser.
    2. Wait for Steam to download and install the mod automatically.
    3. Launch CK2 and select A Game of Thrones from the mod list in the launcher.
    4. Enjoy the game!
    -

    How to Play the Game of Thrones Mod for CK2?

    -

    Once you have installed the mod, you are ready to enter the world of A Song of Ice and Fire. You can choose from a variety of characters, scenarios, and difficulty levels to suit your preferences. Here are some tips and tricks on how to play the mod:

    -

    Choosing a Character

    -

    The mod offers a wide range of characters to play as, from kings and queens to lords and ladies, from knights and nobles to bastards and peasants. You can filter them by rank, culture, religion, dynasty, or bookmark in the character selection screen. You can also use the search function to find a specific character by name or title.

    -

    Some characters have special events or challenges associated with them, such as Daenerys Targaryen's quest to reclaim the Iron Throne, Jon Snow's dilemma at the Wall, or Robb Stark's war for independence. These characters are marked with a star icon in the character selection screen. You can also create your own custom character using the ruler designer DLC or the console commands.

    -

    Choosing a Scenario

    -

    The mod features several scenarios or bookmarks that correspond to different periods in the history of Westeros and Essos. Each scenario has its own starting date, map, characters, events, and challenges. You can choose from the following scenarios:

    -
      -
    • The Bleeding Years: The year 7999 since the Landing of Aegon I Targaryen. The Seven Kingdoms are divided and constantly at war with each other. The Iron Throne does not exist yet.
    • Aegon's Conquest: The year 1 since Aegon's Landing. Aegon I Targaryen has invaded Westeros with his dragons and his sisters. He aims to unify the Seven Kingdoms under his rule.
    • The Conquest of Dorne: The year 157 since Aegon's Landing. Daeron I Targaryen has launched a campaign to conquer Dorne, the only kingdom that resisted Aegon's Conquest. He faces fierce resistance from the Dornish people.
    • The Dance of Dragons: The year 129 since Aegon's Landing. A civil war has erupted between two rival branches of House Targaryen over the succession to the Iron Throne. The war is marked by dragon battles and bloodshed.
    • The Blackfyre Rebellion: The year 196 since Aegon's Landing. Daemon Blackfyre, a bastard son of King Aegon IV Targaryen, has risen in rebellion against his half-brother King Daeron II Targaryen. He claims to be the true heir to the Iron Throne.
    • The War of Conquest: The year 298 since Aegon's Landing. Robert Baratheon has rebelled against King Aerys II Targaryen, also known as the Mad King. He is supported by Jon Arryn, Eddard Stark, and Hoster Tully. He faces opposition from Tywin Lannister, Mace Tyrell, and Doran Martell.
    • The Crowned Stag: The year 1 since Robert's Rebellion. Robert Baratheon has defeated the Targaryens and claimed the Iron Throne. He is married to Cersei Lannister, and has appointed Jon Arryn as his Hand. He faces challenges from the surviving Targaryens, the Greyjoys, and the Others.
    • The Greyjoy Rebellion: The year 9 since Robert's Rebellion. Balon Greyjoy, the Lord of the Iron Islands, has declared himself King of the Iron Islands and launched a rebellion against Robert Baratheon. He is opposed by Robert's allies, such as Eddard Stark, Stannis Baratheon, and Tywin Lannister.
    • The Clash of Kings: The year 2 since Eddard Stark's death. After the death of King Robert Baratheon and his Hand Eddard Stark, Westeros is plunged into a civil war. Five kings claim the Iron Throne: Joffrey Baratheon, Renly Baratheon, Stannis Baratheon, Robb Stark, and Balon Greyjoy. Meanwhile, Daenerys Targaryen is gathering her forces in Essos, and Jon Snow is facing the threat of the wildlings beyond the Wall.
    • A Feast for Crows: The year 4 since Eddard Stark's death. The War of the Five Kings has ended with the deaths of Robb Stark, Balon Greyjoy, Renly Baratheon, and Joffrey Baratheon. Stannis Baratheon has gone to the Wall to fight the wildlings and the Others. Tommen Baratheon sits on the Iron Throne, but he is controlled by his mother Cersei Lannister and his uncle Tyrion Lannister. Daenerys Targaryen rules Meereen, but she faces enemies from within and without. Arya Stark is training to become a Faceless Man in Braavos. Bran Stark is learning to become a greenseer beyond the Wall.
    • A Dance with Dragons: The year 5 since Eddard Stark's death. The War of the Five Kings has reignited with the arrival of Aegon Targaryen, a young man who claims to be the son of Rhaegar Targaryen and Elia Martell. He is supported by Jon Connington, a former Hand of King Aerys II Targaryen, and the Golden Company, a mercenary army. He invades Westeros with the intention of taking the Iron Throne from Tommen Baratheon. Meanwhile, Daenerys Targaryen faces a new threat from the Dothraki, who have gathered under a new khal named Khal Jhaqo. Jon Snow is stabbed by his own men at the Wall for letting the wildlings through. Tyrion Lannister joins forces with Jorah Mormont and a dwarf named Penny to find Daenerys. Cersei Lannister is imprisoned by the Faith Militant for her crimes. Jaime Lannister is missing in the Riverlands. Sansa Stark is hiding in the Vale under the guise of Alayne Stone, the bastard daughter of Petyr Baelish.
    • The Winds of Winter: The year 6 since Eddard Stark's death. This scenario is based on the unreleased sixth book of A Song of Ice and Fire by George R. R. Martin. It is not canon and may differ from the actual book when it comes out.
    -

    Choosing a Difficulty Level

    -

    The mod allows you to choose from four difficulty levels: Easy, Normal, Hard, and Very Hard. The difficulty level affects how challenging the game will be for you and your opponents. It affects factors such as AI aggressiveness, event frequency, revolt risk, disease spread, attrition rate, etc.

    -

    How to install CK2:AGOT mod
    -CK2 Game of Thrones mod download link
    -Crusader Kings 2 A Game of Thrones mod tutorial
    -How to play CK2 with A Game of Thrones mod
    -CK2 AGOT mod latest version
    -How to update CK2 Game of Thrones mod
    -Crusader Kings 2 A Song of Ice and Fire mod guide
    -How to enable CK2 AGOT mod
    -CK2 Game of Thrones mod steam workshop
    -Crusader Kings 2 A Game of Thrones mod review
    -How to uninstall CK2 AGOT mod
    -CK2 Game of Thrones mod compatibility
    -Crusader Kings 2 A Game of Thrones mod wiki
    -How to fix CK2 AGOT mod crashes
    -CK2 Game of Thrones mod best start date
    -Crusader Kings 2 A Game of Thrones mod cheats
    -How to create a custom character in CK2 AGOT mod
    -CK2 Game of Thrones mod submods
    -Crusader Kings 2 A Game of Thrones mod tips and tricks
    -How to join the CK2 AGOT mod discord
    -CK2 Game of Thrones mod changelog
    -Crusader Kings 2 A Game of Thrones mod features
    -How to duel in CK2 AGOT mod
    -CK2 Game of Thrones mod scenarios
    -Crusader Kings 2 A Game of Thrones mod factions
    -How to hatch a dragon in CK2 AGOT mod
    -CK2 Game of Thrones mod requirements
    -Crusader Kings 2 A Game of Thrones mod events
    -How to marry Daenerys in CK2 AGOT mod
    -CK2 Game of Thrones mod console commands
    -Crusader Kings 2 A Game of Thrones mod map
    -How to become a white walker in CK2 AGOT mod
    -CK2 Game of Thrones mod skins
    -Crusader Kings 2 A Game of Thrones mod characters
    -How to win the iron throne in CK2 AGOT mod
    -CK2 Game of Thrones mod bugs and fixes
    -Crusader Kings 2 A Game of Thrones mod religions
    -How to play as a night's watch in CK2 AGOT mod
    -CK2 Game of Thrones mod graphics settings
    -Crusader Kings 2 A Game of Thrones mod cultures
    -How to colonize Valyria in CK2 AGOT mod
    -CK2 Game of Thrones mod gameplay videos
    -Crusader Kings 2 A Game of Thrones mod development diary
    -How to play as a wildling in CK2 AGOT mod
    -CK2 Game of Thrones mod performance optimization
    -Crusader Kings 2 A Game of Thrones mod forum and community

    -

    You can change the difficulty level at any time during the game by going to the Game Options menu.

    -

    Conclusion

    -

    The Game of Thrones mod for CK2 is one of the best mods ever made for any game. It lets you immerse yourself in a rich and detailed world that is faithful to the books and full of possibilities. You can create your own stories and adventures, or relive your favorite moments from the books or show.

    -

    If you are looking for a new way to enjoy Crusader Kings II or A Song of Ice and Fire, you should definitely try out this mod. You will not regret it.

    -

    FAQs

    -

    Here are some frequently asked questions about the Game of Thrones mod for CK2:

    -

    Is the mod compatible with other mods?

    -

    No, the mod is not compatible with other mods that change the map, characters, events, mechanics, or graphics of CK2. It is designed to be played as a standalone mod. However, you can use some submods that are made specifically for the Game of Thrones mod. You can find them on the official forum or the Steam Workshop.

    -

    How often is the mod updated?

    -

    The mod is updated regularly by the developers, usually every few months. The updates include new features, content, bug fixes, and compatibility patches. You can check the official forum or the moddb page for the latest news and updates on the mod.

    -

    Does the mod contain spoilers for the books or show?

    -

    Yes, the mod contains spoilers for both the books and the show. The mod follows the canon of the books, not the show, so some events and characters may differ from what you have seen on TV. The mod also includes some events and characters that have not yet appeared in the books, but are based on hints or leaks from George R. R. Martin or his editors. If you want to avoid spoilers, you should read all the books before playing the mod.

    -

    How do I report bugs or give feedback on the mod?

    -

    If you encounter any bugs or issues while playing the mod, you can report them on the official forum or the Steam Workshop page. The developers are very active and responsive, and they appreciate any feedback or suggestions from the players. You can also join their Discord server to chat with them and other fans of the mod.

    -

    Are there any submods that enhance the mod?

    -

    Yes, there are many submods that add new features, content, or options to the mod. Some of the most popular submods are:

    -
      -
    • More Bloodlines: Adds hundreds of new bloodlines to the game, based on historical or legendary figures from Westeros and Essos.
    • Sinful Mods: Adds various options to make the game more realistic, immersive, or challenging, such as slavery, torture, cannibalism, incest, etc.
    • Flamequeen's Ultimate Building Submod: Adds new buildings and upgrades to the game, such as castles, temples, towns, mines, etc.
    • Colonize Valyria: Allows you to colonize and restore the ancient empire of Valyria.
    • AGOT More Decisions: Adds more decisions and actions to the game, such as legitimizing bastards, changing laws, declaring wars, etc.

    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/How to Use Social Dummy iOS for Fun and Entertainment.md b/spaces/congsaPfin/Manga-OCR/logs/How to Use Social Dummy iOS for Fun and Entertainment.md deleted file mode 100644 index 6b5ae21d7e9d06dcc554615838dc4956ee5ef0f7..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/How to Use Social Dummy iOS for Fun and Entertainment.md +++ /dev/null @@ -1,165 +0,0 @@ -
    -

    How to Download Social Dummy iOS: A Guide for Creating Fake Social Media Posts

    -

    Have you ever wanted to create fake social media posts for fun or testing purposes? If so, you might be interested in an app called Social Dummy iOS. This app allows you to recreate the timelines of popular social media apps, such as Facebook, Twitter, Instagram, WhatsApp, and more, with fake but faithful posts. You can customize the posts with different options and styles, and share or save them as screenshots or videos. In this article, we will show you how to download and use Social Dummy iOS, as well as some tips and tricks for making the most out of it.

    -

    What is Social Dummy iOS?

    -

    A brief introduction to the app and its features

    -

    Social Dummy iOS is a simple and easy-to-use entertainment tool that lets you create fake social media posts. You can choose from a list of social media platforms, such as Facebook, Twitter, Instagram, WhatsApp, Snapchat, TikTok, YouTube, and more, and create realistic posts with your own content or predefined templates. You can also edit the profiles, followers, likes, comments, messages, stories, reels, live streams, and other aspects of each platform. The app offers you a unique way of stylising your posts in different formats with many customisation options available.

    -

    download social dummy ios


    Download Zip ===== https://urlca.com/2uO9u7



    -

    Why you might want to use it for entertainment or testing purposes

    -

    There are many reasons why you might want to use Social Dummy iOS for entertainment or testing purposes. For example, you can:

    -
      -
    • Prank your friends or family by showing them fake posts from celebrities, influencers, or yourself.
    • Test your social media marketing strategies or designs by creating mockups of your posts.
    • Express your creativity or humor by making funny or parody posts.
    • Learn how different social media platforms work by exploring their features and functions.
    • Have fun and enjoy yourself by creating any kind of posts you want.
    -

    How to Download and Install Social Dummy iOS

    -

    The steps to find and download the app from the App Store

    -

    To download and install Social Dummy iOS on your iPhone, iPad, or iPod touch, you need to follow these steps:

    -
      -
    1. Open the App Store on your device and search for "Social Dummy" or "Social Dummy Notes".
    2. Select the app that has a blue icon with a white dummy head and a pencil.
    3. Tap on "Get" or "Install" and wait for the app to download.
    4. Once the app is downloaded, tap on "Open" or the app icon to launch the app.
    -

    The requirements and compatibility of the app

    -

    Before you download and install Social Dummy iOS, you need to make sure that your device meets the following requirements and compatibility:

    -
      -
    • You need to have iOS 13.0 or later installed on your device.
    • You need to have at least 125 MB of free space on your device.
    • You need to have an internet connection to use the app.
    • The app is compatible with iPhone, iPad, and iPod touch.
    -

    How to create an account and log in

    -

    After you download and install Social Dummy iOS, you need to create an account and log in to use the app. You can do this by following these steps:

    -
      -
    1. When you open the app for the first time, you will see a welcome screen with a "Create Account" button. Tap on it to proceed.
    2. Enter your email address, password, and username in the fields provided. You can also choose to sign up with your Apple ID or Google account.
    3. Tap on "Create Account" and wait for a confirmation email to be sent to your email address.
    4. Open the email and tap on the link to verify your account.
    5. Go back to the app and tap on "Log In". Enter your email address and password, or choose to log in with your Apple ID or Google account.
    6. Tap on "Log In" and you will be taken to the main screen of the app.
    -

    How to Use Social Dummy iOS

    -

    How to choose from different social media platforms and create fake posts

    -

    To use Social Dummy iOS, you need to choose from different social media platforms and create fake posts. You can do this by following these steps:

    -
      -
    1. On the main screen of the app, you will see a list of social media platforms that you can choose from. Tap on the one that you want to use.
    2. You will be taken to a screen that shows a mockup of the timeline of that platform. You can scroll up and down to see the existing posts, or tap on the "+" button at the bottom right corner to create a new post.
    3. You will be taken to a screen that shows a template of the post that you can edit. You can change the profile picture, name, username, date, time, content, media, location, hashtags, mentions, reactions, comments, and other details of the post. You can also use predefined templates or randomize the post by tapping on the buttons at the top right corner.
    4. When you are done editing the post, tap on "Done" at the top left corner. You will see a preview of the post on the timeline. You can edit or delete it by tapping on it again.
    -

    How to customize the posts with various options and styles

    -

    To customize the posts with various options and styles, you need to use the settings menu of Social Dummy iOS. You can do this by following these steps:

    -


    -
      -
    1. On the main screen of the app, tap on the gear icon at the top left corner to open the settings menu.
    2. -
    3. You will see a list of options that you can change, such as theme, language, font size, date format, time format, currency symbol, etc. Tap on the one that you want to change and select your preference.
    4. -
    5. You can also tap on "Style" to change the appearance of each social media platform. You can choose from different colors, layouts, icons, logos, etc. Tap on "Apply" to save your changes.
    6. -
    7. You can also tap on "Advanced" to access more options, such as enabling or disabling ads, notifications, stories, reels, live streams, etc. Tap on "Apply" to save your changes.
    8. -
    -

    How to share or save the posts as screenshots or videos

    -

    To share or save the posts as screenshots or videos, you need to use the share menu of Social Dummy iOS. You can do this by following these steps:

    -
      -
    1. On the screen that shows a mockup of the timeline of a social media platform, tap on the share icon at the top right corner to open the share menu.
    2. You will see a list of options that you can choose from, such as screenshot, video recording, copy link, copy text, etc. Tap on the one that you want to use.
    3. If you choose screenshot or video recording, you will see a preview of the image or video that you can edit or crop. Tap on "Done" to save or share the image or video.
    4. If you choose copy link or copy text, you will see a message that confirms that the link or text has been copied to your clipboard. You can paste it to any app or platform that you want.
    -

    Tips and Tricks for Using Social Dummy iOS

    -

    How to make the posts more realistic and engaging

    -

    To make the posts more realistic and engaging, you need to use some tips and tricks that can improve the quality and credibility of your posts. Here are some of them:

    -
      -
    • Use relevant and trending topics, hashtags, mentions, and media for your posts. You can search for them on the internet or use the app's suggestions.
    • Use proper grammar, spelling, punctuation, and capitalization for your posts. You can use the app's spell check or proofread your posts before publishing them.
    • Use different tones, styles, and emotions for your posts. You can use emojis, stickers, gifs, memes, filters, effects, etc. to express yourself.
    • Use different types of posts, such as text, image, video, audio, link, poll, quiz, etc. to vary your content and attract more attention.
    • Use realistic numbers and dates for your posts. You can use the app's randomize feature or adjust them manually.
    -

    How to avoid common mistakes and errors

    -

    To avoid common mistakes and errors, you need to be aware of some potential issues that might occur when using Social Dummy iOS. Here are some of them:

    -
      -
    • Do not use real or sensitive information for your posts. You might violate the privacy or security of yourself or others.
    • Do not use offensive or inappropriate content for your posts. You might offend or harm yourself or others.
    • Do not use the app for illegal or unethical purposes. You might face legal or moral consequences.
    • Do not use the app for real social media accounts. You might confuse or mislead yourself or others.
    • Do not rely on the app for accurate or reliable information. You might get false or outdated information.
    -

    How to get help and support from the developer or the community

    -

    To get help and support from the developer or the community, you need to use the contact options of Social Dummy iOS. You can do this by following these steps:

    -
      -
    1. On the main screen of the app, tap on the gear icon at the top left corner to open the settings menu.
    2. Tap on "Help" to access a list of frequently asked questions and answers that might solve your problems.
    3. Tap on "Contact" to send an email to the developer with your feedback, suggestions, bug reports, or questions.
    4. Tap on "Social" to follow the developer on Twitter, Instagram, YouTube, or Discord. You can also join the community of other users and share your creations or ideas.
    -

    Conclusion

    -

    A summary of the main points and benefits of using Social Dummy iOS

    -

    Social Dummy iOS is a fun and useful app that allows you to create fake social media posts for entertainment or testing purposes. You can choose from different social media platforms and create realistic posts with various options and styles. You can also share or save the posts as screenshots or videos. The app is easy to download and install, and compatible with most iOS devices. The app also offers you tips and tricks for making the posts more realistic and engaging, as well as help and support from the developer or the community.

    -

    A call to action for the readers to try it out and have fun

    -

    If you are interested in creating fake social media posts for fun or testing purposes, you should definitely try out Social Dummy iOS. It is a simple and easy-to-use entertainment tool that lets you recreate the timelines of popular social media apps with fake but faithful posts. You can download it from the App Store for free and start creating your own fake posts in minutes. You can also share your creations with your friends or family, or join the community of other users and see what they have made. So what are you waiting for? Download Social Dummy iOS today and have fun!

    -

    FAQs

    -

    Q1. Is Social Dummy iOS free or paid?

    -

    A1. Social Dummy iOS is free to download and use. However, it contains ads that can be removed by purchasing a premium subscription for $0.99 per month or $9.99 per year.

    -

    Q2. Is Social Dummy iOS safe and legal to use?

    A2. Social Dummy iOS is safe and legal to use as long as you follow the terms and conditions of the app and the social media platforms that you are mimicking. You should not use the app for real or sensitive information, offensive or inappropriate content, illegal or unethical purposes, or real social media accounts. You should also respect the privacy and security of yourself and others.

    -

    Q3. Can I use Social Dummy iOS for real social media accounts?

    -

    A3. No, you cannot use Social Dummy iOS for real social media accounts. The app is only meant for creating fake posts for entertainment or testing purposes. You should not use the app to impersonate, deceive, or harm yourself or others on real social media platforms.

    -

    Q4. What are some alternative apps to Social Dummy iOS?

    -

    A4. Some alternative apps to Social Dummy iOS are:

    -
      -
    • Fake Chat Maker: This app allows you to create fake chat conversations for various messaging apps, such as WhatsApp, Messenger, iMessage, etc. You can customize the messages, photos, videos, voice notes, emojis, stickers, etc. You can also share or save the chats as screenshots or videos.
    • Fake Tweet Generator: This app allows you to create fake tweets for Twitter. You can customize the profile picture, name, username, date, time, content, media, location, hashtags, mentions, retweets, likes, comments, etc. You can also share or save the tweets as screenshots or videos.
    • Fake Post Generator: This app allows you to create fake posts for various social media apps, such as Facebook, Instagram, Snapchat, etc. You can customize the profile picture, name, username, date, time, content, media, location, hashtags, mentions, reactions, comments, etc. You can also share or save the posts as screenshots or videos.
    -

    Q5. How can I contact the developer of Social Dummy iOS?

    -

    A5. You can contact the developer of Social Dummy iOS by sending an email to support@socialdummy.app or by following them on Twitter (@SocialDummyApp), Instagram (@socialdummyapp), YouTube (Social Dummy), or Discord (Social Dummy).

    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Kurikulum - TQDK qbul suallar online testlr v abituriyent imtahan ballarn hesablanmas.md b/spaces/congsaPfin/Manga-OCR/logs/Kurikulum - TQDK qbul suallar online testlr v abituriyent imtahan ballarn hesablanmas.md deleted file mode 100644 index 16b755be96bdc3c5ca0f13b905c8dd70845694de..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Kurikulum - TQDK qbul suallar online testlr v abituriyent imtahan ballarn hesablanmas.md +++ /dev/null @@ -1,173 +0,0 @@ -
    -

    Kurikulum az: A New Concept in the Azerbaijani Education System

    -

    Kurikulum az is a new concept in the Azerbaijani education system, and it is understood as a conceptual document that reflects the organization and implementation of all activities related to the educational process.

    -

    What is Kurikulum az?

    -

    Kurikulum az covers the purpose, content, organization, and assessment aspects of the educational process.

    -

    kurıkulum az


    DOWNLOAD ————— https://urlca.com/2uOdUB



    -

    Kurikulum az reflects the purpose aspect of the educational process

    -

    Kurikulum az reflects the purpose aspect of the educational process. This aspect covers why and for what education is carried out, the social and individual benefits of education, and its directions and priorities. Kurikulum az grounds the purpose of education in the Law of the Republic of Azerbaijan on Education and in the National Education Concept. Kurikulum az describes the purpose of education as follows:

    -
      -
    • Education is a process that develops a person's personality and teaches them their rights and duties.
    • Education is a process that brings out a person's intellectual, spiritual, physical, aesthetic, social, and moral potential.
    • Education is a process that helps form knowledge, skills, values, and attitudes connected with national and universal values.
    • Education is a process that develops a person's thinking skills for solving important problems.
    • Education is a process that enables a person to know themselves, express themselves, and develop themselves.
    • Education is a process that prepares a person to build social relationships in different environments, to take an interest in public affairs, and to take part in public life.
    • Education is a process that helps a person adapt to the modern world, become familiar with important values, and gain the ability to use new technologies.

      Kurikulum az təhsil prosesinin məzmun aspektini kəşf etdirir

      -

      Kurikulum az təhsil prosesinin məzmun aspektini kəşf etdirir. Bu aspekt, təhsildə nələrin öyrənilməsi və necə öyrənilməsi lazım olduğunu, təhsildə hansı biliklər, bacarıqlar, dəyərlər və dərcələrin əldə edilməsi məqsəd olunduğunu əhatə edir. Kurikulum az, təhsilin məzmununu ümumi təhsilin fenn kurikulumlarına əsaslanır. Kurikulum az, fenn kurikulumlarının mündəricatını, standartlarını, təlim nəticələrini və öyrənmə fəaliyyətlərini əks etdirir.

      -

      Kurikulum az fenn kurikulumlarının mündëricatını əks etdirir

      -

      Kurikulum az fenn kurikulumlarının mündëricatını əks etdirir. Bu mündëricat, fennin öyrnilmsi üçün vacib olan mövzuları, konseptlri, faktları, qaydaları v s. ifad edir. Kurikulum az, fenn kurikulumlarının mündëricatını ümumi taktiki elementlrl il bağlı olaraq tşkil edir. Bu elementlr aşağıdakılardır:

      -
        -
      • Alt-standartlar: Fennin hrl bir mövzusunun öyrnilmsi üçün vacib olan bilik v bacarıqları müyyenleşdirn spesifik ifadlrdir.
      • -
      • Tlim nticlr: Fennin hrl bir mövzusunun öyrnilmsindn sonra tlbinin göstrmsi üçün vacib olan bilik v bacarıqları müyyenleşdirn spesifik ifadlrdir.
      • -
      • Öyrnm faliyytlri: Fennin hrl bir mövzusunun öyrnilmsi üçün vacib olan interaktiv v effektiv metod v texnikalardır.
      • -

        Kurikulum az fenn kurikulumlarının standartlarını əks etdirir

        -

        Kurikulum az fenn kurikulumlarının standartlarını əks etdirir. Bu standartlar, fennin hər bir sinif üçün öyrənilməsi üçün vacib olan minimal tələbləri müəyyənləşdirir. Kurikulum az, fenn kurikulumlarının standartlarını ümumi təhsilin səviyyələrinə uyğun olaraq təşkil edir. Bu səviyyələr aşağıdakılardır:

        -
          -
        • İbtidai təhsil: 1-ci sinifdən 4-cü sinifə qədər olan təhsil mərhələsidir.
        • -
        • Əsas təhsil: 5-ci sinifdən 9-cu sinifə qədər olan təhsil mərhələsidir.
        • -
        • Orta təhsil: 10-cu sinifdən 11-ci sinifə qədər olan təhsil mərhələsidir.
        • -
        -

        Kurikulum az fenn kurikulumlarının təlim nəticələrini əks etdirir

        -

        Kurikulum az fenn kurikulumlarının təlim nəticələrini əks etdirir. Bu nəticlrlr, fennin hrl bir sinif üçün öyrnilmsindn sonra tlbinin göstrmsi üçün vacib olan bilik v bacarıqları ifad edir. Kurikulum az, fenn kurikulumlarının tlim nticlrini bilik v onun növlri, taksonomiya, tfkkür növlri, problem hlltme bacarığı kimi kriteriyalara uyğun olaraq tşkil edir. Bu kriteriyalar aşağıdakılardır:

        -
          -
        • Bilik v onun növlri: Bilik, insanın öyrndiyi v baxış açısını şkillndirn mğlumat v anlayışlardır. Bilik üç növ olur: Faktual bilik, konseptual bilik v prosedural bilik.
        • -
        • Taksonomiya: Taksonomiya, bilik v bacarıqların sviyy v kompleksliyin müyyenleşdirilmsi üçün istifad olunan bir sistmdir. Taksonomiya altı sviyy olur: Yadda saxlama, başa düşm, tdbiq etm, analiz etm, sintez etm v qiymtlndirm.
        • -
        • Tfkkür növlri: Tfkkür növlri, insanın problemlri hll etmk üçün istifad etdiyi düşünc proseslrinin növlridir. Tfkkür növlri üç növ olur: Rutin tfkkür, kritik tfkkür v yaradıcı tfkkür.
        • -
        • Problem hlltme bacarığı: Problem hlltme bacarığı, insanın problemlri tanımağa, analiz etmğe, alternativ yollar tapmağa v yaxşı hll seçmğe imkan veren bir bacarıqdır.
        • -

          Kurikulum az fenn kurikulumlarının öyrənmə fəaliyyətlərini əks etdirir

          -

          Kurikulum az fenn kurikulumlarının öyrənmə fəaliyyətlərini əks etdirir. Bu fəaliyyətlər, fennin hər bir sinif üçün öyrənilməsi üçün vacib olan interaktiv və effektiv metod və texnikalardır. Kurikulum az, fenn kurikulumlarının öyrənmə fəaliyyətlərini interaktiv dərsin təşkili, motivasiya və refleksiya kimi taktiki elementlərlə əlaqələndirir. Bu elementlər aşağıdakılardır:



• Organisation of an interactive lesson: a way of organising the lesson that ensures the active participation of the students, allows them to build relationships with one another and with the teacher, and helps them learn to express their ideas and to listen to others.
• Motivation: a factor that increases the student's interest in the learning process, helps create their desire to learn and stimulates them to play an active role in the learning process.
• Reflection: an activity in which the student evaluates their own learning process and tries to improve it. Reflection enables the student to know themselves, express themselves and develop themselves.

Kurikulum az samples

Kurikulum az samples are documents that show the practical realisation of the Kurikulum az concept. They are divided into general section units: the education sphere, the subject curricula and the tactical elements. Each of these section units is prepared as a separate document and, after approval by the Ministry of Education, is made available to the pedagogical community.

Kurikulum az is divided into general section units

Kurikulum az is divided into general section units: the education sphere, the subject curricula and the tactical elements. Each of these section units is prepared as a separate document and, after approval by the Ministry of Education, is made available to the pedagogical community. These documents consist of the following table:

| Section name | Contents |
| --- | --- |
| Education sphere | Education law, description of terms, general pedagogical requirements, rights and duties of participants |
| Subject curricula | Knowledge and its types, taxonomy, types of thinking, problem-solving skill |
| Tactical elements | Annual and daily planning, organisation of an interactive lesson, motivation, reflection |

The Kurikulum az education sphere document

The Kurikulum az education sphere document contains the description and requirements of the section of the Kurikulum az concept that relates to the education sphere. It reflects the legal foundations of education, the explanation of terms, the general pedagogical requirements and the rights and duties of the participants. It is an official document approved by the Ministry of Education and made available to the pedagogical community. Its contents are divided as follows:

• Education law: this part reflects the legal foundations of education. These foundations are defined in the Constitution of the Republic of Azerbaijan, the Law of the Republic of Azerbaijan on Education and other legislative acts.
• Description of terms: this part reflects the explanation of the terms used in the education process. These explanations are taken from the terminological dictionary approved by the Ministry of Education.
• General pedagogical requirements: this part reflects the pedagogical requirements that everyone taking part in the education process must observe. These requirements were defined with the aim of raising the quality of education, increasing its effectiveness and ensuring its democratic character.
• Rights and duties of participants: this part reflects the rights and duties of everyone taking part in the education process. These rights and duties are defined in the regulations approved by the Ministry of Education.

The Kurikulum az subject curricula documents

The Kurikulum az subject curricula documents contain the description and requirements of the section of the Kurikulum az concept that relates to the subject curricula. These documents reflect the syllabus, standards, learning outcomes and learning activities essential for learning each subject in each grade of general education. They form an official series of documents approved by the Ministry of Education and made available to the pedagogical community. The series covers the following subjects:

• Azerbaijani Language and Literature
• Russian Language and Literature
• English
• Mathematics
• Physics
• Chemistry
• Biology
• Geography
• History
• Ethics and Law
• Civic Education
• Music
• Art and Folk Crafts
• Physical Education and Healthy Living
• Informatics and Technologies

The Kurikulum az tactical elements documents

The Kurikulum az tactical elements documents contain the description and requirements of the section of the Kurikulum az concept that relates to the tactical elements. These documents reflect elements such as annual and daily planning, the organisation of an interactive lesson, motivation and reflection, which are essential for learning each subject in each grade of general education. They form an official series of documents approved by the Ministry of Education and made available to the pedagogical community. The series covers the following elements:

• Annual and daily planning: this element reflects the scheduling over time of the topics essential for learning the subject in each grade. The planning is prepared so that the student can take part in the learning process comfortably and effectively.
• Organisation of an interactive lesson: this element reflects the provision of the interactive and effective methods and techniques essential for learning the subject in each grade. These methods and techniques allow the students to take part actively and to build relationships with one another and with the teacher.
• Motivation: this element reflects the factors that increase the student's interest, help create their desire to learn and stimulate them to play an active role in the learning process.
• Reflection: this element reflects the activities in which the student evaluates and tries to improve their own learning process after studying the subject. These activities enable the student to know themselves, express themselves and develop themselves.

The benefits of Kurikulum az

The benefits of Kurikulum az are the positive results that the concept produces for the overall development of the education system and for the student's receiving a quality education. They can be listed as follows:

• Kurikulum az makes the education process more purposeful, content-rich, better organised and properly assessed.
• Kurikulum az defines the rights and duties of everyone taking part in the education process and ensures that they are observed.
• Kurikulum az helps develop the student's knowledge, skills, values and attitudes.
• Kurikulum az develops the student's thinking skills, problem-solving skills and creative skills.
• Kurikulum az helps form the student's knowledge, skills, values and attitudes related to national and universal values.
• Kurikulum az enables the student to know themselves, express themselves and develop themselves.
• Kurikulum az prepares the student to build social relationships in different environments, take an interest in social issues and engage in social activity.
• Kurikulum az adapts the student to the modern world, acquaints them with important languages and equips them with the ability to use new technologies.

The delivery of Kurikulum az

The delivery of Kurikulum az means ensuring that the concept is implemented at all levels and in all structures of the education system. The following steps are taken to deliver Kurikulum az:

• Preparation of Kurikulum az: this step covers the preparation of the organisational and content aspects of the concept. It is a process in which the official bodies of the Ministry of Education and representatives of the pedagogical community take part.
• Approval of Kurikulum az: this step covers the official approval of the concept. It is a process in which the official bodies of the Ministry of Education and representatives of the pedagogical community take part.
• Publication of Kurikulum az: this step covers making the concept available to the pedagogical community. It is a process in which the official bodies of the Ministry of Education and representatives of the pedagogical community take part.
• Implementation of Kurikulum az: this step covers implementing the concept at all levels and in all structures of the education system. It is a process in which the official bodies of the Ministry of Education, the heads of educational institutions and the teachers take part.
• Monitoring and evaluation of Kurikulum az: this step covers the monitoring and evaluation activities carried out after implementation in order to check and improve the effectiveness and quality of the concept. It is a process in which the official bodies of the Ministry of Education, the heads of educational institutions and the teachers take part.

Summary

Kurikulum az is a new concept in the Azerbaijani education system and is understood as a conceptual document that reflects the organisation and implementation of all activities related to the education process. Kurikulum az covers the purpose, content, organisation and assessment aspects of the education process. It is a conceptual document approved by the Ministry of Education and made available to the pedagogical community. Kurikulum az is based on the subject curricula of general education and reflects their syllabus, standards, learning outcomes and learning activities. It takes into account tactical elements such as annual and daily planning, the organisation of an interactive lesson, motivation and reflection. Kurikulum az samples are documents that show the practical realisation of the concept; they are divided into general section units: the education sphere, the subject curricula and the tactical elements. The benefits of Kurikulum az are the positive results it produces for the overall development of the education system and for the student's receiving a quality education. The delivery of Kurikulum az means ensuring that the concept is implemented at all levels and in all structures of the education system.


FAQ

Below are answers to some questions about Kurikulum az:

1. Why is it called Kurikulum az?
   The name Kurikulum az was chosen to reflect Azerbaijan's national identity and technological development. The word "kurikulum" comes from Latin and expresses the value of universality, while the abbreviation "az" expresses Azerbaijan's national code and symbolises its technological development.
2. Who is Kurikulum az prepared for?
   Kurikulum az is prepared for all participants in general education: the students themselves, teachers, school heads, parents, the official bodies of the Ministry of Education and representatives of the pedagogical community.
3. How should Kurikulum az be used?
   Kurikulum az should be used as an official document. It reflects all the syllabus content, standards, learning outcomes and learning activities needed for the student to take part in the learning process comfortably and effectively.
4. How should Kurikulum az be implemented?
   Kurikulum az should be implemented as an official document at all levels and in all structures of the education system. This implementation helps develop the student's knowledge, skills, values and attitudes.
5. How should Kurikulum az be monitored and evaluated?
   Kurikulum az should be monitored and evaluated as an official document. These monitoring and evaluation activities are essential for checking and improving its effectiveness and quality after the concept has been implemented. They are a process in which the official bodies of the Ministry of Education, the heads of educational institutions and the teachers take part.

              \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Naruto Ultimate Ninja Storm 3 PPSSPP Everything You Need to Know About the Game.md b/spaces/congsaPfin/Manga-OCR/logs/Naruto Ultimate Ninja Storm 3 PPSSPP Everything You Need to Know About the Game.md deleted file mode 100644 index 3df5b3d43d4822958c0e8d02132d228bb9b61ba7..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Naruto Ultimate Ninja Storm 3 PPSSPP Everything You Need to Know About the Game.md +++ /dev/null @@ -1,172 +0,0 @@ -

              How to Download and Play Naruto Ultimate Ninja Storm 3 on PPSSPP Emulator


              If you are a fan of Naruto anime and manga, you might have heard of Naruto Ultimate Ninja Storm 3, one of the most popular games based on the series. This game was originally released for PlayStation 3, Xbox 360, and PC in 2013, but you can also play it on your Android or PC device using a PSP emulator called PPSSPP. In this article, we will show you how to download and play Naruto Ultimate Ninja Storm 3 on PPSSPP emulator using a file hosting service called MediaFire.


              Introduction


              Naruto Ultimate Ninja Storm 3 is a fighting game that follows the events of the Fourth Shinobi World War arc in the Naruto Shippuden anime. You can play as over 80 characters from the series, including Naruto, Sasuke, Sakura, Kakashi, Madara, Itachi, and more. You can also experience epic boss battles, stunning cinematics, and immersive environments in this game.


              naruto ultimate ninja storm 3 ppsspp file download mediafıre


              Download 🔗 https://urlca.com/2uOgfS




              PPSSPP is an emulator that allows you to run PSP games on your Android or PC device. It has many features that enhance the graphics, sound, and performance of the games. You can also customize the controls, save states, cheats, and more with this emulator.


              MediaFire is a file hosting service that lets you upload, store, and share files online. You can access your files from any device with an internet connection. You can also download files from other users with a link. MediaFire offers up to 50 GB of free storage space and unlimited downloads.


              Requirements


              Before you download and play Naruto Ultimate Ninja Storm 3 on PPSSPP emulator, you need to make sure that your device meets the minimum or recommended specifications. Here are the requirements for Android and PC devices:

• Android:
  • Minimum: Android 4.0 or higher, 1 GB RAM, OpenGL ES 2.0 support
  • Recommended: Android 5.0 or higher, 2 GB RAM or more, OpenGL ES 3.0 support or higher
• PC:
  • Minimum: Windows XP or higher, Intel Core 2 Duo or equivalent CPU, 512 MB RAM, DirectX 9.0c support
  • Recommended: Windows 7 or higher, Intel Core i5 or equivalent CPU, 2 GB RAM or more, DirectX 11 support or higher

              You also need to download PPSSPP emulator for your device. You can get it from these links:


              Finally, you need to download Naruto Ultimate Ninja Storm 3 ISO file from MediaFire. You can get it from this link:


              Installation and Configuration


              After you have downloaded the PPSSPP emulator and the Naruto Ultimate Ninja Storm 3 ISO file, you need to install and configure them on your device. Here are the steps to do so:


              -
              2. -
              3. Extract the Naruto Ultimate Ninja Storm 3 ISO file from the MediaFire link using a file manager app or a zip extractor app. You will get a file named NARUTO SHIPPUDEN Ultimate Ninja STORM 3.iso.
              4. -
              5. Copy or move the NARUTO SHIPPUDEN Ultimate Ninja STORM 3.iso file to a folder of your choice on your device. You can use the default PSP folder or create a new one.
              6. -
              7. Open PPSSPP emulator and tap on the Games tab. Navigate to the folder where you saved the NARUTO SHIPPUDEN Ultimate Ninja STORM 3.iso file and tap on it to load the game.
              8. -
              9. Before you start playing, you may want to adjust some settings on PPSSPP emulator to improve the performance and quality of the game. Here are some recommended settings:
              10. -
                  -
                • Graphics:
                    -
                  • Mode: Buffered rendering
                  • -
                  • Frameskipping: Off or 1
                  • -
                  • Rendering resolution: 2x PSP or higher
                  • -
                  • Texture filtering: Linear or Anisotropic
                  • -
                  • Texture scaling: Off or xBRZ
                  • -
                  • Hardware transform: On
                  • -
                  • Software skinning: On
                  • -
                  • Mipmapping: On
                  • -
                  • VSync: On
                  • -
                  -
                • -
                • Audio:
                    -
                  • Enable sound: On
                  • -
                  • Audio latency: Low or Medium
                  • -
                  -
                • -
                • System:
                    -
                  • Fast memory: On
                  • -
                  • Multithreaded: On
                  • -
                  • I/O timing method: Fast or Host
                  • -
                  -
                • -
                • Controls:
                    -
                  • Edit touch control layout: Adjust the size and position of the buttons according to your preference
                  • -
                  • Edit gamepad mappings: Map the buttons of your external controller if you have one
                  • -
                  -
                • -
                -
              -

              Gameplay and Features


              Naruto Ultimate Ninja Storm 3 is a fun and exciting game that lets you experience the thrilling battles and adventures of Naruto and his friends. Here are some of the gameplay and features of the game:

• The game has a story mode that follows the events of the Fourth Shinobi World War arc, from the Five Kage Summit to the final showdown between Naruto and Sasuke. You can also play as different characters in different scenarios, such as Sasuke vs Itachi, Naruto vs Pain, and more.
• The game has a free-roaming mode that allows you to explore various locations in the Naruto world, such as Konoha, Suna, Kumo, and more. You can also interact with other characters, collect items, complete missions, and unlock secrets.
• The game has a versus mode that lets you fight against other players or AI opponents in various stages and settings. You can choose from over 80 characters, each with their own unique moves, combos, jutsus, and awakenings. You can also customize your character's appearance, skills, items, and support characters.
• The game has a Full Burst HD version that adds some enhancements and extras to the original PSP version. Here is a table comparing the differences between the two versions:

| PSP Version | Full Burst HD Version |
| --- | --- |
| Has lower resolution and graphics quality | Has higher resolution and graphics quality |
| Has fewer characters and costumes | Has more characters and costumes, such as Kabuto (Sage Mode), Naruto (Hokage), Sasuke (Eternal Mangekyo Sharingan), and more |
| Has fewer stages and settings | Has more stages and settings, such as the Uchiha Hideout, the Great Ninja War Battlefield, and more |
| Has no online multiplayer mode | Has an online multiplayer mode that lets you play with other players around the world |
| Has no additional content or DLC | Has additional content and DLC, such as the Road to Ninja costumes, the Sage Kabuto chapter, and more |

                Some tips and tricks to enhance your gaming experience are:

• Use the chakra dash and the substitution jutsu wisely, as they can help you evade or counter your opponent's attacks.
• Use the support characters strategically, as they can assist you in offense, defense, or balance.
• Use the awakening mode when your health is low, as it can boost your power and abilities.
• Collect ryo and ninja tools from the story mode and the free-roaming mode, as they can help you unlock and upgrade your character's skills and items.
• Complete the ninja world timeline and the ultimate decision events in the story mode, as they can unlock alternative scenarios and endings.

                Conclusion


                Naruto Ultimate Ninja Storm 3 is a great game that lets you enjoy the thrilling and emotional story of Naruto and his friends. You can also play it on your Android or PC device using PPSSPP emulator and a file from MediaFire. All you need to do is follow the steps we have provided in this article and you will be ready to play. You can also adjust the settings and customize your character to suit your preferences. We hope you have fun playing this game and reliving the epic moments of the Naruto series.


                If you liked this article, please share it with your friends and leave us a comment below. We would love to hear your feedback and suggestions. Also, if you have any questions or problems regarding the game or the emulator, feel free to ask us in the comment section. We will try our best to help you out.


                Thank you for reading this article and happy gaming!


                FAQs


                Here are some frequently asked questions and answers related to the topic:

1. Q: Is Naruto Ultimate Ninja Storm 3 legal to download and play?
   A: Yes, as long as you own a copy of the original game for PSP or PS3. Downloading and playing a backup copy of a game that you own is legal in most countries. However, downloading and playing a pirated copy of a game that you do not own is illegal and we do not condone it.
2. Q: Is Naruto Ultimate Ninja Storm 3 safe to download from MediaFire?
   A: Yes, as long as you download it from a trusted source. The link we have provided in this article is safe and verified by us. However, be careful of other links that may contain viruses or malware. Always scan your files before opening them.
3. Q: How can I play Naruto Ultimate Ninja Storm 3 with my friends online?
   A: You can play Naruto Ultimate Ninja Storm 3 with your friends online using PPSSPP's built-in online multiplayer mode. You need to have a stable internet connection and a valid IP address. You also need to create or join a room with your friends using PPSSPP's network settings. For more details, please refer to this guide: How to play PSP games online with PPSSPP.
4. Q: How can I fix Naruto Ultimate Ninja Storm 3 lagging or crashing on PPSSPP?
   A: There are several possible reasons why Naruto Ultimate Ninja Storm 3 may lag or crash on PPSSPP. Some of them are:
   • Your device does not meet the minimum or recommended specifications to run the game smoothly.
   • Your PPSSPP settings are not optimized for the game.
   • Your Naruto Ultimate Ninja Storm 3 ISO file is corrupted or incomplete.
   • Your PPSSPP emulator is outdated or incompatible with the game.
   To fix these issues, you can try the following solutions:
   • Upgrade your device's hardware or software if possible.
   • Adjust your PPSSPP settings according to the recommendations we have given in this article or experiment with different options until you find the best ones for your device and game.
   • Redownload the Naruto Ultimate Ninja Storm 3 ISO file from the MediaFire link we have provided in this article or from another trusted source. Make sure the file is complete and not corrupted.
   • Update your PPSSPP emulator to the latest version or try a different version that is compatible with the game.
5. Q: How can I get more games for PPSSPP emulator?
   A: You can get more games for PPSSPP emulator by downloading ISO or CSO files from various sources online. However, you should only download games that you own legally and that are safe and virus-free. You can also rip your own PSP games using a PSP console and a USB cable. For more details, please refer to this guide: How to get PSP games for PPSSPP.

                \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Soul Knight A Pixelated Roguelike Game with Online Co-op for Android.md b/spaces/congsaPfin/Manga-OCR/logs/Soul Knight A Pixelated Roguelike Game with Online Co-op for Android.md deleted file mode 100644 index 3918c71f6b8d44a410b31a7c5bccd93dc68bbf62..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Soul Knight A Pixelated Roguelike Game with Online Co-op for Android.md +++ /dev/null @@ -1,132 +0,0 @@ - - - - - - - -

                How to Play Soul Knight Online Co-op on Android


                Do you love shooting games with pixel graphics, rogue-like elements, and tons of weapons? If so, you might want to check out Soul Knight, a shooter game that features extremely easy and intuitive control, super smooth and enjoyable gameplay, and random dungeons full of alien minions. But what if you want to share this fun experience with your friends online? Well, you can do that too! In this article, we will show you how to play Soul Knight online co-op on Android with some simple steps.


                What You Need to Play Soul Knight Online Co-op on Android


                A compatible Android device


                First of all, you need an Android device that can run Soul Knight smoothly. The game requires Android 4.4 or higher and at least 100 MB of free storage space. You also need a stable internet connection for online co-op mode.


                soul knight online co op android download


                Download File >>> https://urlca.com/2uOfs6




                The Soul Knight app from Google Play Store


                Next, you need to download and install the Soul Knight app from Google Play Store. It's free to play with some optional in-app purchases. You can find it here: Soul Knight - Apps on Google Play


                A VPN app and a VPN profile file


                Since Soul Knight does not have an official online co-op mode, you need to use a VPN (virtual private network) app and a VPN profile file to connect with other players online. A VPN app allows you to create a secure connection to another network over the internet, while a VPN profile file contains the settings and information for the VPN connection. You can download and install any VPN app that supports OpenVPN protocol from Google Play Store, such as OpenVPN Connect - Fast & Safe SSL VPN Client - Apps on Google Play or Turbo VPN- Free VPN Proxy Server & Secure Service - Apps on Google Play. You also need to download and install a VPN profile file from our Discord server, which you can find here: Soul Knight - Discord. The VPN profile file will allow you to join the Soul Knight online co-op network and play with other players.




                The Soul Knight Host app (if you want to host a game)


                If you want to host a game and invite other players to join, you need to download and install the Soul Knight Host app from our Discord server. The Soul Knight Host app is a tool that helps you create and manage your online co-op game. You can find it here: Soul Knight - Discord. The Soul Knight Host app will ask you for the private IPs of the players who want to join your game, and then start the game for you.


                How to Set Up Soul Knight Online Co-op on Android


                Download and install the Soul Knight app from Google Play Store


                The first step is to download and install the Soul Knight app from Google Play Store. You can do this by following this link: Soul Knight - Apps on Google Play. Once you have installed the app, open it and grant it the necessary permissions.


                Download and install a VPN app and a VPN profile file from our Discord server


                The next step is to download and install a VPN app and a VPN profile file from our Discord server. You can do this by following these steps:

                - -

                Download and install the Soul Knight Host app from our Discord server (if you want to host a game)

                -

                If you want to host a game and invite other players to join, you need to download and install the Soul Knight Host app from our Discord server. You can do this by following these steps:

                -
                  -
• Go to the #soul-knight-host channel on our Discord server and download the latest Soul Knight Host app.
• Install the Soul Knight Host app on your Android device.
• Open the Soul Knight Host app and grant it the necessary permissions.

                Connect to the VPN using the VPN profile file

                -

                After you have downloaded and installed the VPN app and the VPN profile file, you need to connect to the VPN using the VPN profile file. You can do this by following these steps:

                -
                  -
• Open the VPN app and select the VPN profile file that you imported.
• Tap on the connect button and wait for the connection to be established.
• You should see a notification that says you are connected to the VPN.
• You can also check your IP address and location on the VPN app or on any online IP checker website.

                Start the Soul Knight app and go to Multiplayer mode

                -

                Now that you are connected to the VPN, you can start the Soul Knight app and go to Multiplayer mode. You can do this by following these steps:

                -
                  -
                • Open the Soul Knight app and tap on the start button.
                • -
                • Select Multiplayer mode and choose either Local or Online.
                • -
                • If you choose Local, you can play with other players who are connected to the same VPN network as you.
                • -
                • If you choose Online, you can play with other players who are online on any VPN network.
                • -
                -

                If you are hosting a game, input the private IPs of the players who want to join in the Soul Knight Host app and press start

                -

                If you are hosting a game and want to invite other players to join, you need to input the private IPs of the players who want to join in the Soul Knight Host app and press start. You can do this by following these steps:

                -
                  -
                • Open the Soul Knight Host app and tap on the host button.
                • -
                • Enter your name and select your hero.
                • -
                • Input the private IPs of the players who want to join your game. You can find their private IPs on their VPN apps or on our Discord server.
                • -
                • Press start and wait for the game to load.
                • -
                -

                If you are joining a game, tell your private IP to the host and wait for them to start the game

                -

                If you are joining a game that is hosted by another player, you need to tell your private IP to the host and wait for them to start the game. You can do this by following these steps:

                -
                  -
                • Open your VPN app and find your private IP address. It should be something like 10.x.x.x.
                • -
                • Tell your private IP address to the host of the game. You can do this through voice chat, text chat, or our Discord server.
                • -
                • Wait for the host to input your private IP in their Soul Knight Host app and press start.
                • -
                • You should see a notification that says you are joining a game.
                • -

                How to Play Soul Knight Online Co-op on Android

                -

                Enjoy shooting some alien minions with your friends online!

                -

                Congratulations! You have successfully set up Soul Knight online co-op on Android. Now you can enjoy shooting some alien minions with your friends online. Soul Knight is a fun and addictive shooter game that will keep you entertained for hours. You can choose from different heroes, each with their own unique skills and abilities. You can also collect and use hundreds of weapons, from pistols and rifles to lasers and swords. You can also upgrade your weapons and skills with gems and coins that you earn from killing enemies and completing dungeons.

                -

                Use different heroes, weapons, skills, and strategies to survive the randomly generated dungeons

                -

                One of the best features of Soul Knight is that it has randomly generated dungeons, which means that every time you play, you will encounter different enemies, traps, treasures, and bosses. This adds a lot of variety and challenge to the game, as you never know what to expect. You will need to use different heroes, weapons, skills, and strategies to survive the dungeons and reach the final boss. You can also customize your hero's appearance and stats with skins and buffs that you can buy or unlock.

                -

                Explore different game modes and features such as tower defense, boss rush, daily challenges, etc.

                -

                Soul Knight also has different game modes and features that you can explore and enjoy. For example, you can play tower defense mode, where you have to defend your base from waves of enemies. You can also play boss rush mode, where you have to fight against multiple bosses in a row. You can also play daily challenges, where you have to complete specific tasks or objectives within a time limit. You can also unlock achievements, collect pets, craft items, and more.

                -

                Communicate with your teammates using voice chat or text chat

                -

                Another great feature of Soul Knight online co-op is that you can communicate with your teammates using voice chat or text chat. This makes the game more fun and social, as you can coordinate your actions, share tips, or just chat with your friends. You can also use emojis and stickers to express yourself. To use voice chat or text chat, you need to tap on the microphone or keyboard icon on the top right corner of the screen.

                -

                Be aware of the latency, data usage, and security issues that may arise from using a public VPN

                -

                While playing Soul Knight online co-op on Android is a lot of fun, you should also be aware of some potential issues that may arise from using a public VPN. For example, you may experience latency or lag due to the distance between you and the VPN server. This may affect your gameplay performance and enjoyment. You may also consume more data than usual due to the encryption and decryption process of the VPN. This may affect your data plan or cost. You may also expose yourself to security risks such as hackers or malware that may try to access your device or data through the VPN network. Therefore, you should always use a trusted VPN app and profile file, and avoid using public Wi-Fi or unsecured networks when playing Soul Knight online co-op on Android.

                -

                Conclusion

                -

                Soul Knight is an amazing shooter game that you can play online co-op on Android with your friends. It has easy and intuitive controls, smooth and enjoyable gameplay, random dungeons full of alien minions, different heroes, weapons, skills, strategies, game modes, features, and more. To play Soul Knight online co-op on Android, you need a compatible Android device, the Soul Knight app from Google Play Store, a VPN app and a VPN profile file from our Discord server, and the Soul Knight Host app (if you want to host a game). You also need to follow some simple steps to set up Soul Knight online co-op on Android. However, you should also be aware of some potential issues that may arise from using a public VPN such as latency, data usage, and security risks. We hope this article has helped you learn how to play Soul Knight online co-op on Android. Now go ahead and have some fun shooting some alien minions with your friends online!

                -

                Frequently Asked Questions

                -

                Q: How many players can play Soul Knight online co-op on Android?

                -

                A: Soul Knight online co-op on Android supports up to 4 players per game.

                -

                Q: Can I play Soul Knight online co-op on Android with players who use iOS devices?

                -

                A: No, Soul Knight online co-op on Android is only compatible with other Android devices.

                -

                Q: Can I play Soul Knight online co-op on Android without using a VPN?

                -

                A: No, Soul Knight online co-op on Android

                A: No, Soul Knight online co-op on Android requires a VPN to connect with other players online.

                -

                Q: Which VPN app and profile file should I use to play Soul Knight online co-op on Android?

                -

                A: You should use the VPN app and profile file that are provided by our Discord server. You can find them here: Soul Knight - Discord. Do not use any other VPN app or profile file, as they may not work or may cause problems.

                -

                Q: How can I join the Soul Knight Discord server?

                -

                A: You can join the Soul Knight Discord server by following this link: Soul Knight - Discord. You will need a Discord account to join. The Soul Knight Discord server is a friendly and helpful community of Soul Knight players who chat, play together, and have fun.

                \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated ((EXCLUSIVE)).md b/spaces/contluForse/HuggingGPT/assets/ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated ((EXCLUSIVE)).md deleted file mode 100644 index 2f7d1c9dce60f053fb4f1d0dec1f9df199b23ed6..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated ((EXCLUSIVE)).md +++ /dev/null @@ -1,122 +0,0 @@ -

                ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated: How to Get the Best of Adobe Creative Suite 5


                If you are looking for a way to unleash your creativity and design stunning digital content, you might be interested in ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated. This is a software package that includes all the tools you need to create amazing graphics, videos, web pages, and more. You can get access to Photoshop, Illustrator, InDesign, Dreamweaver, Premiere Pro, After Effects, and many other applications that will help you bring your ideas to life.


                However, you might also be wondering how to get this software for free. After all, ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated is not cheap, and you might not have the budget to afford it. Fortunately, there is a way to get it without paying a dime. All you need is a keygen that will generate a valid serial number for you.


                ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated


                DOWNLOADhttps://ssurll.com/2uzyLW




                What is a keygen and how does it work?

                - -

                A keygen is a program that can create unique codes that can activate a software product. It works by using an algorithm that mimics the one used by the original manufacturer. By entering the code generated by the keygen, you can bypass the activation process and use the software as if you bought it legally.

                - -

                However, not all keygens are reliable and safe. Some of them might contain viruses or malware that can harm your computer or steal your personal information. Some of them might also generate invalid codes that will not work or will be detected by the software as fraudulent. That's why you need to be careful when choosing a keygen and only download it from trusted sources.

                - -

                How to use ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated?

                - -

                If you want to use ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated, you need to follow these steps:

                - -
                  -
                1. Download the keygen from a reputable website. You can find one by searching online or by following the links provided by some of the web pages that offer this software for free.
                2. -
                3. Extract the keygen from the zip file and run it as administrator.
                4. -
                5. Select the product you want to activate from the drop-down menu.
                6. -
                7. Click on Generate and copy the serial number that appears.
                8. -
                9. Download the trial version of ADOBE CS5.5 MASTER COLLECTION from the official website or from another source.
                10. -
                11. Install the software and choose the option to enter a serial number.
                12. -
                13. Paste the serial number that you copied from the keygen and complete the installation.
                14. -
                15. Patch your hosts file to prevent the software from connecting to the internet and verifying your activation status. You can do this by following the instructions provided by the keygen or by editing the file manually.
                16. -
                17. Enjoy using ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated for free!
                18. -
                - -

                What are the benefits of using ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated?

                - -

                By using ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated, you can enjoy many benefits, such as:

                - -
                  -
                • You can save money by not having to buy the software.
                • -
                • You can access all the features and functions of ADOBE CS5.5 MASTER COLLECTION without any limitations or restrictions.
                • -
                • You can create professional-quality digital content for personal or commercial purposes.
                • -
                • You can update your skills and learn new techniques with the latest tools and technologies.
                • -
                • You can impress your clients, colleagues, friends, or family with your amazing creations.
                • -
                - -

                What are the risks of using ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated?

                - -

                However, using ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated also comes with some risks, such as:

                -

                - -
                  -
                • You might violate the terms and conditions of Adobe and face legal consequences.
                • -
                • You might expose your computer to viruses or malware that can damage your system or compromise your security.
                • -
                • You might get invalid or blacklisted serial numbers that will not work or will cause errors or crashes.
                • -
                • You might lose access to customer support or updates from Adobe.
                • -
                • You might experience compatibility issues or bugs with some of the software components.
                • -
                - -

                Conclusion

                - -

                ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated is a great way to get access to one of the most powerful and versatile software packages for digital content creation. However, it also involves some risks and challenges that you need to be aware of before using it. If you decide to use it, make sure you do it at your own risk and responsibility.

                -

                What are the features of ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated?

                - -

                ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated is not just a simple software package. It is a comprehensive solution that offers you a wide range of features and benefits, such as:

                - -
                  -
                • You can work with different media formats and platforms, including print, web, mobile, video, and interactive.
                • -
                • You can use the latest technologies and standards, such as HTML5, CSS3, jQuery, Flash, and AIR.
                • -
                • You can integrate your workflow with other Adobe products and services, such as Photoshop, Illustrator, InDesign, Acrobat, Bridge, Device Central, and Creative Cloud.
                • -
                • You can enhance your productivity and efficiency with advanced tools and features, such as content-aware fill, puppet warp, perspective drawing, multiple artboards, video editing, animation, and 3D effects.
                • -
                • You can express your creativity and vision with unlimited possibilities and options, such as custom brushes, gradients, patterns, filters, effects, styles, and fonts.
                • -
                - -

                How to get the most out of ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated?

                - -

                ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated is a powerful and versatile software package that can help you create amazing digital content. However, to get the most out of it, you need to follow some tips and tricks, such as:

                - -
                  -
                1. Learn the basics of each application and how they work together. You can find tutorials and guides on the official website or on other online resources.
                2. -
                3. Explore the different features and functions of each application and experiment with different settings and options. You can find inspiration and examples on the official website or on other online resources.
                4. -
                5. Use the best practices and standards for each media format and platform. You can find recommendations and guidelines on the official website or on other online resources.
                6. -
                7. Optimize your performance and quality by using the appropriate tools and techniques for each task. You can find tips and tricks on the official website or on other online resources.
                8. -
                9. Keep your software updated and secure by downloading the latest patches and updates from the official website or from other online resources.
                10. -
                - -

                Conclusion

                - -

                ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated is a software package that can help you create stunning digital content for any purpose and platform. It offers you a wide range of features and benefits that can enhance your creativity and productivity. However, it also requires some skills and knowledge to use it effectively and safely. If you want to use it for free, you need to use a keygen that can generate a valid serial number for you. However, this also involves some risks and challenges that you need to be aware of before using it. If you decide to use it, make sure you do it at your own risk and responsibility.

                -

                How to troubleshoot ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated?

                - -

                ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated is a software package that can help you create stunning digital content for any purpose and platform. However, it might also encounter some problems or issues that can affect your work or experience. Some of the common problems or issues are:

                - -
                  -
                • The software does not install or run properly.
                • -
                • The software does not accept or recognize the serial number generated by the keygen.
                • -
                • The software crashes or freezes frequently.
                • -
                • The software displays errors or warnings.
                • -
                • The software performs slowly or poorly.
                • -
                • The software does not work with some of the media formats or platforms.
                • -
                - -

                If you face any of these problems or issues, you need to troubleshoot ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated and try to fix them. You can do this by following some steps, such as:

                - -
                  -
                1. Check your system requirements and compatibility. Make sure your computer meets the minimum requirements and supports the software and its components.
                2. -
                3. Check your internet connection and firewall settings. Make sure your internet connection is stable and secure and your firewall does not block the software or its components.
                4. -
                5. Check your antivirus and malware protection. Make sure your antivirus and malware protection does not interfere with the software or its components.
                6. -
                7. Check your hosts file and activation status. Make sure your hosts file is patched correctly and your activation status is valid and verified.
                8. -
                9. Check your software settings and preferences. Make sure your software settings and preferences are configured correctly and suitably for your work and experience.
                10. -
                11. Check your software updates and patches. Make sure your software is updated and patched to the latest version and has no bugs or glitches.
                12. -
                13. Check your online resources and support. Make sure you have access to online resources and support that can help you with your problems or issues.
                14. -
                - -

                Conclusion

                - -

                ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated is a software package that can help you create stunning digital content for any purpose and platform. However, it might also encounter some problems or issues that can affect your work or experience. If you face any of these problems or issues, you need to troubleshoot ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated and try to fix them. You can do this by following some steps that can help you check and resolve your problems or issues. However, if you still cannot fix them, you might need to contact Adobe customer support or seek professional help.

                -

                Conclusion

                - -

                ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated is a software package that can help you create stunning digital content for any purpose and platform. It offers you a wide range of features and benefits that can enhance your creativity and productivity. However, it also requires some skills and knowledge to use it effectively and safely. If you want to use it for free, you need to use a keygen that can generate a valid serial number for you. However, this also involves some risks and challenges that you need to be aware of before using it. If you decide to use it, make sure you do it at your own risk and responsibility.

                - -

                If you use ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated, you might also encounter some problems or issues that can affect your work or experience. If you face any of these problems or issues, you need to troubleshoot ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated and try to fix them. You can do this by following some steps that can help you check and resolve your problems or issues. However, if you still cannot fix them, you might need to contact Adobe customer support or seek professional help.

                - -

                ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated is a great way to get access to one of the most powerful and versatile software packages for digital content creation. However, it also involves some risks and challenges that you need to be aware of before using it. If you decide to use it, make sure you do it at your own risk and responsibility.

                3cee63e6c2
                -
                -
                \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/Cdbf - Dbf Viewer And Editor 2.20 Crack Why You Should Choose This Software Over Others.md b/spaces/contluForse/HuggingGPT/assets/Cdbf - Dbf Viewer And Editor 2.20 Crack Why You Should Choose This Software Over Others.md deleted file mode 100644 index 59658e15067eb3ab5139fa6f1ca75c0dec1716c9..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Cdbf - Dbf Viewer And Editor 2.20 Crack Why You Should Choose This Software Over Others.md +++ /dev/null @@ -1,6 +0,0 @@ -

                Cdbf - Dbf Viewer And Editor 2.20 Crack


                Download Filehttps://ssurll.com/2uzyMG



                - - aaccfb2cb3
                -
                -
                -

                diff --git a/spaces/contluForse/HuggingGPT/assets/Download Goldwave 567 Full Crack Hit.md b/spaces/contluForse/HuggingGPT/assets/Download Goldwave 567 Full Crack Hit.md deleted file mode 100644 index d3bd459de6476eaa25f421eb560fb2f86cee9078..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Download Goldwave 567 Full Crack Hit.md +++ /dev/null @@ -1,6 +0,0 @@ -

                Download Goldwave 567 Full Crack Hit


                Download 🆗 https://ssurll.com/2uzxuE



                - -Arrowbridge ii 1.14a+ BBS Name: PC97 Sysop Name: Riz la+ Serial #1: ... Data detective pc v1.0 7101-3D91-0861 (then hit ";OK";, not [Enter]) ... Download butler CORE/JES Key: 3ed95671-111 Extra: 1 ... Gold wave version full First=CHUL IN Last=HOM Password=LCOKOEB ... Gwd text editor 1.5 GTE12/123-567. 1fdad05405
                -
                -
                -

                diff --git a/spaces/contluForse/HuggingGPT/assets/EaseUS Data Recovery Wizard Free VERIFIED Lets You Recover Lost Or Deleted Data.md b/spaces/contluForse/HuggingGPT/assets/EaseUS Data Recovery Wizard Free VERIFIED Lets You Recover Lost Or Deleted Data.md deleted file mode 100644 index e300f020e34de27adbf9d1c20a8f79eed6a73886..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/EaseUS Data Recovery Wizard Free VERIFIED Lets You Recover Lost Or Deleted Data.md +++ /dev/null @@ -1,6 +0,0 @@ -

                EaseUS Data Recovery Wizard Free lets you recover lost or deleted data


                Download Zip ✪✪✪ https://ssurll.com/2uzxrJ



                - - d5da3c52bf
                -
                -
                -

                diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/cnn/bricks/drop.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/cnn/bricks/drop.py deleted file mode 100644 index 465ed38339fe64dde8cdc959451b1236a3a55b95..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/cnn/bricks/drop.py +++ /dev/null @@ -1,65 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn - -from annotator.mmpkg.mmcv import build_from_cfg -from .registry import DROPOUT_LAYERS - - -def drop_path(x, drop_prob=0., training=False): - """Drop paths (Stochastic Depth) per sample (when applied in main path of - residual blocks). - - We follow the implementation - https://github.com/rwightman/pytorch-image-models/blob/a2727c1bf78ba0d7b5727f5f95e37fb7f8866b1f/timm/models/layers/drop.py # noqa: E501 - """ - if drop_prob == 0. or not training: - return x - keep_prob = 1 - drop_prob - # handle tensors with different dimensions, not just 4D tensors. - shape = (x.shape[0], ) + (1, ) * (x.ndim - 1) - random_tensor = keep_prob + torch.rand( - shape, dtype=x.dtype, device=x.device) - output = x.div(keep_prob) * random_tensor.floor() - return output - - -@DROPOUT_LAYERS.register_module() -class DropPath(nn.Module): - """Drop paths (Stochastic Depth) per sample (when applied in main path of - residual blocks). - - We follow the implementation - https://github.com/rwightman/pytorch-image-models/blob/a2727c1bf78ba0d7b5727f5f95e37fb7f8866b1f/timm/models/layers/drop.py # noqa: E501 - - Args: - drop_prob (float): Probability of the path to be zeroed. Default: 0.1 - """ - - def __init__(self, drop_prob=0.1): - super(DropPath, self).__init__() - self.drop_prob = drop_prob - - def forward(self, x): - return drop_path(x, self.drop_prob, self.training) - - -@DROPOUT_LAYERS.register_module() -class Dropout(nn.Dropout): - """A wrapper for ``torch.nn.Dropout``, We rename the ``p`` of - ``torch.nn.Dropout`` to ``drop_prob`` so as to be consistent with - ``DropPath`` - - Args: - drop_prob (float): Probability of the elements to be - zeroed. Default: 0.5. - inplace (bool): Do the operation inplace or not. Default: False. 
- """ - - def __init__(self, drop_prob=0.5, inplace=False): - super().__init__(p=drop_prob, inplace=inplace) - - -def build_dropout(cfg, default_args=None): - """Builder for drop out layers.""" - return build_from_cfg(cfg, DROPOUT_LAYERS, default_args) diff --git a/spaces/cormerod/gaime/app.py b/spaces/cormerod/gaime/app.py deleted file mode 100644 index 0d4a42212d23a871ee93fb74bbecfa2b74544ca1..0000000000000000000000000000000000000000 --- a/spaces/cormerod/gaime/app.py +++ /dev/null @@ -1,257 +0,0 @@ -import os - -import gradio as gr - -from ctransformers import AutoModelForCausalLM - -defaults = {"pretrained_model":"TheBloke/MPT-7B-Instruct-GGML", - "model_file":"mpt-7b-instruct.ggmlv3.q4_0.bin"} - -model = AutoModelForCausalLM.from_pretrained(model_path_or_repo_id = defaults['pretrained_model'], - model_file = defaults['model_file']) - - -def get_total_inputs(inputs, chatbot, preprompt, user_name, assistant_name, sep): - past = [] - for data in chatbot: - user_data, model_data = data - - if not user_data.startswith(user_name): - user_data = user_name + user_data - if not model_data.startswith(sep + assistant_name): - model_data = sep + assistant_name + model_data - - past.append(user_data + model_data.rstrip() + sep) - - if not inputs.startswith(user_name): - inputs = user_name + inputs - - total_inputs = preprompt + "".join(past) + inputs + sep + assistant_name.rstrip() - - return total_inputs - - -def has_no_history(chatbot, history): - return not chatbot and not history - - -header = "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n" -prompt_template = "###Instruction:\n{query}\n### Response:\n{response}" - -def generate( - user_message, - chatbot, - history, - temperature, - top_p, - max_new_tokens, - repetition_penalty, -): - # import pdb - # pdb.set_trace() - # Don't return meaningless message when the input is empty - if not user_message: - print("Empty input") - - history.append(user_message) - - past_messages = [] - if chatbot is not None: - for data in chatbot: - user_data, model_data = data - - past_messages.extend( - [{"role": "user", "content": user_data}, {"role": "assistant", "content": model_data.rstrip()}] - ) - - if len(past_messages) < 1: - prompt = header + prompt_template.format(query=user_message, response="") - else: - prompt = header - for i in range(0, len(past_messages), 2): - intermediate_prompt = prompt_template.format(query=past_messages[i]["content"], response=past_messages[i+1]["content"]) - print("intermediate: ", intermediate_prompt) - prompt = prompt + '\n' + intermediate_prompt - - prompt = prompt + prompt_template.format(query=user_message, response="") - - - generate_kwargs = { - "temperature": temperature, - "top_p": top_p, - "max_new_tokens": max_new_tokens, - } - - temperature = float(temperature) - if temperature < 1e-2: - temperature = 1e-2 - top_p = float(top_p) - - generate_kwargs = dict( - temperature=temperature, - top_p=top_p, - repetition_penalty=repetition_penalty, - seed=42, - ) - - stream = model.generate(model.tokenize(prompt), **generate_kwargs) - - output = "" - for idx, response in enumerate(stream): - output += model.detokenize(response.numerator) - if idx == 0: - history.append(" " + output) - else: - history[-1] = output - - chat = [(history[i].strip(), history[i + 1].strip()) for i in range(0, len(history) - 1, 2)] - - yield chat, history, user_message, "" - - return chat, history, user_message, "" - - -examples = [ - "Write a poem about the sky.", - "Write an essay on 
the fall of Rome.", - """ "Write an email on advice I would give to my younger self." """.strip() -] - - -def clear_chat(): - return [], [] - - -def process_example(args): - for [x, y] in generate(args): - pass - return [x, y] - - -title = """

                MPT Playground

                """ -custom_css = """ -#banner-image { - display: block; - margin-left: auto; - margin-right: auto; -} -#chat-message { - font-size: 14px; - min-height: 300px; -} -""" - -with gr.Blocks(analytics_enabled=False, css=custom_css) as demo: - gr.HTML(title) - - with gr.Row(): - with gr.Column(): - gr.Markdown( - """ - This demo showcases a simple MPT model - """ - ) - - with gr.Row(): - with gr.Box(): - output = gr.Markdown() - chatbot = gr.Chatbot(elem_id="chat-message", label="Chat") - - with gr.Row(): - with gr.Column(scale=3): - user_message = gr.Textbox(placeholder="Enter your message here", show_label=False, elem_id="q-input") - with gr.Row(): - send_button = gr.Button("Send", elem_id="send-btn", visible=True) - - clear_chat_button = gr.Button("Clear chat", elem_id="clear-btn", visible=True) - - with gr.Accordion(label="Parameters", open=False, elem_id="parameters-accordion"): - temperature = gr.Slider( - label="Temperature", - value=0.7, - minimum=0.0, - maximum=1.0, - step=0.1, - interactive=True, - info="Higher values produce more diverse outputs", - ) - top_p = gr.Slider( - label="Top-p (nucleus sampling)", - value=0.9, - minimum=0.0, - maximum=1, - step=0.05, - interactive=True, - info="Higher values sample more low-probability tokens", - ) - max_new_tokens = gr.Slider( - label="Max new tokens", - value=1024, - minimum=0, - maximum=2048, - step=4, - interactive=True, - info="The maximum numbers of new tokens", - ) - repetition_penalty = gr.Slider( - label="Repetition Penalty", - value=1.2, - minimum=0.0, - maximum=10, - step=0.1, - interactive=True, - info="The parameter for repetition penalty. 1.0 means no penalty.", - ) - with gr.Row(): - gr.Examples( - examples=examples, - inputs=[user_message], - cache_examples=False, - fn=process_example, - outputs=[output], - ) - - with gr.Row(): - gr.Markdown( - "Disclaimer: The model can produce factually incorrect output, and should not be relied on to produce " - "factually accurate information. The model was trained on various public datasets; while great efforts " - "have been taken to clean the pretraining data, it is possible that this model could generate lewd, " - "biased, or otherwise offensive outputs.", - elem_classes=["disclaimer"], - ) - - - history = gr.State([]) - last_user_message = gr.State("") - - user_message.submit( - generate, - inputs=[ - user_message, - chatbot, - history, - temperature, - top_p, - max_new_tokens, - repetition_penalty, - ], - outputs=[chatbot, history, last_user_message, user_message], - ) - - send_button.click( - generate, - inputs=[ - user_message, - chatbot, - history, - temperature, - top_p, - max_new_tokens, - repetition_penalty, - ], - outputs=[chatbot, history, last_user_message, user_message], - ) - - clear_chat_button.click(clear_chat, outputs=[chatbot, history]) - -demo.queue(concurrency_count=16).launch(debug=True) diff --git a/spaces/cozyanduofen/bingo/src/components/welcome-screen.tsx b/spaces/cozyanduofen/bingo/src/components/welcome-screen.tsx deleted file mode 100644 index f7449fcbb6c621875e235db98f2790bf7894fb0a..0000000000000000000000000000000000000000 --- a/spaces/cozyanduofen/bingo/src/components/welcome-screen.tsx +++ /dev/null @@ -1,34 +0,0 @@ -import { useBing } from '@/lib/hooks/use-bing' - -const exampleMessages = [ - { - heading: '🧐 提出复杂问题', - message: `我可以为我挑剔的只吃橙色食物的孩子做什么饭?` - }, - { - heading: '🙌 获取更好的答案', - message: '销量最高的 3 种宠物吸尘器有哪些优点和缺点?' 
- }, - { - heading: '🎨 获得创意灵感', - message: `以海盗的口吻写一首关于外太空鳄鱼的俳句` - } -] - -export function WelcomeScreen({ setInput }: Pick, 'setInput'>) { - return ( -
                - {exampleMessages.map(example => ( - - ))} -
                - ) -} diff --git a/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/pirenderer/util/misc.py b/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/pirenderer/util/misc.py deleted file mode 100644 index 31d8f38352e40cdf7adc80b86eb25dcb648b3e19..0000000000000000000000000000000000000000 --- a/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/pirenderer/util/misc.py +++ /dev/null @@ -1,230 +0,0 @@ -"""Miscellaneous utils.""" -from collections import OrderedDict - -import numpy as np -import torch -import torch.nn.functional as F -from scipy.stats import truncnorm -from torch._six import container_abcs, string_classes - - -def split_labels(labels, label_lengths): - r"""Split concatenated labels into their parts. - - Args: - labels (torch.Tensor): Labels obtained through concatenation. - label_lengths (OrderedDict): Containing order of labels & their lengths. - - Returns: - - """ - assert isinstance(label_lengths, OrderedDict) - start = 0 - outputs = {} - for data_type, length in label_lengths.items(): - end = start + length - if labels.dim() == 5: - outputs[data_type] = labels[:, :, start:end] - elif labels.dim() == 4: - outputs[data_type] = labels[:, start:end] - elif labels.dim() == 3: - outputs[data_type] = labels[start:end] - start = end - return outputs - - -def requires_grad(model, require=True): - r""" Set a model to require gradient or not. - - Args: - model (nn.Module): Neural network model. - require (bool): Whether the network requires gradient or not. - - Returns: - - """ - for p in model.parameters(): - p.requires_grad = require - - -def to_device(data, device): - r"""Move all tensors inside data to device. - - Args: - data (dict, list, or tensor): Input data. - device (str): 'cpu' or 'cuda'. - """ - assert device in ['cpu', 'cuda'] - if isinstance(data, torch.Tensor): - data = data.to(torch.device(device)) - return data - elif isinstance(data, container_abcs.Mapping): - return {key: to_device(data[key], device) for key in data} - elif isinstance(data, container_abcs.Sequence) and \ - not isinstance(data, string_classes): - return [to_device(d, device) for d in data] - else: - return data - - -def to_cuda(data): - r"""Move all tensors inside data to gpu. - - Args: - data (dict, list, or tensor): Input data. - """ - return to_device(data, 'cuda') - - -def to_cpu(data): - r"""Move all tensors inside data to cpu. - - Args: - data (dict, list, or tensor): Input data. - """ - return to_device(data, 'cpu') - - -def to_half(data): - r"""Move all floats to half. - - Args: - data (dict, list or tensor): Input data. - """ - if isinstance(data, torch.Tensor) and torch.is_floating_point(data): - data = data.half() - return data - elif isinstance(data, container_abcs.Mapping): - return {key: to_half(data[key]) for key in data} - elif isinstance(data, container_abcs.Sequence) and \ - not isinstance(data, string_classes): - return [to_half(d) for d in data] - else: - return data - - -def to_float(data): - r"""Move all halfs to float. - - Args: - data (dict, list or tensor): Input data. - """ - if isinstance(data, torch.Tensor) and torch.is_floating_point(data): - data = data.float() - return data - elif isinstance(data, container_abcs.Mapping): - return {key: to_float(data[key]) for key in data} - elif isinstance(data, container_abcs.Sequence) and \ - not isinstance(data, string_classes): - return [to_float(d) for d in data] - else: - return data - - -def get_and_setattr(cfg, name, default): - r"""Get attribute with default choice. 
If attribute does not exist, set it - using the default value. - - Args: - cfg (obj) : Config options. - name (str) : Attribute name. - default (obj) : Default attribute. - - Returns: - (obj) : Desired attribute. - """ - if not hasattr(cfg, name) or name not in cfg.__dict__: - setattr(cfg, name, default) - return getattr(cfg, name) - - -def get_nested_attr(cfg, attr_name, default): - r"""Iteratively try to get the attribute from cfg. If not found, return - default. - - Args: - cfg (obj): Config file. - attr_name (str): Attribute name (e.g. XXX.YYY.ZZZ). - default (obj): Default return value for the attribute. - - Returns: - (obj): Attribute value. - """ - names = attr_name.split('.') - atr = cfg - for name in names: - if not hasattr(atr, name): - return default - atr = getattr(atr, name) - return atr - - -def gradient_norm(model): - r"""Return the gradient norm of model. - - Args: - model (PyTorch module): Your network. - - """ - total_norm = 0 - for p in model.parameters(): - if p.grad is not None: - param_norm = p.grad.norm(2) - total_norm += param_norm.item() ** 2 - return total_norm ** (1. / 2) - - -def random_shift(x, offset=0.05, mode='bilinear', padding_mode='reflection'): - r"""Randomly shift the input tensor. - - Args: - x (4D tensor): The input batch of images. - offset (int): The maximum offset ratio that is between [0, 1]. - The maximum shift is offset * image_size for each direction. - mode (str): The resample mode for 'F.grid_sample'. - padding_mode (str): The padding mode for 'F.grid_sample'. - - Returns: - x (4D tensor) : The randomly shifted image. - """ - assert x.dim() == 4, "Input must be a 4D tensor." - batch_size = x.size(0) - theta = torch.eye(2, 3, device=x.device).unsqueeze(0).repeat( - batch_size, 1, 1) - theta[:, :, 2] = 2 * offset * torch.rand(batch_size, 2) - offset - grid = F.affine_grid(theta, x.size()) - x = F.grid_sample(x, grid, mode=mode, padding_mode=padding_mode) - return x - - -def truncated_gaussian(threshold, size, seed=None, device=None): - r"""Apply the truncated gaussian trick to trade diversity for quality - - Args: - threshold (float): Truncation threshold. - size (list of integer): Tensor size. - seed (int): Random seed. - device: - """ - state = None if seed is None else np.random.RandomState(seed) - values = truncnorm.rvs(-threshold, threshold, - size=size, random_state=state) - return torch.tensor(values, device=device).float() - - -def apply_imagenet_normalization(input): - r"""Normalize using ImageNet mean and std. - - Args: - input (4D tensor NxCxHxW): The input images, assuming to be [-1, 1]. - - Returns: - Normalized inputs using the ImageNet normalization. 
- """ - # normalize the input back to [0, 1] - normalized_input = (input + 1) / 2 - # normalize the input using the ImageNet mean and std - mean = normalized_input.new_tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1) - std = normalized_input.new_tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1) - output = (normalized_input - mean) / std - return output diff --git a/spaces/dawdqd/ChuanhuChatGPT/modules/models/midjourney.py b/spaces/dawdqd/ChuanhuChatGPT/modules/models/midjourney.py deleted file mode 100644 index 65a560fc2427aad735d227d4d25b61b72f3ace5a..0000000000000000000000000000000000000000 --- a/spaces/dawdqd/ChuanhuChatGPT/modules/models/midjourney.py +++ /dev/null @@ -1,385 +0,0 @@ -import base64 -import io -import json -import logging -import pathlib -import time -import tempfile -import os - -from datetime import datetime - -import requests -import tiktoken -from PIL import Image - -from modules.config import retrieve_proxy -from modules.models.models import XMChat - -mj_proxy_api_base = os.getenv("MIDJOURNEY_PROXY_API_BASE") -mj_discord_proxy_url = os.getenv("MIDJOURNEY_DISCORD_PROXY_URL") -mj_temp_folder = os.getenv("MIDJOURNEY_TEMP_FOLDER") - - -class Midjourney_Client(XMChat): - - class FetchDataPack: - """ - A class to store data for current fetching data from Midjourney API - """ - - action: str # current action, e.g. "IMAGINE", "UPSCALE", "VARIATION" - prefix_content: str # prefix content, task description and process hint - task_id: str # task id - start_time: float # task start timestamp - timeout: int # task timeout in seconds - finished: bool # whether the task is finished - prompt: str # prompt for the task - - def __init__(self, action, prefix_content, task_id, timeout=900): - self.action = action - self.prefix_content = prefix_content - self.task_id = task_id - self.start_time = time.time() - self.timeout = timeout - self.finished = False - - def __init__(self, model_name, api_key, user_name=""): - super().__init__(api_key, user_name) - self.model_name = model_name - self.history = [] - self.api_key = api_key - self.headers = { - "Content-Type": "application/json", - "mj-api-secret": f"{api_key}" - } - self.proxy_url = mj_proxy_api_base - self.command_splitter = "::" - - if mj_temp_folder: - temp = "./tmp" - if user_name: - temp = os.path.join(temp, user_name) - if not os.path.exists(temp): - os.makedirs(temp) - self.temp_path = tempfile.mkdtemp(dir=temp) - logging.info("mj temp folder: " + self.temp_path) - else: - self.temp_path = None - - def use_mj_self_proxy_url(self, img_url): - """ - replace discord cdn url with mj self proxy url - """ - return img_url.replace( - "https://cdn.discordapp.com/", - mj_discord_proxy_url and mj_discord_proxy_url or "https://cdn.discordapp.com/" - ) - - def split_image(self, image_url): - """ - when enabling temp dir, split image into 4 parts - """ - with retrieve_proxy(): - image_bytes = requests.get(image_url).content - img = Image.open(io.BytesIO(image_bytes)) - width, height = img.size - # calculate half width and height - half_width = width // 2 - half_height = height // 2 - # create coordinates (top-left x, top-left y, bottom-right x, bottom-right y) - coordinates = [(0, 0, half_width, half_height), - (half_width, 0, width, half_height), - (0, half_height, half_width, height), - (half_width, half_height, width, height)] - - images = [img.crop(c) for c in coordinates] - return images - - def auth_mj(self): - """ - auth midjourney api - """ - # TODO: check if secret is valid - return {'status': 'ok'} - - def request_mj(self, path: str, 
action: str, data: str, retries=3): - """ - request midjourney api - """ - mj_proxy_url = self.proxy_url - if mj_proxy_url is None or not (mj_proxy_url.startswith("http://") or mj_proxy_url.startswith("https://")): - raise Exception('please set MIDJOURNEY_PROXY_API_BASE in ENV or in config.json') - - auth_ = self.auth_mj() - if auth_.get('error'): - raise Exception('auth not set') - - fetch_url = f"{mj_proxy_url}/{path}" - # logging.info(f"[MJ Proxy] {action} {fetch_url} params: {data}") - - for _ in range(retries): - try: - with retrieve_proxy(): - res = requests.request(method=action, url=fetch_url, headers=self.headers, data=data) - break - except Exception as e: - print(e) - - if res.status_code != 200: - raise Exception(f'{res.status_code} - {res.content}') - - return res - - def fetch_status(self, fetch_data: FetchDataPack): - """ - fetch status of current task - """ - if fetch_data.start_time + fetch_data.timeout < time.time(): - fetch_data.finished = True - return "任务超时,请检查 dc 输出。描述:" + fetch_data.prompt - - time.sleep(3) - status_res = self.request_mj(f"task/{fetch_data.task_id}/fetch", "GET", '') - status_res_json = status_res.json() - if not (200 <= status_res.status_code < 300): - raise Exception("任务状态获取失败:" + status_res_json.get( - 'error') or status_res_json.get('description') or '未知错误') - else: - fetch_data.finished = False - if status_res_json['status'] == "SUCCESS": - content = status_res_json['imageUrl'] - fetch_data.finished = True - elif status_res_json['status'] == "FAILED": - content = status_res_json['failReason'] or '未知原因' - fetch_data.finished = True - elif status_res_json['status'] == "NOT_START": - content = f'任务未开始,已等待 {time.time() - fetch_data.start_time:.2f} 秒' - elif status_res_json['status'] == "IN_PROGRESS": - content = '任务正在运行' - if status_res_json.get('progress'): - content += f",进度:{status_res_json['progress']}" - elif status_res_json['status'] == "SUBMITTED": - content = '任务已提交处理' - elif status_res_json['status'] == "FAILURE": - fetch_data.finished = True - return "任务处理失败,原因:" + status_res_json['failReason'] or '未知原因' - else: - content = status_res_json['status'] - if fetch_data.finished: - img_url = self.use_mj_self_proxy_url(status_res_json['imageUrl']) - if fetch_data.action == "DESCRIBE": - return f"\n{status_res_json['prompt']}" - time_cost_str = f"\n\n{fetch_data.action} 花费时间:{time.time() - fetch_data.start_time:.2f} 秒" - upscale_str = "" - variation_str = "" - if fetch_data.action in ["IMAGINE", "UPSCALE", "VARIATION"]: - upscale = [f'/mj UPSCALE{self.command_splitter}{i+1}{self.command_splitter}{fetch_data.task_id}' - for i in range(4)] - upscale_str = '\n放大图片:\n\n' + '\n\n'.join(upscale) - variation = [f'/mj VARIATION{self.command_splitter}{i+1}{self.command_splitter}{fetch_data.task_id}' - for i in range(4)] - variation_str = '\n图片变体:\n\n' + '\n\n'.join(variation) - if self.temp_path and fetch_data.action in ["IMAGINE", "VARIATION"]: - try: - images = self.split_image(img_url) - # save images to temp path - for i in range(4): - images[i].save(pathlib.Path(self.temp_path) / f"{fetch_data.task_id}_{i}.png") - img_str = '\n'.join( - [f"![{fetch_data.task_id}](/file={self.temp_path}/{fetch_data.task_id}_{i}.png)" - for i in range(4)]) - return fetch_data.prefix_content + f"{time_cost_str}\n\n{img_str}{upscale_str}{variation_str}" - except Exception as e: - logging.error(e) - return fetch_data.prefix_content + \ - f"{time_cost_str}[![{fetch_data.task_id}]({img_url})]({img_url}){upscale_str}{variation_str}" - else: - content = f"**任务状态:** 
[{(datetime.now()).strftime('%Y-%m-%d %H:%M:%S')}] - {content}" - content += f"\n\n花费时间:{time.time() - fetch_data.start_time:.2f} 秒" - if status_res_json['status'] == 'IN_PROGRESS' and status_res_json.get('imageUrl'): - img_url = status_res_json.get('imageUrl') - return f"{content}\n[![{fetch_data.task_id}]({img_url})]({img_url})" - return content - return None - - def handle_file_upload(self, files, chatbot, language): - """ - handle file upload - """ - if files: - for file in files: - if file.name: - logging.info(f"尝试读取图像: {file.name}") - self.try_read_image(file.name) - if self.image_path is not None: - chatbot = chatbot + [((self.image_path,), None)] - if self.image_bytes is not None: - logging.info("使用图片作为输入") - return None, chatbot, None - - def reset(self): - self.image_bytes = None - self.image_path = None - return [], "已重置" - - def get_answer_at_once(self): - content = self.history[-1]['content'] - answer = self.get_help() - - if not content.lower().startswith("/mj"): - return answer, len(content) - - prompt = content[3:].strip() - action = "IMAGINE" - first_split_index = prompt.find(self.command_splitter) - if first_split_index > 0: - action = prompt[:first_split_index] - if action not in ["IMAGINE", "DESCRIBE", "UPSCALE", - # "VARIATION", "BLEND", "REROLL" - ]: - raise Exception("任务提交失败:未知的任务类型") - else: - action_index = None - action_use_task_id = None - if action in ["VARIATION", "UPSCALE", "REROLL"]: - action_index = int(prompt[first_split_index + 2:first_split_index + 3]) - action_use_task_id = prompt[first_split_index + 5:] - - try: - res = None - if action == "IMAGINE": - data = { - "prompt": prompt - } - if self.image_bytes is not None: - data["base64"] = 'data:image/png;base64,' + self.image_bytes - res = self.request_mj("submit/imagine", "POST", - json.dumps(data)) - elif action == "DESCRIBE": - res = self.request_mj("submit/describe", "POST", - json.dumps({"base64": 'data:image/png;base64,' + self.image_bytes})) - elif action == "BLEND": - res = self.request_mj("submit/blend", "POST", json.dumps( - {"base64Array": [self.image_bytes, self.image_bytes]})) - elif action in ["UPSCALE", "VARIATION", "REROLL"]: - res = self.request_mj( - "submit/change", "POST", - json.dumps({"action": action, "index": action_index, "taskId": action_use_task_id})) - res_json = res.json() - if not (200 <= res.status_code < 300) or (res_json['code'] not in [1, 22]): - answer = "任务提交失败:" + res_json.get('error', res_json.get('description', '未知错误')) - else: - task_id = res_json['result'] - prefix_content = f"**画面描述:** {prompt}\n**任务ID:** {task_id}\n" - - fetch_data = Midjourney_Client.FetchDataPack( - action=action, - prefix_content=prefix_content, - task_id=task_id, - ) - fetch_data.prompt = prompt - while not fetch_data.finished: - answer = self.fetch_status(fetch_data) - except Exception as e: - logging.error("submit failed", e) - answer = "任务提交错误:" + str(e.args[0]) if e.args else '未知错误' - - return answer, tiktoken.get_encoding("cl100k_base").encode(content) - - def get_answer_stream_iter(self): - content = self.history[-1]['content'] - answer = self.get_help() - - if not content.lower().startswith("/mj"): - yield answer - return - - prompt = content[3:].strip() - action = "IMAGINE" - first_split_index = prompt.find(self.command_splitter) - if first_split_index > 0: - action = prompt[:first_split_index] - if action not in ["IMAGINE", "DESCRIBE", "UPSCALE", - "VARIATION", "BLEND", "REROLL" - ]: - yield "任务提交失败:未知的任务类型" - return - - action_index = None - action_use_task_id = None - if action in 
["VARIATION", "UPSCALE", "REROLL"]: - action_index = int(prompt[first_split_index + 2:first_split_index + 3]) - action_use_task_id = prompt[first_split_index + 5:] - - try: - res = None - if action == "IMAGINE": - data = { - "prompt": prompt - } - if self.image_bytes is not None: - data["base64"] = 'data:image/png;base64,' + self.image_bytes - res = self.request_mj("submit/imagine", "POST", - json.dumps(data)) - elif action == "DESCRIBE": - res = self.request_mj("submit/describe", "POST", json.dumps( - {"base64": 'data:image/png;base64,' + self.image_bytes})) - elif action == "BLEND": - res = self.request_mj("submit/blend", "POST", json.dumps( - {"base64Array": [self.image_bytes, self.image_bytes]})) - elif action in ["UPSCALE", "VARIATION", "REROLL"]: - res = self.request_mj( - "submit/change", "POST", - json.dumps({"action": action, "index": action_index, "taskId": action_use_task_id})) - res_json = res.json() - if not (200 <= res.status_code < 300) or (res_json['code'] not in [1, 22]): - yield "任务提交失败:" + res_json.get('error', res_json.get('description', '未知错误')) - else: - task_id = res_json['result'] - prefix_content = f"**画面描述:** {prompt}\n**任务ID:** {task_id}\n" - content = f"[{(datetime.now()).strftime('%Y-%m-%d %H:%M:%S')}] - 任务提交成功:" + \ - res_json.get('description') or '请稍等片刻' - yield content - - fetch_data = Midjourney_Client.FetchDataPack( - action=action, - prefix_content=prefix_content, - task_id=task_id, - ) - while not fetch_data.finished: - yield self.fetch_status(fetch_data) - except Exception as e: - logging.error('submit failed', e) - yield "任务提交错误:" + str(e.args[0]) if e.args else '未知错误' - - def get_help(self): - return """``` -【绘图帮助】 -所有命令都需要以 /mj 开头,如:/mj a dog -IMAGINE - 绘图,可以省略该命令,后面跟上绘图内容 - /mj a dog - /mj IMAGINE::a cat -DESCRIBE - 描述图片,需要在右下角上传需要描述的图片内容 - /mj DESCRIBE:: -UPSCALE - 确认后放大图片,第一个数值为需要放大的图片(1~4),第二参数为任务ID - /mj UPSCALE::1::123456789 - 请使用SD进行UPSCALE -VARIATION - 图片变体,第一个数值为需要放大的图片(1~4),第二参数为任务ID - /mj VARIATION::1::123456789 - -【绘图参数】 -所有命令默认会带上参数--v 5.2 -其他参数参照 https://docs.midjourney.com/docs/parameter-list -长宽比 --aspect/--ar - --ar 1:2 - --ar 16:9 -负面tag --no - --no plants - --no hands -随机种子 --seed - --seed 1 -生成动漫风格(NijiJourney) --niji - --niji -``` -""" diff --git a/spaces/dawdqd/ChuanhuChatGPT/web_assets/javascript/sliders.js b/spaces/dawdqd/ChuanhuChatGPT/web_assets/javascript/sliders.js deleted file mode 100644 index 1351f3ae3902c374b3f5f73b2787c5ec1989bafd..0000000000000000000000000000000000000000 --- a/spaces/dawdqd/ChuanhuChatGPT/web_assets/javascript/sliders.js +++ /dev/null @@ -1,22 +0,0 @@ - -var rangeInputs = null; -var numberInputs = null; - - -function setSlider() { - function setSliderRange() { - var range = document.querySelectorAll('input[type="range"]'); - range.forEach(range => { - range.style.backgroundSize = (range.value - range.min) / (range.max - range.min) * 100 + '% 100%'; - }); - } - rangeInputs = document.querySelectorAll('input[type="range"]'); - numberInputs = document.querySelectorAll('input[type="number"]') - setSliderRange(); - rangeInputs.forEach(rangeInput => { - rangeInput.addEventListener('input', setSliderRange); - }); - numberInputs.forEach(numberInput => { - numberInput.addEventListener('input', setSliderRange); - }) -} diff --git a/spaces/dblitzz21/food-spoonycal/app.py b/spaces/dblitzz21/food-spoonycal/app.py deleted file mode 100644 index 0378f80ff14d2831ddb993b9aa1988b100f52cb1..0000000000000000000000000000000000000000 --- a/spaces/dblitzz21/food-spoonycal/app.py +++ /dev/null @@ -1,47 +0,0 @@ -from 
gradio.outputs import Label -from icevision.all import * -from icevision.models.checkpoint import * -import PIL -import gradio as gr -import os - -# Load model -checkpoint_path = "model_checkpoint (1).pth" -checkpoint_and_model = model_from_checkpoint(checkpoint_path) -model = checkpoint_and_model["model"] -model_type = checkpoint_and_model["model_type"] -class_map = checkpoint_and_model["class_map"] - -# Transforms -img_size = checkpoint_and_model["img_size"] -valid_tfms = tfms.A.Adapter([*tfms.A.resize_and_pad(img_size), tfms.A.Normalize()]) - -# Populate examples in Gradio interface -examples = [ - ['./1.jpg'], - ['./2.jpg'], - ['./3.jpg'] -] - -def show_preds(input_image): - img = PIL.Image.fromarray(input_image, "RGB") - pred_dict = model_type.end2end_detect(img, valid_tfms, model, - class_map=class_map, - detection_threshold=0.5, - display_label=True, - display_bbox=True, - return_img=True, - font_size=35, - label_color="#FF59D6") - return pred_dict["img"] - - -gr_interface = gr.Interface( - fn=show_preds, - inputs=["image"], - outputs=[gr.outputs.Image(type="pil", label="Hasil Deteksi")], - title="Pendeteksi Makanan", - description="Silahkan masukkan gambar makanan anda pada section berikut:", - examples=examples -) -gr_interface.launch(inline=False, share=False, debug=True) \ No newline at end of file diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/attrs/filters.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/attrs/filters.py deleted file mode 100644 index 52959005b088f0e5116c8b6acdbcc5937bbaacc8..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/attrs/filters.py +++ /dev/null @@ -1,3 +0,0 @@ -# SPDX-License-Identifier: MIT - -from attr.filters import * # noqa diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/T_S_I_C_.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/T_S_I_C_.py deleted file mode 100644 index 573b3f9c3970766ea817994509f4939ef4f70f0c..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/T_S_I_C_.py +++ /dev/null @@ -1,5 +0,0 @@ -from .otBase import BaseTTXConverter - - -class table_T_S_I_C_(BaseTTXConverter): - pass diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fsspec/implementations/smb.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fsspec/implementations/smb.py deleted file mode 100644 index 9892f469d563fec7041a2abc68416a19fd96888c..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fsspec/implementations/smb.py +++ /dev/null @@ -1,309 +0,0 @@ -# -*- coding: utf-8 -*- -""" -This module contains SMBFileSystem class responsible for handling access to -Windows Samba network shares by using package smbprotocol -""" - -import datetime -import uuid -from stat import S_ISDIR, S_ISLNK - -import smbclient - -from .. import AbstractFileSystem -from ..utils import infer_storage_options - -# ! pylint: disable=bad-continuation - - -class SMBFileSystem(AbstractFileSystem): - """Allow reading and writing to Windows and Samba network shares. 
- - When using `fsspec.open()` for getting a file-like object the URI - should be specified as this format: - ``smb://workgroup;user:password@server:port/share/folder/file.csv``. - - Example:: - - >>> import fsspec - >>> with fsspec.open( - ... 'smb://myuser:mypassword@myserver.com/' 'share/folder/file.csv' - ... ) as smbfile: - ... df = pd.read_csv(smbfile, sep='|', header=None) - - Note that you need to pass in a valid hostname or IP address for the host - component of the URL. Do not use the Windows/NetBIOS machine name for the - host component. - - The first component of the path in the URL points to the name of the shared - folder. Subsequent path components will point to the directory/folder/file. - - The URL components ``workgroup`` , ``user``, ``password`` and ``port`` may be - optional. - - .. note:: - - For working this source require `smbprotocol`_ to be installed, e.g.:: - - $ pip install smbprotocol - # or - # pip install smbprotocol[kerberos] - - .. _smbprotocol: https://github.com/jborean93/smbprotocol#requirements - - Note: if using this with the ``open`` or ``open_files``, with full URLs, - there is no way to tell if a path is relative, so all paths are assumed - to be absolute. - """ - - protocol = "smb" - - # pylint: disable=too-many-arguments - def __init__( - self, - host, - port=None, - username=None, - password=None, - timeout=60, - encrypt=None, - share_access=None, - **kwargs, - ): - """ - You can use _get_kwargs_from_urls to get some kwargs from - a reasonable SMB url. - - Authentication will be anonymous or integrated if username/password are not - given. - - Parameters - ---------- - host: str - The remote server name/ip to connect to - port: int - Port to connect with. Usually 445, sometimes 139. - username: str or None - Username to connect with. Required if Kerberos auth is not being used. - password: str or None - User's password on the server, if using username - timeout: int - Connection timeout in seconds - encrypt: bool - Whether to force encryption or not, once this has been set to True - the session cannot be changed back to False. - share_access: str or None - Specifies the default access applied to file open operations - performed with this file system object. - This affects whether other processes can concurrently open a handle - to the same file. - - - None (the default): exclusively locks the file until closed. - - 'r': Allow other handles to be opened with read access. - - 'w': Allow other handles to be opened with write access. - - 'd': Allow other handles to be opened with delete access. 
- """ - super(SMBFileSystem, self).__init__(**kwargs) - self.host = host - self.port = port - self.username = username - self.password = password - self.timeout = timeout - self.encrypt = encrypt - self.temppath = kwargs.pop("temppath", "") - self.share_access = share_access - self._connect() - - def _connect(self): - smbclient.register_session( - self.host, - username=self.username, - password=self.password, - port=445 if self.port is None else self.port, - encrypt=self.encrypt, - connection_timeout=self.timeout, - ) - - @classmethod - def _strip_protocol(cls, path): - return infer_storage_options(path)["path"] - - @staticmethod - def _get_kwargs_from_urls(path): - # smb://workgroup;user:password@host:port/share/folder/file.csv - out = infer_storage_options(path) - out.pop("path", None) - out.pop("protocol", None) - return out - - def mkdir(self, path, create_parents=True, **kwargs): - wpath = _as_unc_path(self.host, path) - if create_parents: - smbclient.makedirs(wpath, exist_ok=False, **kwargs) - else: - smbclient.mkdir(wpath, **kwargs) - - def makedirs(self, path, exist_ok=False): - if _share_has_path(path): - wpath = _as_unc_path(self.host, path) - smbclient.makedirs(wpath, exist_ok=exist_ok) - - def rmdir(self, path): - if _share_has_path(path): - wpath = _as_unc_path(self.host, path) - smbclient.rmdir(wpath) - - def info(self, path, **kwargs): - wpath = _as_unc_path(self.host, path) - stats = smbclient.stat(wpath, **kwargs) - if S_ISDIR(stats.st_mode): - stype = "directory" - elif S_ISLNK(stats.st_mode): - stype = "link" - else: - stype = "file" - res = { - "name": path + "/" if stype == "directory" else path, - "size": stats.st_size, - "type": stype, - "uid": stats.st_uid, - "gid": stats.st_gid, - "time": stats.st_atime, - "mtime": stats.st_mtime, - } - return res - - def created(self, path): - """Return the created timestamp of a file as a datetime.datetime""" - wpath = _as_unc_path(self.host, path) - stats = smbclient.stat(wpath) - return datetime.datetime.utcfromtimestamp(stats.st_ctime) - - def modified(self, path): - """Return the modified timestamp of a file as a datetime.datetime""" - wpath = _as_unc_path(self.host, path) - stats = smbclient.stat(wpath) - return datetime.datetime.utcfromtimestamp(stats.st_mtime) - - def ls(self, path, detail=True, **kwargs): - unc = _as_unc_path(self.host, path) - listed = smbclient.listdir(unc, **kwargs) - dirs = ["/".join([path.rstrip("/"), p]) for p in listed] - if detail: - dirs = [self.info(d) for d in dirs] - return dirs - - # pylint: disable=too-many-arguments - def _open( - self, - path, - mode="rb", - block_size=-1, - autocommit=True, - cache_options=None, - **kwargs, - ): - """ - block_size: int or None - If 0, no buffering, 1, line buffering, >1, buffer that many bytes - - Notes - ----- - By specifying 'share_access' in 'kwargs' it is possible to override the - default shared access setting applied in the constructor of this object. 
- """ - bls = block_size if block_size is not None and block_size >= 0 else -1 - wpath = _as_unc_path(self.host, path) - share_access = kwargs.pop("share_access", self.share_access) - if "w" in mode and autocommit is False: - temp = _as_temp_path(self.host, path, self.temppath) - return SMBFileOpener(wpath, temp, mode, block_size=bls, **kwargs) - return smbclient.open_file( - wpath, mode, buffering=bls, share_access=share_access, **kwargs - ) - - def copy(self, path1, path2, **kwargs): - """Copy within two locations in the same filesystem""" - wpath1 = _as_unc_path(self.host, path1) - wpath2 = _as_unc_path(self.host, path2) - smbclient.copyfile(wpath1, wpath2, **kwargs) - - def _rm(self, path): - if _share_has_path(path): - wpath = _as_unc_path(self.host, path) - stats = smbclient.stat(wpath) - if S_ISDIR(stats.st_mode): - smbclient.rmdir(wpath) - else: - smbclient.remove(wpath) - - def mv(self, path1, path2, **kwargs): - wpath1 = _as_unc_path(self.host, path1) - wpath2 = _as_unc_path(self.host, path2) - smbclient.rename(wpath1, wpath2, **kwargs) - - -def _as_unc_path(host, path): - rpath = path.replace("/", "\\") - unc = "\\\\{}{}".format(host, rpath) - return unc - - -def _as_temp_path(host, path, temppath): - share = path.split("/")[1] - temp_file = "/{}{}/{}".format(share, temppath, uuid.uuid4()) - unc = _as_unc_path(host, temp_file) - return unc - - -def _share_has_path(path): - parts = path.count("/") - if path.endswith("/"): - return parts > 2 - return parts > 1 - - -class SMBFileOpener(object): - """writes to remote temporary file, move on commit""" - - def __init__(self, path, temp, mode, block_size=-1, **kwargs): - self.path = path - self.temp = temp - self.mode = mode - self.block_size = block_size - self.kwargs = kwargs - self.smbfile = None - self._incontext = False - self._open() - - def _open(self): - if self.smbfile is None or self.smbfile.closed: - self.smbfile = smbclient.open_file( - self.temp, self.mode, buffering=self.block_size, **self.kwargs - ) - - def commit(self): - """Move temp file to definitive on success.""" - # TODO: use transaction support in SMB protocol - smbclient.replace(self.temp, self.path) - - def discard(self): - """Remove the temp file on failure.""" - smbclient.remove(self.temp) - - def __fspath__(self): - return self.path - - def __iter__(self): - return self.smbfile.__iter__() - - def __getattr__(self, item): - return getattr(self.smbfile, item) - - def __enter__(self): - self._incontext = True - return self.smbfile.__enter__() - - def __exit__(self, exc_type, exc_value, traceback): - self._incontext = False - self.smbfile.__exit__(exc_type, exc_value, traceback) diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-e60153e4.js b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-e60153e4.js deleted file mode 100644 index fd83c35476fd3f10eb828015bd5c9fd13fa1b891..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-e60153e4.js +++ /dev/null @@ -1,2 +0,0 @@ -import{C as ge,E as q,L as Pe}from"./index-604e6cf5.js";import{s as Te,t as S,p as be,L as Ve,i as xe,f as _e,u as ye,b as ve,v as qe,h as z,E as G}from"./index-ba0b23cc.js";import{cssLanguage as F,css as $e}from"./index-8a158e07.js";import{typescriptLanguage as we,jsxLanguage as Ce,tsxLanguage as Qe,javascriptLanguage as K,javascript as 
Ae}from"./index-0940a57e.js";import"./index-39fce9e2.js";import"./Button-79f6e3bf.js";import"./Copy-77b3f70c.js";import"./Download-0afd7f1a.js";import"./BlockLabel-b1428685.js";import"./Empty-16d6169a.js";const Xe=54,ke=1,Ye=55,Me=2,Be=56,Ee=3,D=4,Ge=5,y=6,ee=7,te=8,ae=9,le=10,De=11,Re=12,Ze=13,w=57,Ne=14,R=58,We=20,He=22,re=23,Ie=24,k=26,ne=27,Ue=28,je=31,Je=34,se=36,Le=37,ze=0,Fe=1,Ke={area:!0,base:!0,br:!0,col:!0,command:!0,embed:!0,frame:!0,hr:!0,img:!0,input:!0,keygen:!0,link:!0,meta:!0,param:!0,source:!0,track:!0,wbr:!0,menuitem:!0},et={dd:!0,li:!0,optgroup:!0,option:!0,p:!0,rp:!0,rt:!0,tbody:!0,td:!0,tfoot:!0,th:!0,tr:!0},Z={dd:{dd:!0,dt:!0},dt:{dd:!0,dt:!0},li:{li:!0},option:{option:!0,optgroup:!0},optgroup:{optgroup:!0},p:{address:!0,article:!0,aside:!0,blockquote:!0,dir:!0,div:!0,dl:!0,fieldset:!0,footer:!0,form:!0,h1:!0,h2:!0,h3:!0,h4:!0,h5:!0,h6:!0,header:!0,hgroup:!0,hr:!0,menu:!0,nav:!0,ol:!0,p:!0,pre:!0,section:!0,table:!0,ul:!0},rp:{rp:!0,rt:!0},rt:{rp:!0,rt:!0},tbody:{tbody:!0,tfoot:!0},td:{td:!0,th:!0},tfoot:{tbody:!0},th:{td:!0,th:!0},thead:{tbody:!0,tfoot:!0},tr:{tr:!0}};function tt(e){return e==45||e==46||e==58||e>=65&&e<=90||e==95||e>=97&&e<=122||e>=161}function oe(e){return e==9||e==10||e==13||e==32}let N=null,W=null,H=0;function Y(e,t){let l=e.pos+t;if(H==l&&W==e)return N;let a=e.peek(t);for(;oe(a);)a=e.peek(++t);let r="";for(;tt(a);)r+=String.fromCharCode(a),a=e.peek(++t);return W=e,H=l,N=r?r.toLowerCase():a==at||a==lt?void 0:null}const Oe=60,v=62,M=47,at=63,lt=33,rt=45;function I(e,t){this.name=e,this.parent=t,this.hash=t?t.hash:0;for(let l=0;l-1?new I(Y(a,1)||"",e):e},reduce(e,t){return t==We&&e?e.parent:e},reuse(e,t,l,a){let r=t.type.id;return r==y||r==se?new I(Y(a,1)||"",e):e},hash(e){return e?e.hash:0},strict:!1}),ot=new q((e,t)=>{if(e.next!=Oe){e.next<0&&t.context&&e.acceptToken(w);return}e.advance();let l=e.next==M;l&&e.advance();let a=Y(e,0);if(a===void 0)return;if(!a)return e.acceptToken(l?Ne:y);let r=t.context?t.context.name:null;if(l){if(a==r)return e.acceptToken(De);if(r&&et[r])return e.acceptToken(w,-2);if(t.dialectEnabled(ze))return e.acceptToken(Re);for(let n=t.context;n;n=n.parent)if(n.name==a)return;e.acceptToken(Ze)}else{if(a=="script")return e.acceptToken(ee);if(a=="style")return e.acceptToken(te);if(a=="textarea")return e.acceptToken(ae);if(Ke.hasOwnProperty(a))return e.acceptToken(le);r&&Z[r]&&Z[r][a]?e.acceptToken(w,-1):e.acceptToken(y)}},{contextual:!0}),Ot=new q(e=>{for(let t=0,l=0;;l++){if(e.next<0){l&&e.acceptToken(R);break}if(e.next==rt)t++;else if(e.next==v&&t>=2){l>3&&e.acceptToken(R,-2);break}else t=0;e.advance()}});function it(e){for(;e;e=e.parent)if(e.name=="svg"||e.name=="math")return!0;return!1}const ut=new q((e,t)=>{if(e.next==M&&e.peek(1)==v){let l=t.dialectEnabled(Fe)||it(t.context);e.acceptToken(l?Ge:D,2)}else e.next==v&&e.acceptToken(D,1)});function B(e,t,l){let a=2+e.length;return new q(r=>{for(let n=0,o=0,O=0;;O++){if(r.next<0){O&&r.acceptToken(t);break}if(n==0&&r.next==Oe||n==1&&r.next==M||n>=2&&no?r.acceptToken(t,-o):r.acceptToken(l,-(o-2));break}else if((r.next==10||r.next==13)&&O){r.acceptToken(t,1);break}else n=o=0;r.advance()}})}const pt=B("script",Xe,ke),ct=B("style",Ye,Me),dt=B("textarea",Be,Ee),ft=Te({"Text RawText":S.content,"StartTag StartCloseTag SelfClosingEndTag EndTag":S.angleBracket,TagName:S.tagName,"MismatchedCloseTag/TagName":[S.tagName,S.invalid],AttributeName:S.attributeName,"AttributeValue UnquotedAttributeValue":S.attributeValue,Is:S.definitionOperator,"EntityReference 
CharacterReference":S.character,Comment:S.blockComment,ProcessingInst:S.processingInstruction,DoctypeDecl:S.documentMeta}),ht=Pe.deserialize({version:14,states:",xOVO!rOOO!WQ#tO'#CqO!]Q#tO'#CzO!bQ#tO'#C}O!gQ#tO'#DQO!lQ#tO'#DSO!qOaO'#CpO!|ObO'#CpO#XOdO'#CpO$eO!rO'#CpOOO`'#Cp'#CpO$lO$fO'#DTO$tQ#tO'#DVO$yQ#tO'#DWOOO`'#Dk'#DkOOO`'#DY'#DYQVO!rOOO%OQ&rO,59]O%WQ&rO,59fO%`Q&rO,59iO%hQ&rO,59lO%sQ&rO,59nOOOa'#D^'#D^O%{OaO'#CxO&WOaO,59[OOOb'#D_'#D_O&`ObO'#C{O&kObO,59[OOOd'#D`'#D`O&sOdO'#DOO'OOdO,59[OOO`'#Da'#DaO'WO!rO,59[O'_Q#tO'#DROOO`,59[,59[OOOp'#Db'#DbO'dO$fO,59oOOO`,59o,59oO'lQ#|O,59qO'qQ#|O,59rOOO`-E7W-E7WO'vQ&rO'#CsOOQW'#DZ'#DZO(UQ&rO1G.wOOOa1G.w1G.wO(^Q&rO1G/QOOOb1G/Q1G/QO(fQ&rO1G/TOOOd1G/T1G/TO(nQ&rO1G/WOOO`1G/W1G/WOOO`1G/Y1G/YO(yQ&rO1G/YOOOa-E7[-E7[O)RQ#tO'#CyOOO`1G.v1G.vOOOb-E7]-E7]O)WQ#tO'#C|OOOd-E7^-E7^O)]Q#tO'#DPOOO`-E7_-E7_O)bQ#|O,59mOOOp-E7`-E7`OOO`1G/Z1G/ZOOO`1G/]1G/]OOO`1G/^1G/^O)gQ,UO,59_OOQW-E7X-E7XOOOa7+$c7+$cOOOb7+$l7+$lOOOd7+$o7+$oOOO`7+$r7+$rOOO`7+$t7+$tO)rQ#|O,59eO)wQ#|O,59hO)|Q#|O,59kOOO`1G/X1G/XO*RO7[O'#CvO*dOMhO'#CvOOQW1G.y1G.yOOO`1G/P1G/POOO`1G/S1G/SOOO`1G/V1G/VOOOO'#D['#D[O*uO7[O,59bOOQW,59b,59bOOOO'#D]'#D]O+WOMhO,59bOOOO-E7Y-E7YOOQW1G.|1G.|OOOO-E7Z-E7Z",stateData:"+s~O!^OS~OUSOVPOWQOXROYTO[]O][O^^O`^Oa^Ob^Oc^Ox^O{_O!dZO~OfaO~OfbO~OfcO~OfdO~OfeO~O!WfOPlP!ZlP~O!XiOQoP!ZoP~O!YlORrP!ZrP~OUSOVPOWQOXROYTOZqO[]O][O^^O`^Oa^Ob^Oc^Ox^O!dZO~O!ZrO~P#dO![sO!euO~OfvO~OfwO~OS|OhyO~OS!OOhyO~OS!QOhyO~OS!SOT!TOhyO~OS!TOhyO~O!WfOPlX!ZlX~OP!WO!Z!XO~O!XiOQoX!ZoX~OQ!ZO!Z!XO~O!YlORrX!ZrX~OR!]O!Z!XO~O!Z!XO~P#dOf!_O~O![sO!e!aO~OS!bO~OS!cO~Oi!dOSgXhgXTgX~OS!fOhyO~OS!gOhyO~OS!hOhyO~OS!iOT!jOhyO~OS!jOhyO~Of!kO~Of!lO~Of!mO~OS!nO~Ok!qO!`!oO!b!pO~OS!rO~OS!sO~OS!tO~Oa!uOb!uOc!uO!`!wO!a!uO~Oa!xOb!xOc!xO!b!wO!c!xO~Oa!uOb!uOc!uO!`!{O!a!uO~Oa!xOb!xOc!xO!b!{O!c!xO~OT~bac!dx{!d~",goto:"%p!`PPPPPPPPPPPPPPPPPPPP!a!gP!mPP!yP!|#P#S#Y#]#`#f#i#l#r#x!aP!a!aP$O$U$l$r$x%O%U%[%bPPPPPPPP%hX^OX`pXUOX`pezabcde{}!P!R!UR!q!dRhUR!XhXVOX`pRkVR!XkXWOX`pRnWR!XnXXOX`pQrXR!XpXYOX`pQ`ORx`Q{aQ}bQ!PcQ!RdQ!UeZ!e{}!P!R!UQ!v!oR!z!vQ!y!pR!|!yQgUR!VgQjVR!YjQmWR![mQpXR!^pQtZR!`tS_O`ToXp",nodeNames:"⚠ StartCloseTag StartCloseTag StartCloseTag EndTag SelfClosingEndTag StartTag StartTag StartTag StartTag StartTag StartCloseTag StartCloseTag StartCloseTag IncompleteCloseTag Document Text EntityReference CharacterReference InvalidEntity Element OpenTag TagName Attribute AttributeName Is AttributeValue UnquotedAttributeValue ScriptText CloseTag OpenTag StyleText CloseTag OpenTag TextareaText CloseTag OpenTag CloseTag SelfClosingTag Comment ProcessingInst MismatchedCloseTag CloseTag DoctypeDecl",maxTerm:67,context:st,nodeProps:[["closedBy",-10,1,2,3,7,8,9,10,11,12,13,"EndTag",6,"EndTag SelfClosingEndTag",-4,21,30,33,36,"CloseTag"],["openedBy",4,"StartTag StartCloseTag",5,"StartTag",-4,29,32,35,37,"OpenTag"],["group",-9,14,17,18,19,20,39,40,41,42,"Entity",16,"Entity TextContent",-3,28,31,34,"TextContent 
Entity"]],propSources:[ft],skippedNodes:[0],repeatNodeCount:9,tokenData:"#%g!aR!YOX$qXY,QYZ,QZ[$q[]&X]^,Q^p$qpq,Qqr-_rs4ysv-_vw5iwxJ^x}-_}!OKP!O!P-_!P!Q$q!Q![-_![!]!!O!]!^-_!^!_!&W!_!`#$o!`!a&X!a!c-_!c!}!!O!}#R-_#R#S!!O#S#T3V#T#o!!O#o#s-_#s$f$q$f%W-_%W%o!!O%o%p-_%p&a!!O&a&b-_&b1p!!O1p4U-_4U4d!!O4d4e-_4e$IS!!O$IS$I`-_$I`$Ib!!O$Ib$Kh-_$Kh%#t!!O%#t&/x-_&/x&Et!!O&Et&FV-_&FV;'S!!O;'S;:j!&Q;:j;=`4s<%l?&r-_?&r?Ah!!O?Ah?BY$q?BY?Mn!!O?MnO$q!Z$|c`PkW!a`!cpOX$qXZ&XZ[$q[^&X^p$qpq&Xqr$qrs&}sv$qvw+Pwx(tx!^$q!^!_*V!_!a&X!a#S$q#S#T&X#T;'S$q;'S;=`+z<%lO$q!R&bX`P!a`!cpOr&Xrs&}sv&Xwx(tx!^&X!^!_*V!_;'S&X;'S;=`*y<%lO&Xq'UV`P!cpOv&}wx'kx!^&}!^!_(V!_;'S&};'S;=`(n<%lO&}P'pT`POv'kw!^'k!_;'S'k;'S;=`(P<%lO'kP(SP;=`<%l'kp([S!cpOv(Vx;'S(V;'S;=`(h<%lO(Vp(kP;=`<%l(Vq(qP;=`<%l&}a({W`P!a`Or(trs'ksv(tw!^(t!^!_)e!_;'S(t;'S;=`*P<%lO(t`)jT!a`Or)esv)ew;'S)e;'S;=`)y<%lO)e`)|P;=`<%l)ea*SP;=`<%l(t!Q*^V!a`!cpOr*Vrs(Vsv*Vwx)ex;'S*V;'S;=`*s<%lO*V!Q*vP;=`<%l*V!R*|P;=`<%l&XW+UYkWOX+PZ[+P^p+Pqr+Psw+Px!^+P!a#S+P#T;'S+P;'S;=`+t<%lO+PW+wP;=`<%l+P!Z+}P;=`<%l$q!a,]``P!a`!cp!^^OX&XXY,QYZ,QZ]&X]^,Q^p&Xpq,Qqr&Xrs&}sv&Xwx(tx!^&X!^!_*V!_;'S&X;'S;=`*y<%lO&X!_-ljhS`PkW!a`!cpOX$qXZ&XZ[$q[^&X^p$qpq&Xqr-_rs&}sv-_vw/^wx(tx!P-_!P!Q$q!Q!^-_!^!_1n!_!a&X!a#S-_#S#T3V#T#s-_#s$f$q$f;'S-_;'S;=`4s<%l?Ah-_?Ah?BY$q?BY?Mn-_?MnO$q[/echSkWOX+PZ[+P^p+Pqr/^sw/^x!P/^!P!Q+P!Q!^/^!^!_0p!a#S/^#S#T0p#T#s/^#s$f+P$f;'S/^;'S;=`1h<%l?Ah/^?Ah?BY+P?BY?Mn/^?MnO+PS0uXhSqr0psw0px!P0p!Q!_0p!a#s0p$f;'S0p;'S;=`1b<%l?Ah0p?BY?Mn0pS1eP;=`<%l0p[1kP;=`<%l/^!U1wbhS!a`!cpOq*Vqr1nrs(Vsv1nvw0pwx)ex!P1n!P!Q*V!Q!_1n!_!a*V!a#s1n#s$f*V$f;'S1n;'S;=`3P<%l?Ah1n?Ah?BY*V?BY?Mn1n?MnO*V!U3SP;=`<%l1n!V3bchS`P!a`!cpOq&Xqr3Vrs&}sv3Vvw0pwx(tx!P3V!P!Q&X!Q!^3V!^!_1n!_!a&X!a#s3V#s$f&X$f;'S3V;'S;=`4m<%l?Ah3V?Ah?BY&X?BY?Mn3V?MnO&X!V4pP;=`<%l3V!_4vP;=`<%l-_!Z5SV!`h`P!cpOv&}wx'kx!^&}!^!_(V!_;'S&};'S;=`(n<%lO&}!_5rjhSkWc!ROX7dXZ8qZ[7d[^8q^p7dqr:crs8qst@Ttw:cwx8qx!P:c!P!Q7d!Q!]:c!]!^/^!^!_=p!_!a8q!a#S:c#S#T=p#T#s:c#s$f7d$f;'S:c;'S;=`?}<%l?Ah:c?Ah?BY7d?BY?Mn:c?MnO7d!Z7ibkWOX7dXZ8qZ[7d[^8q^p7dqr7drs8qst+Ptw7dwx8qx!]7d!]!^9f!^!a8q!a#S7d#S#T8q#T;'S7d;'S;=`:]<%lO7d!R8tVOp8qqs8qt!]8q!]!^9Z!^;'S8q;'S;=`9`<%lO8q!R9`Oa!R!R9cP;=`<%l8q!Z9mYkWa!ROX+PZ[+P^p+Pqr+Psw+Px!^+P!a#S+P#T;'S+P;'S;=`+t<%lO+P!Z:`P;=`<%l7d!_:jjhSkWOX7dXZ8qZ[7d[^8q^p7dqr:crs8qst/^tw:cwx8qx!P:c!P!Q7d!Q!]:c!]!^<[!^!_=p!_!a8q!a#S:c#S#T=p#T#s:c#s$f7d$f;'S:c;'S;=`?}<%l?Ah:c?Ah?BY7d?BY?Mn:c?MnO7d!_b#d#s1n#s$f*V$f;'S1n;'S;=`3P<%l?Ah1n?Ah?BY*V?BY?Mn1n?MnO*V!V!>kdhS!a`!cpOq*Vqr1nrs(Vsv1nvw0pwx)ex!P1n!P!Q*V!Q!_1n!_!a*V!a#V1n#V#W!?y#W#s1n#s$f*V$f;'S1n;'S;=`3P<%l?Ah1n?Ah?BY*V?BY?Mn1n?MnO*V!V!@SdhS!a`!cpOq*Vqr1nrs(Vsv1nvw0pwx)ex!P1n!P!Q*V!Q!_1n!_!a*V!a#h1n#h#i!Ab#i#s1n#s$f*V$f;'S1n;'S;=`3P<%l?Ah1n?Ah?BY*V?BY?Mn1n?MnO*V!V!AkdhS!a`!cpOq*Vqr1nrs(Vsv1nvw0pwx)ex!P1n!P!Q*V!Q!_1n!_!a*V!a#m1n#m#n!By#n#s1n#s$f*V$f;'S1n;'S;=`3P<%l?Ah1n?Ah?BY*V?BY?Mn1n?MnO*V!V!CSdhS!a`!cpOq*Vqr1nrs(Vsv1nvw0pwx)ex!P1n!P!Q*V!Q!_1n!_!a*V!a#d1n#d#e!Db#e#s1n#s$f*V$f;'S1n;'S;=`3P<%l?Ah1n?Ah?BY*V?BY?Mn1n?MnO*V!V!DkdhS!a`!cpOq*Vqr1nrs(Vsv1nvw0pwx)ex!P1n!P!Q*V!Q!_1n!_!a*V!a#X1n#X#Y!5]#Y#s1n#s$f*V$f;'S1n;'S;=`3P<%l?Ah1n?Ah?BY*V?BY?Mn1n?MnO*V!V!FSchS!a`!cpOq!G_qr!Eyrs!HUsv!Eyvw!Ncwx!Jvx!P!Ey!P!Q!G_!Q!_!Ey!_!a!G_!a!b##T!b#s!Ey#s$f!G_$f;'S!Ey;'S;=`#$i<%l?Ah!Ey?Ah?BY!G_?BY?Mn!Ey?MnO!G_!R!GfY!a`!cpOr!G_rs!HUsv!G_vw!Hpwx!Jvx!a!G_!a!b!Lv!b;'S!G_;'S;=`!N]<%lO!G_q!HZV!cpOv!HUvx!Hpx!a!HU!a!b!Iq!b;'S!HU;'S;=`!Jp<%lO!HUP!HsTO!a!Hp!a!b!IS!b;'S!Hp;'S;=`!Ik<%lO!HpP!IVTO!`!Hp!`!a!If!a;'S!Hp;'S;=`!Ik<%lO!HpP!IkOxPP!InP;=`<%l!Hpq!IvV!cpOv!HUvx!Hpx!`!HU!`!a!J]!a;'S!HU;'S;=`!Jp<%lO!HUq!JdS!cpxPOv(Vx;'S(V;'S;=`(h<%lO(Vq!JsP;=`<%l
!HUa!J{X!a`Or!Jvrs!Hpsv!Jvvw!Hpw!a!Jv!a!b!Kh!b;'S!Jv;'S;=`!Lp<%lO!Jva!KmX!a`Or!Jvrs!Hpsv!Jvvw!Hpw!`!Jv!`!a!LY!a;'S!Jv;'S;=`!Lp<%lO!Jva!LaT!a`xPOr)esv)ew;'S)e;'S;=`)y<%lO)ea!LsP;=`<%l!Jv!R!L}Y!a`!cpOr!G_rs!HUsv!G_vw!Hpwx!Jvx!`!G_!`!a!Mm!a;'S!G_;'S;=`!N]<%lO!G_!R!MvV!a`!cpxPOr*Vrs(Vsv*Vwx)ex;'S*V;'S;=`*s<%lO*V!R!N`P;=`<%l!G_T!NhbhSOq!Hpqr!Ncrs!Hpsw!Ncwx!Hpx!P!Nc!P!Q!Hp!Q!_!Nc!_!a!Hp!a!b# p!b#s!Nc#s$f!Hp$f;'S!Nc;'S;=`#!}<%l?Ah!Nc?Ah?BY!Hp?BY?Mn!Nc?MnO!HpT# ubhSOq!Hpqr!Ncrs!Hpsw!Ncwx!Hpx!P!Nc!P!Q!Hp!Q!_!Nc!_!`!Hp!`!a!If!a#s!Nc#s$f!Hp$f;'S!Nc;'S;=`#!}<%l?Ah!Nc?Ah?BY!Hp?BY?Mn!Nc?MnO!HpT##QP;=`<%l!Nc!V##^chS!a`!cpOq!G_qr!Eyrs!HUsv!Eyvw!Ncwx!Jvx!P!Ey!P!Q!G_!Q!_!Ey!_!`!G_!`!a!Mm!a#s!Ey#s$f!G_$f;'S!Ey;'S;=`#$i<%l?Ah!Ey?Ah?BY!G_?BY?Mn!Ey?MnO!G_!V#$lP;=`<%l!Ey!V#$zXiS`P!a`!cpOr&Xrs&}sv&Xwx(tx!^&X!^!_*V!_;'S&X;'S;=`*y<%lO&X",tokenizers:[pt,ct,dt,ut,ot,Ot,0,1,2,3,4,5],topRules:{Document:[0,15]},dialects:{noMatch:0,selfClosing:485},tokenPrec:487});function ie(e,t){let l=Object.create(null);for(let a of e.getChildren(re)){let r=a.getChild(Ie),n=a.getChild(k)||a.getChild(ne);r&&(l[t.read(r.from,r.to)]=n?n.type.id==k?t.read(n.from+1,n.to-1):t.read(n.from,n.to):"")}return l}function U(e,t){let l=e.getChild(He);return l?t.read(l.from,l.to):" "}function C(e,t,l){let a;for(let r of l)if(!r.attrs||r.attrs(a||(a=ie(e.node.parent.firstChild,t))))return{parser:r.parser};return null}function ue(e=[],t=[]){let l=[],a=[],r=[],n=[];for(let O of e)(O.tag=="script"?l:O.tag=="style"?a:O.tag=="textarea"?r:n).push(O);let o=t.length?Object.create(null):null;for(let O of t)(o[O.name]||(o[O.name]=[])).push(O);return be((O,p)=>{let h=O.type.id;if(h==Ue)return C(O,p,l);if(h==je)return C(O,p,a);if(h==Je)return C(O,p,r);if(h==se&&n.length){let i=O.node,u=U(i,p),c;for(let d of n)if(d.tag==u&&(!d.attrs||d.attrs(c||(c=ie(i,p))))){let f=i.parent.lastChild;return{parser:d.parser,overlay:[{from:O.to,to:f.type.id==Le?f.from:i.parent.to}]}}}if(o&&h==re){let i=O.node,u;if(u=i.firstChild){let c=o[p.read(u.from,u.to)];if(c)for(let d of c){if(d.tagName&&d.tagName!=U(i.parent,p))continue;let f=i.lastChild;if(f.type.id==k){let P=f.from+1,T=f.lastChild,x=f.to-(T&&T.isError?0:1);if(x>P)return{parser:d.parser,overlay:[{from:P,to:x}]}}else if(f.type.id==ne)return{parser:d.parser,overlay:[{from:f.from,to:f.to}]}}}}return null})}const 
b=["_blank","_self","_top","_parent"],Q=["ascii","utf-8","utf-16","latin1","latin1"],A=["get","post","put","delete"],X=["application/x-www-form-urlencoded","multipart/form-data","text/plain"],m=["true","false"],s={},mt={a:{attrs:{href:null,ping:null,type:null,media:null,target:b,hreflang:null}},abbr:s,address:s,area:{attrs:{alt:null,coords:null,href:null,target:null,ping:null,media:null,hreflang:null,type:null,shape:["default","rect","circle","poly"]}},article:s,aside:s,audio:{attrs:{src:null,mediagroup:null,crossorigin:["anonymous","use-credentials"],preload:["none","metadata","auto"],autoplay:["autoplay"],loop:["loop"],controls:["controls"]}},b:s,base:{attrs:{href:null,target:b}},bdi:s,bdo:s,blockquote:{attrs:{cite:null}},body:s,br:s,button:{attrs:{form:null,formaction:null,name:null,value:null,autofocus:["autofocus"],disabled:["autofocus"],formenctype:X,formmethod:A,formnovalidate:["novalidate"],formtarget:b,type:["submit","reset","button"]}},canvas:{attrs:{width:null,height:null}},caption:s,center:s,cite:s,code:s,col:{attrs:{span:null}},colgroup:{attrs:{span:null}},command:{attrs:{type:["command","checkbox","radio"],label:null,icon:null,radiogroup:null,command:null,title:null,disabled:["disabled"],checked:["checked"]}},data:{attrs:{value:null}},datagrid:{attrs:{disabled:["disabled"],multiple:["multiple"]}},datalist:{attrs:{data:null}},dd:s,del:{attrs:{cite:null,datetime:null}},details:{attrs:{open:["open"]}},dfn:s,div:s,dl:s,dt:s,em:s,embed:{attrs:{src:null,type:null,width:null,height:null}},eventsource:{attrs:{src:null}},fieldset:{attrs:{disabled:["disabled"],form:null,name:null}},figcaption:s,figure:s,footer:s,form:{attrs:{action:null,name:null,"accept-charset":Q,autocomplete:["on","off"],enctype:X,method:A,novalidate:["novalidate"],target:b}},h1:s,h2:s,h3:s,h4:s,h5:s,h6:s,head:{children:["title","base","link","style","meta","script","noscript","command"]},header:s,hgroup:s,hr:s,html:{attrs:{manifest:null}},i:s,iframe:{attrs:{src:null,srcdoc:null,name:null,width:null,height:null,sandbox:["allow-top-navigation","allow-same-origin","allow-forms","allow-scripts"],seamless:["seamless"]}},img:{attrs:{alt:null,src:null,ismap:null,usemap:null,width:null,height:null,crossorigin:["anonymous","use-credentials"]}},input:{attrs:{alt:null,dirname:null,form:null,formaction:null,height:null,list:null,max:null,maxlength:null,min:null,name:null,pattern:null,placeholder:null,size:null,src:null,step:null,value:null,width:null,accept:["audio/*","video/*","image/*"],autocomplete:["on","off"],autofocus:["autofocus"],checked:["checked"],disabled:["disabled"],formenctype:X,formmethod:A,formnovalidate:["novalidate"],formtarget:b,multiple:["multiple"],readonly:["readonly"],required:["required"],type:["hidden","text","search","tel","url","email","password","datetime","date","month","week","time","datetime-local","number","range","color","checkbox","radio","file","submit","image","reset","button"]}},ins:{attrs:{cite:null,datetime:null}},kbd:s,keygen:{attrs:{challenge:null,form:null,name:null,autofocus:["autofocus"],disabled:["disabled"],keytype:["RSA"]}},label:{attrs:{for:null,form:null}},legend:s,li:{attrs:{value:null}},link:{attrs:{href:null,type:null,hreflang:null,media:null,sizes:["all","16x16","16x16 32x32","16x16 32x32 
64x64"]}},map:{attrs:{name:null}},mark:s,menu:{attrs:{label:null,type:["list","context","toolbar"]}},meta:{attrs:{content:null,charset:Q,name:["viewport","application-name","author","description","generator","keywords"],"http-equiv":["content-language","content-type","default-style","refresh"]}},meter:{attrs:{value:null,min:null,low:null,high:null,max:null,optimum:null}},nav:s,noscript:s,object:{attrs:{data:null,type:null,name:null,usemap:null,form:null,width:null,height:null,typemustmatch:["typemustmatch"]}},ol:{attrs:{reversed:["reversed"],start:null,type:["1","a","A","i","I"]},children:["li","script","template","ul","ol"]},optgroup:{attrs:{disabled:["disabled"],label:null}},option:{attrs:{disabled:["disabled"],label:null,selected:["selected"],value:null}},output:{attrs:{for:null,form:null,name:null}},p:s,param:{attrs:{name:null,value:null}},pre:s,progress:{attrs:{value:null,max:null}},q:{attrs:{cite:null}},rp:s,rt:s,ruby:s,samp:s,script:{attrs:{type:["text/javascript"],src:null,async:["async"],defer:["defer"],charset:Q}},section:s,select:{attrs:{form:null,name:null,size:null,autofocus:["autofocus"],disabled:["disabled"],multiple:["multiple"]}},slot:{attrs:{name:null}},small:s,source:{attrs:{src:null,type:null,media:null}},span:s,strong:s,style:{attrs:{type:["text/css"],media:null,scoped:null}},sub:s,summary:s,sup:s,table:s,tbody:s,td:{attrs:{colspan:null,rowspan:null,headers:null}},template:s,textarea:{attrs:{dirname:null,form:null,maxlength:null,name:null,placeholder:null,rows:null,cols:null,autofocus:["autofocus"],disabled:["disabled"],readonly:["readonly"],required:["required"],wrap:["soft","hard"]}},tfoot:s,th:{attrs:{colspan:null,rowspan:null,headers:null,scope:["row","col","rowgroup","colgroup"]}},thead:s,time:{attrs:{datetime:null}},title:s,tr:s,track:{attrs:{src:null,label:null,default:null,kind:["subtitles","captions","descriptions","chapters","metadata"],srclang:null}},ul:{children:["li","script","template","ul","ol"]},var:s,video:{attrs:{src:null,poster:null,width:null,height:null,crossorigin:["anonymous","use-credentials"],preload:["auto","metadata","none"],autoplay:["autoplay"],mediagroup:["movie"],muted:["muted"],controls:["controls"]}},wbr:s},pe={accesskey:null,class:null,contenteditable:m,contextmenu:null,dir:["ltr","rtl","auto"],draggable:["true","false","auto"],dropzone:["copy","move","link","string:","file:"],hidden:["hidden"],id:null,inert:["inert"],itemid:null,itemprop:null,itemref:null,itemscope:["itemscope"],itemtype:null,lang:["ar","bn","de","en-GB","en-US","es","fr","hi","id","ja","pa","pt","ru","tr","zh"],spellcheck:m,autocorrect:m,autocapitalize:m,style:null,tabindex:null,title:null,translate:["yes","no"],rel:["stylesheet","alternate","author","bookmark","help","license","next","nofollow","noreferrer","prefetch","prev","search","tag"],role:"alert application article banner button cell checkbox complementary contentinfo dialog document feed figure form grid gridcell heading img list listbox listitem main navigation region row rowgroup search switch tab table tabpanel textbox timer".split(" 
"),"aria-activedescendant":null,"aria-atomic":m,"aria-autocomplete":["inline","list","both","none"],"aria-busy":m,"aria-checked":["true","false","mixed","undefined"],"aria-controls":null,"aria-describedby":null,"aria-disabled":m,"aria-dropeffect":null,"aria-expanded":["true","false","undefined"],"aria-flowto":null,"aria-grabbed":["true","false","undefined"],"aria-haspopup":m,"aria-hidden":m,"aria-invalid":["true","false","grammar","spelling"],"aria-label":null,"aria-labelledby":null,"aria-level":null,"aria-live":["off","polite","assertive"],"aria-multiline":m,"aria-multiselectable":m,"aria-owns":null,"aria-posinset":null,"aria-pressed":["true","false","mixed","undefined"],"aria-readonly":m,"aria-relevant":null,"aria-required":m,"aria-selected":["true","false","undefined"],"aria-setsize":null,"aria-sort":["ascending","descending","none","other"],"aria-valuemax":null,"aria-valuemin":null,"aria-valuenow":null,"aria-valuetext":null},ce="beforeunload copy cut dragstart dragover dragleave dragenter dragend drag paste focus blur change click load mousedown mouseenter mouseleave mouseup keydown keyup resize scroll unload".split(" ").map(e=>"on"+e);for(let e of ce)pe[e]=null;class V{constructor(t,l){this.tags=Object.assign(Object.assign({},mt),t),this.globalAttrs=Object.assign(Object.assign({},pe),l),this.allTags=Object.keys(this.tags),this.globalAttrNames=Object.keys(this.globalAttrs)}}V.default=new V;function g(e,t,l=e.length){if(!t)return"";let a=t.firstChild,r=a&&a.getChild("TagName");return r?e.sliceString(r.from,Math.min(r.to,l)):""}function $(e,t=!1){for(let l=e.parent;l;l=l.parent)if(l.name=="Element")if(t)t=!1;else return l;return null}function de(e,t,l){let a=l.tags[g(e,$(t,!0))];return a?.children||l.allTags}function E(e,t){let l=[];for(let a=t;a=$(a);){let r=g(e,a);if(r&&a.lastChild.name=="CloseTag")break;r&&l.indexOf(r)<0&&(t.name=="EndTag"||t.from>=a.firstChild.to)&&l.push(r)}return l}const fe=/^[:\-\.\w\u00b7-\uffff]*$/;function j(e,t,l,a,r){let n=/\s*>/.test(e.sliceDoc(r,r+5))?"":">";return{from:a,to:r,options:de(e.doc,l,t).map(o=>({label:o,type:"type"})).concat(E(e.doc,l).map((o,O)=>({label:"/"+o,apply:"/"+o+n,type:"type",boost:99-O}))),validFor:/^\/?[:\-\.\w\u00b7-\uffff]*$/}}function J(e,t,l,a){let r=/\s*>/.test(e.sliceDoc(a,a+5))?"":">";return{from:l,to:a,options:E(e.doc,t).map((n,o)=>({label:n,apply:n+r,type:"type",boost:99-o})),validFor:fe}}function St(e,t,l,a){let r=[],n=0;for(let o of de(e.doc,l,t))r.push({label:"<"+o,type:"type"});for(let o of E(e.doc,l))r.push({label:"",type:"type",boost:99-n++});return{from:a,to:a,options:r,validFor:/^<\/?[:\-\.\w\u00b7-\uffff]*$/}}function gt(e,t,l,a,r){let n=$(l),o=n?t.tags[g(e.doc,n)]:null,O=o&&o.attrs?Object.keys(o.attrs):[],p=o&&o.globalAttrs===!1?O:O.length?O.concat(t.globalAttrNames):t.globalAttrNames;return{from:a,to:r,options:p.map(h=>({label:h,type:"property"})),validFor:fe}}function Pt(e,t,l,a,r){var n;let o=(n=l.parent)===null||n===void 0?void 0:n.getChild("AttributeName"),O=[],p;if(o){let h=e.sliceDoc(o.from,o.to),i=t.globalAttrs[h];if(!i){let u=$(l),c=u?t.tags[g(e.doc,u)]:null;i=c?.attrs&&c.attrs[h]}if(i){let u=e.sliceDoc(a,r).toLowerCase(),c='"',d='"';/^['"]/.test(u)?(p=u[0]=='"'?/^[^"]*$/:/^[^']*$/,c="",d=e.sliceDoc(r,r+1)==u[0]?"":u[0],u=u.slice(1),a++):p=/^[^\s<>='"]*$/;for(let f of i)O.push({label:f,apply:c+f+d,type:"constant"})}}return{from:a,to:r,options:O,validFor:p}}function he(e,t){let{state:l,pos:a}=t,r=z(l).resolveInner(a),n=r.resolve(a,-1);for(let o=a,O;r==n&&(O=n.childBefore(o));){let 
p=O.lastChild;if(!p||!p.type.isError||p.fromhe(a,r)}const me=[{tag:"script",attrs:e=>e.type=="text/typescript"||e.lang=="ts",parser:we.parser},{tag:"script",attrs:e=>e.type=="text/babel"||e.type=="text/jsx",parser:Ce.parser},{tag:"script",attrs:e=>e.type=="text/typescript-jsx",parser:Qe.parser},{tag:"script",attrs(e){return!e.type||/^(?:text|application)\/(?:x-)?(?:java|ecma)script$|^module$|^$/i.test(e.type)},parser:K.parser},{tag:"style",attrs(e){return(!e.lang||e.lang=="css")&&(!e.type||/^(text\/)?(x-)?(stylesheet|css)$/i.test(e.type))},parser:F.parser}],Se=[{name:"style",parser:F.parser.configure({top:"Styles"})}].concat(ce.map(e=>({name:e,parser:K.parser}))),_=Ve.define({name:"html",parser:ht.configure({props:[xe.add({Element(e){let t=/^(\s*)(<\/)?/.exec(e.textAfter);return e.node.to<=e.pos+t[0].length?e.continue():e.lineIndent(e.node.from)+(t[2]?0:e.unit)},"OpenTag CloseTag SelfClosingTag"(e){return e.column(e.node.from)+e.unit},Document(e){if(e.pos+/\s*/.exec(e.textAfter)[0].lengthe.getChild("TagName")})],wrap:ue(me,Se)}),languageData:{commentTokens:{block:{open:""}},indentOnInput:/^\s*<\/\w+\W$/,wordChars:"-._"}});function Xt(e={}){let t="",l;e.matchClosingTags===!1&&(t="noMatch"),e.selfClosingTags===!0&&(t=(t?t+" ":"")+"selfClosing"),(e.nestedLanguages&&e.nestedLanguages.length||e.nestedAttributes&&e.nestedAttributes.length)&&(l=ue((e.nestedLanguages||[]).concat(me),(e.nestedAttributes||[]).concat(Se)));let a=l||t?_.configure({dialect:t,wrap:l}):_;return new ve(a,[_.data.of({autocomplete:Tt(e)}),e.autoCloseTags!==!1?bt:[],Ae().support,$e().support])}const L=new Set("area base br col command embed frame hr img input keygen link meta param source track wbr menuitem".split(" ")),bt=qe.inputHandler.of((e,t,l,a)=>{if(e.composing||e.state.readOnly||t!=l||a!=">"&&a!="/"||!_.isActiveAt(e.state,t,-1))return!1;let{state:r}=e,n=r.changeByRange(o=>{var O,p,h;let{head:i}=o,u=z(r).resolveInner(i,-1),c;if((u.name=="TagName"||u.name=="StartTag")&&(u=u.parent),a==">"&&u.name=="OpenTag"){if(((p=(O=u.parent)===null||O===void 0?void 0:O.lastChild)===null||p===void 0?void 0:p.name)!="CloseTag"&&(c=g(r.doc,u.parent,i))&&!L.has(c)){let d=e.state.doc.sliceString(i,i+1)===">",f=`${d?"":">"}`;return{range:G.cursor(i+1),changes:{from:i+(d?1:0),insert:f}}}}else if(a=="/"&&u.name=="OpenTag"){let d=u.parent,f=d?.parent;if(d.from==i-1&&((h=f.lastChild)===null||h===void 0?void 0:h.name)!="CloseTag"&&(c=g(r.doc,f,i))&&!L.has(c)){let P=e.state.doc.sliceString(i,i+1)===">",T=`/${c}${P?"":">"}`,x=i+T.length+(P?1:0);return{range:G.cursor(x),changes:{from:i,insert:T}}}}return{range:o}});return n.changes.empty?!1:(e.dispatch(n,{userEvent:"input.type",scrollIntoView:!0}),!0)});export{bt as autoCloseTags,Xt as html,At as htmlCompletionSource,Tt as htmlCompletionSourceWith,_ as htmlLanguage}; -//# sourceMappingURL=index-e60153e4.js.map diff --git a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_onnx_stable_diffusion_inpaint_legacy.py b/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_onnx_stable_diffusion_inpaint_legacy.py deleted file mode 100644 index 5cb3abb4f54e9ae00107bc3354deb1b80d642c9b..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_onnx_stable_diffusion_inpaint_legacy.py +++ /dev/null @@ -1,460 +0,0 @@ -import inspect -from typing import Callable, List, Optional, Union - -import numpy as np -import PIL -import torch -from transformers import 
CLIPImageProcessor, CLIPTokenizer - -from ...configuration_utils import FrozenDict -from ...schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler -from ...utils import deprecate, logging -from ..onnx_utils import ORT_TO_NP_TYPE, OnnxRuntimeModel -from ..pipeline_utils import DiffusionPipeline -from . import StableDiffusionPipelineOutput - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - - -def preprocess(image): - w, h = image.size - w, h = (x - x % 32 for x in (w, h)) # resize to integer multiple of 32 - image = image.resize((w, h), resample=PIL.Image.LANCZOS) - image = np.array(image).astype(np.float32) / 255.0 - image = image[None].transpose(0, 3, 1, 2) - return 2.0 * image - 1.0 - - -def preprocess_mask(mask, scale_factor=8): - mask = mask.convert("L") - w, h = mask.size - w, h = (x - x % 32 for x in (w, h)) # resize to integer multiple of 32 - mask = mask.resize((w // scale_factor, h // scale_factor), resample=PIL.Image.NEAREST) - mask = np.array(mask).astype(np.float32) / 255.0 - mask = np.tile(mask, (4, 1, 1)) - mask = mask[None].transpose(0, 1, 2, 3) # what does this step do? - mask = 1 - mask # repaint white, keep black - return mask - - -class OnnxStableDiffusionInpaintPipelineLegacy(DiffusionPipeline): - r""" - Pipeline for text-guided image inpainting using Stable Diffusion. This is a *legacy feature* for Onnx pipelines to - provide compatibility with StableDiffusionInpaintPipelineLegacy and may be removed in the future. - - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the - library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) - - Args: - vae ([`AutoencoderKL`]): - Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. - text_encoder ([`CLIPTextModel`]): - Frozen text-encoder. Stable Diffusion uses the text portion of - [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically - the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant. - tokenizer (`CLIPTokenizer`): - Tokenizer of class - [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). - unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents. - scheduler ([`SchedulerMixin`]): - A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of - [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. - safety_checker ([`StableDiffusionSafetyChecker`]): - Classification module that estimates whether generated images could be considered offensive or harmful. - Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details. - feature_extractor ([`CLIPImageProcessor`]): - Model that extracts features from generated images to be used as inputs for the `safety_checker`. 
- """ - _optional_components = ["safety_checker", "feature_extractor"] - - vae_encoder: OnnxRuntimeModel - vae_decoder: OnnxRuntimeModel - text_encoder: OnnxRuntimeModel - tokenizer: CLIPTokenizer - unet: OnnxRuntimeModel - scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler] - safety_checker: OnnxRuntimeModel - feature_extractor: CLIPImageProcessor - - def __init__( - self, - vae_encoder: OnnxRuntimeModel, - vae_decoder: OnnxRuntimeModel, - text_encoder: OnnxRuntimeModel, - tokenizer: CLIPTokenizer, - unet: OnnxRuntimeModel, - scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler], - safety_checker: OnnxRuntimeModel, - feature_extractor: CLIPImageProcessor, - requires_safety_checker: bool = True, - ): - super().__init__() - - if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1: - deprecation_message = ( - f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`" - f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure " - "to update the config accordingly as leaving `steps_offset` might led to incorrect results" - " in future versions. If you have downloaded this checkpoint from the Hugging Face Hub," - " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`" - " file" - ) - deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False) - new_config = dict(scheduler.config) - new_config["steps_offset"] = 1 - scheduler._internal_dict = FrozenDict(new_config) - - if hasattr(scheduler.config, "clip_sample") and scheduler.config.clip_sample is True: - deprecation_message = ( - f"The configuration file of this scheduler: {scheduler} has not set the configuration `clip_sample`." - " `clip_sample` should be set to False in the configuration file. Please make sure to update the" - " config accordingly as not setting `clip_sample` in the config might lead to incorrect results in" - " future versions. If you have downloaded this checkpoint from the Hugging Face Hub, it would be very" - " nice if you could open a Pull request for the `scheduler/scheduler_config.json` file" - ) - deprecate("clip_sample not set", "1.0.0", deprecation_message, standard_warn=False) - new_config = dict(scheduler.config) - new_config["clip_sample"] = False - scheduler._internal_dict = FrozenDict(new_config) - - if safety_checker is None and requires_safety_checker: - logger.warning( - f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure" - " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered" - " results in services or applications open to the public. Both the diffusers team and Hugging Face" - " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling" - " it only for use-cases that involve analyzing network behavior or auditing its results. For more" - " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ." - ) - - if safety_checker is not None and feature_extractor is None: - raise ValueError( - "Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety" - " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead." 
- ) - - self.register_modules( - vae_encoder=vae_encoder, - vae_decoder=vae_decoder, - text_encoder=text_encoder, - tokenizer=tokenizer, - unet=unet, - scheduler=scheduler, - safety_checker=safety_checker, - feature_extractor=feature_extractor, - ) - self.register_to_config(requires_safety_checker=requires_safety_checker) - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_onnx_stable_diffusion.OnnxStableDiffusionPipeline._encode_prompt - def _encode_prompt(self, prompt, num_images_per_prompt, do_classifier_free_guidance, negative_prompt): - r""" - Encodes the prompt into text encoder hidden states. - - Args: - prompt (`str` or `List[str]`): - prompt to be encoded - num_images_per_prompt (`int`): - number of images that should be generated per prompt - do_classifier_free_guidance (`bool`): - whether to use classifier free guidance or not - negative_prompt (`str` or `List[str]`): - The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored - if `guidance_scale` is less than `1`). - """ - batch_size = len(prompt) if isinstance(prompt, list) else 1 - - # get prompt text embeddings - text_inputs = self.tokenizer( - prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - truncation=True, - return_tensors="np", - ) - text_input_ids = text_inputs.input_ids - untruncated_ids = self.tokenizer(prompt, padding="max_length", return_tensors="np").input_ids - - if not np.array_equal(text_input_ids, untruncated_ids): - removed_text = self.tokenizer.batch_decode(untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1]) - logger.warning( - "The following part of your input was truncated because CLIP can only handle sequences up to" - f" {self.tokenizer.model_max_length} tokens: {removed_text}" - ) - - prompt_embeds = self.text_encoder(input_ids=text_input_ids.astype(np.int32))[0] - prompt_embeds = np.repeat(prompt_embeds, num_images_per_prompt, axis=0) - - # get unconditional embeddings for classifier free guidance - if do_classifier_free_guidance: - uncond_tokens: List[str] - if negative_prompt is None: - uncond_tokens = [""] * batch_size - elif type(prompt) is not type(negative_prompt): - raise TypeError( - f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" - f" {type(prompt)}." - ) - elif isinstance(negative_prompt, str): - uncond_tokens = [negative_prompt] * batch_size - elif batch_size != len(negative_prompt): - raise ValueError( - f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" - f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" - " the batch size of `prompt`." - ) - else: - uncond_tokens = negative_prompt - - max_length = text_input_ids.shape[-1] - uncond_input = self.tokenizer( - uncond_tokens, - padding="max_length", - max_length=max_length, - truncation=True, - return_tensors="np", - ) - negative_prompt_embeds = self.text_encoder(input_ids=uncond_input.input_ids.astype(np.int32))[0] - negative_prompt_embeds = np.repeat(negative_prompt_embeds, num_images_per_prompt, axis=0) - - # For classifier free guidance, we need to do two forward passes. 
- # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - prompt_embeds = np.concatenate([negative_prompt_embeds, prompt_embeds]) - - return prompt_embeds - - def __call__( - self, - prompt: Union[str, List[str]], - image: Union[np.ndarray, PIL.Image.Image] = None, - mask_image: Union[np.ndarray, PIL.Image.Image] = None, - strength: float = 0.8, - num_inference_steps: Optional[int] = 50, - guidance_scale: Optional[float] = 7.5, - negative_prompt: Optional[Union[str, List[str]]] = None, - num_images_per_prompt: Optional[int] = 1, - eta: Optional[float] = 0.0, - generator: Optional[np.random.RandomState] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - callback: Optional[Callable[[int, int, np.ndarray], None]] = None, - callback_steps: int = 1, - ): - r""" - Function invoked when calling the pipeline for generation. - - Args: - prompt (`str` or `List[str]`): - The prompt or prompts to guide the image generation. - image (`nd.ndarray` or `PIL.Image.Image`): - `Image`, or tensor representing an image batch, that will be used as the starting point for the - process. This is the image whose masked region will be inpainted. - mask_image (`nd.ndarray` or `PIL.Image.Image`): - `Image`, or tensor representing an image batch, to mask `image`. White pixels in the mask will be - replaced by noise and therefore repainted, while black pixels will be preserved. If `mask_image` is a - PIL image, it will be converted to a single channel (luminance) before use. If it's a tensor, it should - contain one color channel (L) instead of 3, so the expected shape would be `(B, H, W, 1)`.uu - strength (`float`, *optional*, defaults to 0.8): - Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1. `image` - will be used as a starting point, adding more noise to it the larger the `strength`. The number of - denoising steps depends on the amount of noise initially added. When `strength` is 1, added noise will - be maximum and the denoising process will run for the full number of iterations specified in - `num_inference_steps`. A value of 1, therefore, essentially ignores `image`. - num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. This parameter will be modulated by `strength`. - guidance_scale (`float`, *optional*, defaults to 7.5): - Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). - `guidance_scale` is defined as `w` of equation 2. of [Imagen - Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > - 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, - usually at the expense of lower image quality. - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored - if `guidance_scale` is less than `1`). - num_images_per_prompt (`int`, *optional*, defaults to 1): - The number of images to generate per prompt. - eta (`float`, *optional*, defaults to 0.0): - Corresponds to parameter eta (?) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to - [`schedulers.DDIMScheduler`], will be ignored for others. 
- generator (`np.random.RandomState`, *optional*): - A np.random.RandomState to make generation deterministic. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generate image. Choose between - [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a - plain tuple. - callback (`Callable`, *optional*): - A function that will be called every `callback_steps` steps during inference. The function will be - called with the following arguments: `callback(step: int, timestep: int, latents: np.ndarray)`. - callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function will be called. If not specified, the callback will be - called at every step. - - Returns: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple. - When returning a tuple, the first element is a list with the generated images, and the second element is a - list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" - (nsfw) content, according to the `safety_checker`. - """ - if isinstance(prompt, str): - batch_size = 1 - elif isinstance(prompt, list): - batch_size = len(prompt) - else: - raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") - - if strength < 0 or strength > 1: - raise ValueError(f"The value of strength should in [0.0, 1.0] but is {strength}") - - if (callback_steps is None) or ( - callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0) - ): - raise ValueError( - f"`callback_steps` has to be a positive integer but is {callback_steps} of type" - f" {type(callback_steps)}." - ) - - if generator is None: - generator = np.random - - # set timesteps - self.scheduler.set_timesteps(num_inference_steps) - - if isinstance(image, PIL.Image.Image): - image = preprocess(image) - - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. 
- do_classifier_free_guidance = guidance_scale > 1.0 - - prompt_embeds = self._encode_prompt( - prompt, num_images_per_prompt, do_classifier_free_guidance, negative_prompt - ) - - latents_dtype = prompt_embeds.dtype - image = image.astype(latents_dtype) - - # encode the init image into latents and scale the latents - init_latents = self.vae_encoder(sample=image)[0] - init_latents = 0.18215 * init_latents - - # Expand init_latents for batch_size and num_images_per_prompt - init_latents = np.concatenate([init_latents] * num_images_per_prompt, axis=0) - init_latents_orig = init_latents - - # preprocess mask - if not isinstance(mask_image, np.ndarray): - mask_image = preprocess_mask(mask_image, 8) - mask_image = mask_image.astype(latents_dtype) - mask = np.concatenate([mask_image] * num_images_per_prompt, axis=0) - - # check sizes - if not mask.shape == init_latents.shape: - raise ValueError("The mask and image should be the same size!") - - # get the original timestep using init_timestep - offset = self.scheduler.config.get("steps_offset", 0) - init_timestep = int(num_inference_steps * strength) + offset - init_timestep = min(init_timestep, num_inference_steps) - - timesteps = self.scheduler.timesteps.numpy()[-init_timestep] - timesteps = np.array([timesteps] * batch_size * num_images_per_prompt) - - # add noise to latents using the timesteps - noise = generator.randn(*init_latents.shape).astype(latents_dtype) - init_latents = self.scheduler.add_noise( - torch.from_numpy(init_latents), torch.from_numpy(noise), torch.from_numpy(timesteps) - ) - init_latents = init_latents.numpy() - - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - # eta (?) is only used with the DDIMScheduler, it will be ignored for other schedulers. - # eta corresponds to ? 
in DDIM paper: https://arxiv.org/abs/2010.02502 - # and should be between [0, 1] - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - latents = init_latents - - t_start = max(num_inference_steps - init_timestep + offset, 0) - timesteps = self.scheduler.timesteps[t_start:].numpy() - timestep_dtype = next( - (input.type for input in self.unet.model.get_inputs() if input.name == "timestep"), "tensor(float)" - ) - timestep_dtype = ORT_TO_NP_TYPE[timestep_dtype] - - for i, t in enumerate(self.progress_bar(timesteps)): - # expand the latents if we are doing classifier free guidance - latent_model_input = np.concatenate([latents] * 2) if do_classifier_free_guidance else latents - latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) - - # predict the noise residual - timestep = np.array([t], dtype=timestep_dtype) - noise_pred = self.unet(sample=latent_model_input, timestep=timestep, encoder_hidden_states=prompt_embeds)[ - 0 - ] - - # perform guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = np.split(noise_pred, 2) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step( - torch.from_numpy(noise_pred), t, torch.from_numpy(latents), **extra_step_kwargs - ).prev_sample - - latents = latents.numpy() - - init_latents_proper = self.scheduler.add_noise( - torch.from_numpy(init_latents_orig), torch.from_numpy(noise), torch.from_numpy(np.array([t])) - ) - - init_latents_proper = init_latents_proper.numpy() - - latents = (init_latents_proper * mask) + (latents * (1 - mask)) - - # call the callback, if provided - if callback is not None and i % callback_steps == 0: - callback(i, t, latents) - - latents = 1 / 0.18215 * latents - # image = self.vae_decoder(latent_sample=latents)[0] - # it seems likes there is a strange result for using half-precision vae decoder if batchsize>1 - image = np.concatenate( - [self.vae_decoder(latent_sample=latents[i : i + 1])[0] for i in range(latents.shape[0])] - ) - - image = np.clip(image / 2 + 0.5, 0, 1) - image = image.transpose((0, 2, 3, 1)) - - if self.safety_checker is not None: - safety_checker_input = self.feature_extractor( - self.numpy_to_pil(image), return_tensors="np" - ).pixel_values.astype(image.dtype) - # There will throw an error if use safety_checker batchsize>1 - images, has_nsfw_concept = [], [] - for i in range(image.shape[0]): - image_i, has_nsfw_concept_i = self.safety_checker( - clip_input=safety_checker_input[i : i + 1], images=image[i : i + 1] - ) - images.append(image_i) - has_nsfw_concept.append(has_nsfw_concept_i[0]) - image = np.concatenate(images) - else: - has_nsfw_concept = None - - if output_type == "pil": - image = self.numpy_to_pil(image) - - if not return_dict: - return (image, has_nsfw_concept) - - return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept) diff --git a/spaces/declare-lab/tango/diffusers/tests/schedulers/test_scheduler_lms.py b/spaces/declare-lab/tango/diffusers/tests/schedulers/test_scheduler_lms.py deleted file mode 100644 index ca3574e9ee638546d313e5256feba804522da65b..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/tests/schedulers/test_scheduler_lms.py +++ /dev/null @@ -1,115 +0,0 @@ -import torch - -from diffusers import LMSDiscreteScheduler -from diffusers.utils import 
torch_device - -from .test_schedulers import SchedulerCommonTest - - -class LMSDiscreteSchedulerTest(SchedulerCommonTest): - scheduler_classes = (LMSDiscreteScheduler,) - num_inference_steps = 10 - - def get_scheduler_config(self, **kwargs): - config = { - "num_train_timesteps": 1100, - "beta_start": 0.0001, - "beta_end": 0.02, - "beta_schedule": "linear", - } - - config.update(**kwargs) - return config - - def test_timesteps(self): - for timesteps in [10, 50, 100, 1000]: - self.check_over_configs(num_train_timesteps=timesteps) - - def test_betas(self): - for beta_start, beta_end in zip([0.00001, 0.0001, 0.001], [0.0002, 0.002, 0.02]): - self.check_over_configs(beta_start=beta_start, beta_end=beta_end) - - def test_schedules(self): - for schedule in ["linear", "scaled_linear"]: - self.check_over_configs(beta_schedule=schedule) - - def test_prediction_type(self): - for prediction_type in ["epsilon", "v_prediction"]: - self.check_over_configs(prediction_type=prediction_type) - - def test_time_indices(self): - for t in [0, 500, 800]: - self.check_over_forward(time_step=t) - - def test_full_loop_no_noise(self): - scheduler_class = self.scheduler_classes[0] - scheduler_config = self.get_scheduler_config() - scheduler = scheduler_class(**scheduler_config) - - scheduler.set_timesteps(self.num_inference_steps) - - model = self.dummy_model() - sample = self.dummy_sample_deter * scheduler.init_noise_sigma - - for i, t in enumerate(scheduler.timesteps): - sample = scheduler.scale_model_input(sample, t) - - model_output = model(sample, t) - - output = scheduler.step(model_output, t, sample) - sample = output.prev_sample - - result_sum = torch.sum(torch.abs(sample)) - result_mean = torch.mean(torch.abs(sample)) - - assert abs(result_sum.item() - 1006.388) < 1e-2 - assert abs(result_mean.item() - 1.31) < 1e-3 - - def test_full_loop_with_v_prediction(self): - scheduler_class = self.scheduler_classes[0] - scheduler_config = self.get_scheduler_config(prediction_type="v_prediction") - scheduler = scheduler_class(**scheduler_config) - - scheduler.set_timesteps(self.num_inference_steps) - - model = self.dummy_model() - sample = self.dummy_sample_deter * scheduler.init_noise_sigma - - for i, t in enumerate(scheduler.timesteps): - sample = scheduler.scale_model_input(sample, t) - - model_output = model(sample, t) - - output = scheduler.step(model_output, t, sample) - sample = output.prev_sample - - result_sum = torch.sum(torch.abs(sample)) - result_mean = torch.mean(torch.abs(sample)) - - assert abs(result_sum.item() - 0.0017) < 1e-2 - assert abs(result_mean.item() - 2.2676e-06) < 1e-3 - - def test_full_loop_device(self): - scheduler_class = self.scheduler_classes[0] - scheduler_config = self.get_scheduler_config() - scheduler = scheduler_class(**scheduler_config) - - scheduler.set_timesteps(self.num_inference_steps, device=torch_device) - - model = self.dummy_model() - sample = self.dummy_sample_deter * scheduler.init_noise_sigma - sample = sample.to(torch_device) - - for i, t in enumerate(scheduler.timesteps): - sample = scheduler.scale_model_input(sample, t) - - model_output = model(sample, t) - - output = scheduler.step(model_output, t, sample) - sample = output.prev_sample - - result_sum = torch.sum(torch.abs(sample)) - result_mean = torch.mean(torch.abs(sample)) - - assert abs(result_sum.item() - 1006.388) < 1e-2 - assert abs(result_mean.item() - 1.31) < 1e-3 diff --git a/spaces/declare-lab/tango/diffusers/tests/schedulers/test_scheduler_score_sde_ve.py 
b/spaces/declare-lab/tango/diffusers/tests/schedulers/test_scheduler_score_sde_ve.py deleted file mode 100644 index 08c30f9b1e0c2ce1f7baab82f5076efabe465a69..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/tests/schedulers/test_scheduler_score_sde_ve.py +++ /dev/null @@ -1,189 +0,0 @@ -import tempfile -import unittest - -import numpy as np -import torch - -from diffusers import ScoreSdeVeScheduler - - -class ScoreSdeVeSchedulerTest(unittest.TestCase): - # TODO adapt with class SchedulerCommonTest (scheduler needs Numpy Integration) - scheduler_classes = (ScoreSdeVeScheduler,) - forward_default_kwargs = () - - @property - def dummy_sample(self): - batch_size = 4 - num_channels = 3 - height = 8 - width = 8 - - sample = torch.rand((batch_size, num_channels, height, width)) - - return sample - - @property - def dummy_sample_deter(self): - batch_size = 4 - num_channels = 3 - height = 8 - width = 8 - - num_elems = batch_size * num_channels * height * width - sample = torch.arange(num_elems) - sample = sample.reshape(num_channels, height, width, batch_size) - sample = sample / num_elems - sample = sample.permute(3, 0, 1, 2) - - return sample - - def dummy_model(self): - def model(sample, t, *args): - return sample * t / (t + 1) - - return model - - def get_scheduler_config(self, **kwargs): - config = { - "num_train_timesteps": 2000, - "snr": 0.15, - "sigma_min": 0.01, - "sigma_max": 1348, - "sampling_eps": 1e-5, - } - - config.update(**kwargs) - return config - - def check_over_configs(self, time_step=0, **config): - kwargs = dict(self.forward_default_kwargs) - - for scheduler_class in self.scheduler_classes: - sample = self.dummy_sample - residual = 0.1 * sample - - scheduler_config = self.get_scheduler_config(**config) - scheduler = scheduler_class(**scheduler_config) - - with tempfile.TemporaryDirectory() as tmpdirname: - scheduler.save_config(tmpdirname) - new_scheduler = scheduler_class.from_pretrained(tmpdirname) - - output = scheduler.step_pred( - residual, time_step, sample, generator=torch.manual_seed(0), **kwargs - ).prev_sample - new_output = new_scheduler.step_pred( - residual, time_step, sample, generator=torch.manual_seed(0), **kwargs - ).prev_sample - - assert torch.sum(torch.abs(output - new_output)) < 1e-5, "Scheduler outputs are not identical" - - output = scheduler.step_correct(residual, sample, generator=torch.manual_seed(0), **kwargs).prev_sample - new_output = new_scheduler.step_correct( - residual, sample, generator=torch.manual_seed(0), **kwargs - ).prev_sample - - assert torch.sum(torch.abs(output - new_output)) < 1e-5, "Scheduler correction are not identical" - - def check_over_forward(self, time_step=0, **forward_kwargs): - kwargs = dict(self.forward_default_kwargs) - kwargs.update(forward_kwargs) - - for scheduler_class in self.scheduler_classes: - sample = self.dummy_sample - residual = 0.1 * sample - - scheduler_config = self.get_scheduler_config() - scheduler = scheduler_class(**scheduler_config) - - with tempfile.TemporaryDirectory() as tmpdirname: - scheduler.save_config(tmpdirname) - new_scheduler = scheduler_class.from_pretrained(tmpdirname) - - output = scheduler.step_pred( - residual, time_step, sample, generator=torch.manual_seed(0), **kwargs - ).prev_sample - new_output = new_scheduler.step_pred( - residual, time_step, sample, generator=torch.manual_seed(0), **kwargs - ).prev_sample - - assert torch.sum(torch.abs(output - new_output)) < 1e-5, "Scheduler outputs are not identical" - - output = scheduler.step_correct(residual, 
sample, generator=torch.manual_seed(0), **kwargs).prev_sample - new_output = new_scheduler.step_correct( - residual, sample, generator=torch.manual_seed(0), **kwargs - ).prev_sample - - assert torch.sum(torch.abs(output - new_output)) < 1e-5, "Scheduler correction are not identical" - - def test_timesteps(self): - for timesteps in [10, 100, 1000]: - self.check_over_configs(num_train_timesteps=timesteps) - - def test_sigmas(self): - for sigma_min, sigma_max in zip([0.0001, 0.001, 0.01], [1, 100, 1000]): - self.check_over_configs(sigma_min=sigma_min, sigma_max=sigma_max) - - def test_time_indices(self): - for t in [0.1, 0.5, 0.75]: - self.check_over_forward(time_step=t) - - def test_full_loop_no_noise(self): - kwargs = dict(self.forward_default_kwargs) - - scheduler_class = self.scheduler_classes[0] - scheduler_config = self.get_scheduler_config() - scheduler = scheduler_class(**scheduler_config) - - num_inference_steps = 3 - - model = self.dummy_model() - sample = self.dummy_sample_deter - - scheduler.set_sigmas(num_inference_steps) - scheduler.set_timesteps(num_inference_steps) - generator = torch.manual_seed(0) - - for i, t in enumerate(scheduler.timesteps): - sigma_t = scheduler.sigmas[i] - - for _ in range(scheduler.config.correct_steps): - with torch.no_grad(): - model_output = model(sample, sigma_t) - sample = scheduler.step_correct(model_output, sample, generator=generator, **kwargs).prev_sample - - with torch.no_grad(): - model_output = model(sample, sigma_t) - - output = scheduler.step_pred(model_output, t, sample, generator=generator, **kwargs) - sample, _ = output.prev_sample, output.prev_sample_mean - - result_sum = torch.sum(torch.abs(sample)) - result_mean = torch.mean(torch.abs(sample)) - - assert np.isclose(result_sum.item(), 14372758528.0) - assert np.isclose(result_mean.item(), 18714530.0) - - def test_step_shape(self): - kwargs = dict(self.forward_default_kwargs) - - num_inference_steps = kwargs.pop("num_inference_steps", None) - - for scheduler_class in self.scheduler_classes: - scheduler_config = self.get_scheduler_config() - scheduler = scheduler_class(**scheduler_config) - - sample = self.dummy_sample - residual = 0.1 * sample - - if num_inference_steps is not None and hasattr(scheduler, "set_timesteps"): - scheduler.set_timesteps(num_inference_steps) - elif num_inference_steps is not None and not hasattr(scheduler, "set_timesteps"): - kwargs["num_inference_steps"] = num_inference_steps - - output_0 = scheduler.step_pred(residual, 0, sample, generator=torch.manual_seed(0), **kwargs).prev_sample - output_1 = scheduler.step_pred(residual, 1, sample, generator=torch.manual_seed(0), **kwargs).prev_sample - - self.assertEqual(output_0.shape, sample.shape) - self.assertEqual(output_0.shape, output_1.shape) diff --git a/spaces/deepwisdom/MetaGPT/metagpt/tools/web_browser_engine_playwright.py b/spaces/deepwisdom/MetaGPT/metagpt/tools/web_browser_engine_playwright.py deleted file mode 100644 index 8eecc4f403f6413729c73c125826481881c6573f..0000000000000000000000000000000000000000 --- a/spaces/deepwisdom/MetaGPT/metagpt/tools/web_browser_engine_playwright.py +++ /dev/null @@ -1,153 +0,0 @@ -#!/usr/bin/env python -""" -@Modified By: mashenquan, 2023/8/20. Remove global configuration `CONFIG`, enable configuration support for business isolation. 
-""" - -from __future__ import annotations - -import asyncio -import sys -from pathlib import Path -from typing import Literal - -from playwright.async_api import async_playwright - -from metagpt.config import CONFIG -from metagpt.logs import logger -from metagpt.utils.parse_html import WebPage - - -class PlaywrightWrapper: - """Wrapper around Playwright. - - To use this module, you should have the `playwright` Python package installed and ensure that - the required browsers are also installed. You can install playwright by running the command - `pip install metagpt[playwright]` and download the necessary browser binaries by running the - command `playwright install` for the first time. - """ - - def __init__( - self, - browser_type: Literal["chromium", "firefox", "webkit"] | None = None, - launch_kwargs: dict | None = None, - **kwargs, - ) -> None: - if browser_type is None: - browser_type = CONFIG.playwright_browser_type - self.browser_type = browser_type - launch_kwargs = launch_kwargs or {} - if CONFIG.global_proxy and "proxy" not in launch_kwargs: - args = launch_kwargs.get("args", []) - if not any(str.startswith(i, "--proxy-server=") for i in args): - launch_kwargs["proxy"] = {"server": CONFIG.global_proxy} - self.launch_kwargs = launch_kwargs - context_kwargs = {} - if "ignore_https_errors" in kwargs: - context_kwargs["ignore_https_errors"] = kwargs["ignore_https_errors"] - self._context_kwargs = context_kwargs - self._has_run_precheck = False - - async def run(self, url: str, *urls: str) -> WebPage | list[WebPage]: - async with async_playwright() as ap: - browser_type = getattr(ap, self.browser_type) - await self._run_precheck(browser_type) - browser = await browser_type.launch(**self.launch_kwargs) - _scrape = self._scrape - - if urls: - return await asyncio.gather(_scrape(browser, url), *(_scrape(browser, i) for i in urls)) - return await _scrape(browser, url) - - async def _scrape(self, browser, url): - context = await browser.new_context(**self._context_kwargs) - page = await context.new_page() - async with page: - try: - await page.goto(url) - await page.evaluate("window.scrollTo(0, document.body.scrollHeight)") - html = await page.content() - inner_text = await page.evaluate("() => document.body.innerText") - except Exception as e: - inner_text = f"Fail to load page content for {e}" - html = "" - return WebPage(inner_text=inner_text, html=html, url=url) - - async def _run_precheck(self, browser_type): - if self._has_run_precheck: - return - - executable_path = Path(browser_type.executable_path) - if not executable_path.exists() and "executable_path" not in self.launch_kwargs: - kwargs = {} - if CONFIG.global_proxy: - kwargs["env"] = {"ALL_PROXY": CONFIG.global_proxy} - await _install_browsers(self.browser_type, **kwargs) - - if self._has_run_precheck: - return - - if not executable_path.exists(): - parts = executable_path.parts - available_paths = list(Path(*parts[:-3]).glob(f"{self.browser_type}-*")) - if available_paths: - logger.warning( - "It seems that your OS is not officially supported by Playwright. " - "Try to set executable_path to the fallback build version." 
- ) - executable_path = available_paths[0].joinpath(*parts[-2:]) - self.launch_kwargs["executable_path"] = str(executable_path) - self._has_run_precheck = True - - -def _get_install_lock(): - global _install_lock - if _install_lock is None: - _install_lock = asyncio.Lock() - return _install_lock - - -async def _install_browsers(*browsers, **kwargs) -> None: - async with _get_install_lock(): - browsers = [i for i in browsers if i not in _install_cache] - if not browsers: - return - process = await asyncio.create_subprocess_exec( - sys.executable, - "-m", - "playwright", - "install", - *browsers, - # "--with-deps", - stdout=asyncio.subprocess.PIPE, - stderr=asyncio.subprocess.PIPE, - **kwargs, - ) - - await asyncio.gather(_log_stream(process.stdout, logger.info), _log_stream(process.stderr, logger.warning)) - - if await process.wait() == 0: - logger.info("Install browser for playwright successfully.") - else: - logger.warning("Fail to install browser for playwright.") - _install_cache.update(browsers) - - -async def _log_stream(sr, log_func): - while True: - line = await sr.readline() - if not line: - return - log_func(f"[playwright install browser]: {line.decode().strip()}") - - -_install_lock: asyncio.Lock = None -_install_cache = set() - - -if __name__ == "__main__": - import fire - - async def main(url: str, *urls: str, browser_type: str = "chromium", **kwargs): - return await PlaywrightWrapper(browser_type=browser_type, **kwargs).run(url, *urls) - - fire.Fire(main) diff --git a/spaces/denisp1/Streamlit-Grammar-Corrector-Styler/README.md b/spaces/denisp1/Streamlit-Grammar-Corrector-Styler/README.md deleted file mode 100644 index f0baec2dc7b7e753e390520e48c0b483e98a9b43..0000000000000000000000000000000000000000 --- a/spaces/denisp1/Streamlit-Grammar-Corrector-Styler/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: 🔥 Streamlit Grammar Corrector Styler -emoji: 🌀 -colorFrom: purple -colorTo: pink -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/diacanFperku/AutoGPT/Activator For Windows And Office KMS Pico V9.1 Crack LINK.md b/spaces/diacanFperku/AutoGPT/Activator For Windows And Office KMS Pico V9.1 Crack LINK.md deleted file mode 100644 index 03d17b81e91a8ee2105a0d0797dd8d93bd70affb..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Activator For Windows And Office KMS Pico V9.1 Crack LINK.md +++ /dev/null @@ -1,90 +0,0 @@ - -

                Activator for Windows and Office KMS Pico v9.1 crack: How to activate your Microsoft products for free

                - -

If you are looking for a way to activate your Microsoft Windows and Office products without paying a dime, you might have heard of KMS Pico. This is a software tool that activates any version of Windows and Office by emulating Microsoft's KMS (Key Management Service) activation technology. In this article, we will explain what KMS Pico is, how it works, and how you can use it to activate your Microsoft products for free.

                - -

                What is KMS Pico and how does it work?

                - -

                KMS Pico is a software tool that can activate any version of Windows and Office by emulating a genuine KMS server on your local machine. KMS stands for Key Management Service, which is a technology that Microsoft uses to activate its products on large networks of computers.

                -

                Activator for Windows and Office KMS Pico v9.1 crack


                DOWNLOAD ---> https://gohhs.com/2uFUCM



                - -

                When you install Windows or Office on your PC, you need to enter a product key or a license key to activate it. This key is verified by Microsoft through an online server. However, if you are using a volume license edition of Windows or Office, such as in an enterprise or an educational institution, you can activate your products through a local KMS server instead of an online server.

                - -

A KMS server is a computer that runs special software that can generate and validate license keys for Windows and Office products. When you activate your products through a KMS server, you don't need to enter a product key or connect to the internet. You just need to connect to the local network where the KMS server is located.

                - -

                KMS Pico works by creating a virtual KMS server on your PC and forcing your Windows and Office products to activate themselves against it. This way, you can bypass the online verification process and enjoy the full features of your Microsoft products without paying anything.

                - -

                How to use KMS Pico to activate your Windows and Office products?

                - -

                Using KMS Pico to activate your Windows and Office products is very easy and fast. You just need to follow these simple steps:

                - -
                  -
                1. Download the latest version of KMS Pico from the official website: https://www.kmspicoofficial.com/
                2. -
                3. Extract the zip file and run the setup.exe file as administrator.
                4. -
                5. Follow the installation instructions and accept the terms and conditions.
                6. -
                7. Wait for the installation to complete and launch the program.
                8. -
                9. Click on the red button to start the activation process.
                10. -
                11. Wait for a few seconds until you see a green check mark and a message saying "Activation successful".
                12. -
                13. Restart your PC and enjoy your activated Windows and Office products.
                14. -
                - -

Note: You may need to disable your antivirus or firewall before running KMS Pico, as some security programs may detect it as malicious software. However, KMS Pico is 100% safe and clean to use.

                - -

                What are the benefits of using KMS Pico?

                - -

                Using KMS Pico to activate your Windows and Office products has many benefits, such as:

                -

                - -
                  -
                • You can activate any version of Windows and Office, including Windows 7/8/8.1/10/11 and Office 2010/2013/2016/2019/2021.
                • -
                • You can activate your products permanently, without any expiration date or trial period.
                • -
                • You can enjoy all the premium features of your Microsoft products, such as updates, security patches, customization options, etc.
                • -
                • You can save money by not buying expensive license keys or subscriptions.
                • -
                • You can avoid any legal issues or penalties by using genuine activation methods.
                • -
                - -

                Conclusion

                - -

                KMS Pico is a powerful software tool that can activate any version of Windows and Office by using a crack method called KMS. It is easy to use, fast, and reliable. It can help you enjoy all the benefits of your Microsoft products without paying anything. If you want to download KMS Pico and activate your Windows and Office products for free, visit the official website: https://www.kmspicoofficial.com/

                -

                Tips and tricks for using KMS Pico

                - -

                KMS Pico is a simple and effective tool for activating your Windows and Office products, but there are some tips and tricks that can help you use it better and avoid some common problems. Here are some of them:

                - -
                  -
                • Disable your antivirus or firewall before running KMS Pico. Some security programs may detect KMS Pico as a malicious software and block it from running or delete it from your PC. To prevent this, you should disable your antivirus or firewall temporarily before running KMS Pico. You can enable them again after the activation is done.
                • -
                • Run KMS Pico as administrator. To ensure that KMS Pico can access all the necessary files and registry entries to activate your products, you should run it as administrator. To do this, right-click on the KMS Pico icon and select "Run as administrator". This will give KMS Pico the highest level of permission to perform its tasks.
                • -
                • Check your activation status regularly. To make sure that your products are still activated and not expired, you should check your activation status regularly. You can do this by clicking on the "Tokens" tab in KMS Pico and then clicking on the blue square with a big "I" in it. This will show you your system edition and activation status. You can also check your activation status by going to Start > right-click on Computer > Properties.
                • -
                • Update your products after activation. After activating your products with KMS Pico, you can update them normally through Windows Update or Office Update. This will keep your products secure and up-to-date with the latest features and patches. However, you should avoid updating your products to a newer version that is not supported by KMS Pico, as this may cause your activation to be lost.
                • -
                • Use KMS Pico only for testing purposes. KMS Pico is a tool that is intended for testing purposes only, not for commercial or illegal use. By using KMS Pico, you are violating the terms and conditions of Microsoft and may face legal consequences. Therefore, you should use KMS Pico only for testing purposes and buy a genuine license key or subscription if you want to use the products for a long time.
                • -
                - -

                These are some of the tips and tricks that can help you use KMS Pico more effectively and safely. If you have any questions or problems with KMS Pico, you can contact the support team of Solvusoft, the developer of KMS Pico, at support@solvusoft.com or visit their website: https://www.solvusoft.com/en

                -

                Alternatives to KMS Pico

                - -

                KMS Pico is not the only tool that can activate your Windows and Office products using the KMS method. There are other alternatives that you can try if you want to use a different software or have some issues with KMS Pico. Here are some of them:

                - -
                  -
                • KMSAuto Net: This is another popular and reliable KMS activator that can activate any version of Windows and Office. It is a portable tool that does not require installation and has a simple and user-friendly interface. It also has some extra features such as backup and restore of activation, conversion of Office 2016/2019/2021 retail to volume, and activation of Windows 10 LTSC editions. You can download KMSAuto Net from https://kmsauto.net/
                • -
                • Microsoft Toolkit: This is a versatile and multifunctional tool that can activate Windows and Office as well as manage, customize, and optimize them. It can activate any edition of Windows from Vista to 10 and any version of Office from 2010 to 2019. It also has some other features such as creating bootable USB drives, installing or uninstalling product keys, checking activation status, and more. You can download Microsoft Toolkit from https://microsoft-toolkit.com/
                • -
                • py-kms: This is a KMS server emulator written in Python that can activate Windows and Office products on your local network. It can run on any platform that supports Python, such as Windows, Linux, or Mac OS. It can activate any edition of Windows from Vista to 11 and any version of Office from 2010 to 2021. It also supports online activation and renewal of licenses. You can download py-kms from https://github.com/SystemRage/py-kms
                • -
                - -

                These are some of the alternatives to KMS Pico that you can use to activate your Windows and Office products for free using the KMS method. Each one has its own advantages and disadvantages, so you can choose the one that suits your needs best.

                -

                FAQs about KMS Pico

                - -

                KMS Pico is a tool that has raised many questions and doubts among users who want to activate their Windows and Office products for free. Here are some of the most frequently asked questions about KMS Pico and their answers:

                - -
                  -
                • Is KMS Pico safe to use? KMS Pico is safe to use if you download it from a trusted source and follow the instructions carefully. However, there are many fake and malicious versions of KMS Pico on the internet that may contain viruses, trojans, or adware. Therefore, you should always scan the file with an antivirus before running it and disable your antivirus or firewall temporarily while using it.
                • -
                • Is KMS Pico legal to use? KMS Pico is not legal to use as it violates the terms and conditions of Microsoft and infringes their intellectual property rights. By using KMS Pico, you are using a pirated version of Windows and Office that may not be genuine or updated. This may cause legal issues or penalties if you are caught by Microsoft or other authorities.
                • -
                • How long does KMS Pico last? KMS Pico lasts for 180 days after which it needs to be activated again. However, KMS Pico has a built-in feature that runs twice a day and resets the activation counter to zero. This way, you can keep your products activated permanently without any expiration date or trial period.
                • -
                • Does KMS Pico work offline? KMS Pico works offline as it creates a virtual KMS server on your local machine and activates your products against it. You don't need to connect to the internet or any online server to use KMS Pico. However, you may need to connect to the internet once in a while to update your products or check their activation status.
                • -
                • Can I uninstall KMS Pico after activation? You can uninstall KMS Pico after activation if you want to free up some space on your PC or remove any traces of the tool. However, this may affect your activation status and cause your products to become unactivated or invalid. Therefore, it is recommended to keep KMS Pico on your PC and let it run in the background to maintain your activation.
                • -
                - -

                These are some of the FAQs about KMS Pico that you may find useful or informative. If you have any other questions or problems with KMS Pico, you can contact the support team of Solvusoft, the developer of KMS Pico, at support@solvusoft.com or visit their website: https://www.solvusoft.com/en

                -

                Conclusion

                - -

                KMS Pico is a tool that can activate any version of Windows and Office using the KMS method. It is easy to use, fast, and reliable. It can help you enjoy all the benefits of your Microsoft products without paying anything. However, KMS Pico is not a legal or safe tool to use as it violates the terms and conditions of Microsoft and may contain malware or viruses. Therefore, you should use KMS Pico only for testing purposes and buy a genuine license key or subscription if you want to use the products for a long time. If you want to download KMS Pico and activate your Windows and Office products for free, visit the official website: https://www.getkmspico.com/

                -
                -
                \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/HD Online Player (Tevar Movie 720p Kickass Torrent).md b/spaces/diacanFperku/AutoGPT/HD Online Player (Tevar Movie 720p Kickass Torrent).md deleted file mode 100644 index 997ca18471e0a86aac971b21f19dd9003dd37e2f..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/HD Online Player (Tevar Movie 720p Kickass Torrent).md +++ /dev/null @@ -1,12 +0,0 @@ -
                -

                chapai bec3238e82 https://playdownloadonline.com/story/4616064-online-player-bhai-telugu-. They're made with the purpose of softening of the data that is accessible for the information that is relevant to you.

                -

                HD Online Player (Tevar Movie 720p Kickass Torrent)


                Downloadhttps://gohhs.com/2uFUtr



                -

                https://www.cakeresume.com/portfolios/alien-shooter-td-torrent-download-pc. net/kreditkarte-bestellen/dopo-finish-8247. https://www.cakeresume.com/portfolios/alien-shooter-td-torrent-download-pc. net/profile/HD-Online-Player-Download-Hindi-Movie-DHOOM-3-Torrent-HOT/profile

                -

                The three key components of this philosophy of your company are honesty, dedication and accountability. The popularity of the 360|Fusion has created VR scenes that -keygen- Product Design Suite 2013 – 64-Bit- Kickass- Torrent-dorrtal. 0 -train-simulator-albula 80de58dbe1. They're made with the purpose of softening of the data that is accessible for the information that is relevant to you.

                -

                Muzućnosti: Tevar movie 720p kickass torrent. -The-Tevar-Movie-Download-HD-1080p-Kicksass-Torrent. Online-Player-Bangladesh. HD Online Player. 100 dilwale movie download in kickass torrent top.

                -

                See more projects: HD Online Player.. - Online-Player-Tevar-Movie-720p-Kickass-Torrent..com/d/UgllEzIJI. Apollo Glider

                Ecoductor elwflo 7b17bfd26b. Hunterrr-Movie-Download-Hd-1080p-Kickass.pdf. Download Tevar movie 720p kickass torrent.

                -

                thefussballmovie - Meshtir Nisar (Compulsive) - Maargad - Keygen - mhddd. https://coub.com/stories/2101660-download-hd-online-player-tevar-movie-720p-kickass-torrent. -0xe9-client-du-oh-guh-keygen/file.

                -

                chapai b8d0503c82 https://coub.com/stories/1231823-online-player-hd-online-player-download-tevar-movie-720p-kickass-torrent. The 2nd Copa America 2021 semifinal played between Argentina and. I must confess that it is a pleasure to relish the.

                -

                -
                -
                \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/Huawei Hg532e Firmware.md b/spaces/diacanFperku/AutoGPT/Huawei Hg532e Firmware.md deleted file mode 100644 index 0e1fa574212ac627c8ebe391f54988b47a0433aa..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Huawei Hg532e Firmware.md +++ /dev/null @@ -1,70 +0,0 @@ -
                -

                Huawei HG532e Firmware: How to Download and Install It

                - -

                The Huawei HG532e is a wireless modem that can provide high-speed internet access and voice over IP (VoIP) services. It supports ADSL2+, 3G, and Wi-Fi connections, and has four LAN ports and two USB ports. The Huawei HG532e firmware is the software that runs on the modem and controls its functions and features.

                - -

                Updating the Huawei HG532e firmware can improve the performance and stability of the modem, fix some bugs and errors, and add new features and functions. However, updating the firmware can also be risky if not done properly. If the firmware update fails or is interrupted, the modem may become unusable or bricked.

                -

                huawei hg532e firmware


                DOWNLOADhttps://gohhs.com/2uFV7N



                - -

                Therefore, it is important to follow some precautions and steps before and during the firmware update process. In this article, we will show you how to download and install the Huawei HG532e firmware safely and easily.

                - -

                How to Download the Huawei HG532e Firmware

                - -

                The first step to update the Huawei HG532e firmware is to download the latest version of the firmware from a reliable source. You can find the official firmware files on the Huawei Enterprise Support Community website or on the Easy Firmware website.

                - -

                To download the firmware from the Huawei Enterprise Support Community website, you need to register an account and log in. Then, you can search for the Huawei Modem HG532e Firmware Download thread or click on this link: https://forum.huawei.com/enterprise/en/huawei-modem-hg532e-firmware-download/thread/679561-100181. There, you can find the download links for different versions of the firmware according to your region and operator.

                - -

                To download the firmware from the Easy Firmware website, you need to pay a subscription fee or use a free trial account. Then, you can search for the Firmware HUAWEI HG532e page or click on this link: https://easy-firmware.com/solution/en/2020/04/16/firmware-huawei-hg532e/. There, you can find the download links for different versions of the firmware according to your region and operator.

                - -

After downloading the firmware file, you need to extract it using software such as WinRAR or 7-Zip. You will get a folder named dload that contains an UPDATE.APP file. This is the firmware file that you need to copy to your SD card or USB flash drive.

                - -

                How to Install the Huawei HG532e Firmware

                - -

                The second step to update the Huawei HG532e firmware is to install it on your modem using one of these two methods: normal update or forced update.

                -

                - -

                The normal update method is recommended if your modem is working normally and can access the internet. To use this method, you need to follow these steps:

                - -
                  -
                1. Insert your SD card or USB flash drive that contains the dload folder with the UPDATE.APP file into your modem.
                2. -
3. Open your web browser and enter http://192.168.1.1 in the address bar.

                  -
                4. -
                5. Enter your username and password to log in to your modem's web interface. The default username and password are both admin. If you have changed them, use your custom username and password instead.
                6. -
                7. Go to System Tools > Firmware Upgrade.
                8. -
                9. Select Local Upgrade from SD Card or Local Upgrade from USB Storage depending on where you copied the dload folder.
                10. -
                11. Click Browse and select the dload folder with the UPDATE.APP file.
                12. -
                13. Click Upgrade and wait for the process to complete.
                14. -
                15. Do not turn off or disconnect your modem during the upgrade process.
                16. -
                17. When the upgrade is done, your modem will reboot automatically.
                18. -
                - -

                The forced update method is recommended if your modem is not working normally or cannot access the internet. To use this method, you need to follow these steps:

                - -
                  -
                1. Insert your SD card or USB flash drive that contains the dload folder with the UPDATE.APP file into your modem.
                2. -
                3. Turn off your modem by pressing and holding the power button for a few seconds.
                4. -
                5. Press and hold both WPS and Reset buttons on your modem at the same time.
                6. -
                7. While holding both buttons, turn on your modem by pressing and holding the power button for a few seconds.
                8. -
                9. Release all buttons when you see all LED lights flashing on your modem.
                10. -
                11. The upgrade process will start automatically and may take several minutes.
                12. -
                13. Do not turn off or disconnect your modem during the upgrade process.
                14. -
                15. When the upgrade is done, your modem will reboot automatically.
                16. -
                - -

                How to Check Your Huawei HG532e Firmware Version

                - -

                The third step to update the Huawei HG532e firmware is to check if your firmware version has been updated successfully. To do this, you need to follow these steps:

                - -
                  -
1. Open your web browser and enter http://192.168.1.1 in the address bar.
                2. -
                3. Enter your username and password to log in to your modem's web interface. The default username and password are both admin. If you have changed them, use your custom username and password instead.
                4. -
                5. Go to Status > Device Information.
                6. -
                7. Check the Firmware Version field and compare it with the version you downloaded.
                8. -
                9. If they match, congratulations! You have successfully updated your Huawei HG532e firmware.
                10. -
                - -

                Conclusion

                - -

                In this article, we have shown you how to download and install the Huawei HG532e firmware using two methods: normal update and forced update. We have also shown you how to check your firmware version after the update. Updating the Huawei HG532e firmware can improve the performance and stability of your modem, fix some bugs and errors, and add new features and functions. However, updating the firmware can also be risky if not done properly. Therefore, it is important to follow some precautions and steps before and during the firmware update process. We hope this article has been helpful for you. If you have any questions or comments, please feel free to leave them below.

                -
                -
                \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/IVT BlueSoleil 6.2.227.11 32 64bit With Crack [HOT] XP Vista Utorrent.md b/spaces/diacanFperku/AutoGPT/IVT BlueSoleil 6.2.227.11 32 64bit With Crack [HOT] XP Vista Utorrent.md deleted file mode 100644 index 5ed6b6d19bbb22be79b53ca414f229dee81684d6..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/IVT BlueSoleil 6.2.227.11 32 64bit With Crack [HOT] XP Vista Utorrent.md +++ /dev/null @@ -1,6 +0,0 @@ -

                IVT BlueSoleil 6.2.227.11 32 64bit with crack XP Vista utorrent


                Download File >>> https://gohhs.com/2uFVvv



                -
                -
                -

                diff --git a/spaces/diacanFperku/AutoGPT/Loaris Trojan Remover 3.1.15 Crack With Keygen Free Download.md b/spaces/diacanFperku/AutoGPT/Loaris Trojan Remover 3.1.15 Crack With Keygen Free Download.md deleted file mode 100644 index 6abea980583e3cdee68a22bb545b04d8c142cee6..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Loaris Trojan Remover 3.1.15 Crack With Keygen Free Download.md +++ /dev/null @@ -1,10 +0,0 @@ -

                Loaris Trojan Remover 3.1.15 Crack With Keygen Free Download


                Download Zip - https://gohhs.com/2uFVIu



                - -Loaris Trojan Remover 3.2.0.1695 Crack + License Key 2022 Download Loaris Troja n Remover Crack full version is a great app that makes it easy. Loaris Trojan Remover 3.2.0.1695 Crack license key is what you need if you want to protect your PC from Trojans. -Loaris Trojan Remover is a program to remove malware that can infect your computer. -It includes a set of recovery tools. -Loaris Trojan Remover 2. Crack Key. -It includes the 8a78ff9644
                -
                -
                -

                diff --git a/spaces/diacanFperku/AutoGPT/Panasonic Kx Td500 Software Downloadl ((INSTALL)).md b/spaces/diacanFperku/AutoGPT/Panasonic Kx Td500 Software Downloadl ((INSTALL)).md deleted file mode 100644 index 55584849b33d9574797e93cbdd3b6f35939ebe8f..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Panasonic Kx Td500 Software Downloadl ((INSTALL)).md +++ /dev/null @@ -1,45 +0,0 @@ -
                -

                How to Download and Install Panasonic KX-TD500 Software

                -

                Panasonic KX-TD500 is a digital super hybrid system that offers advanced features and functions for business communication. It can support up to 512 extensions and 448 trunks, and can be programmed and managed via PC interface software. In this article, we will show you how to download and install the software for Panasonic KX-TD500.

                -

                Step 1: Download the software from Panasonic website

                -

                The software for Panasonic KX-TD500 consists of four components: Server, Agent, Console and Module Creator. You can download them from the Panasonic website at https://panasonic.net/cns/pcc/support/fax/Central%20Management%20Controller%20software.html. You will also find the Read Me files and the Operator's Guide files that explain the system requirements, installation procedures and usage instructions for each component.

                -

                Panasonic Kx Td500 Software Downloadl


                Download File > https://gohhs.com/2uFTJR



                -

                Step 2: Install the Server component on a PC

                -

                The Server component is the main program that communicates with the Panasonic KX-TD500 system and stores the data and settings. You need to install it on a PC that meets the following requirements:

                -
                  -
                • Operating System: Windows Vista® (32bit / 64bit), Windows® 7 (32bit / 64bit), Windows® 8 (32bit / 64bit), Windows® 10 (32bit / 64bit), Windows Server® 2008 (32bit / 64bit), Windows Server® 2012 (64bit)
                • -
                • CPU: Intel® Core™ i3 or higher
                • -
                • Memory: 4GB or more
                • -
                • HDD: 10GB or more of free space
                • -
                • Network: LAN connection with TCP/IP protocol
                • -
                • Display: Resolution of 1024 x 768 pixels or higher
                • -
                -

                To install the Server component, follow these steps:

                -
                  -
                1. Run the ServerXXXX_Setup.exe file that you downloaded from the Panasonic website.
                2. -
                3. Follow the instructions on the screen to complete the installation.
                4. -
                5. Restart your PC if prompted.
                6. -
                7. Launch the Server program from the Start menu or the desktop shortcut.
                8. -
                9. Enter the IP address and port number of the Panasonic KX-TD500 system in the Server Settings dialog box.
                10. -
                11. Click OK to save the settings and connect to the system.
                12. -
                -

                Step 3: Install the Agent component on each PC that connects to Panasonic KX-TD500

                -

The Agent component is a program that runs in the background on each PC that connects to the Panasonic KX-TD500 via LAN. It collects and sends status and unit information for each Multi-Function Printer and PC on the same network. You need to install it on each PC that meets the following requirements:

                -
                  -
                • Operating System: Windows® XP (32bit / 64bit), Windows Vista® (32bit / 64bit), Windows® 7 (32bit / 64bit), Windows® 8 (32bit / 64bit), Windows® 10 (32bit / 64bit), Windows Server® 2008 (32bit / 64bit), Windows Server® 2012 (64bit)
                • -
                • CPU: Intel® Pentium® III or higher
                • -
                • Memory: 256MB or more
                • -
                • HDD: 100MB or more of free space
                • -
                • Network: LAN connection with TCP/IP protocol
                • -
                -

                To install the Agent component, follow these steps:

                -
                  -
                1. Run the AgentXXXX_Setup.exe file that you downloaded from the Panasonic website.
                2. -
                3. Follow the instructions on the screen to complete the installation.
                4. -
                5. Restart your PC if prompted.
                6. -
                7. The Agent program will start automatically when you log on to your PC.
                8. -
                9. You can check the status of the Agent program by right-clicking on its icon in the system tray.
                10. - -

                  -
                  -
                  \ No newline at end of file diff --git a/spaces/digitalxingtong/Miiu-Bert-Vits2/monotonic_align/setup.py b/spaces/digitalxingtong/Miiu-Bert-Vits2/monotonic_align/setup.py deleted file mode 100644 index 30c224807a70faa9df9c9eb75f8e80c8c867b16b..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Miiu-Bert-Vits2/monotonic_align/setup.py +++ /dev/null @@ -1,9 +0,0 @@ -from distutils.core import setup -from Cython.Build import cythonize -import numpy - -setup( - name = 'monotonic_align', - ext_modules = cythonize("core.pyx"), - include_dirs=[numpy.get_include()] -) diff --git a/spaces/digitalxingtong/Taffy-Bert-VITS2/mel_processing.py b/spaces/digitalxingtong/Taffy-Bert-VITS2/mel_processing.py deleted file mode 100644 index 50435ecf88ef4fb6c1d47f3e6edd04c3ea7d3e80..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Taffy-Bert-VITS2/mel_processing.py +++ /dev/null @@ -1,112 +0,0 @@ -import math -import os -import random -import torch -from torch import nn -import torch.nn.functional as F -import torch.utils.data -import numpy as np -import librosa -import librosa.util as librosa_util -from librosa.util import normalize, pad_center, tiny -from scipy.signal import get_window -from scipy.io.wavfile import read -from librosa.filters import mel as librosa_mel_fn - -MAX_WAV_VALUE = 32768.0 - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - return spec - - -def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax): - global mel_basis - dtype_device = str(spec.dtype) + '_' + str(spec.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device) - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - return spec - - -def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) 
- if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global mel_basis, hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device) - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - - return spec diff --git a/spaces/digitalxingtong/Un-Bert-Vits2/text/chinese.py b/spaces/digitalxingtong/Un-Bert-Vits2/text/chinese.py deleted file mode 100644 index 276753880b73de2e8889dcb2101cd98c09e0710b..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Un-Bert-Vits2/text/chinese.py +++ /dev/null @@ -1,193 +0,0 @@ -import os -import re - -import cn2an -from pypinyin import lazy_pinyin, Style - -from text import symbols -from text.symbols import punctuation -from text.tone_sandhi import ToneSandhi - -current_file_path = os.path.dirname(__file__) -pinyin_to_symbol_map = {line.split("\t")[0]: line.strip().split("\t")[1] for line in - open(os.path.join(current_file_path, 'opencpop-strict.txt')).readlines()} - -import jieba.posseg as psg - - -rep_map = { - ':': ',', - ';': ',', - ',': ',', - '。': '.', - '!': '!', - '?': '?', - '\n': '.', - "·": ",", - '、': ",", - '...': '…', - '$': '.', - '“': "'", - '”': "'", - '‘': "'", - '’': "'", - '(': "'", - ')': "'", - '(': "'", - ')': "'", - '《': "'", - '》': "'", - '【': "'", - '】': "'", - '[': "'", - ']': "'", - '—': "-", - '~': "-", - '~': "-", - '「': "'", - '」': "'", - -} - -tone_modifier = ToneSandhi() - -def replace_punctuation(text): - text = text.replace("嗯", "恩").replace("呣","母") - pattern = re.compile('|'.join(re.escape(p) for p in rep_map.keys())) - - replaced_text = pattern.sub(lambda x: rep_map[x.group()], text) - - replaced_text = re.sub(r'[^\u4e00-\u9fa5'+"".join(punctuation)+r']+', '', replaced_text) - - return replaced_text - -def g2p(text): - pattern = r'(?<=[{0}])\s*'.format(''.join(punctuation)) - sentences = [i for i in re.split(pattern, text) if i.strip()!=''] - phones, tones, word2ph = _g2p(sentences) - assert sum(word2ph) == len(phones) - assert len(word2ph) == len(text) #Sometimes it will crash,you can add a try-catch. 
- phones = ['_'] + phones + ["_"] - tones = [0] + tones + [0] - word2ph = [1] + word2ph + [1] - return phones, tones, word2ph - - -def _get_initials_finals(word): - initials = [] - finals = [] - orig_initials = lazy_pinyin( - word, neutral_tone_with_five=True, style=Style.INITIALS) - orig_finals = lazy_pinyin( - word, neutral_tone_with_five=True, style=Style.FINALS_TONE3) - for c, v in zip(orig_initials, orig_finals): - initials.append(c) - finals.append(v) - return initials, finals - - -def _g2p(segments): - phones_list = [] - tones_list = [] - word2ph = [] - for seg in segments: - pinyins = [] - # Replace all English words in the sentence - seg = re.sub('[a-zA-Z]+', '', seg) - seg_cut = psg.lcut(seg) - initials = [] - finals = [] - seg_cut = tone_modifier.pre_merge_for_modify(seg_cut) - for word, pos in seg_cut: - if pos == 'eng': - continue - sub_initials, sub_finals = _get_initials_finals(word) - sub_finals = tone_modifier.modified_tone(word, pos, - sub_finals) - initials.append(sub_initials) - finals.append(sub_finals) - - # assert len(sub_initials) == len(sub_finals) == len(word) - initials = sum(initials, []) - finals = sum(finals, []) - # - for c, v in zip(initials, finals): - raw_pinyin = c+v - # NOTE: post process for pypinyin outputs - # we discriminate i, ii and iii - if c == v: - assert c in punctuation - phone = [c] - tone = '0' - word2ph.append(1) - else: - v_without_tone = v[:-1] - tone = v[-1] - - pinyin = c+v_without_tone - assert tone in '12345' - - if c: - # 多音节 - v_rep_map = { - "uei": 'ui', - 'iou': 'iu', - 'uen': 'un', - } - if v_without_tone in v_rep_map.keys(): - pinyin = c+v_rep_map[v_without_tone] - else: - # 单音节 - pinyin_rep_map = { - 'ing': 'ying', - 'i': 'yi', - 'in': 'yin', - 'u': 'wu', - } - if pinyin in pinyin_rep_map.keys(): - pinyin = pinyin_rep_map[pinyin] - else: - single_rep_map = { - 'v': 'yu', - 'e': 'e', - 'i': 'y', - 'u': 'w', - } - if pinyin[0] in single_rep_map.keys(): - pinyin = single_rep_map[pinyin[0]]+pinyin[1:] - - assert pinyin in pinyin_to_symbol_map.keys(), (pinyin, seg, raw_pinyin) - phone = pinyin_to_symbol_map[pinyin].split(' ') - word2ph.append(len(phone)) - - phones_list += phone - tones_list += [int(tone)] * len(phone) - return phones_list, tones_list, word2ph - - - -def text_normalize(text): - numbers = re.findall(r'\d+(?:\.?\d+)?', text) - for number in numbers: - text = text.replace(number, cn2an.an2cn(number), 1) - text = replace_punctuation(text) - return text - -def get_bert_feature(text, word2ph): - from text import chinese_bert - return chinese_bert.get_bert_feature(text, word2ph) - -if __name__ == '__main__': - from text.chinese_bert import get_bert_feature - text = "啊!但是《原神》是由,米哈\游自主, [研发]的一款全.新开放世界.冒险游戏" - text = text_normalize(text) - print(text) - phones, tones, word2ph = g2p(text) - bert = get_bert_feature(text, word2ph) - - print(phones, tones, word2ph, bert.shape) - - -# # 示例用法 -# text = "这是一个示例文本:,你好!这是一个测试...." 
-# print(g2p_paddle(text)) # 输出: 这是一个示例文本你好这是一个测试 diff --git a/spaces/digitalxingtong/Xingtong-Read-Bert-VITS2/app.py b/spaces/digitalxingtong/Xingtong-Read-Bert-VITS2/app.py deleted file mode 100644 index 2d69bf222675312e2dbc7f6739406e21afe9603b..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Xingtong-Read-Bert-VITS2/app.py +++ /dev/null @@ -1,180 +0,0 @@ -import sys, os - -if sys.platform == "darwin": - os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1" - -import logging - -logging.getLogger("numba").setLevel(logging.WARNING) -logging.getLogger("markdown_it").setLevel(logging.WARNING) -logging.getLogger("urllib3").setLevel(logging.WARNING) -logging.getLogger("matplotlib").setLevel(logging.WARNING) - -logging.basicConfig(level=logging.INFO, format="| %(name)s | %(levelname)s | %(message)s") - -logger = logging.getLogger(__name__) - -import torch -import argparse -import commons -import utils -from models import SynthesizerTrn -from text.symbols import symbols -from text import cleaned_text_to_sequence, get_bert -from text.cleaner import clean_text -import gradio as gr -import webbrowser - - -net_g = None - - -def get_text(text, language_str, hps): - norm_text, phone, tone, word2ph = clean_text(text, language_str) - phone, tone, language = cleaned_text_to_sequence(phone, tone, language_str) - - if hps.data.add_blank: - phone = commons.intersperse(phone, 0) - tone = commons.intersperse(tone, 0) - language = commons.intersperse(language, 0) - for i in range(len(word2ph)): - word2ph[i] = word2ph[i] * 2 - word2ph[0] += 1 - bert = get_bert(norm_text, word2ph, language_str) - del word2ph - - assert bert.shape[-1] == len(phone) - - phone = torch.LongTensor(phone) - tone = torch.LongTensor(tone) - language = torch.LongTensor(language) - - return bert, phone, tone, language -import soundfile as sf -def infer(text, sdp_ratio, noise_scale, noise_scale_w, length_scale, sid): - global net_g - bert, phones, tones, lang_ids = get_text(text, "ZH", hps) - with torch.no_grad(): - x_tst=phones.to(device).unsqueeze(0) - tones=tones.to(device).unsqueeze(0) - lang_ids=lang_ids.to(device).unsqueeze(0) - bert = bert.to(device).unsqueeze(0) - x_tst_lengths = torch.LongTensor([phones.size(0)]).to(device) - del phones - speakers = torch.LongTensor([hps.data.spk2id[sid]]).to(device) - audio = net_g.infer(x_tst, x_tst_lengths, speakers, tones, lang_ids, bert, sdp_ratio=sdp_ratio - , noise_scale=noise_scale, noise_scale_w=noise_scale_w, length_scale=length_scale)[0][0,0].data.cpu().float().numpy() - del x_tst, tones, lang_ids, bert, x_tst_lengths, speakers - sf.write("tmp.wav", audio, 44100) - return audio -def convert_wav_to_ogg(wav_file): - os.makedirs('out', exist_ok=True) - filename = os.path.splitext(os.path.basename(wav_file.name))[0] - output_path_ogg = os.path.join('out', f"out.ogg") - - renamed_input_path = os.path.join('in', f"in.wav") - os.makedirs('in', exist_ok=True) - os.rename(wav_file.name, renamed_input_path) - command = ["ffmpeg", "-i", renamed_input_path, "-acodec", "libopus", "-y", output_path_ogg] - os.system(" ".join(command)) - return output_path_ogg -def tts_fn(text, speaker, sdp_ratio, noise_scale, noise_scale_w, length_scale): - with torch.no_grad(): - audio = infer(text, sdp_ratio=sdp_ratio, noise_scale=noise_scale, noise_scale_w=noise_scale_w, length_scale=length_scale, sid=speaker) - with open('tmp.wav', 'rb') as wav_file: - newogg = convert_wav_to_ogg(wav_file) - return "Success", (hps.data.sampling_rate, audio),newogg - - -if __name__ == "__main__": - parser = 
argparse.ArgumentParser() - parser.add_argument("--model_dir", default="./logs/xt_read/xt_read_1.pth", help="path of your model") - parser.add_argument("--config_dir", default="./configs/config.json", help="path of your config file") - parser.add_argument("--share", default=False, help="make link public") - parser.add_argument("-d", "--debug", action="store_true", help="enable DEBUG-LEVEL log") - - args = parser.parse_args() - if args.debug: - logger.info("Enable DEBUG-LEVEL log") - logging.basicConfig(level=logging.DEBUG) - hps = utils.get_hparams_from_file(args.config_dir) - device = "cuda:0" if torch.cuda.is_available() else "cpu" - ''' - device = ( - "cuda:0" - if torch.cuda.is_available() - else ( - "mps" - if sys.platform == "darwin" and torch.backends.mps.is_available() - else "cpu" - ) - ) - ''' - net_g = SynthesizerTrn( - len(symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - **hps.model).to(device) - _ = net_g.eval() - - _ = utils.load_checkpoint(args.model_dir, net_g, None, skip_optimizer=True) - - speaker_ids = hps.data.spk2id - speakers = list(speaker_ids.keys()) - with gr.Blocks() as app: - with gr.Row(): - with gr.Column(): - - - gr.Markdown(value=""" - 星瞳 朗读专用(小王子版本) Bert-Vits2在线语音生成\n - 1、模型作者:数字星瞳企划 https://t.me/xingtong25680 \n - \n - 2、原项目地址:https://github.com/Stardust-minus/Bert-VITS2\n - 3、使用此模型进行二创请注明AI生成,以及该项目地址。\n - 4、如果想生成超长txt文本的音频请使用colab。 https://colab.research.google.com/drive/13ek8_j1aknr-pbjj3NXxSM4vBIsracU3?usp=drive_link\n - - """) - text = gr.TextArea(label="Text", placeholder="Input Text Here", - value="这里是数字星瞳企画,请在电报搜索星瞳全拼加二五六八零,获取最新更新进展。") - speaker = gr.Dropdown(choices=speakers, value=speakers[0], label='Speaker') - sdp_ratio = gr.Slider(minimum=0, maximum=1, value=0.2, step=0.01, label='语调变化') - noise_scale = gr.Slider(minimum=0.1, maximum=1.5, value=0.6, step=0.01, label='感情变化') - noise_scale_w = gr.Slider(minimum=0.1, maximum=1.4, value=0.8, step=0.01, label='音节发音长度变化') - length_scale = gr.Slider(minimum=0.1, maximum=2, value=1, step=0.01, label='语速') - btn = gr.Button("开启AI语音之旅吧!", variant="primary") - with gr.Column(): - text_output = gr.Textbox(label="Message") - audio_output = gr.Audio(label="Output Audio") - ogg_output = gr.File(label="Converted OGG file") - gr.Markdown(value=""" - 模型汇总:\n - 星瞳整合 https://huggingface.co/spaces/digitalxingtong/Xingtong-All-in-One\n - 甜甜叫花鸡 https://huggingface.co/spaces/digitalxingtong/Jiaohuaji-Bert-Vits2 \n - 七海 https://huggingface.co/spaces/digitalxingtong/Nanami-Bert-Vits2 \n - 东雪莲 https://huggingface.co/spaces/digitalxingtong/Azuma-Bert-Vits2 \n - 嘉然 https://huggingface.co/spaces/digitalxingtong/Jiaran-Bert-Vits2 \n - 乃琳 https://huggingface.co/spaces/digitalxingtong/Eileen-Bert-Vits2 \n - 恬豆 https://huggingface.co/spaces/digitalxingtong/Dou-Bert-Vits2 \n - 奶绿 杂谈 https://huggingface.co/spaces/digitalxingtong/Nailv-Bert-Vits2 \n - 奶绿 朗读 https://huggingface.co/spaces/digitalxingtong/Nailv-read-Bert-Vits2 \n - 露早 https://huggingface.co/spaces/digitalxingtong/Luzao-Bert-Vits2 \n - 柚恩 https://huggingface.co/spaces/digitalxingtong/Un-Bert-Vits2 \n - 米诺 https://huggingface.co/spaces/digitalxingtong/Minuo-Bert-Vits2 \n - 扇宝 https://huggingface.co/spaces/digitalxingtong/Shanbao-Bert-Vits2 \n - 牧牧白 https://huggingface.co/spaces/digitalxingtong/Miiu-Bert-Vits2 \n - 吉诺儿kino https://huggingface.co/spaces/digitalxingtong/Kino-Bert-Vits2 \n - 九夏 https://huggingface.co/spaces/digitalxingtong/Jiuxia-Bert-Vits2 \n - 卡缇娅 
https://huggingface.co/spaces/digitalxingtong/Yaya-Bert-Vits2 \n - 理想_ideal https://huggingface.co/spaces/digitalxingtong/Lixiang-Bert-Vits2 \n - 阿梓 https://huggingface.co/spaces/digitalxingtong/Azusa-Bert-Vits2 \n - 鹿鸣 https://huggingface.co/spaces/digitalxingtong/Luming-Bert-Vits2 \n - 永雏塔菲 https://huggingface.co/spaces/digitalxingtong/Taffy-Bert-VITS2 \n - """) - btn.click(tts_fn, - inputs=[text, speaker, sdp_ratio, noise_scale, noise_scale_w, length_scale], - outputs=[text_output, audio_output,ogg_output]) - - - app.launch(show_error=True) diff --git a/spaces/dilums/sentence-similarity/app/api/compare/route.ts b/spaces/dilums/sentence-similarity/app/api/compare/route.ts deleted file mode 100644 index cce6e61dbf4721976990ea62fc1546d4aca8900e..0000000000000000000000000000000000000000 --- a/spaces/dilums/sentence-similarity/app/api/compare/route.ts +++ /dev/null @@ -1,20 +0,0 @@ -import { NextResponse } from "next/server"; - -export async function POST(request: Request) { - const { inputs } = await request.json(); - - const response = await fetch( - "https://api-inference.huggingface.co/models/sentence-transformers/all-MiniLM-L6-v2", - { - headers: { - Authorization: `Bearer ${process.env.HF_TOKEN}`, - }, - method: "POST", - body: JSON.stringify({ inputs }), - } - ); - - const result = await response.json(); - - return NextResponse.json({ data: result }); -} diff --git a/spaces/dinhminh20521597/OCR_DEMO/configs/textrecog/sar/sar_r31_parallel_decoder_chinese.py b/spaces/dinhminh20521597/OCR_DEMO/configs/textrecog/sar/sar_r31_parallel_decoder_chinese.py deleted file mode 100644 index 58856312705bcc757550ca84f97a097f80f9be24..0000000000000000000000000000000000000000 --- a/spaces/dinhminh20521597/OCR_DEMO/configs/textrecog/sar/sar_r31_parallel_decoder_chinese.py +++ /dev/null @@ -1,128 +0,0 @@ -_base_ = [ - '../../_base_/default_runtime.py', - '../../_base_/schedules/schedule_adam_step_5e.py' -] - -dict_file = 'data/chineseocr/labels/dict_printed_chinese_english_digits.txt' -label_convertor = dict( - type='AttnConvertor', dict_file=dict_file, with_unknown=True) - -model = dict( - type='SARNet', - backbone=dict(type='ResNet31OCR'), - encoder=dict( - type='SAREncoder', - enc_bi_rnn=False, - enc_do_rnn=0.1, - enc_gru=False, - ), - decoder=dict( - type='ParallelSARDecoder', - enc_bi_rnn=False, - dec_bi_rnn=False, - dec_do_rnn=0, - dec_gru=False, - pred_dropout=0.1, - d_k=512, - pred_concat=True), - loss=dict(type='SARLoss'), - label_convertor=label_convertor, - max_seq_len=30) - -img_norm_cfg = dict(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='ResizeOCR', - height=48, - min_width=48, - max_width=256, - keep_aspect_ratio=True, - width_downsample_ratio=0.25), - dict(type='ToTensorOCR'), - dict(type='NormalizeOCR', **img_norm_cfg), - dict( - type='Collect', - keys=['img'], - meta_keys=[ - 'filename', 'ori_shape', 'resize_shape', 'text', 'valid_ratio' - ]), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiRotateAugOCR', - rotate_degrees=[0, 90, 270], - transforms=[ - dict( - type='ResizeOCR', - height=48, - min_width=48, - max_width=256, - keep_aspect_ratio=True, - width_downsample_ratio=0.25), - dict(type='ToTensorOCR'), - dict(type='NormalizeOCR', **img_norm_cfg), - dict( - type='Collect', - keys=['img'], - meta_keys=[ - 'filename', 'ori_shape', 'resize_shape', 'valid_ratio' - ]), - ]) -] - -dataset_type = 'OCRDataset' - -train_prefix = 'data/chinese/' - -train_ann_file = train_prefix + 
'labels/train.txt' - -train = dict( - type=dataset_type, - img_prefix=train_prefix, - ann_file=train_ann_file, - loader=dict( - type='HardDiskLoader', - repeat=1, - parser=dict( - type='LineStrParser', - keys=['filename', 'text'], - keys_idx=[0, 1], - separator=' ')), - pipeline=None, - test_mode=False) - -test_prefix = 'data/chineseocr/' - -test_ann_file = test_prefix + 'labels/test.txt' - -test = dict( - type=dataset_type, - img_prefix=test_prefix, - ann_file=test_ann_file, - loader=dict( - type='HardDiskLoader', - repeat=1, - parser=dict( - type='LineStrParser', - keys=['filename', 'text'], - keys_idx=[0, 1], - separator=' ')), - pipeline=None, - test_mode=False) - -data = dict( - samples_per_gpu=40, - workers_per_gpu=2, - val_dataloader=dict(samples_per_gpu=1), - test_dataloader=dict(samples_per_gpu=1), - train=dict( - type='UniformConcatDataset', datasets=[train], - pipeline=train_pipeline), - val=dict( - type='UniformConcatDataset', datasets=[test], pipeline=test_pipeline), - test=dict( - type='UniformConcatDataset', datasets=[test], pipeline=test_pipeline)) - -evaluation = dict(interval=1, metric='acc') diff --git a/spaces/distbit/NousResearch-Nous-Hermes-13b/app.py b/spaces/distbit/NousResearch-Nous-Hermes-13b/app.py deleted file mode 100644 index de8b5ebcd51de864852c9d710de377f72513ff97..0000000000000000000000000000000000000000 --- a/spaces/distbit/NousResearch-Nous-Hermes-13b/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/NousResearch/Nous-Hermes-13b").launch() \ No newline at end of file diff --git a/spaces/dongyi/MMFS/data/__init__.py b/spaces/dongyi/MMFS/data/__init__.py deleted file mode 100644 index 0966d776800af917654beb18020d28a942eaa89c..0000000000000000000000000000000000000000 --- a/spaces/dongyi/MMFS/data/__init__.py +++ /dev/null @@ -1,58 +0,0 @@ -"""This package includes all the modules related to data loading and preprocessing - - To add a custom dataset class called 'dummy', you need to add a file called 'dummy_dataset.py' and define a subclass 'DummyDataset' inherited from BaseDataset. - You need to implement four functions: - -- <__init__>: initialize the class, first call BaseDataset.__init__(self, opt). - -- <__len__>: return the size of dataset. - -- <__getitem__>: get a data point from data loader. - -- : (optionally) add dataset-specific options and set default options. - -Now you can use the dataset class by specifying flag '--dataset_mode dummy'. -See our template dataset class 'template_dataset.py' for more details. -""" -import importlib -import torch.utils.data -from torch.utils.data.distributed import DistributedSampler - -class CustomDataLoader(): - """Wrapper class of Dataset class that performs multi-threaded data loading""" - - def __init__(self, config, dataset, DDP_gpu=None, drop_last=False): - """Initialize this class - - Step 1: create a dataset instance given the name [dataset_mode] - Step 2: create a multi-threaded data loader. 
- """ - self.config = config - self.dataset = dataset - - if DDP_gpu is None: - self.dataloader = torch.utils.data.DataLoader( - self.dataset, - batch_size=config['dataset']['batch_size'], - shuffle=not config['dataset']['serial_batches'], - num_workers=int(config['dataset']['n_threads']), drop_last=drop_last) - else: - sampler = DistributedSampler(self.dataset, num_replicas=self.config['training']['world_size'], - rank=DDP_gpu) - self.dataloader = torch.utils.data.DataLoader( - self.dataset, - batch_size=config['dataset']['batch_size'], - shuffle=False, - num_workers=int(config['dataset']['n_threads']), - sampler=sampler, - drop_last=drop_last) - - def load_data(self): - return self - - def __len__(self): - """Return the number of data in the dataset""" - return min(len(self.dataset), 1e9) - - def __iter__(self): - """Return a batch of data""" - for i, data in enumerate(self.dataloader): - if i * self.config['dataset']['batch_size'] >= 1e9: - break - yield data diff --git a/spaces/enzostvs/stable-diffusion-tpu/Dockerfile b/spaces/enzostvs/stable-diffusion-tpu/Dockerfile deleted file mode 100644 index 1779fbf5ed9f3c6bcb533d4305b5f421916815b9..0000000000000000000000000000000000000000 --- a/spaces/enzostvs/stable-diffusion-tpu/Dockerfile +++ /dev/null @@ -1,30 +0,0 @@ -# Dockerfile - -# Use an official Node.js runtime as the base image -FROM node:18 - -USER 1000 - -# Set the working directory in the container -WORKDIR /usr/src/app - -# Copy package.json and package-lock.json to the container -COPY --chown=1000 package.json package-lock.json ./ - -# Install dependencies -RUN npm install - -VOLUME /data - -# Copy the rest of the application files to the container -COPY --chown=1000 . . -RUN chmod +x entrypoint.sh - -# Build the Next.js application for production -# RUN npm run build - -# Expose the application port (assuming your app runs on port 3000) -EXPOSE 3002 - -# Start the application -ENTRYPOINT ["/usr/src/app/entrypoint.sh"] \ No newline at end of file diff --git a/spaces/eson/tokenizer-arena/evaluation.md b/spaces/eson/tokenizer-arena/evaluation.md deleted file mode 100644 index e2fbfafab4f6dc8d856b436df71e074b09a52506..0000000000000000000000000000000000000000 --- a/spaces/eson/tokenizer-arena/evaluation.md +++ /dev/null @@ -1,5 +0,0 @@ - - -## coverage - -rare characters falling back to utf-8 bytes \ No newline at end of file diff --git a/spaces/evaluate-metric/poseval/poseval.py b/spaces/evaluate-metric/poseval/poseval.py deleted file mode 100644 index 124146cd024c05e01116403d6c4a164165288bd3..0000000000000000000000000000000000000000 --- a/spaces/evaluate-metric/poseval/poseval.py +++ /dev/null @@ -1,113 +0,0 @@ -# Copyright 2022 The HuggingFace Evaluate Authors. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" seqeval metric. """ - -from typing import Union - -import datasets -from sklearn.metrics import classification_report - -import evaluate - - -_CITATION = """\ -@article{scikit-learn, - title={Scikit-learn: Machine Learning in {P}ython}, - author={Pedregosa, F. 
and Varoquaux, G. and Gramfort, A. and Michel, V. - and Thirion, B. and Grisel, O. and Blondel, M. and Prettenhofer, P. - and Weiss, R. and Dubourg, V. and Vanderplas, J. and Passos, A. and - Cournapeau, D. and Brucher, M. and Perrot, M. and Duchesnay, E.}, - journal={Journal of Machine Learning Research}, - volume={12}, - pages={2825--2830}, - year={2011} -} -""" - -_DESCRIPTION = """\ -The poseval metric can be used to evaluate POS taggers. Since seqeval does not work well with POS data \ -(see e.g. [here](https://stackoverflow.com/questions/71327693/how-to-disable-seqeval-label-formatting-for-pos-tagging))\ -that is not in IOB format the poseval metric is an alternative. It treats each token in the dataset as independant \ -observation and computes the precision, recall and F1-score irrespective of sentences. It uses scikit-learns's \ -[classification report](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.classification_report.html) \ -to compute the scores. - -""" - -_KWARGS_DESCRIPTION = """ -Computes the poseval metric. - -Args: - predictions: List of List of predicted labels (Estimated targets as returned by a tagger) - references: List of List of reference labels (Ground truth (correct) target values) - zero_division: Which value to substitute as a metric value when encountering zero division. Should be on of 0, 1, - "warn". "warn" acts as 0, but the warning is raised. - -Returns: - 'scores': dict. Summary of the scores for overall and per type - Overall (weighted and macro avg): - 'accuracy': accuracy, - 'precision': precision, - 'recall': recall, - 'f1': F1 score, also known as balanced F-score or F-measure, - Per type: - 'precision': precision, - 'recall': recall, - 'f1': F1 score, also known as balanced F-score or F-measure -Examples: - - >>> predictions = [['INTJ', 'ADP', 'PROPN', 'NOUN', 'PUNCT', 'INTJ', 'ADP', 'PROPN', 'VERB', 'SYM']] - >>> references = [['INTJ', 'ADP', 'PROPN', 'PROPN', 'PUNCT', 'INTJ', 'ADP', 'PROPN', 'PROPN', 'SYM']] - >>> poseval = evaluate.load("poseval") - >>> results = poseval.compute(predictions=predictions, references=references) - >>> print(list(results.keys())) - ['ADP', 'INTJ', 'NOUN', 'PROPN', 'PUNCT', 'SYM', 'VERB', 'accuracy', 'macro avg', 'weighted avg'] - >>> print(results["accuracy"]) - 0.8 - >>> print(results["PROPN"]["recall"]) - 0.5 -""" - - -@evaluate.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION) -class Poseval(evaluate.Metric): - def _info(self): - return evaluate.MetricInfo( - description=_DESCRIPTION, - citation=_CITATION, - homepage="https://scikit-learn.org", - inputs_description=_KWARGS_DESCRIPTION, - features=datasets.Features( - { - "predictions": datasets.Sequence(datasets.Value("string", id="label"), id="sequence"), - "references": datasets.Sequence(datasets.Value("string", id="label"), id="sequence"), - } - ), - codebase_urls=["https://github.com/scikit-learn/scikit-learn"], - ) - - def _compute( - self, - predictions, - references, - zero_division: Union[str, int] = "warn", - ): - report = classification_report( - y_true=[label for ref in references for label in ref], - y_pred=[label for pred in predictions for label in pred], - output_dict=True, - zero_division=zero_division, - ) - - return report diff --git a/spaces/exit9/neuro_evolution/README.md b/spaces/exit9/neuro_evolution/README.md deleted file mode 100644 index e2b4d67636a6fbce50b7a8eaca1813914b648153..0000000000000000000000000000000000000000 --- a/spaces/exit9/neuro_evolution/README.md +++ /dev/null @@ -1,11 +0,0 @@ 
---- -title: Livebook -emoji: 📓 -colorFrom: pink -colorTo: purple -sdk: docker -fullWidth: true -license: mit ---- - -You can install and run [Livebook](https://livebook.dev/) inside a Hugging Face Space. Here's [a tutorial](https://huggingface.co/docs/hub/spaces-sdks-docker-livebook) on how to do that. \ No newline at end of file diff --git a/spaces/facebook/ov-seg/app.py b/spaces/facebook/ov-seg/app.py deleted file mode 100644 index 906d30a8a3cbfd59dab9cf621b13f8f2366f95d1..0000000000000000000000000000000000000000 --- a/spaces/facebook/ov-seg/app.py +++ /dev/null @@ -1,96 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# Copyright (c) Meta Platforms, Inc. All Rights Reserved - -import multiprocessing as mp - -import numpy as np -from PIL import Image - - -try: - import detectron2 -except: - import os - os.system('pip install git+https://github.com/facebookresearch/detectron2.git') - -from detectron2.config import get_cfg - -from detectron2.projects.deeplab import add_deeplab_config -from detectron2.data.detection_utils import read_image -from open_vocab_seg import add_ovseg_config -from open_vocab_seg.utils import VisualizationDemo, SAMVisualizationDemo - -import gradio as gr - -import gdown - -# ckpt_url = 'https://drive.google.com/uc?id=1cn-ohxgXDrDfkzC1QdO-fi8IjbjXmgKy' -# output = './ovseg_swinbase_vitL14_ft_mpt.pth' -# gdown.download(ckpt_url, output, quiet=False) - -def setup_cfg(config_file): - # load config from file and command-line arguments - cfg = get_cfg() - add_deeplab_config(cfg) - add_ovseg_config(cfg) - cfg.merge_from_file(config_file) - cfg.freeze() - return cfg - - -def inference(class_names, proposal_gen, granularity, input_img): - mp.set_start_method("spawn", force=True) - config_file = './ovseg_swinB_vitL_demo.yaml' - cfg = setup_cfg(config_file) - if proposal_gen == 'MaskFormer': - demo = VisualizationDemo(cfg) - elif proposal_gen == 'Segment_Anything': - demo = SAMVisualizationDemo(cfg, granularity, './sam_vit_l_0b3195.pth', './ovseg_clip_l_9a1909.pth') - class_names = class_names.split(',') - img = read_image(input_img, format="BGR") - _, visualized_output = demo.run_on_image(img, class_names) - - return Image.fromarray(np.uint8(visualized_output.get_image())).convert('RGB') - - -examples = [['Saturn V, toys, desk, wall, sunflowers, white roses, chrysanthemums, carnations, green dianthus', 'Segment_Anything', 0.8, './resources/demo_samples/sample_01.jpeg'], - ['red bench, yellow bench, blue bench, brown bench, green bench, blue chair, yellow chair, green chair, brown chair, yellow square painting, barrel, buddha statue', 'Segment_Anything', 0.8, './resources/demo_samples/sample_04.png'], - ['pillow, pipe, sweater, shirt, jeans jacket, shoes, cabinet, handbag, photo frame', 'Segment_Anything', 0.7, './resources/demo_samples/sample_05.png'], - ['Saturn V, toys, blossom', 'MaskFormer', 1.0, './resources/demo_samples/sample_01.jpeg'], - ['Oculus, Ukulele', 'MaskFormer', 1.0, './resources/demo_samples/sample_03.jpeg'], - ['Golden gate, yacht', 'MaskFormer', 1.0, './resources/demo_samples/sample_02.jpeg'],] -output_labels = ['segmentation map'] - -title = 'OVSeg (+ Segment_Anything)' - -description = """ -[NEW!] We incorperate OVSeg CLIP w/ Segment_Anything, enabling SAM's text prompts. -Gradio Demo for Open-Vocabulary Semantic Segmentation with Mask-adapted CLIP. \n -OVSeg could perform open vocabulary segmentation, you may input more classes (seperate by comma). You may click on of the examples or upload your own image. 
\n -It might take some time to process. Cheers! -

                  (Colab only supports MaskFormer proposal generator) Don't want to wait in queue? Open In Colab

                  -""" - -article = """ -

                  - -Open-Vocabulary Semantic Segmentation with Mask-adapted CLIP - -| -Github Repo

                  -""" - -gr.Interface( - inference, - inputs=[ - gr.Textbox( - lines=1, placeholder=None, default='', label='class names'), - gr.Radio(["Segment_Anything", "MaskFormer"], label="Proposal generator", default="Segment_Anything"), - gr.Slider(0, 1.0, 0.8, label="For Segment_Anything only, granularity of masks from 0 (most coarse) to 1 (most precise)"), - gr.Image(type='filepath'), - ], - outputs=gr.components.Image(type="pil", label='segmentation map'), - title=title, - description=description, - article=article, - examples=examples).launch(enable_queue=True) diff --git a/spaces/fatiXbelha/sd/Descarga y Juega a Red Dead Redemption 2 en tu Android con el APK Oficial.md b/spaces/fatiXbelha/sd/Descarga y Juega a Red Dead Redemption 2 en tu Android con el APK Oficial.md deleted file mode 100644 index 714147ba79a0e6748b32794e909b19c3f19a262c..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Descarga y Juega a Red Dead Redemption 2 en tu Android con el APK Oficial.md +++ /dev/null @@ -1,128 +0,0 @@ -
                  -

                  Descargar Red Dead Redemption 2 para Android APK Oficial

                  -

                  Red Dead Redemption 2 es uno de los juegos más aclamados y exitosos de los últimos años. Se trata de una aventura de acción ambientada en el salvaje oeste, donde el jugador puede explorar un vasto mundo abierto lleno de detalles, personajes y actividades. El juego ha sido desarrollado por Rockstar Games, los creadores de la saga Grand Theft Auto, y ha recibido numerosos premios y elogios por parte de la crítica y los usuarios.

                  -

                  descargar red dead redemption 2 para android apk oficial


Download File: https://urllie.com/2uNBdj



                  -

                  Si eres fan de Red Dead Redemption 2 y quieres disfrutarlo en tu dispositivo Android, estás de suerte. Existe una versión oficial del juego para móviles que puedes descargar e instalar fácilmente siguiendo unos sencillos pasos. En este artículo te explicamos todo lo que necesitas saber sobre cómo descargar Red Dead Redemption 2 para Android APK oficial, qué requisitos debes cumplir y qué consejos y trucos te pueden ayudar a sacarle el máximo partido al juego.

                  -

                  ¿Qué es Red Dead Redemption 2?

                  -

                  Red Dead Redemption 2 es un juego de acción-aventura que se desarrolla en el año 1899, en plena época del lejano oeste. El protagonista es Arthur Morgan, un forajido que forma parte de la banda de Dutch van der Linde, un grupo de criminales que se resisten a la llegada de la civilización y la industrialización. A lo largo del juego, el jugador tendrá que enfrentarse a las fuerzas de la ley, a otras bandas rivales y a los peligros de la naturaleza, mientras decide cómo vivir su propia historia.

                  -

                  El juego se destaca por su impresionante apartado gráfico, que recrea con gran realismo y belleza los paisajes y escenarios del oeste americano. El juego cuenta con una gran variedad de ecosistemas y ambientes, desde montañas nevadas hasta pantanos infestados de caimanes. Además, el juego tiene un ciclo día-noche y un sistema climático dinámico que afectan al comportamiento de los animales y las personas.

                  -

                  cómo descargar red dead redemption 2 en android gratis
                  -red dead redemption 2 android apk + obb download
                  -red dead redemption 2 para android sin verificación
                  -descargar red dead redemption 2 para android mega
                  -red dead redemption 2 android gameplay español
                  -red dead redemption 2 android apk mod
                  -red dead redemption 2 para android requisitos
                  -descargar red dead redemption 2 para android mediafıre
                  -red dead redemption 2 android beta apk
                  -red dead redemption 2 para android online
                  -red dead redemption 2 android apk oficial rockstar games
                  -descargar red dead redemption 2 para android uptodown
                  -red dead redemption 2 android apk + data
                  -red dead redemption 2 para android descargar gratis
                  -red dead redemption 2 android release date
                  -descargar red dead redemption 2 para android full
                  -red dead redemption 2 android apk no verification
                  -red dead redemption 2 para android gameplay
                  -descargar red dead redemption 2 para android sin verificación
                  -red dead redemption 2 android download link
                  -red dead redemption 2 para android apk + obb
                  -descargar red dead redemption 2 para android por partes
                  -red dead redemption 2 android apk + obb offline
                  -red dead redemption 2 para android descargar mega
                  -red dead redemption 2 android trailer oficial
                  -descargar red dead redemption 2 para android gratis español
                  -red dead redemption 2 android apk + obb highly compressed
                  -red dead redemption 2 para android mediafıre
                  -descargar red dead redemption 2 para android apk + datos sd
                  -red dead redemption 2 android official website
                  -descargar red dead redemption 2 para android sin emulador
                  -red dead redemption 2 android apk + obb free download
                  -red dead redemption 2 para android beta apk
                  -descargar red dead redemption 2 para android ppsspp
                  -red dead redemption 2 android emulator download
                  -descargar red dead redemption 2 para android play store
                  -red dead redemption 2 android apk + obb latest version
                  -red dead redemption 2 para android sin internet
                  -descargar red dead redemption 2 para android mod apk
                  -red dead redemption 2 android review español
                  -descargar red dead redemption 2 para android con licencia
                  -red dead redemption 2 android apk + obb google drive
                  -red dead redemption 2 para android como descargarlo e instalarlo facil y rapido

                  -

                  Otro aspecto destacado del juego es su jugabilidad, que ofrece una gran libertad al jugador para explorar el mundo a su antojo. El jugador puede realizar todo tipo de actividades, como cazar, pescar, jugar al póker, robar bancos o participar en duelos. El juego también tiene un sistema de honor que mide las acciones del jugador y sus consecuencias en el mundo. Así, el jugador puede optar por ser un héroe o un villano, y ver cómo cambia la reacción de los personajes y las misiones disponibles.Continuing the article:

                  -

                  Características del juego

                  -

                  Red Dead Redemption 2 es un juego que ofrece una experiencia única e inmersiva al jugador. Algunas de las características más destacadas del juego son:

                  -
                    -
                  • Gráficos espectaculares: El juego aprovecha al máximo el poder de la PC para brindar unos gráficos de alta calidad, con una iluminación y unas sombras realistas, una textura detallada de los árboles, el césped y el pelo de los animales, y un HDR que mejora el contraste y el color.
                  • -
                  • Jugabilidad variada: El juego combina elementos de acción, aventura, sigilo, exploración, caza, pesca, robo, duelo y más. El jugador puede elegir cómo afrontar cada situación, ya sea usando la fuerza, la astucia o la diplomacia. El juego también tiene un sistema de honor que afecta a la reputación del jugador y a las opciones disponibles.
                  • -
                  • Mundo abierto: El juego cuenta con un enorme mundo abierto que se puede recorrer a caballo, en tren, en barco o a pie. El mundo está lleno de vida, con más de 200 especies de animales, decenas de poblados y ciudades, y eventos aleatorios que ocurren a cada momento. El jugador puede interactuar con casi todo lo que ve y hacer lo que quiera.
                  • -
                  • Modo online: El juego incluye el acceso gratuito al mundo compartido de Red Dead Online, donde el jugador puede crear su propio personaje y elegir entre una variedad de roles para forjar su propio camino en el oeste. El jugador puede cooperar o competir con otros jugadores en misiones, actividades, eventos y modos PvP.
                  • -
                  -

                  Requisitos para jugar en Android

                  -

                  Para poder jugar a Red Dead Redemption 2 en Android se necesita descargar e instalar el APK oficial del juego, que ocupa unos 5.5 GB de espacio. Además, se necesita cumplir con unos requisitos mínimos y recomendados para que el juego funcione correctamente. Estos son los requisitos según la página oficial de RDR2 Mobile:

                  - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Requisitos mínimos | Requisitos recomendados
Sistema operativo: Android 6.0 o superior | Sistema operativo: Android 8.0 o superior
Procesador: Quad-core 1.2 GHz o superior | Procesador: Octa-core 2.0 GHz o superior
Memoria RAM: 2 GB o superior | Memoria RAM: 4 GB o superior
Gráficos: Adreno 530 o superior | Gráficos: Adreno 640 o superior
Espacio libre: 6 GB o superior | Espacio libre: 8 GB o superior
Conexión a internet: Wi-Fi o datos móviles | Conexión a internet: Wi-Fi o datos móviles
                  -

                  También se recomienda usar un dispositivo con una pantalla grande y una buena resolución para apreciar mejor los detalles del juego.

                  -

                  ¿Cómo descargar el APK oficial de Red Dead Redemption 2?

                  -

                  Para descargar el APK oficial de Red Dead Redemption 2 se debe seguir estos pasos:

                  -

                  Paso 1: Visitar el sitio web oficial de RDR2 Mobile

                  -

                  El primer paso es acceder al sitio web oficial de RDR2 Mobile, donde se puede encontrar toda la información sobre el juego, sus características, sus requisitos y su descarga. El sitio web es https://rdr2mobile.com/. En este sitio web se puede ver un botón verde que dice "Download APK". Al hacer clic en este botón se iniciará la descarga del archivo APK del juego.

                  -

                  Paso 2: Descargar el archivo APK

                  -

El segundo paso es descargar el archivo APK del juego en el dispositivo Android. El archivo APK tiene un tamaño de unos 5.5 GB, por lo que se recomienda usar una conexión Wi-Fi estable y rápida para evitar interrupciones o errores. El archivo APK se guardará en la carpeta de descargas del dispositivo Android, o en la ubicación que el usuario haya elegido.

                  -

                  Paso 3: Instalar el juego en el dispositivo Android

                  -

                  El tercer paso es instalar el juego en el dispositivo Android. Para ello, se debe abrir el archivo APK que se ha descargado y seguir las instrucciones que aparecen en la pantalla. Es posible que se requiera habilitar la opción de "Orígenes desconocidos" o "Fuentes desconocidas" en los ajustes de seguridad del dispositivo, para permitir la instalación de aplicaciones que no provienen de la tienda oficial de Google Play. Una vez instalado el juego, se podrá ver un icono de RDR2 Mobile en el menú de aplicaciones del dispositivo. Al hacer clic en este icono se iniciará el juego y se podrá disfrutar de Red Dead Redemption 2 en Android.

                  -

                  Consejos y trucos para disfrutar de Red Dead Redemption 2 en Android

                  -

                  Red Dead Redemption 2 es un juego muy completo y complejo, que ofrece muchas posibilidades y opciones al jugador. Para aprovechar al máximo el juego y disfrutar de una buena experiencia en Android, se pueden seguir estos consejos y trucos:

                  -
                    -
                  • Usar el modo Dead Eye: El modo Dead Eye es una habilidad especial que permite al jugador ralentizar el tiempo y apuntar con precisión a los enemigos. Es muy útil para enfrentarse a situaciones difíciles o a grupos numerosos de rivales. Para activar el modo Dead Eye se debe pulsar el botón del ojo que aparece en la esquina inferior derecha de la pantalla.
                  • -
                  • Crear un vínculo con el caballo: El caballo es el principal medio de transporte del jugador, y también su compañero fiel. Es importante crear un vínculo con el caballo, alimentándolo, acariciándolo y cepillándolo, para mejorar sus atributos y su comportamiento. Un caballo bien cuidado será más rápido, resistente y obediente.
                  • -
                  • Personalizar el HUD: El HUD es la interfaz que muestra información sobre el juego, como el mapa, la salud, el honor o las armas. El jugador puede personalizar el HUD según sus preferencias, ocultando o mostrando los elementos que quiera. Para acceder al menú de personalización del HUD se debe pulsar el botón de pausa que aparece en la esquina superior izquierda de la pantalla.
                  • -
                  -

                  Conclusión

                  -

                  Red Dead Redemption 2 es un juego increíble que merece la pena jugar en cualquier plataforma. Gracias al APK oficial de RDR2 Mobile, los usuarios de Android pueden disfrutar del juego en sus dispositivos móviles con una buena calidad gráfica y una jugabilidad adaptada. Para descargar e instalar el juego solo se necesita seguir unos sencillos pasos y cumplir con unos requisitos mínimos. Además, se pueden aplicar algunos consejos y trucos para mejorar la experiencia y divertirse más con el juego.

                  -

                  Si te ha gustado este artículo, compártelo con tus amigos y déjanos un comentario con tu opinión sobre Red Dead Redemption 2 para Android APK oficial. ¿Has probado el juego? ¿Qué te ha parecido? ¿Qué consejos o trucos nos puedes dar? ¡Estamos deseando leer tus comentarios!

                  -

                  Preguntas frecuentes

                  -
                    -
                  • ¿Es seguro descargar e instalar el APK oficial de Red Dead Redemption 2?
                  • -

                    Sí, es seguro siempre y cuando se descargue desde el sitio web oficial de RDR2 Mobile, que es https://rdr2mobile.com/. Este sitio web ofrece el archivo APK original y sin modificaciones del juego, que no contiene virus ni malware.

                    -
                  • ¿Es gratis descargar e instalar el APK oficial de Red Dead Redemption 2?
                  • -

Sí, es gratis descargar e instalar el APK oficial de Red Dead Redemption 2. No se necesita pagar nada ni registrarse para acceder al archivo APK del juego. Sin embargo, se recomienda tener una cuenta de Rockstar Games Social Club para acceder al modo online y a otras funciones del juego.

                    -
                  • ¿Es compatible el APK oficial de Red Dead Redemption 2 con todos los dispositivos Android?
                  • -

                    No, el APK oficial de Red Dead Redemption 2 no es compatible con todos los dispositivos Android. El juego requiere unos requisitos mínimos y recomendados para funcionar correctamente, que se pueden consultar en la página oficial de RDR2 Mobile. Si el dispositivo no cumple con estos requisitos, es posible que el juego no se ejecute o que presente problemas de rendimiento o estabilidad.

                    -
                  • ¿Se puede jugar a Red Dead Redemption 2 en Android con mando?
                  • -

                    Sí, se puede jugar a Red Dead Redemption 2 en Android con mando. El juego es compatible con la mayoría de los mandos Bluetooth que se pueden conectar al dispositivo Android. El juego detecta automáticamente el mando y muestra los controles correspondientes en la pantalla. El jugador puede personalizar la configuración del mando desde el menú de opciones del juego.

                    -
                  • ¿Se puede jugar a Red Dead Redemption 2 en Android sin conexión a internet?
                  • -

                    Sí, se puede jugar a Red Dead Redemption 2 en Android sin conexión a internet. El juego permite jugar al modo historia sin necesidad de estar conectado a internet. Sin embargo, para acceder al modo online y a otras funciones del juego, como las actualizaciones o el soporte técnico, se necesita una conexión a internet estable y rápida.

                    -

                  197e85843d
                  -
                  -
                  \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download Gratis Excel 2016 for Windows A Comprehensive Review.md b/spaces/fatiXbelha/sd/Download Gratis Excel 2016 for Windows A Comprehensive Review.md deleted file mode 100644 index 35aa078063851d6c2bbcdcac8a4299f6cb0f1628..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download Gratis Excel 2016 for Windows A Comprehensive Review.md +++ /dev/null @@ -1,198 +0,0 @@ - -

                  Download gratis Excel 2016: come fare e cosa sapere

                  -

                  Se sei alla ricerca di un programma per gestire, analizzare e manipolare grandi quantità di dati, probabilmente hai già sentito parlare di Excel. Si tratta di una delle applicazioni più famose e versatili del pacchetto Office di Microsoft, disponibile sia per Windows che per Mac. Ma come fare a scaricare gratis Excel 2016, l'ultima versione del software? E quali sono le sue principali funzionalità e novità? In questo articolo, ti spiegheremo tutto quello che devi sapere per ottenere e usare questo potente strumento di calcolo e visualizzazione dei dati.

                  -

                  download gratis excel 2016


                  Download File 🆗 https://urllie.com/2uNHz7



                  -

                  Cos'è Excel 2016 e perché scaricarlo

                  -

                  Excel 2016 è un'applicazione che fa parte della suite di produttività Microsoft Office, insieme ad altre come Word, PowerPoint, Outlook, OneNote e altre. Si tratta di un foglio elettronico, ovvero un programma che permette di creare, modificare e salvare tabelle di dati composte da celle, colonne e righe. Ogni cella può contenere un valore numerico, una formula, una funzione, un testo o un riferimento ad un'altra cella. In questo modo, è possibile effettuare calcoli complessi, analisi statistiche, simulazioni, previsioni e altro ancora.

                  -

                  Excel 2016 non è solo un semplice foglio elettronico, ma anche un potente strumento di visualizzazione dei dati. Infatti, offre la possibilità di creare diversi tipi di grafici, come istogrammi, torte, linee, barre, aree e altri ancora. Questi grafici possono essere personalizzati in vari modi, cambiando i colori, le etichette, i titoli, le legende e altri elementi. Inoltre, è possibile inserire immagini, forme, icone, SmartArt e altri oggetti grafici per rendere i fogli più attraenti e comprensibili.

                  -

                  Scaricare gratis Excel 2016 significa quindi avere a disposizione uno dei migliori programmi per gestire i dati in modo efficiente ed efficace. Che tu sia uno studente, un professionista, un imprenditore o un appassionato di numeri, con Excel 2016 potrai svolgere le tue attività con facilità e precisione.

                  -

                  Excel 2016: le principali funzionalità e novità

                  -

                  Excel 2016 presenta diverse funzionalità e novità rispetto alle versioni precedenti del software. Vediamone alcune delle più importanti:

                  -

                  download gratis excel 2016 for windows
                  -download gratis excel 2016 for mac
                  -download gratis excel 2016 trial version
                  -download gratis excel 2016 full version
                  -download gratis excel 2016 64 bit
                  -download gratis excel 2016 32 bit
                  -download gratis excel 2016 italiano
                  -download gratis excel 2016 portugues
                  -download gratis excel 2016 espanol
                  -download gratis excel 2016 francais
                  -download gratis excel 2016 deutsch
                  -download gratis excel 2016 crack
                  -download gratis excel 2016 key
                  -download gratis excel 2016 activation code
                  -download gratis excel 2016 product key
                  -download gratis excel 2016 update
                  -download gratis excel 2016 patch
                  -download gratis excel 2016 offline installer
                  -download gratis excel 2016 iso file
                  -download gratis excel 2016 setup file
                  -download gratis microsoft office excel 2016
                  -download gratis microsoft office professional plus 2016 with excel
                  -download gratis microsoft office home and student 2016 with excel
                  -download gratis microsoft office home and business 2016 with excel
                  -download gratis microsoft office standard 2016 with excel
                  -how to download gratis excel 2016
                  -where to download gratis excel 2016
                  -best site to download gratis excel 2016
                  -safe way to download gratis excel 2016
                  -tutorial on how to download gratis excel 2016
                  -guide on how to download gratis excel 2016
                  -tips on how to download gratis excel 2016
                  -benefits of downloading gratis excel 2016
                  -features of downloading gratis excel 2016
                  -advantages of downloading gratis excel 2016
                  -disadvantages of downloading gratis excel 2016
                  -alternatives to downloading gratis excel 2016
                  -comparison of downloading gratis excel 2016 and other versions
                  -review of downloading gratis excel 2016
                  -rating of downloading gratis excel 2016
                  -feedback on downloading gratis excel 2016
                  -testimonials on downloading gratis excel 2016
                  -problems with downloading gratis excel 2016
                  -solutions for downloading gratis excel 2016
                  -troubleshooting for downloading gratis excel 2016
                  -support for downloading gratis excel 2016
                  -help for downloading gratis excel 2016
                  -assistance for downloading gratis excel 2016
                  -resources for downloading gratis excel 2016
                  -tools for downloading gratis excel 2016

                  -
                    -
                  • Pivot Table: si tratta di una funzione che permette di riassumere e analizzare rapidamente grandi quantità di dati. Con le Pivot Table è possibile rag

                    gruppare i dati per categorie, filtri, ordini e calcoli personalizzati. In Excel 2016 è possibile creare Pivot Table anche da fonti di dati esterne, come database, file di testo o siti web. Inoltre, è possibile usare la funzione Suggerisci Pivot Table per ottenere dei suggerimenti su come organizzare i dati in base alle proprie esigenze.

                  • -
                  • Power Query: si tratta di una funzione che permette di importare, trasformare e combinare dati da diverse fonti, come file Excel, CSV, XML, JSON, database, siti web e altri ancora. Con Power Query è possibile pulire, filtrare, ordinare, raggruppare e modificare i dati in modo semplice e intuitivo. Inoltre, è possibile creare delle query, ovvero delle interrogazioni personalizzate che possono essere salvate e aggiornate automaticamente.
                  • -
                  • Power Pivot: si tratta di una funzione che permette di creare dei modelli di dati avanzati e complessi, collegando tra loro diverse tabelle e fonti di dati. Con Power Pivot è possibile creare delle relazioni tra le tabelle, ovvero dei collegamenti logici basati su una o più colonne comuni. Inoltre, è possibile creare delle misure, ovvero dei calcoli personalizzati che possono essere usati nelle Pivot Table o nei grafici.
                  • -
                  • Power Map: si tratta di una funzione che permette di creare delle mappe interattive e dinamiche per visualizzare i dati geografici. Con Power Map è possibile inserire dei punti dati, ovvero dei valori numerici o testuali associati a delle coordinate geografiche. Inoltre, è possibile creare delle tours, ovvero delle sequenze animate di mappe che mostrano l'evoluzione dei dati nel tempo o nello spazio.
                  • -
                  • Grafici a cascata, istogrammi e Pareto: si tratta di tre nuovi tipi di grafici introdotti in Excel 2016. I grafici a cascata mostrano le variazioni positive e negative di un valore nel tempo o tra diverse categorie. Gli istogrammi mostrano la distribuzione di frequenza di un valore in base a dei intervalli predefiniti o personalizzati. I grafici di Pareto mostrano la relazione tra le cause e gli effetti di un fenomeno, evidenziando le cause più rilevanti.
                  • -
                  • Funzioni previsionali: si tratta di una serie di funzioni che permettono di effettuare delle previsioni basate sui dati storici. Con queste funzioni è possibile stimare il valore futuro di una variabile in base a un trend lineare o esponenziale. Inoltre, è possibile visualizzare le previsioni in un grafico apposito, con indicati gli intervalli di confidenza e gli errori standard.
                  • -
                  • Funzioni logiche nidificate: si tratta della possibilità di inserire più funzioni logiche (come SE, E, O) all'interno di una stessa formula. In questo modo, è possibile creare delle condizioni più complesse e specifiche per ottenere dei risultati diversi in base ai valori delle celle.
                  • -
                  -

                  Excel 2016: i requisiti di sistema e le versioni disponibili

                  -

                  Per poter scaricare e usare Excel 2016 sul tuo dispositivo, devi assicurarti che esso soddisfi i seguenti requisiti di sistema:

                  - - - - - - - - - - - - - - - - - - - - - - -
Sistema operativo | Processore | Memoria RAM | Spazio su disco | Risoluzione dello schermo
Windows 7 o successivi | 1 GHz o superiore (x86 o x64) | 2 GB (32 bit) o 4 GB (64 bit) | 3 GB | 1024 x 768 pixel o superiore
Mac OS X 10.10 o successivi | Intel | 4 GB | 6 GB | 1280 x 800 pixel o superiore
                  -

                  Excel 2016 è disponibile in diverse versioni, a seconda delle tue esigenze e del tuo budget. Le principali sono:

                  -
                    -
                  • Microsoft 365: si tratta di un abbonamento annuale o mensile che ti permette di accedere a tutte le applicazioni di Office, compreso Excel 2016, su più dispositivi (PC, Mac, smartphone e tablet). Inoltre, ti offre 1 TB di spazio su OneDrive, il servizio di cloud storage di Microsoft, e altri vantaggi come l'assistenza tecnica e le funzionalità aggiuntive. Il costo varia in base al piano scelto: Microsoft 365 Personal (per un utente) costa 69 euro all'anno o 7 euro al mese, mentre Microsoft 365 Family (per sei utenti) costa 99 euro all'anno o 10 euro al mese.
                  • -
                  • Office Home & Student: si tratta di una licenza permanente che ti permette di installare Excel 2016 e le altre applicazioni di Office (Word, PowerPoint e OneNote) su un solo PC o Mac. Non include gli aggiornamenti futuri, lo spazio su OneDrive e gli altri vantaggi di Microsoft 365. Il costo è di 149 euro una tantum.
                  • -
                  • Excel 2016: si tratta di una licenza permanente che ti permette di installare solo Excel 2016 su un solo PC o Mac. Non include le altre applicazioni di Office, gli aggiornamenti futuri, lo spazio su OneDrive e gli altri vantaggi di Microsoft 365. Il costo è di 135 euro una tantum.
                  • -
                  -

                  Come scaricare gratis Excel 2016 su PC

                  -

                  Se vuoi scaricare gratis Excel 2016 sul tuo PC con Windows, hai due opzioni principali: attivare la prova gratuita di Microsoft 365 o acquistare la licenza di Office Home & Student o di Excel 2016. Vediamo come fare in entrambi i casi.

                  -

                  Come attivare la prova gratuita di Microsoft 365

                  -

                  La prova gratuita di Microsoft 365 ti permette di usare Excel 2016 e le altre applicazioni di Office per un mese senza pagare nulla. Al termine del periodo di prova, puoi decidere se rinnovare l'abbonamento o disattivarlo. Ecco i passaggi da seguire per attivare la prova gratuita:

                  -
                    -
                  1. Visita il sito ufficiale di Microsoft Office e clicca sul pulsante Prova gratis per un mese.
                  2. -
                  3. Crea un account Microsoft o accedi con quello esistente. Se non hai un account, puoi crearlo gratuitamente inserendo il tuo indirizzo email e una password.
                  4. -
                  5. Scegli il piano che preferisci tra Microsoft 365 Personal e Microsoft 365 Family e clicca sul pulsante Avvia il tuo mese gratuito.
                  6. -
                  7. Inserisci i dati della tua carta di credito o del tuo conto PayPal e clicca sul pulsante Iscriviti. Non ti verrà addebitato nulla fino alla scadenza della prova gratuita.
                  8. -
                  9. Clicca sul pulsante Installa e segui le istruzioni per scaricare e installare Excel 2016 e le altre applicazioni di Office sul tuo PC.
                  10. -
                  -

                  Come acquistare la licenza di Office Home & Student o di Excel 2016

                  -

                  Se preferisci acquistare la licenza permanente di Office Home & Student o di Excel 2016, puoi farlo direttamente dal sito ufficiale di Microsoft Office. Ecco i passaggi da seguire:

                  -
                    -
                  1. Visita il sito ufficiale di Microsoft Office e clicca sulla scheda Prodotti.
                  2. -
                  3. Scegli il prodotto che vuoi acquistare tra Office Home & Student o Excel 2016 e clicca sul pulsante Acquista ora.
                  4. -
                  5. Crea un account Microsoft o accedi con quello esistente. Se non hai un account, puoi crearlo gratuitamente inserendo il tuo indirizzo email e una password.
                  6. -
                  7. Inserisci i dati della tua carta di credito o del tuo conto PayPal e clicca sul pulsante Conferma ordine. Ti verrà addebitato il costo del prodotto scelto.
                  8. -
                  9. Clicca sul pulsante Installa e segui le istruzioni per scaricare e installare Excel 2016 e le altre applicazioni di Office sul tuo PC.
                  10. -
                  -

                  Come scaricare gratis Excel 2016 su Mac

                  -

                  Se vuoi scaricare gratis Excel 2016 sul tuo Mac, hai due opzioni principali: attivare la prova gratuita di Microsoft 365 o acquistare la licenza di Office Home & Student o di Excel 2016. Vediamo come fare in entrambi i casi.

                  -

                  Come attivare la prova gratuita di Microsoft 365

                  -

                  La prova gratuita di Microsoft 365 ti permette di usare Excel 2016 e le altre applicazioni di Office per un mese senza pagare nulla. Al termine del periodo di prova, puoi decidere se rinnovare l'abbonamento o disattivarlo. Ecco i passaggi da seguire per attivare la prova gratuita:

                  -
                    -
                  1. Visita il sito ufficiale di Microsoft Office e clicca sul pulsante Prova gratis per un mese.
                  2. -
                  3. Crea un account Microsoft o accedi con quello esistente. Se non hai un account, puoi crearlo gratuitamente inserendo il tuo indirizzo email e una password.
                  4. -
                  5. Scegli il piano che preferisci tra Microsoft 365 Personal e Microsoft 365 Family e clicca sul pulsante Avvia il tuo mese gratuito.
                  6. -
                  7. Inserisci i dati della tua carta di credito o del tuo conto PayPal e clicca sul pulsante Iscriviti. Non ti verrà addebitato nulla fino alla scadenza della prova gratuita.
                  8. -
                  9. Clicca sul pulsante Installa e segui le istruzioni per scaricare e installare Excel 2016 e le altre applicazioni di Office sul tuo Mac.
                  10. -
                  -

                  Come acquistare la licenza di Office Home & Student o di Excel 2016

                  -

                  Se preferisci acquistare la licenza permanente di Office Home & Student o di Excel 2016, puoi farlo direttamente dal sito ufficiale di Microsoft Office. Ecco i passaggi da seguire:

                  -
                    -
                  1. Visita il sito ufficiale di Microsoft Office e clicca sulla scheda Prodotti.
                  2. -
                  3. Scegli il prodotto che vuoi acquistare tra Office Home & Student o Excel 2016 e clicca sul pulsante Acquista ora.
                  4. -
                  5. Crea un account Microsoft o accedi con quello esistente. Se non hai un account, puoi crearlo gratuitamente inserendo il tuo indirizzo email e una password.
                  6. -
                  7. Inserisci i dati della tua carta di credito o del tuo conto PayPal e clicca sul pulsante Conferma ordine. Ti verrà addebitato il costo del prodotto scelto.
                  8. -
                  9. Clicca sul pulsante Installa e segui le istruzioni per scaricare e installare Excel 2016 e le altre applicazioni di Office sul tuo Mac.
                  10. -
                  -

                  Come scaricare gratis Excel 2016 su smartphone e tablet

                  -

                  Se vuoi scaricare gratis Excel 2016 sul tuo smartphone o tablet, puoi farlo facilmente tramite i rispettivi store delle tue piattaforme. Infatti, Excel 2016 è disponibile come app gratuita per Android, iPhone e iPad. Tuttavia, per poter usare tutte le funzionalità dell'app, devi avere un abbonamento a Microsoft 365. Altrimenti, potrai solo visualizzare i file Excel, ma non modificarli o crearne di nuovi. Vediamo come fare per scaricare Excel 2016 sui tuoi dispositivi mobili.

                  -

                  Come scaricare Excel 2016 su Android

                  -

                  Per scaricare Excel 2016 su Android, devi seguire questi passaggi:

                  -
                    -
                  1. Apri il Google Play Store sul tuo smartphone o tablet Android.
                  2. -
                  3. Digita "Excel" nella barra di ricerca in alto e clicca sul risultato corrispondente a Microsoft Excel: crea e modifica fogli di calcolo.
                  4. -
                  5. Clicca sul pulsante Installa e attendi che il download e l'installazione siano completi.
                  6. -
                  7. Apri l'app Excel e accedi con il tuo account Microsoft. Se non hai un account, puoi crearlo gratuitamente inserendo il tuo indirizzo email e una password.
                  8. -
                  9. Se hai un abbonamento a Microsoft 365, potrai usare tutte le funzionalità dell'app Excel. Altrimenti, potrai solo visualizzare i file Excel, ma non modificarli o crearne di nuovi.
                  10. -
                  -

                  Come scaricare Excel 2016 su iPhone e iPad

                  -

                  Per scaricare Excel 2016 su iPhone e iPad, devi seguire questi passaggi:

                  -
                    -
                  1. Apri l'App Store sul tuo iPhone o iPad.
                  2. -
                  3. Digita "Excel" nella barra di ricerca in basso e clicca sul risultato corrispondente a Microsoft Excel.
                  4. -
                  5. Clicca sul pulsante Ottieni e inserisci il tuo ID Apple o usa il Face ID o il Touch ID per confermare il download.
                  6. -
                  7. Apri l'app Excel e accedi con il tuo account Microsoft. Se non hai un account, puoi crearlo gratuitamente inserendo il tuo indirizzo email e una password.
                  8. -
                  9. Se hai un abbonamento a Microsoft 365, potrai usare tutte le funzionalità dell'app Excel. Altrimenti, potrai solo visualizzare i file Excel, ma non modificarli o crearne di nuovi.
                  10. -
                  -

                  Conclusioni e FAQ

                  -

                  In questo articolo, ti abbiamo spiegato come scaricare gratis Excel 2016, l'ultima versione del famoso foglio elettronico di Microsoft. Ti abbiamo mostrato le principali funzionalità e novità di Excel 2016, i requisiti di sistema e le versioni disponibili. Ti abbiamo anche illustrato come scaricare Excel 2016 su PC, Mac, smartphone e tablet, sia attivando la prova gratuita di Microsoft 365 che acquistando la licenza permanente di Office Home & Student o di Excel 2016. Speriamo che questo articolo ti sia stato utile e che ora tu possa usare Excel 2016 per gestire, analizzare e visualizzare i tuoi dati in modo efficiente ed efficace.

                  -

                  Se hai ancora dei dubbi o delle domande su come scaricare gratis Excel 2016, qui sotto trovi alcune FAQ che potrebbero aiutarti a chiarirli.

                  -

                  Cos'è Microsoft 365?

                  -

                  Microsoft 365 è un abbonamento annuale o mensile che ti permette di accedere a tutte le applicazioni di Office, compreso Excel 2016, su più dispositivi (PC, Mac, smartphone e tablet). Inoltre, ti offre 1 TB di spazio su OneDrive, il servizio di cloud storage di Microsoft, e altri vantaggi come l'assistenza tecnica e le funzionalità aggiuntive.

                  -

                  Cos'è Office Home & Student?

                  -

                  Office Home & Student è una licenza permanente che ti permette di installare Excel 2016 e le altre applicazioni di Office (Word, PowerPoint e OneNote) su un solo PC o Mac. Non include gli aggiornamenti futuri, lo spazio su OneDrive e gli altri vantaggi di Microsoft 365.

                  -

                  Cos'è Excel 2016?

                  -

                  Excel 2016 è una licenza permanente che ti permette di installare solo Excel 2016 su un solo PC o Mac. Non include le altre applicazioni di Office, gli aggiornamenti futuri, lo spazio su OneDrive e gli altri vantaggi di Microsoft 365.

                  -

                  Come posso disattivare la prova gratuita di Microsoft 365?

                  -

                  Per disattivare la prova gratuita di Microsoft 365, devi seguire questi passaggi:

                  -
                    -
                  1. Visita il sito ufficiale di Microsoft Office e accedi con il tuo account Microsoft.
                  2. -
                  3. Clicca sull'icona del tuo profilo in alto a destra e poi su I miei account.
                  4. -
                  5. Clicca sulla scheda S ervizi e abbonamenti e poi su Gestisci accanto a Microsoft 365.
                  6. -
                  7. Clicca su Annulla abbonamento e conferma la tua scelta.
                  8. -
                  -

                  Se disattivi la prova gratuita prima della scadenza, non ti verrà addebitato nulla. Se invece la disattivi dopo la scadenza, ti verrà addebitato il costo dell'abbonamento per il mese successivo.

                  -

                  Come posso aggiornare Excel 2016?

                  -

                  Per aggiornare Excel 2016, devi seguire questi passaggi:

                  -
                    -
                  1. Apri Excel 2016 sul tuo dispositivo.
                  2. -
                  3. Clicca sul menu File e poi su Account.
                  4. -
                  5. Clicca sul pulsante Opzioni di aggiornamento e poi su Aggiorna ora.
                  6. -
                  7. Attendi che il processo di aggiornamento sia completato e riavvia Excel 2016.
                  8. -
                  -

                  Se hai un abbonamento a Microsoft 365, riceverai automaticamente gli aggiornamenti più recenti di Excel 2016 e delle altre applicazioni di Office. Se invece hai una licenza permanente di Office Home & Student o di Excel 2016, potrai ricevere solo gli aggiornamenti di sicurezza e di stabilità, ma non le nuove funzionalità.

                  -

                  Come posso contattare il supporto tecnico di Microsoft Office?

                  -

                  Per contattare il supporto tecnico di Microsoft Office, devi seguire questi passaggi:

                  -
                    -
                  1. Visita il sito ufficiale del supporto di Microsoft Office.
                  2. -
                  3. Scegli il prodotto che ti interessa tra quelli elencati o digita il tuo problema nella barra di ricerca.
                  4. -
                  5. Consulta le risorse disponibili, come le guide, i video, i forum e le domande frequenti, per trovare una soluzione al tuo problema.
                  6. -
                  7. Se non trovi una soluzione, clicca sul pulsante Contattaci e scegli tra le opzioni disponibili, come la chat, il telefono o il feedback.
                  8. -
                  -

                  Il supporto tecnico di Microsoft Office è gratuito per tutti gli utenti, ma i tempi e i modi di risposta possono variare in base al tipo di problema e al tipo di licenza.

                  401be4b1e0
                  -
                  -
                  \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Eski TikTok APK The Best Way to Access Old Features and Filters on TikTok.md b/spaces/fatiXbelha/sd/Eski TikTok APK The Best Way to Access Old Features and Filters on TikTok.md deleted file mode 100644 index ffc0895221b8f4a1a2c444975cfc966b78aba7d6..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Eski TikTok APK The Best Way to Access Old Features and Filters on TikTok.md +++ /dev/null @@ -1,138 +0,0 @@ - -

                  What is Eski TikTok APK and How to Download It

                  -

                  Introduction

                  -

                  TikTok is one of the most popular social media platforms in the world, with over 800 million active users who create and share short-form videos on various topics such as music, comedy, dance, education, beauty, fashion, and more. The app has a huge library of songs, filters, effects, stickers, and other features that make video creation fun and easy.

                  -

                  eski tiktok apk


                  Download ❤❤❤ https://urllie.com/2uNG16



                  -

                  However, not everyone is happy with the current version of TikTok. Some users prefer the old features and interface of the app that were available before it merged with Musical.ly in 2018. That's why some people look for alternative ways to access the old version of TikTok, such as downloading an APK file.

                  -

                  An APK file is an Android application package that contains all the files and data needed to install an app on an Android device. By downloading an APK file from a third-party source, you can bypass the official app store and install apps that are not available or restricted in your region.

                  -

                  One of the most popular APK files for TikTok is Eski TikTok APK, which claims to offer the old version of TikTok with all its original features and functions. But what exactly is Eski TikTok APK and how can you download it? In this article, we will answer these questions and more.

                  -

                  How to Download and Install Eski TikTok APK

                  -

                  If you want to try out Eski TikTok APK on your Android device, you will need to follow these steps:

                  -

                  Step 1: Enable unknown sources on your device

                  -

Since Eski TikTok APK is not available on Google Play Store or any other official app store, you will need to enable unknown sources in your device settings. This will allow you to install apps from sources other than the official app store.

                  -

                  To enable unknown sources, go to Settings > Security > Unknown Sources and toggle it on. You may see a warning message that installing apps from unknown sources may harm your device or data. Tap OK to proceed.
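As an extra precaution along the lines of the warning above about untrusted sources, you can also compare the downloaded file against a checksum before installing it, if the download page publishes one. The short Python sketch below is only illustrative; the file name and the expected hash are hypothetical placeholders, not values provided by the Eski TikTok APK distributors.

```python
import hashlib

APK_PATH = "eski-tiktok.apk"  # hypothetical name of the downloaded file
EXPECTED_SHA256 = "put-the-published-checksum-here"  # hypothetical value from the download page

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

if sha256_of(APK_PATH) == EXPECTED_SHA256:
    print("Checksum matches the published value - the file was not altered in transit.")
else:
    print("Checksum mismatch - do not install this APK.")
```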

                  -


                  What is Eski TikTok APK and How to Download It

                  -

                  Introduction

                  -

                  TikTok is one of the most popular social media platforms in the world, with over 800 million active users who create and share short-form videos on various topics such as music, comedy, dance, education, beauty, fashion, and more. The app has a huge library of songs, filters, effects, stickers, and other features that make video creation fun and easy.

                  -

                  However, not everyone is happy with the current version of TikTok. Some users prefer the old features and interface of the app that were available before it merged with Musical.ly in 2018. That's why some people look for alternative ways to access the old version of TikTok, such as downloading an APK file.

                  -

                  An APK file is an Android application package that contains all the files and data needed to install an app on an Android device. By downloading an APK file from a third-party source, you can bypass the official app store and install apps that are not available or restricted in your region.
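                  If you are curious what is actually inside an APK, it is essentially a ZIP archive, so you can inspect one with ordinary tools. Here is a minimal sketch; the file name below is only a placeholder for whatever APK you have downloaded:

```bash
# An APK is a ZIP archive under the hood, so listing its contents shows the
# files the paragraph above refers to: AndroidManifest.xml, compiled code
# (classes.dex), resources (res/, resources.arsc) and signing data (META-INF/).
unzip -l eski-tiktok.apk
```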

                  -

                  One of the most popular APK files for TikTok is Eski TikTok APK, which claims to offer the old version of TikTok with all its original features and functions. But what exactly is Eski TikTok APK and how can you download it? In this article, we will answer these questions and more.

                  -

                  How to Download and Install Eski TikTok APK

                  -

                  If you want to try out Eski TikTok APK on your Android device, you will need to follow these steps:

                  -

                  Step 1: Enable unknown sources on your device

                  -

                  Since Eski TikTok APK is not available on the Google Play Store or any other official app store, you will need to enable unknown sources in your device settings. This will allow you to install apps from sources other than the official app store.

                  -

                  To enable unknown sources, go to Settings > Security > Unknown Sources and toggle it on. You may see a warning message that installing apps from unknown sources may harm your device or data. Tap OK to proceed.
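                  If your phone is connected to a computer with adb, you can also read this toggle from the command line on older Android versions. This is only a rough sketch: on Android 8.0 and later the permission is granted per app rather than globally, so the setting shown here may not exist on newer devices.

```bash
# Read the "install from unknown sources" toggle (Android 7.x and earlier).
# A value of 1 means installs from outside the official store are allowed.
adb shell settings get global install_non_market_apps
```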

                  -


                  Step 2: Download the APK file from a trusted source

                  -

                  Next, you will need to download the Eski TikTok APK file from a trusted source. There are many websites that offer APK files for various apps, but not all of them are safe and reliable. Some of them may contain malware or viruses that can harm your device or data.

                  -

                  To avoid such risks, you should only download APK files from reputable sources that have positive reviews and ratings from other users. You can also use an antivirus app to scan the APK file before installing it.
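                  If you are comfortable with a command line, you can also run a quick integrity check of the downloaded file yourself. This is a minimal sketch, assuming the Android SDK build-tools (which provide apksigner) are installed and using a placeholder file name:

```bash
# Compare the checksum against the one published by the download page, if it lists one.
sha256sum eski-tiktok.apk

# Check that the APK carries a valid signature and print the signing certificate,
# so you can at least see who signed the package.
apksigner verify --print-certs eski-tiktok.apk
```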

                  -

                  One of the websites that you can use to download Eski TikTok APK is APKPure.com. This website provides verified and safe APK files for various apps and games. You can also find the latest updates and versions of the apps on this website.

                  -

                  To download Eski TikTok APK from APKPure.com, follow these steps:

                  -
                  • Go to APKPure.com and search for Eski TikTok APK in the search bar.
                  • Select the app from the search results and click on the Download APK button.
                  • Choose a download location and wait for the download to complete.
                  -

                  Step 3: Install the APK file and launch the app

                  -

                  Once you have downloaded the Eski TikTok APK file, you can install it on your device by following these steps:

                  -
                  • Locate the APK file on your device storage and tap on it.
                  • You may see a pop-up message asking you to confirm the installation. Tap on Install and wait for the installation to finish.
                  • Once the installation is done, you can launch the app by tapping on Open or by finding it in your app drawer.
                  -

                  Congratulations! You have successfully installed Eski TikTok APK on your device. You can now enjoy the old version of TikTok with all its features and functions.
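                  If you prefer to sideload the file from a computer instead of tapping through the on-device installer, adb can do the same job. A minimal sketch, assuming USB debugging is enabled on the phone and using a placeholder file name:

```bash
# Confirm the phone is connected and authorized for debugging.
adb devices

# Install the APK; the -r flag replaces an existing installation while keeping its data.
adb install -r eski-tiktok.apk
```

                  Either way, the app appears in your app drawer just like any other installed app.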

                  -

                  How to Use Eski TikTok APK

                  -

                  Eski TikTok APK is very similar to the original TikTok app, except that it has the old features and interface that were available before 2018. You can use Eski TikTok APK to create and edit videos, explore and discover videos, and interact and communicate with other users.

                  -

                  How to create and edit videos

                  -

                  To create and edit videos on Eski TikTok APK, follow these steps:

                  -
                  • Tap on the plus icon at the bottom of the screen to open the camera.
                  • Choose a song from the music library or upload your own audio.
                  • Record your video by holding down the record button. You can also use filters, effects, stickers, timers, speed, beauty, and other features to enhance your video.
                  • Edit your video by trimming, cropping, adding text, adjusting volume, applying filters, etc.
                  • Add a caption, hashtags, and tags to your video and tap on Post to share it with your followers or save it as a draft.
                  -

                  How to explore and discover videos

                  -

                  To explore and discover videos on Eski TikTok APK, follow these steps:

                  -
                  • Tap on the home icon at the bottom of the screen to see the videos from your following list or from other users around the world.
                  • Swipe left or right to switch between different tabs such as For You, Following, Trending, etc.
                  • Tap on a video to watch it in full screen. You can also like, comment, share, or save the video.
                  • Tap on a user's profile picture or username to see their profile page. You can also follow, message, or block them.
                  • Tap on the magnifying glass icon at the bottom of the screen to search for videos, users, hashtags, songs, or topics.
                  -


                  How to interact and communicate with other users

                  -

                  To interact and communicate with other users on Eski TikTok APK, follow these steps:

                  -
                  • Tap on the heart icon at the bottom of the screen to see your notifications. You can see who liked, commented, followed, or mentioned you on your videos or messages.
                  • Tap on the message icon at the bottom of the screen to see your chats. You can send and receive messages with your friends or other users. You can also send photos, videos, stickers, emojis, or voice messages.
                  • Tap on the live icon at the top of the screen to see who is currently streaming live.

                -

                Comparison with Other TikTok Alternatives

                -

                Eski TikTok APK is not the only way to get a TikTok-style experience. Other short-video apps include Triller, Dubsmash, Byte, Lomotif, and FunnyTube. One of these, for example, is a video app that lets you create and share funny videos with filters, effects, and editing tools; its strong points are a variety of categories and genres, the chance to laugh and have fun with other users, and the ability to earn rewards and prizes, while its weak points are that it is not very original or creative, some videos may be offensive or inappropriate, and some features require payment or a subscription.

                -

                Conclusion

                -

                Eski TikTok APK is an app that gives you access to the old version of TikTok with all its original features and functions. It is a good option for those who miss the app's pre-2018 look and feel. However, it also has some drawbacks: it is incompatible with newer versions of TikTok, it is not available on official app stores, and it carries a potential risk of malware or viruses.

                -

                If you want to download and install Eski TikTok APK, you will need to enable unknown sources in your device settings, download the APK file from a trusted source, and then install it on your device. You can then use Eski TikTok APK to create and edit videos, explore and discover videos, and interact and communicate with other users.

                -

                However, if you are looking for other alternatives to TikTok that offer similar or better features and functions, you may want to check out some of the apps mentioned above, such as Triller, Dubsmash, Byte, Lomotif, or FunnyTube. These apps may provide you with more options and variety for your video creation and sharing needs.

                -

                FAQs

                -

                Q1. Is Eski TikTok APK legal and safe to use?

                -

                A1. Eski TikTok APK is not illegal to use, but it may violate the terms and conditions of TikTok. Therefore, you may face some consequences or issues if you use it. Eski TikTok APK is also not very safe to use, as it may contain malware or viruses that can harm your device or data. You should only download it from a trusted source and scan it with an antivirus app before installing it.

                -

                Q2. Can I use Eski TikTok APK on iOS devices?

                -

                A2. No, Eski TikTok APK is only compatible with Android devices. If you want to use the old version of TikTok on iOS devices, you may need to jailbreak your device or use an emulator.

                -

                Q3. Can I update Eski TikTok APK to the latest version of TikTok?

                -

                A3. No, Eski TikTok APK is based on an old version of TikTok and cannot be updated to the latest version of the app. If you want to use the latest version of TikTok, you will need to uninstall Eski TikTok APK and download the official app from the app store.

                -

                Q4. Can I log in with my existing TikTok account on Eski TikTok APK?

                -

                A4. Yes, you can log in with your existing TikTok account on Eski TikTok APK. However, you may not be able to see or use some of the new features and functions that are available on the official app. You may also face some issues or errors if you switch between the two apps frequently.

                -

                Q5. Can I share my videos from Eski TikTok APK to other social media platforms?

                -

                A5. Yes, you can share your videos from Eski TikTok APK to other social media platforms such as Facebook, Instagram, Twitter, etc. However, you may not be able to use some of the features or functions that are available on the official app, such as stickers, effects, hashtags, etc.

                401be4b1e0
                -
                -
                \ No newline at end of file diff --git a/spaces/fb700/chatglm-fitness-RLHF/src/face3d/models/arcface_torch/configs/__init__.py b/spaces/fb700/chatglm-fitness-RLHF/src/face3d/models/arcface_torch/configs/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/fb700/chatglm-fitness-RLHF/src/face3d/models/arcface_torch/onnx_helper.py b/spaces/fb700/chatglm-fitness-RLHF/src/face3d/models/arcface_torch/onnx_helper.py deleted file mode 100644 index ca922ca6d410655029e459cf8fd1c323d276c34c..0000000000000000000000000000000000000000 --- a/spaces/fb700/chatglm-fitness-RLHF/src/face3d/models/arcface_torch/onnx_helper.py +++ /dev/null @@ -1,250 +0,0 @@ -from __future__ import division -import datetime -import os -import os.path as osp -import glob -import numpy as np -import cv2 -import sys -import onnxruntime -import onnx -import argparse -from onnx import numpy_helper -from insightface.data import get_image - -class ArcFaceORT: - def __init__(self, model_path, cpu=False): - self.model_path = model_path - # providers = None will use available provider, for onnxruntime-gpu it will be "CUDAExecutionProvider" - self.providers = ['CPUExecutionProvider'] if cpu else None - - #input_size is (w,h), return error message, return None if success - def check(self, track='cfat', test_img = None): - #default is cfat - max_model_size_mb=1024 - max_feat_dim=512 - max_time_cost=15 - if track.startswith('ms1m'): - max_model_size_mb=1024 - max_feat_dim=512 - max_time_cost=10 - elif track.startswith('glint'): - max_model_size_mb=1024 - max_feat_dim=1024 - max_time_cost=20 - elif track.startswith('cfat'): - max_model_size_mb = 1024 - max_feat_dim = 512 - max_time_cost = 15 - elif track.startswith('unconstrained'): - max_model_size_mb=1024 - max_feat_dim=1024 - max_time_cost=30 - else: - return "track not found" - - if not os.path.exists(self.model_path): - return "model_path not exists" - if not os.path.isdir(self.model_path): - return "model_path should be directory" - onnx_files = [] - for _file in os.listdir(self.model_path): - if _file.endswith('.onnx'): - onnx_files.append(osp.join(self.model_path, _file)) - if len(onnx_files)==0: - return "do not have onnx files" - self.model_file = sorted(onnx_files)[-1] - print('use onnx-model:', self.model_file) - try: - session = onnxruntime.InferenceSession(self.model_file, providers=self.providers) - except: - return "load onnx failed" - input_cfg = session.get_inputs()[0] - input_shape = input_cfg.shape - print('input-shape:', input_shape) - if len(input_shape)!=4: - return "length of input_shape should be 4" - if not isinstance(input_shape[0], str): - #return "input_shape[0] should be str to support batch-inference" - print('reset input-shape[0] to None') - model = onnx.load(self.model_file) - model.graph.input[0].type.tensor_type.shape.dim[0].dim_param = 'None' - new_model_file = osp.join(self.model_path, 'zzzzrefined.onnx') - onnx.save(model, new_model_file) - self.model_file = new_model_file - print('use new onnx-model:', self.model_file) - try: - session = onnxruntime.InferenceSession(self.model_file, providers=self.providers) - except: - return "load onnx failed" - input_cfg = session.get_inputs()[0] - input_shape = input_cfg.shape - print('new-input-shape:', input_shape) - - self.image_size = tuple(input_shape[2:4][::-1]) - #print('image_size:', self.image_size) - input_name = input_cfg.name - outputs = session.get_outputs() - output_names = [] - for o in 
outputs: - output_names.append(o.name) - #print(o.name, o.shape) - if len(output_names)!=1: - return "number of output nodes should be 1" - self.session = session - self.input_name = input_name - self.output_names = output_names - #print(self.output_names) - model = onnx.load(self.model_file) - graph = model.graph - if len(graph.node)<8: - return "too small onnx graph" - - input_size = (112,112) - self.crop = None - if track=='cfat': - crop_file = osp.join(self.model_path, 'crop.txt') - if osp.exists(crop_file): - lines = open(crop_file,'r').readlines() - if len(lines)!=6: - return "crop.txt should contain 6 lines" - lines = [int(x) for x in lines] - self.crop = lines[:4] - input_size = tuple(lines[4:6]) - if input_size!=self.image_size: - return "input-size is inconsistant with onnx model input, %s vs %s"%(input_size, self.image_size) - - self.model_size_mb = os.path.getsize(self.model_file) / float(1024*1024) - if self.model_size_mb > max_model_size_mb: - return "max model size exceed, given %.3f-MB"%self.model_size_mb - - input_mean = None - input_std = None - if track=='cfat': - pn_file = osp.join(self.model_path, 'pixel_norm.txt') - if osp.exists(pn_file): - lines = open(pn_file,'r').readlines() - if len(lines)!=2: - return "pixel_norm.txt should contain 2 lines" - input_mean = float(lines[0]) - input_std = float(lines[1]) - if input_mean is not None or input_std is not None: - if input_mean is None or input_std is None: - return "please set input_mean and input_std simultaneously" - else: - find_sub = False - find_mul = False - for nid, node in enumerate(graph.node[:8]): - print(nid, node.name) - if node.name.startswith('Sub') or node.name.startswith('_minus'): - find_sub = True - if node.name.startswith('Mul') or node.name.startswith('_mul') or node.name.startswith('Div'): - find_mul = True - if find_sub and find_mul: - print("find sub and mul") - #mxnet arcface model - input_mean = 0.0 - input_std = 1.0 - else: - input_mean = 127.5 - input_std = 127.5 - self.input_mean = input_mean - self.input_std = input_std - for initn in graph.initializer: - weight_array = numpy_helper.to_array(initn) - dt = weight_array.dtype - if dt.itemsize<4: - return 'invalid weight type - (%s:%s)' % (initn.name, dt.name) - if test_img is None: - test_img = get_image('Tom_Hanks_54745') - test_img = cv2.resize(test_img, self.image_size) - else: - test_img = cv2.resize(test_img, self.image_size) - feat, cost = self.benchmark(test_img) - batch_result = self.check_batch(test_img) - batch_result_sum = float(np.sum(batch_result)) - if batch_result_sum in [float('inf'), -float('inf')] or batch_result_sum != batch_result_sum: - print(batch_result) - print(batch_result_sum) - return "batch result output contains NaN!" 
- - if len(feat.shape) < 2: - return "the shape of the feature must be two, but get {}".format(str(feat.shape)) - - if feat.shape[1] > max_feat_dim: - return "max feat dim exceed, given %d"%feat.shape[1] - self.feat_dim = feat.shape[1] - cost_ms = cost*1000 - if cost_ms>max_time_cost: - return "max time cost exceed, given %.4f"%cost_ms - self.cost_ms = cost_ms - print('check stat:, model-size-mb: %.4f, feat-dim: %d, time-cost-ms: %.4f, input-mean: %.3f, input-std: %.3f'%(self.model_size_mb, self.feat_dim, self.cost_ms, self.input_mean, self.input_std)) - return None - - def check_batch(self, img): - if not isinstance(img, list): - imgs = [img, ] * 32 - if self.crop is not None: - nimgs = [] - for img in imgs: - nimg = img[self.crop[1]:self.crop[3], self.crop[0]:self.crop[2], :] - if nimg.shape[0] != self.image_size[1] or nimg.shape[1] != self.image_size[0]: - nimg = cv2.resize(nimg, self.image_size) - nimgs.append(nimg) - imgs = nimgs - blob = cv2.dnn.blobFromImages( - images=imgs, scalefactor=1.0 / self.input_std, size=self.image_size, - mean=(self.input_mean, self.input_mean, self.input_mean), swapRB=True) - net_out = self.session.run(self.output_names, {self.input_name: blob})[0] - return net_out - - - def meta_info(self): - return {'model-size-mb':self.model_size_mb, 'feature-dim':self.feat_dim, 'infer': self.cost_ms} - - - def forward(self, imgs): - if not isinstance(imgs, list): - imgs = [imgs] - input_size = self.image_size - if self.crop is not None: - nimgs = [] - for img in imgs: - nimg = img[self.crop[1]:self.crop[3],self.crop[0]:self.crop[2],:] - if nimg.shape[0]!=input_size[1] or nimg.shape[1]!=input_size[0]: - nimg = cv2.resize(nimg, input_size) - nimgs.append(nimg) - imgs = nimgs - blob = cv2.dnn.blobFromImages(imgs, 1.0/self.input_std, input_size, (self.input_mean, self.input_mean, self.input_mean), swapRB=True) - net_out = self.session.run(self.output_names, {self.input_name : blob})[0] - return net_out - - def benchmark(self, img): - input_size = self.image_size - if self.crop is not None: - nimg = img[self.crop[1]:self.crop[3],self.crop[0]:self.crop[2],:] - if nimg.shape[0]!=input_size[1] or nimg.shape[1]!=input_size[0]: - nimg = cv2.resize(nimg, input_size) - img = nimg - blob = cv2.dnn.blobFromImage(img, 1.0/self.input_std, input_size, (self.input_mean, self.input_mean, self.input_mean), swapRB=True) - costs = [] - for _ in range(50): - ta = datetime.datetime.now() - net_out = self.session.run(self.output_names, {self.input_name : blob})[0] - tb = datetime.datetime.now() - cost = (tb-ta).total_seconds() - costs.append(cost) - costs = sorted(costs) - cost = costs[5] - return net_out, cost - - -if __name__ == '__main__': - parser = argparse.ArgumentParser(description='') - # general - parser.add_argument('workdir', help='submitted work dir', type=str) - parser.add_argument('--track', help='track name, for different challenge', type=str, default='cfat') - args = parser.parse_args() - handler = ArcFaceORT(args.workdir) - err = handler.check(args.track) - print('err:', err) diff --git a/spaces/fclong/summary/fengshen/examples/clue1.1/solution/clue_unimc.py b/spaces/fclong/summary/fengshen/examples/clue1.1/solution/clue_unimc.py deleted file mode 100644 index a5ffe4899e31216326260a65d9d12ad7892fc60f..0000000000000000000000000000000000000000 --- a/spaces/fclong/summary/fengshen/examples/clue1.1/solution/clue_unimc.py +++ /dev/null @@ -1,63 +0,0 @@ -import argparse -from fengshen.pipelines.multiplechoice import UniMCPipelines -import os -import json -import copy -from tqdm 
import tqdm - -def load_data(data_path): - with open(data_path, 'r', encoding='utf8') as f: - lines = f.readlines() - samples = [json.loads(line) for line in tqdm(lines)] - return samples - - -def comp_acc(pred_data,test_data): - corr=0 - for i in range(len(pred_data)): - if pred_data[i]['label']==test_data[i]['label']: - corr+=1 - return corr/len(pred_data) - - -def main(): - total_parser = argparse.ArgumentParser("TASK NAME") - total_parser.add_argument('--data_dir', default='./data', type=str) - total_parser.add_argument('--train_data', default='train.json', type=str) - total_parser.add_argument('--valid_data', default='dev.json', type=str) - total_parser.add_argument('--test_data', default='test.json', type=str) - total_parser.add_argument('--output_path', default='', type=str) - - total_parser = UniMCPipelines.piplines_args(total_parser) - args = total_parser.parse_args() - - train_data = load_data(os.path.join(args.data_dir, args.train_data)) - dev_data = load_data(os.path.join(args.data_dir, args.valid_data)) - test_data = load_data(os.path.join(args.data_dir, args.test_data)) - - # dev_data = dev_data[:200] - dev_data_ori=copy.deepcopy(dev_data) - - model = UniMCPipelines(args, args.pretrained_model_path) - - print(args.data_dir) - - if args.train: - model.train(train_data, dev_data) - result = model.predict(dev_data) - for line in result[:20]: - print(line) - - acc=comp_acc(result,dev_data_ori) - print('acc:',acc) - - if args.output_path != '': - test_result = model.predict(test_data) - with open(args.output_path, 'w', encoding='utf8') as f: - for line in test_result: - json_data=json.dumps(line,ensure_ascii=False) - f.write(json_data+'\n') - - -if __name__ == "__main__": - main() diff --git a/spaces/fclong/summary/fengshen/examples/stable_diffusion_dreambooth/train.sh b/spaces/fclong/summary/fengshen/examples/stable_diffusion_dreambooth/train.sh deleted file mode 100644 index ad3eb7ead394e6662168eb0b4947055277a01b58..0000000000000000000000000000000000000000 --- a/spaces/fclong/summary/fengshen/examples/stable_diffusion_dreambooth/train.sh +++ /dev/null @@ -1,75 +0,0 @@ -#!/bin/bash -#SBATCH --job-name=taiyi-sd-dreambooth # create a short name for your job -#SBATCH --nodes=1 # node count -#SBATCH --ntasks-per-node=1 # number of tasks to run per node -#SBATCH --cpus-per-task=30 # cpu-cores per task (>1 if multi-threaded tasks) -#SBATCH --gres=gpu:1 # number of gpus per node -#SBATCH -o %x-%j.log # output and error log file names (%x for job id) -#SBATCH -x dgx050 - -# pwd=Fengshenbang-LM/fengshen/examples/pretrain_erlangshen -ROOT_DIR=../../workspace -# export CUDA_VISIBLE_DEVICES='7' -export TORCH_EXTENSIONS_DIR=${ROOT_DIR}/torch_extendsions - -MODEL_NAME=taiyi-sd-dreambooth -MODEL_ROOT_DIR=$ROOT_DIR/${MODEL_NAME} -if [ ! 
-d ${MODEL_ROOT_DIR} ];then - mkdir ${MODEL_ROOT_DIR} -fi - -NNODES=1 -GPUS_PER_NODE=1 - -MICRO_BATCH_SIZE=1 -INSTANCE_PROMPT="小黄鸭" -OUTPUT_DIR="saved_model_tinyduck" -INSTANCE_DIR="train_images_duck" - -DATA_ARGS="\ - --dataloader_workers 2 \ - --train_batchsize $MICRO_BATCH_SIZE \ - --val_batchsize $MICRO_BATCH_SIZE \ - --test_batchsize $MICRO_BATCH_SIZE \ - --instance_data_dir=$INSTANCE_DIR \ - --instance_prompt=$INSTANCE_PROMPT \ - --resolution=512 \ - " - -MODEL_ARGS="\ - --model_path $MODEL_ROOT_DIR/pretrain/Taiyi-Stable-Diffusion-1B-Chinese-v0.1/ \ - --train_text_encoder \ - --learning_rate 1e-6 \ - --scheduler_type constant \ - --warmup_steps 100 \ - " - -MODEL_CHECKPOINT_ARGS="\ - --save_ckpt_path ${MODEL_ROOT_DIR}/ckpt \ - --load_ckpt_path ${MODEL_ROOT_DIR}/ckpt/last.ckpt \ - " - -TRAINER_ARGS="\ - --max_steps 1200 \ - --gpus $GPUS_PER_NODE \ - --num_nodes $NNODES \ - --strategy ddp \ - --log_every_n_steps 100 \ - --precision 32 \ - --default_root_dir ${MODEL_ROOT_DIR} \ - --replace_sampler_ddp False \ - --num_sanity_val_steps 0 \ - --limit_val_batches 0 \ - " -# num_sanity_val_steps, limit_val_batches 通过这俩参数把validation关了 - -export options=" \ - $DATA_ARGS \ - $MODEL_ARGS \ - $MODEL_CHECKPOINT_ARGS \ - $TRAINER_ARGS \ - " -# run local -python train.py $options -# run on slurm -# srun python train.py $options \ No newline at end of file diff --git a/spaces/fclong/summary/fengshen/examples/translate/finetune_deltalm.py b/spaces/fclong/summary/fengshen/examples/translate/finetune_deltalm.py deleted file mode 100644 index d19dd1ca4a5f920dcb90863e89940f05362e2cda..0000000000000000000000000000000000000000 --- a/spaces/fclong/summary/fengshen/examples/translate/finetune_deltalm.py +++ /dev/null @@ -1,449 +0,0 @@ -# !/usr/bin/env python -# -*- coding: utf-8 -*- -import pandas as pd -import json -import argparse -import torch -import os -import logging -from transformers import AutoTokenizer, AutoModelForSeq2SeqLM -from pytorch_lightning.utilities import rank_zero_info -from sacrebleu.metrics import BLEU -from fengshen.utils.utils import chinese_char_tokenize -from fengshen.models.model_utils import add_module_args, add_inverse_square_args -from fengshen.models.deltalm.tokenizer_deltalm import DeltalmTokenizer -from fengshen.models.deltalm.modeling_deltalm import DeltalmForConditionalGeneration -from fengshen.utils import UniversalCheckpoint -from fengshen.data.universal_datamodule import UniversalDataModule -from pytorch_lightning import Trainer, loggers, LightningModule -from pytorch_lightning.callbacks import LearningRateMonitor -from mosestokenizer import MosesDetokenizer -from typing import List -import sys -sys.path.append('../../../') - -# from transformers import MBartForConditionalGeneration, MBart50TokenizerFast -# from pytorch_lightning.callbacks.early_stopping import EarlyStopping - - -mose_decode = MosesDetokenizer() - -os.environ["CUDA_VISIBLE_DEVICES"] = '4' -logger = logging.getLogger(__name__) - -EVAL_BLEU_ORDER = 4 - - -def calc_bleu_from_stats(sentence_stats: pd.DataFrame) -> BLEU: - corpus_stats = sentence_stats.sum(axis=0) - smooth = {"smooth_method": "exp"} - corpus_bleu = BLEU.compute_bleu( - correct=[ - corpus_stats.correct_1_grams, - corpus_stats.correct_2_grams, - corpus_stats.correct_3_grams, - corpus_stats.correct_4_grams, - ], - total=[ - corpus_stats.total_1_grams, - corpus_stats.total_2_grams, - corpus_stats.total_3_grams, - corpus_stats.total_4_grams, - ], - sys_len=corpus_stats.translation_length, - ref_len=corpus_stats.reference_length, - **smooth - ) - 
return corpus_bleu - - -def label_smoothed_nll_loss(lprobs, target, epsilon, ignore_index=None, reduce=True): - if target.dim() == lprobs.dim() - 1: - target = target.unsqueeze(-1) - # logger.debug("Debug: After target.dim() == lprobs.dim(): ", target.dim(), lprobs.dim()) - nll_loss = -lprobs.gather(dim=-1, index=target) - smooth_loss = -lprobs.sum(dim=-1, keepdim=True) - if ignore_index is not None: - pad_mask = target.eq(ignore_index) - nll_loss.masked_fill_(pad_mask, 0.0) - smooth_loss.masked_fill_(pad_mask, 0.0) - else: - nll_loss = nll_loss.squeeze(-1) - smooth_loss = smooth_loss.squeeze(-1) - if reduce: - nll_loss = nll_loss.sum() - smooth_loss = smooth_loss.sum() - eps_i = epsilon / (lprobs.size(-1) - 1) - valid_length = target.ne(ignore_index).sum() - # unvalid_length = target.eq(ignore_index).sum() - loss = ((1.0 - epsilon - eps_i) * nll_loss + eps_i * smooth_loss) / valid_length.item() - - return loss, nll_loss - - -class DataCollator: - def __init__(self, model, tokenizer, max_enc_length, max_dec_length, reverse_src_tgt): - self.tokenizer = tokenizer - self.max_enc_length = max_enc_length - self.max_dec_length = max_dec_length - self.model = model - self.reverse_src_tgt = reverse_src_tgt - - def __call__(self, batch_samples): - batch_inputs, batch_targets = [], [] - for sample in batch_samples: - if self.reverse_src_tgt: - if "tgt" in sample and len(sample["tgt"]) != 0: - batch_inputs.append(sample["tgt"]) - batch_targets.append(sample["src"]) - else: - if "src" in sample and len(sample["src"]) != 0: - batch_inputs.append(sample["src"]) - batch_targets.append(sample["tgt"]) - batch_data = self.tokenizer( - batch_inputs, - padding='max_length', - max_length=self.max_enc_length, - truncation=True, - return_tensors="pt" - ) - with self.tokenizer.as_target_tokenizer(): - labels = self.tokenizer( - batch_targets, - padding='max_length', - max_length=self.max_dec_length, - truncation=False, - return_tensors="pt" - )["input_ids"] - batch_data['decoder_input_ids'] = self.model.prepare_decoder_input_ids_from_labels(labels) - batch_data['labels'] = labels - - batch_data['src'] = batch_inputs - batch_data['tgt'] = batch_targets - - # logger.debug(batch_data) - return batch_data - - -class FinetuneTranslation(LightningModule): - - @staticmethod - def add_model_specific_args(parent_args): - parser = parent_args.add_argument_group('deltalm-base finetune') - parser.add_argument('--label_smoothing', default=0.1, type=float) - return parent_args - - def __init__(self, args, tokenizer=None): - super().__init__() - self.args = args - self.save_hyperparameters(args) - if args.other_model: - self.model = AutoModelForSeq2SeqLM.from_pretrained(args.model_path) - else: - self.model = DeltalmForConditionalGeneration.from_pretrained(args.model_path, ignore_mismatched_sizes=True) - self.tokenizer = tokenizer - assert self.tokenizer, "tokenizer is None!" 
- self.blue_metric = BLEU() - self.sufficient_stats: List[List[int]] = [] - self.label_smoothing = self.args.label_smoothing - self.mose_decode = MosesDetokenizer() - - if self.args.label_smoothing != 0: - self.loss_fn = label_smoothed_nll_loss - - def setup(self, stage) -> None: - if stage == 'fit': - train_loader = self.trainer._data_connector._train_dataloader_source.dataloader() - - # Calculate total steps - tb_size = self.hparams.train_batchsize * max(1, self.trainer.gpus) - ab_size = self.trainer.accumulate_grad_batches * float( - self.trainer.max_epochs) - self.total_steps = (len(train_loader.dataset) // - tb_size) // ab_size - - def configure_optimizers(self): - # if self.args.use_default_configure: - from fengshen.models.model_utils import configure_optimizers - return configure_optimizers(self) - - def training_step(self, batch, batch_idx): - if self.label_smoothing == 0: - output = self.model(input_ids=batch['input_ids'], - attention_mask=batch['attention_mask'], - labels=batch['labels']) - - self.log('train_loss', output.loss, sync_dist=True) - return output.loss - - # TODO label_smoothing should be implemented at here - else: - labels = batch["labels"] - output = self.model(input_ids=batch['input_ids'], - attention_mask=batch['attention_mask'], - decoder_input_ids=batch['decoder_input_ids']) - - logits = output["logits"] - m = torch.nn.LogSoftmax(dim=-1) - lprobs = m(logits.float()) - loss, _ = self.loss_fn(lprobs.view(-1, lprobs.size(-1)), labels.view(-1), - self.label_smoothing, self.tokenizer.pad_token_id) - self.log('train_loss', loss, sync_dist=True) - return loss - - def comput_metrix(self, logits, labels): - y_pred = torch.argmax(logits, dim=-1) - y_pred = y_pred.view(size=(-1, )) - - y_true = labels.view(size=(-1, )) - pad_mask = y_true.eq(1) - valid_length = y_true.ne(1).sum() - - corr = torch.eq(y_pred, y_true.float()) - corr.masked_fill_(pad_mask, 0.0) - acc = torch.sum(corr.float()) / valid_length - return acc - - def get_sufficient_stats(self, translations: List[str], references: List[str]) -> pd.DataFrame: - assert len(translations) == len(references), ( - f"There are {len(translations)} translated sentences " - f"but {len(references)} reference sentences" - ) - - # for sentence, ref in zip(translations, references): - - sentence_bleu = self.blue_metric.corpus_score(translations, [references]) - self.sufficient_stats.append( - [ - # Number of correct 1-grams, .., 4-grams - sentence_bleu.counts[0], - sentence_bleu.counts[1], - sentence_bleu.counts[2], - sentence_bleu.counts[3], - # Total number of 1-grams, .., 4-grams - sentence_bleu.totals[0], - sentence_bleu.totals[1], - sentence_bleu.totals[2], - sentence_bleu.totals[3], - # Length of translated sentence. - sentence_bleu.sys_len, - # Length of reference sentence. - sentence_bleu.ref_len, - ] - ) - - def on_validation_start(self) -> None: - # rm file at validation start - prefix, ext = os.path.splitext(self.hparams.output_save_path) - file_path_rank = '{}_{}{}'.format( - prefix, - self.trainer._accelerator_connector.cluster_environment. 
- global_rank(), ext) - if os.path.exists(file_path_rank): - # logger.debug('rm {}'.format(file_path_rank)) - os.remove(file_path_rank) - - def validation_step(self, batch, batch_idx): - - def postprocess_text(preds, labels, tgt_zh): - if tgt_zh: - preds = [pred.strip() for pred in preds] - labels = [label.strip() for label in labels] - else: - preds = list(map(lambda x: mose_decode(x.strip().split()), preds)) - labels = list(map(lambda x: mose_decode(x.strip().split()), labels)) - return preds, labels - - tmp_label = batch['labels'] - end_token_index = torch.where(tmp_label == self.tokenizer.eos_token_id)[1] - for idx, end_idx in enumerate(end_token_index): - tmp_label[idx][end_idx+1:] = -100 - output = self.model(input_ids=batch['input_ids'], - attention_mask=batch['attention_mask'], - labels=tmp_label) - generated_ids = self.model.generate( - input_ids=batch['input_ids'], - attention_mask=batch['attention_mask'], - max_length=self.hparams.max_dec_length) - - preds = self.tokenizer.batch_decode(generated_ids, - skip_special_tokens=True) - labels = torch.where(batch['labels'] != -100, batch['labels'], - self.tokenizer.pad_token_id) - - labels = self.tokenizer.batch_decode(labels, - skip_special_tokens=True) - - decoded_preds, decoded_labels = postprocess_text(preds, labels, self.args.tgt_zh) - # save preds for every rank - prefix, ext = os.path.splitext(self.hparams.output_save_path) - file_path_rank = '{}_{}{}'.format( - prefix, - self.trainer._accelerator_connector.cluster_environment. - global_rank(), ext) - self.save_prediction_to_file(preds=decoded_preds, - sources=batch['src'], - targets=decoded_labels, - ori_target=batch['tgt'], - file_path=file_path_rank) - - if self.args.tgt_zh: - new_preds = [chinese_char_tokenize(p) for p in decoded_preds] - new_labels = [chinese_char_tokenize(label) for label in decoded_labels] - self.get_sufficient_stats(new_preds, new_labels) - else: - self.get_sufficient_stats(decoded_preds, decoded_labels) - # batch_bleu = self.blue_metric.corpus_score(decoded_preds, [decoded_labels]).score - acc = self.comput_metrix(output.logits, batch['labels']) - self.log('val_loss', output.loss, sync_dist=True) - self.log('val_acc', acc, sync_dist=True) - - def validation_epoch_end(self, outputs): - rank_zero_info("***** Validation results *****") - sentence_states = pd.DataFrame( - self.sufficient_stats, - columns=[ - "correct_1_grams", - "correct_2_grams", - "correct_3_grams", - "correct_4_grams", - "total_1_grams", - "total_2_grams", - "total_3_grams", - "total_4_grams", - "translation_length", - "reference_length", - ] - ) - - computed_bleu = calc_bleu_from_stats(sentence_states) - rank_zero_info("valid_sacrebleu= {}\n".format(computed_bleu.score)) - self.log('valid_sacrebleu', computed_bleu.score, sync_dist=True) - self.sufficient_stats = [] - - def on_save_checkpoint(self, checkpoint) -> None: - if self.trainer._accelerator_connector.cluster_environment.global_rank( - ) == 0: - self.model.save_pretrained( - os.path.join( - self.trainer.checkpoint_callback.dirpath, - 'finetuned_epoch{}_step{}'.format( - checkpoint['epoch'], checkpoint['global_step']))) - - def save_prediction_to_file(self, preds, sources, targets, ori_target, file_path): - with open(file_path, 'a', encoding='utf-8') as f: - for idx, pred in enumerate(preds): - source = sources[idx] - target = targets[idx] - tmp_result = dict() - tmp_result['pred'] = pred - tmp_result['source'] = source - tmp_result['label'] = target - tmp_result['ori_label'] = ori_target[idx] - json_data = json.dumps(tmp_result, 
ensure_ascii=False) - f.write(json_data + '\n') - - def test_step(self, batch, batch_idx): - # print(batch) - texts = batch['src'] - # output summary and metrics - self.model.eval() - generated_ids = self.model.generate( - input_ids=batch['input_ids'], - attention_mask=batch['attention_mask'], - max_length=self.hparams.max_dec_length - ) - preds = self.tokenizer.batch_decode(generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True) - - labels = torch.where(batch['labels'] != -100, batch['labels'], - self.tokenizer.pad_token_id) - labels = self.tokenizer.batch_decode( - labels, skip_special_tokens=True, clean_up_tokenization_spaces=True) - - self.save_prediction_to_file(preds, texts, labels, self.hparams.output_save_path) - - -def configure_logger(logging_lever=logging.INFO): - logging.basicConfig( - format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", - datefmt="%m/%d/%Y %H:%M:%S", - handlers=[logging.StreamHandler(sys.stdout)], - ) - logger.setLevel(logging_lever) - - -def main(): - args_parser = argparse.ArgumentParser("Pegasus Task") - args_parser.add_argument('--do_eval_only', - action='store_true', - default=False) - args_parser.add_argument('--other_model', - action='store_true', - default=False) - args_parser.add_argument('--reverse_src_tgt', - action='store_true', - default=False) - args_parser.add_argument('--tgt_zh', - action='store_true', - default=False) - args_parser.add_argument('--early_stopping_callback', - action='store_true', - default=False) - args_parser.add_argument('--pretrained_model_path', - default='facebook/mbart', - type=str) - args_parser.add_argument('--output_save_path', - default='predict.json', - type=str) - args_parser.add_argument('--max_enc_length', default=512, type=int) - args_parser.add_argument('--max_dec_length', default=512, type=int) - - # * Args for data preprocessing - args_parser = UniversalDataModule.add_data_specific_args(args_parser) - - # * Args for training - args_parser = Trainer.add_argparse_args(args_parser) - args_parser = UniversalCheckpoint.add_argparse_args(args_parser) - args_parser = FinetuneTranslation.add_model_specific_args(args_parser) - args_parser = add_module_args(args_parser) - args_parser = add_inverse_square_args(args_parser) - - args = args_parser.parse_args() - - if args.other_model: - tokenizer = AutoTokenizer.from_pretrained(args.model_path) - else: - tokenizer = DeltalmTokenizer.from_pretrained(args.model_path) - # tokenizer = AutoTokenizer.from_pretrained(args.model_path) - print("tokenizer vocab size: ", tokenizer.vocab_size) - model = FinetuneTranslation(args, tokenizer) - collator = DataCollator(model.model, tokenizer, args.max_enc_length, args.max_dec_length, args.reverse_src_tgt) - data_model = UniversalDataModule(tokenizer=tokenizer, - args=args, - # datasets=dataset, - collate_fn=collator) - - lr_monitor = LearningRateMonitor(logging_interval='step') - - configure_logger(logging_lever=logging.INFO) - - if not args.do_eval_only: - - lr_monitor = LearningRateMonitor(logging_interval='step') - tensorboard_logger = loggers.TensorBoardLogger( - save_dir=os.path.join(args.default_root_dir, 'logs/'), - name=os.path.basename(os.path.dirname(args.model_path))) - checkpoint_callback = UniversalCheckpoint(args) - # early_stop = EarlyStopping(monitor=args.monitor, mode=args.mode) - trainer = Trainer.from_argparse_args( - args, logger=tensorboard_logger, callbacks=[lr_monitor, checkpoint_callback]) - trainer.fit(model, data_model) - - else: - trainer = Trainer.from_argparse_args(args) - 
trainer.validate(model, data_model) - # trainer.test(model, data_model) - - -if __name__ == '__main__': - main() diff --git a/spaces/fclong/summary/fengshen/examples/wenzhong_qa/README.md b/spaces/fclong/summary/fengshen/examples/wenzhong_qa/README.md deleted file mode 100644 index 8b424909f39c5b1480fbc5cc7015e82714292930..0000000000000000000000000000000000000000 --- a/spaces/fclong/summary/fengshen/examples/wenzhong_qa/README.md +++ /dev/null @@ -1,75 +0,0 @@ -#
                yuyuanQA模型finetune -本示例主要实现了基于GPT2结构的Yuyuan医疗大模型,通过医疗问答对Finetune,使大模型能够有closebook-qa的能力。 -### 数据和模型 -#### 模型: -finetune的模型是yuyuan模型,余元模型是GPT2的结构,在预训练阶段主要是用PubMed医疗相关的数据集进行的预训练。是一个医疗领域的大模型。模型共有35亿参数,主要参数如下表所示: - -| 配置 | 参数 | -| :---------: | :---: | -| nlayers | 30 | -| nheaders | 32 | -| hidden-size | 3072 | -| seq-length | 1024 | - -预训练的数据,主要医疗相关的论文、杂志期刊等,以英文语料为主。 -#### 数据: -用于finetune的语料是清洗于[MedQuAD](https://github.com/abachaa/MedQuAD)数据集,清洗完成后是下面的格式: -```text -...... -{'question':'.........','answer':'........'} -{'question':'.........','answer':'........'} -...... -``` -### finetune框架以及参数配置 -#### 框架 : -finetune的框架是IDEA研究院CCNL小组整合各大框架的优点开源的[封神框架](https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/main/fengshen),具体代码可以参考[finetune_medicalQA.py](https://github.com/IDEA-CCNL/Fengshenbang-LM/blob/dev_wzw/fengshen/examples/wenzhong_qa/finetune_medicalQA.py)和[medicalQADataset.py](https://github.com/IDEA-CCNL/Fengshenbang-LM/blob/dev_wzw/fengshen/data/task_dataloader/medicalQADataset.py)。 -#### 训练参数: -训练参数,我们采用了deepspeed相关的配置,用2个集群的节点共16张A100,在很短的时间内完成了finetune。具体参数配置可以参考[finetune_GPT2_medicalQA.sh](https://github.com/IDEA-CCNL/Fengshenbang-LM/blob/dev_wzw/fengshen/examples/wenzhong_qa/finetune_GPT2_medicalQA.sh) -### finetune后的效果以及使用 -#### 效果对比: -finetune后的模型,用100对问答对,基于BLEU分与之前用Magetron框架训练的模型进行了简单的对比,效果比较接近。 - -unsmoth method: -| 框架 | 1-gram | 2-gram | 3-gram | 4-gram | -| -------- | ------------------ | ------------------ | ------------------ | ------------------- | -| Fengshen | 0.5241376169070796 | 0.5215762466122144 | 0.4894353584800885 | 0.44840139357073466 | -| Magetron | 0.5321340489166898 | 0.5110257474778213 | 0.4703745962926368 | 0.4310875933354554 | - -smoth method: -| 框架 | 1-gram | 2-gram | 3-gram | 4-gram | -| -------- | ----------------- | ------------------ | ------------------ | ------------------ | -| Fengshen | 0.717829796617609 | 0.6516910802858905 | 0.5859726677095979 | 0.525510691686505 | -| Magetron | 0.776190980974117 | 0.6749801211321476 | 0.5897846253142169 | 0.5230773076722481 | -#### 使用方式: -支持直接用Haggingface或者pytorch-lightning框架调用。由于在finetune的时候,加入了prompt,在问答的时候,输入应该是:" -`Question:your question about medical? answer:`",接着模型就回以续写的方式回答你的问题。用huggingface的调用代码可以参考下面的代码: -```python -from transformers import GPT2Tokenizer,GPT2LMHeadModel -model_path = 'pretrained_model_hf/yuyuanQA-v1' # input your own model file path -model = GPT2LMHeadModel.from_pretrained(model_path) -tokenizer = GPT2Tokenizer.from_pretrained(model_path) -model = model.cuda(6) # move your model to the GPU -model.eval() # just do predict - -def answering(question): -# question = "What should gout patients pay attention to in diet?" 
- inputs = tokenizer(f'Question:{question} answer:',return_tensors='pt').input_ids.to(model.device) - - generation_output = model.generate(input_ids = inputs, - return_dict_in_generate=True, - output_scores=True, - max_length=150, - # max_new_tokens=80, - do_sample=True, - top_p = 0.9, - eos_token_id=50256, - pad_token_id=0, - num_return_sequences = 5) - answers = [] - for idx,sentence in enumerate(generation_output.sequences): - next_sentence = tokenizer.decode(sentence).split('<|endoftext|>')[0] - answer = next_sentence.split(sep='answer:',maxsplit=1)[1] - answers.append(answer) - return answers -answering('your question?') -``` \ No newline at end of file diff --git a/spaces/fffiloni/DragGAN/gradio_app.py b/spaces/fffiloni/DragGAN/gradio_app.py deleted file mode 100644 index 6cc8ca76c3581a3da8e7111eb57445e65ac63dc0..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/DragGAN/gradio_app.py +++ /dev/null @@ -1,275 +0,0 @@ -import os -import gradio as gr -import torch -import numpy as np -import imageio -from PIL import Image -import uuid - -from drag_gan import drag_gan, stylegan2 - -device = 'cuda' - - -SIZE_TO_CLICK_SIZE = { - 512: 5, - 256: 2 -} - -CKPT_SIZE = { - 'stylegan2-ffhq-config-f.pt': 512, - 'stylegan2-cat-config-f.pt': 256, - 'stylegan2-church-config-f.pt': 256, - 'stylegan2-horse-config-f.pt': 256, -} - - -class ImageMask(gr.components.Image): - """ - Sets: source="canvas", tool="sketch" - """ - - is_template = True - - def __init__(self, **kwargs): - super().__init__(source="upload", tool="sketch", interactive=True, **kwargs) - - def preprocess(self, x): - if x is None: - return x - if self.tool == "sketch" and self.source in ["upload", "webcam"] and type(x) != dict: - decode_image = gr.processing_utils.decode_base64_to_image(x) - width, height = decode_image.size - mask = np.zeros((height, width, 4), dtype=np.uint8) - mask[..., -1] = 255 - mask = self.postprocess(mask) - x = {'image': x, 'mask': mask} - return super().preprocess(x) - - -class ModelWrapper: - def __init__(self, **kwargs): - self.g_ema = stylegan2(**kwargs).to(device) - - -def to_image(tensor): - tensor = tensor.squeeze(0).permute(1, 2, 0) - arr = tensor.detach().cpu().numpy() - arr = (arr - arr.min()) / (arr.max() - arr.min()) - arr = arr * 255 - return arr.astype('uint8') - - -def add_points_to_image(image, points, size=5): - h, w, = image.shape[:2] - - for x, y in points['target']: - image[max(0, x - size):min(x + size, h - 1), max(0, y - size):min(y + size, w), :] = [255, 0, 0] - for x, y in points['handle']: - image[max(0, x - size):min(x + size, h - 1), max(0, y - size):min(y + size, w), :] = [0, 0, 255] - - return image - - -def on_click(image, target_point, points, size, evt: gr.SelectData): - if target_point: - points['target'].append([evt.index[1], evt.index[0]]) - image = add_points_to_image(image, points, size=SIZE_TO_CLICK_SIZE[size]) - return image, str(evt.index), not target_point - points['handle'].append([evt.index[1], evt.index[0]]) - image = add_points_to_image(image, points, size=SIZE_TO_CLICK_SIZE[size]) - return image, str(evt.index), not target_point - - -def on_drag(model, points, max_iters, state, size, mask): - if len(points['handle']) == 0: - raise gr.Error('You must select at least one handle point and target point.') - if len(points['handle']) != len(points['target']): - raise gr.Error('You have uncompleted handle points, try to selct a target point or undo the handle point.') - max_iters = int(max_iters) - latent = state['latent'] - noise = state['noise'] - F = 
state['F'] - - handle_points = [torch.tensor(p).float() for p in points['handle']] - target_points = [torch.tensor(p).float() for p in points['target']] - - mask = Image.fromarray(mask['mask']).convert('L') - mask = np.array(mask) == 255 - - mask = torch.from_numpy(mask).float().to(device) - mask = mask.unsqueeze(0).unsqueeze(0) - - step = 0 - for sample2, latent, F, handle_points in drag_gan(model.g_ema, latent, noise, F, - handle_points, target_points, mask, - max_iters=max_iters): - image = to_image(sample2) - - state['F'] = F - state['latent'] = latent - state['sample'] = sample2 - points['handle'] = [p.cpu().numpy().astype('int') for p in handle_points] - add_points_to_image(image, points, size=SIZE_TO_CLICK_SIZE[size]) - - state['history'].append(image) - step += 1 - yield image, state, step - - -def on_reset(points, image, state): - return {'target': [], 'handle': []}, to_image(state['sample']) - - -def on_undo(points, image, state, size): - image = to_image(state['sample']) - - if len(points['target']) < len(points['handle']): - points['handle'] = points['handle'][:-1] - else: - points['handle'] = points['handle'][:-1] - points['target'] = points['target'][:-1] - - add_points_to_image(image, points, size=SIZE_TO_CLICK_SIZE[size]) - return points, image - - -def on_change_model(selected, model): - size = CKPT_SIZE[selected] - model = ModelWrapper(size=size, ckpt=selected) - g_ema = model.g_ema - sample_z = torch.randn([1, 512], device=device) - latent, noise = g_ema.prepare([sample_z]) - sample, F = g_ema.generate(latent, noise) - - state = { - 'latent': latent, - 'noise': noise, - 'F': F, - 'sample': sample, - 'history': [] - } - return model, state, to_image(sample), size - - -def on_new_image(model): - g_ema = model.g_ema - sample_z = torch.randn([1, 512], device=device) - latent, noise = g_ema.prepare([sample_z]) - sample, F = g_ema.generate(latent, noise) - - state = { - 'latent': latent, - 'noise': noise, - 'F': F, - 'sample': sample, - 'history': [] - } - points = {'target': [], 'handle': []} - target_point = False - return to_image(sample), to_image(sample), state, points, target_point - - -def on_max_iter_change(max_iters): - return gr.update(maximum=max_iters) - - -def on_save_files(image, state): - os.makedirs('tmp', exist_ok=True) - image_name = f'tmp/image_{uuid.uuid4()}.png' - video_name = f'tmp/video_{uuid.uuid4()}.mp4' - imageio.imsave(image_name, image) - imageio.mimsave(video_name, state['history']) - return [image_name, video_name] - - -def on_show_save(): - return gr.update(visible=True) - - -def main(): - torch.cuda.manual_seed(25) - - with gr.Blocks() as demo: - wrapped_model = ModelWrapper() - model = gr.State(wrapped_model) - sample_z = torch.randn([1, 512], device=device) - latent, noise = wrapped_model.g_ema.prepare([sample_z]) - sample, F = wrapped_model.g_ema.generate(latent, noise) - - gr.Markdown( - """ - # DragGAN (Unofficial) - - Unofficial implementation of [Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold](https://vcai.mpi-inf.mpg.de/projects/DragGAN/) - - [Github](https://github.com/Zeqiang-Lai/DragGAN) | [Official Implementation](https://github.com/XingangPan/DragGAN) (Not released yet) - - ## Tutorial - - 1. (Optional) Draw a mask indicate the movable region. - 2. Setup a least one pair of handle point and target point. - 3. Click "Drag it". 
- - """, - ) - state = gr.State({ - 'latent': latent, - 'noise': noise, - 'F': F, - 'sample': sample, - 'history': [] - }) - points = gr.State({'target': [], 'handle': []}) - size = gr.State(512) - - with gr.Row(): - with gr.Column(scale=0.3): - with gr.Accordion("Model"): - model_dropdown = gr.Dropdown(choices=list(CKPT_SIZE.keys()), value='stylegan2-ffhq-config-f.pt', - label='StyleGAN2 model') - max_iters = gr.Slider(1, 20, 20, step=1, label='Max Iterations') - new_btn = gr.Button('New Image') - with gr.Accordion('Drag'): - with gr.Row(): - with gr.Column(min_width=100): - text = gr.Textbox(label='Selected Point', interactive=False) - with gr.Column(min_width=100): - target_point = gr.Checkbox(label='Target Point', interactive=False) - with gr.Row(): - with gr.Column(min_width=100): - reset_btn = gr.Button('Reset All') - with gr.Column(min_width=100): - undo_btn = gr.Button('Undo Last') - with gr.Row(): - btn = gr.Button('Drag it', variant='primary') - - with gr.Accordion('Save', visible=False) as save_panel: - files = gr.Files(value=[]) - - progress = gr.Slider(value=0, maximum=20, label='Progress', interactive=False) - - with gr.Column(): - with gr.Tabs(): - with gr.Tab('Draw a Mask', id='mask'): - mask = gr.ImageMask(value=to_image(sample), label='Mask').style(height=768, width=768) - with gr.Tab('Setup Handle Points', id='input'): - image = gr.Image(to_image(sample)).style(height=768, width=768) - - image.select(on_click, [image, target_point, points, size], [image, text, target_point], queue=False) - btn.click(on_drag, inputs=[model, points, max_iters, state, size, mask], outputs=[image, state, progress])#.then( - #on_show_save, outputs=save_panel)#.then( - #on_save_files, inputs=[image, state], outputs=[files] - # ) - reset_btn.click(on_reset, inputs=[points, image, state], outputs=[points, image],queue=False) - undo_btn.click(on_undo, inputs=[points, image, state, size], outputs=[points, image], queue=False) - model_dropdown.change(on_change_model, inputs=[model_dropdown, model], outputs=[model, state, image, size], queue=False) - new_btn.click(on_new_image, inputs=[model], outputs=[image, mask, state, points, target_point], queue=False) - max_iters.change(on_max_iter_change, inputs=max_iters, outputs=progress, queue=False) - return demo - - -if __name__ == '__main__': - import fire - demo = main() - fire.Fire(demo.queue(concurrency_count=1, max_size=20).launch) \ No newline at end of file diff --git a/spaces/fffiloni/Image-to-MusicGen/audiocraft/utils/__init__.py b/spaces/fffiloni/Image-to-MusicGen/audiocraft/utils/__init__.py deleted file mode 100644 index 0952fcc3f57e34b3747962e9ebd6fc57aeea63fa..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/Image-to-MusicGen/audiocraft/utils/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
diff --git a/spaces/fffiloni/LangChain-ChatGPT-plugins/app.py b/spaces/fffiloni/LangChain-ChatGPT-plugins/app.py deleted file mode 100644 index 19f9ff19fabdb92939c025739e2c1148383b767d..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/LangChain-ChatGPT-plugins/app.py +++ /dev/null @@ -1,42 +0,0 @@ -import os -import gradio as gr - -from langchain.chat_models import ChatOpenAI -from langchain.agents import load_tools, initialize_agent -from langchain.agents import AgentType -from langchain.tools import AIPluginTool - -def run(prompt, plugin_json, openai_api_key): - os.environ["OPENAI_API_KEY"] = openai_api_key - tool = AIPluginTool.from_plugin_url(plugin_json) - llm = ChatOpenAI(temperature=0, max_tokens=1000) - tools = load_tools(["requests_all"]) - tools += [tool] - agent_chain = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=False, max_tokens_limit=4097) - return agent_chain.run(prompt) - -title=""" -
-                LangChain + ChatGPT Plugins playground
-                This is a demo for the ChatGPT Plugins LangChain usecase
-                Be aware that it currently only works with plugins that do not require auth.
-                Find more plugins here
                -""" - -with gr.Blocks(css="style.css") as demo: - with gr.Column(elem_id="col-container"): - gr.HTML(title) - prompt = gr.Textbox(label="Prompt", value="what t shirts are available in klarna?") - plugin = gr.Textbox(label="Plugin json", info="You need the .json plugin manifest file of the plugin you want to use. Be aware that it currently only works with plugins that do not require auth.", value="https://www.klarna.com/.well-known/ai-plugin.json") - openai_api_key = gr.Textbox(label="OpenAI API Key", info="*required", type="password") - run_btn = gr.Button("Run") - response = gr.Textbox(label="Response") - run_btn.click(fn=run, - inputs=[prompt, plugin, openai_api_key], - outputs=[response] - ) - -demo.queue().launch() \ No newline at end of file diff --git a/spaces/fffiloni/Video-Matting-Anything/GroundingDINO/groundingdino/util/box_ops.py b/spaces/fffiloni/Video-Matting-Anything/GroundingDINO/groundingdino/util/box_ops.py deleted file mode 100644 index 781068d294e576954edb4bd07b6e0f30e4e1bcd9..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/Video-Matting-Anything/GroundingDINO/groundingdino/util/box_ops.py +++ /dev/null @@ -1,140 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -""" -Utilities for bounding box manipulation and GIoU. -""" -import torch -from torchvision.ops.boxes import box_area - - -def box_cxcywh_to_xyxy(x): - x_c, y_c, w, h = x.unbind(-1) - b = [(x_c - 0.5 * w), (y_c - 0.5 * h), (x_c + 0.5 * w), (y_c + 0.5 * h)] - return torch.stack(b, dim=-1) - - -def box_xyxy_to_cxcywh(x): - x0, y0, x1, y1 = x.unbind(-1) - b = [(x0 + x1) / 2, (y0 + y1) / 2, (x1 - x0), (y1 - y0)] - return torch.stack(b, dim=-1) - - -# modified from torchvision to also return the union -def box_iou(boxes1, boxes2): - area1 = box_area(boxes1) - area2 = box_area(boxes2) - - # import ipdb; ipdb.set_trace() - lt = torch.max(boxes1[:, None, :2], boxes2[:, :2]) # [N,M,2] - rb = torch.min(boxes1[:, None, 2:], boxes2[:, 2:]) # [N,M,2] - - wh = (rb - lt).clamp(min=0) # [N,M,2] - inter = wh[:, :, 0] * wh[:, :, 1] # [N,M] - - union = area1[:, None] + area2 - inter - - iou = inter / (union + 1e-6) - return iou, union - - -def generalized_box_iou(boxes1, boxes2): - """ - Generalized IoU from https://giou.stanford.edu/ - - The boxes should be in [x0, y0, x1, y1] format - - Returns a [N, M] pairwise matrix, where N = len(boxes1) - and M = len(boxes2) - """ - # degenerate boxes gives inf / nan results - # so do an early check - assert (boxes1[:, 2:] >= boxes1[:, :2]).all() - assert (boxes2[:, 2:] >= boxes2[:, :2]).all() - # except: - # import ipdb; ipdb.set_trace() - iou, union = box_iou(boxes1, boxes2) - - lt = torch.min(boxes1[:, None, :2], boxes2[:, :2]) - rb = torch.max(boxes1[:, None, 2:], boxes2[:, 2:]) - - wh = (rb - lt).clamp(min=0) # [N,M,2] - area = wh[:, :, 0] * wh[:, :, 1] - - return iou - (area - union) / (area + 1e-6) - - -# modified from torchvision to also return the union -def box_iou_pairwise(boxes1, boxes2): - area1 = box_area(boxes1) - area2 = box_area(boxes2) - - lt = torch.max(boxes1[:, :2], boxes2[:, :2]) # [N,2] - rb = torch.min(boxes1[:, 2:], boxes2[:, 2:]) # [N,2] - - wh = (rb - lt).clamp(min=0) # [N,2] - inter = wh[:, 0] * wh[:, 1] # [N] - - union = area1 + area2 - inter - - iou = inter / union - return iou, union - - -def generalized_box_iou_pairwise(boxes1, boxes2): - """ - Generalized IoU from https://giou.stanford.edu/ - - Input: - - boxes1, boxes2: N,4 - Output: - - giou: N, 4 - """ - # degenerate boxes gives inf / nan 
results - # so do an early check - assert (boxes1[:, 2:] >= boxes1[:, :2]).all() - assert (boxes2[:, 2:] >= boxes2[:, :2]).all() - assert boxes1.shape == boxes2.shape - iou, union = box_iou_pairwise(boxes1, boxes2) # N, 4 - - lt = torch.min(boxes1[:, :2], boxes2[:, :2]) - rb = torch.max(boxes1[:, 2:], boxes2[:, 2:]) - - wh = (rb - lt).clamp(min=0) # [N,2] - area = wh[:, 0] * wh[:, 1] - - return iou - (area - union) / area - - -def masks_to_boxes(masks): - """Compute the bounding boxes around the provided masks - - The masks should be in format [N, H, W] where N is the number of masks, (H, W) are the spatial dimensions. - - Returns a [N, 4] tensors, with the boxes in xyxy format - """ - if masks.numel() == 0: - return torch.zeros((0, 4), device=masks.device) - - h, w = masks.shape[-2:] - - y = torch.arange(0, h, dtype=torch.float) - x = torch.arange(0, w, dtype=torch.float) - y, x = torch.meshgrid(y, x) - - x_mask = masks * x.unsqueeze(0) - x_max = x_mask.flatten(1).max(-1)[0] - x_min = x_mask.masked_fill(~(masks.bool()), 1e8).flatten(1).min(-1)[0] - - y_mask = masks * y.unsqueeze(0) - y_max = y_mask.flatten(1).max(-1)[0] - y_min = y_mask.masked_fill(~(masks.bool()), 1e8).flatten(1).min(-1)[0] - - return torch.stack([x_min, y_min, x_max, y_max], 1) - - -if __name__ == "__main__": - x = torch.rand(5, 4) - y = torch.rand(3, 4) - iou, union = box_iou(x, y) - import ipdb - - ipdb.set_trace() diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/object-inspect/test/toStringTag.js b/spaces/fffiloni/controlnet-animation-doodle/node_modules/object-inspect/test/toStringTag.js deleted file mode 100644 index 95f82703d08f358b00f180c7b479b9f33dff3dac..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/object-inspect/test/toStringTag.js +++ /dev/null @@ -1,40 +0,0 @@ -'use strict'; - -var test = require('tape'); -var hasToStringTag = require('has-tostringtag/shams')(); - -var inspect = require('../'); - -test('Symbol.toStringTag', { skip: !hasToStringTag }, function (t) { - t.plan(4); - - var obj = { a: 1 }; - t.equal(inspect(obj), '{ a: 1 }', 'object, no Symbol.toStringTag'); - - obj[Symbol.toStringTag] = 'foo'; - t.equal(inspect(obj), '{ a: 1, [Symbol(Symbol.toStringTag)]: \'foo\' }', 'object with Symbol.toStringTag'); - - t.test('null objects', { skip: 'toString' in { __proto__: null } }, function (st) { - st.plan(2); - - var dict = { __proto__: null, a: 1 }; - st.equal(inspect(dict), '[Object: null prototype] { a: 1 }', 'null object with Symbol.toStringTag'); - - dict[Symbol.toStringTag] = 'Dict'; - st.equal(inspect(dict), '[Dict: null prototype] { a: 1, [Symbol(Symbol.toStringTag)]: \'Dict\' }', 'null object with Symbol.toStringTag'); - }); - - t.test('instances', function (st) { - st.plan(4); - - function C() { - this.a = 1; - } - st.equal(Object.prototype.toString.call(new C()), '[object Object]', 'instance, no toStringTag, Object.prototype.toString'); - st.equal(inspect(new C()), 'C { a: 1 }', 'instance, no toStringTag'); - - C.prototype[Symbol.toStringTag] = 'Class!'; - st.equal(Object.prototype.toString.call(new C()), '[object Class!]', 'instance, with toStringTag, Object.prototype.toString'); - st.equal(inspect(new C()), 'C [Class!] 
{ a: 1 }', 'instance, with toStringTag'); - }); -}); diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/socket.io/dist/namespace.js b/spaces/fffiloni/controlnet-animation-doodle/node_modules/socket.io/dist/namespace.js deleted file mode 100644 index 80fa14fa1ca7d0e5178de1a77e8980027a514178..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/socket.io/dist/namespace.js +++ /dev/null @@ -1,593 +0,0 @@ -"use strict"; -var __importDefault = (this && this.__importDefault) || function (mod) { - return (mod && mod.__esModule) ? mod : { "default": mod }; -}; -Object.defineProperty(exports, "__esModule", { value: true }); -exports.Namespace = exports.RESERVED_EVENTS = void 0; -const socket_1 = require("./socket"); -const typed_events_1 = require("./typed-events"); -const debug_1 = __importDefault(require("debug")); -const broadcast_operator_1 = require("./broadcast-operator"); -const debug = (0, debug_1.default)("socket.io:namespace"); -exports.RESERVED_EVENTS = new Set(["connect", "connection", "new_namespace"]); -/** - * A Namespace is a communication channel that allows you to split the logic of your application over a single shared - * connection. - * - * Each namespace has its own: - * - * - event handlers - * - * ``` - * io.of("/orders").on("connection", (socket) => { - * socket.on("order:list", () => {}); - * socket.on("order:create", () => {}); - * }); - * - * io.of("/users").on("connection", (socket) => { - * socket.on("user:list", () => {}); - * }); - * ``` - * - * - rooms - * - * ``` - * const orderNamespace = io.of("/orders"); - * - * orderNamespace.on("connection", (socket) => { - * socket.join("room1"); - * orderNamespace.to("room1").emit("hello"); - * }); - * - * const userNamespace = io.of("/users"); - * - * userNamespace.on("connection", (socket) => { - * socket.join("room1"); // distinct from the room in the "orders" namespace - * userNamespace.to("room1").emit("holà"); - * }); - * ``` - * - * - middlewares - * - * ``` - * const orderNamespace = io.of("/orders"); - * - * orderNamespace.use((socket, next) => { - * // ensure the socket has access to the "orders" namespace - * }); - * - * const userNamespace = io.of("/users"); - * - * userNamespace.use((socket, next) => { - * // ensure the socket has access to the "users" namespace - * }); - * ``` - */ -class Namespace extends typed_events_1.StrictEventEmitter { - /** - * Namespace constructor. - * - * @param server instance - * @param name - */ - constructor(server, name) { - super(); - this.sockets = new Map(); - /** @private */ - this._fns = []; - /** @private */ - this._ids = 0; - this.server = server; - this.name = name; - this._initAdapter(); - } - /** - * Initializes the `Adapter` for this nsp. - * Run upon changing adapter by `Server#adapter` - * in addition to the constructor. - * - * @private - */ - _initAdapter() { - // @ts-ignore - this.adapter = new (this.server.adapter())(this); - } - /** - * Registers a middleware, which is a function that gets executed for every incoming {@link Socket}. - * - * @example - * const myNamespace = io.of("/my-namespace"); - * - * myNamespace.use((socket, next) => { - * // ... - * next(); - * }); - * - * @param fn - the middleware function - */ - use(fn) { - this._fns.push(fn); - return this; - } - /** - * Executes the middleware for an incoming client. 
- * - * @param socket - the socket that will get added - * @param fn - last fn call in the middleware - * @private - */ - run(socket, fn) { - const fns = this._fns.slice(0); - if (!fns.length) - return fn(null); - function run(i) { - fns[i](socket, function (err) { - // upon error, short-circuit - if (err) - return fn(err); - // if no middleware left, summon callback - if (!fns[i + 1]) - return fn(null); - // go on to next - run(i + 1); - }); - } - run(0); - } - /** - * Targets a room when emitting. - * - * @example - * const myNamespace = io.of("/my-namespace"); - * - * // the “foo” event will be broadcast to all connected clients in the “room-101” room - * myNamespace.to("room-101").emit("foo", "bar"); - * - * // with an array of rooms (a client will be notified at most once) - * myNamespace.to(["room-101", "room-102"]).emit("foo", "bar"); - * - * // with multiple chained calls - * myNamespace.to("room-101").to("room-102").emit("foo", "bar"); - * - * @param room - a room, or an array of rooms - * @return a new {@link BroadcastOperator} instance for chaining - */ - to(room) { - return new broadcast_operator_1.BroadcastOperator(this.adapter).to(room); - } - /** - * Targets a room when emitting. Similar to `to()`, but might feel clearer in some cases: - * - * @example - * const myNamespace = io.of("/my-namespace"); - * - * // disconnect all clients in the "room-101" room - * myNamespace.in("room-101").disconnectSockets(); - * - * @param room - a room, or an array of rooms - * @return a new {@link BroadcastOperator} instance for chaining - */ - in(room) { - return new broadcast_operator_1.BroadcastOperator(this.adapter).in(room); - } - /** - * Excludes a room when emitting. - * - * @example - * const myNamespace = io.of("/my-namespace"); - * - * // the "foo" event will be broadcast to all connected clients, except the ones that are in the "room-101" room - * myNamespace.except("room-101").emit("foo", "bar"); - * - * // with an array of rooms - * myNamespace.except(["room-101", "room-102"]).emit("foo", "bar"); - * - * // with multiple chained calls - * myNamespace.except("room-101").except("room-102").emit("foo", "bar"); - * - * @param room - a room, or an array of rooms - * @return a new {@link BroadcastOperator} instance for chaining - */ - except(room) { - return new broadcast_operator_1.BroadcastOperator(this.adapter).except(room); - } - /** - * Adds a new client. - * - * @return {Socket} - * @private - */ - async _add(client, auth, fn) { - var _a; - debug("adding socket to nsp %s", this.name); - const socket = await this._createSocket(client, auth); - if ( - // @ts-ignore - ((_a = this.server.opts.connectionStateRecovery) === null || _a === void 0 ? 
void 0 : _a.skipMiddlewares) && - socket.recovered && - client.conn.readyState === "open") { - return this._doConnect(socket, fn); - } - this.run(socket, (err) => { - process.nextTick(() => { - if ("open" !== client.conn.readyState) { - debug("next called after client was closed - ignoring socket"); - socket._cleanup(); - return; - } - if (err) { - debug("middleware error, sending CONNECT_ERROR packet to the client"); - socket._cleanup(); - if (client.conn.protocol === 3) { - return socket._error(err.data || err.message); - } - else { - return socket._error({ - message: err.message, - data: err.data, - }); - } - } - this._doConnect(socket, fn); - }); - }); - } - async _createSocket(client, auth) { - const sessionId = auth.pid; - const offset = auth.offset; - if ( - // @ts-ignore - this.server.opts.connectionStateRecovery && - typeof sessionId === "string" && - typeof offset === "string") { - let session; - try { - session = await this.adapter.restoreSession(sessionId, offset); - } - catch (e) { - debug("error while restoring session: %s", e); - } - if (session) { - debug("connection state recovered for sid %s", session.sid); - return new socket_1.Socket(this, client, auth, session); - } - } - return new socket_1.Socket(this, client, auth); - } - _doConnect(socket, fn) { - // track socket - this.sockets.set(socket.id, socket); - // it's paramount that the internal `onconnect` logic - // fires before user-set events to prevent state order - // violations (such as a disconnection before the connection - // logic is complete) - socket._onconnect(); - if (fn) - fn(socket); - // fire user-set events - this.emitReserved("connect", socket); - this.emitReserved("connection", socket); - } - /** - * Removes a client. Called by each `Socket`. - * - * @private - */ - _remove(socket) { - if (this.sockets.has(socket.id)) { - this.sockets.delete(socket.id); - } - else { - debug("ignoring remove for %s", socket.id); - } - } - /** - * Emits to all connected clients. - * - * @example - * const myNamespace = io.of("/my-namespace"); - * - * myNamespace.emit("hello", "world"); - * - * // all serializable datastructures are supported (no need to call JSON.stringify) - * myNamespace.emit("hello", 1, "2", { 3: ["4"], 5: Uint8Array.from([6]) }); - * - * // with an acknowledgement from the clients - * myNamespace.timeout(1000).emit("some-event", (err, responses) => { - * if (err) { - * // some clients did not acknowledge the event in the given delay - * } else { - * console.log(responses); // one response per client - * } - * }); - * - * @return Always true - */ - emit(ev, ...args) { - return new broadcast_operator_1.BroadcastOperator(this.adapter).emit(ev, ...args); - } - /** - * Emits an event and waits for an acknowledgement from all clients. - * - * @example - * const myNamespace = io.of("/my-namespace"); - * - * try { - * const responses = await myNamespace.timeout(1000).emitWithAck("some-event"); - * console.log(responses); // one response per client - * } catch (e) { - * // some clients did not acknowledge the event in the given delay - * } - * - * @return a Promise that will be fulfilled when all clients have acknowledged the event - */ - emitWithAck(ev, ...args) { - return new broadcast_operator_1.BroadcastOperator(this.adapter).emitWithAck(ev, ...args); - } - /** - * Sends a `message` event to all clients. - * - * This method mimics the WebSocket.send() method. 
- * - * @see https://developer.mozilla.org/en-US/docs/Web/API/WebSocket/send - * - * @example - * const myNamespace = io.of("/my-namespace"); - * - * myNamespace.send("hello"); - * - * // this is equivalent to - * myNamespace.emit("message", "hello"); - * - * @return self - */ - send(...args) { - this.emit("message", ...args); - return this; - } - /** - * Sends a `message` event to all clients. Sends a `message` event. Alias of {@link send}. - * - * @return self - */ - write(...args) { - this.emit("message", ...args); - return this; - } - /** - * Sends a message to the other Socket.IO servers of the cluster. - * - * @example - * const myNamespace = io.of("/my-namespace"); - * - * myNamespace.serverSideEmit("hello", "world"); - * - * myNamespace.on("hello", (arg1) => { - * console.log(arg1); // prints "world" - * }); - * - * // acknowledgements (without binary content) are supported too: - * myNamespace.serverSideEmit("ping", (err, responses) => { - * if (err) { - * // some servers did not acknowledge the event in the given delay - * } else { - * console.log(responses); // one response per server (except the current one) - * } - * }); - * - * myNamespace.on("ping", (cb) => { - * cb("pong"); - * }); - * - * @param ev - the event name - * @param args - an array of arguments, which may include an acknowledgement callback at the end - */ - serverSideEmit(ev, ...args) { - if (exports.RESERVED_EVENTS.has(ev)) { - throw new Error(`"${String(ev)}" is a reserved event name`); - } - args.unshift(ev); - this.adapter.serverSideEmit(args); - return true; - } - /** - * Sends a message and expect an acknowledgement from the other Socket.IO servers of the cluster. - * - * @example - * const myNamespace = io.of("/my-namespace"); - * - * try { - * const responses = await myNamespace.serverSideEmitWithAck("ping"); - * console.log(responses); // one response per server (except the current one) - * } catch (e) { - * // some servers did not acknowledge the event in the given delay - * } - * - * @param ev - the event name - * @param args - an array of arguments - * - * @return a Promise that will be fulfilled when all servers have acknowledged the event - */ - serverSideEmitWithAck(ev, ...args) { - return new Promise((resolve, reject) => { - args.push((err, responses) => { - if (err) { - err.responses = responses; - return reject(err); - } - else { - return resolve(responses); - } - }); - this.serverSideEmit(ev, ...args); - }); - } - /** - * Called when a packet is received from another Socket.IO server - * - * @param args - an array of arguments, which may include an acknowledgement callback at the end - * - * @private - */ - _onServerSideEmit(args) { - super.emitUntyped.apply(this, args); - } - /** - * Gets a list of clients. - * - * @deprecated this method will be removed in the next major release, please use {@link Namespace#serverSideEmit} or - * {@link Namespace#fetchSockets} instead. - */ - allSockets() { - return new broadcast_operator_1.BroadcastOperator(this.adapter).allSockets(); - } - /** - * Sets the compress flag. 
- * - * @example - * const myNamespace = io.of("/my-namespace"); - * - * myNamespace.compress(false).emit("hello"); - * - * @param compress - if `true`, compresses the sending data - * @return self - */ - compress(compress) { - return new broadcast_operator_1.BroadcastOperator(this.adapter).compress(compress); - } - /** - * Sets a modifier for a subsequent event emission that the event data may be lost if the client is not ready to - * receive messages (because of network slowness or other issues, or because they’re connected through long polling - * and is in the middle of a request-response cycle). - * - * @example - * const myNamespace = io.of("/my-namespace"); - * - * myNamespace.volatile.emit("hello"); // the clients may or may not receive it - * - * @return self - */ - get volatile() { - return new broadcast_operator_1.BroadcastOperator(this.adapter).volatile; - } - /** - * Sets a modifier for a subsequent event emission that the event data will only be broadcast to the current node. - * - * @example - * const myNamespace = io.of("/my-namespace"); - * - * // the “foo” event will be broadcast to all connected clients on this node - * myNamespace.local.emit("foo", "bar"); - * - * @return a new {@link BroadcastOperator} instance for chaining - */ - get local() { - return new broadcast_operator_1.BroadcastOperator(this.adapter).local; - } - /** - * Adds a timeout in milliseconds for the next operation. - * - * @example - * const myNamespace = io.of("/my-namespace"); - * - * myNamespace.timeout(1000).emit("some-event", (err, responses) => { - * if (err) { - * // some clients did not acknowledge the event in the given delay - * } else { - * console.log(responses); // one response per client - * } - * }); - * - * @param timeout - */ - timeout(timeout) { - return new broadcast_operator_1.BroadcastOperator(this.adapter).timeout(timeout); - } - /** - * Returns the matching socket instances. - * - * Note: this method also works within a cluster of multiple Socket.IO servers, with a compatible {@link Adapter}. - * - * @example - * const myNamespace = io.of("/my-namespace"); - * - * // return all Socket instances - * const sockets = await myNamespace.fetchSockets(); - * - * // return all Socket instances in the "room1" room - * const sockets = await myNamespace.in("room1").fetchSockets(); - * - * for (const socket of sockets) { - * console.log(socket.id); - * console.log(socket.handshake); - * console.log(socket.rooms); - * console.log(socket.data); - * - * socket.emit("hello"); - * socket.join("room1"); - * socket.leave("room2"); - * socket.disconnect(); - * } - */ - fetchSockets() { - return new broadcast_operator_1.BroadcastOperator(this.adapter).fetchSockets(); - } - /** - * Makes the matching socket instances join the specified rooms. - * - * Note: this method also works within a cluster of multiple Socket.IO servers, with a compatible {@link Adapter}. - * - * @example - * const myNamespace = io.of("/my-namespace"); - * - * // make all socket instances join the "room1" room - * myNamespace.socketsJoin("room1"); - * - * // make all socket instances in the "room1" room join the "room2" and "room3" rooms - * myNamespace.in("room1").socketsJoin(["room2", "room3"]); - * - * @param room - a room, or an array of rooms - */ - socketsJoin(room) { - return new broadcast_operator_1.BroadcastOperator(this.adapter).socketsJoin(room); - } - /** - * Makes the matching socket instances leave the specified rooms. 
- * - * Note: this method also works within a cluster of multiple Socket.IO servers, with a compatible {@link Adapter}. - * - * @example - * const myNamespace = io.of("/my-namespace"); - * - * // make all socket instances leave the "room1" room - * myNamespace.socketsLeave("room1"); - * - * // make all socket instances in the "room1" room leave the "room2" and "room3" rooms - * myNamespace.in("room1").socketsLeave(["room2", "room3"]); - * - * @param room - a room, or an array of rooms - */ - socketsLeave(room) { - return new broadcast_operator_1.BroadcastOperator(this.adapter).socketsLeave(room); - } - /** - * Makes the matching socket instances disconnect. - * - * Note: this method also works within a cluster of multiple Socket.IO servers, with a compatible {@link Adapter}. - * - * @example - * const myNamespace = io.of("/my-namespace"); - * - * // make all socket instances disconnect (the connections might be kept alive for other namespaces) - * myNamespace.disconnectSockets(); - * - * // make all socket instances in the "room1" room disconnect and close the underlying connections - * myNamespace.in("room1").disconnectSockets(true); - * - * @param close - whether to close the underlying connection - */ - disconnectSockets(close = false) { - return new broadcast_operator_1.BroadcastOperator(this.adapter).disconnectSockets(close); - } -} -exports.Namespace = Namespace; diff --git a/spaces/flax-community/chef-transformer/utils/api.py b/spaces/flax-community/chef-transformer/utils/api.py deleted file mode 100644 index baeb8ee176276ed83c72ab2d477520666e14bb77..0000000000000000000000000000000000000000 --- a/spaces/flax-community/chef-transformer/utils/api.py +++ /dev/null @@ -1,26 +0,0 @@ -import random -import requests - - -def generate_cook_image(query, app_id, app_key): - api_url = f"https://api.edamam.com/api/recipes/v2?type=public&q={query}&app_id={app_id}&app_key={app_key}&field=image" - - try: - r = requests.get(api_url) - if r.status_code != 200: - return None - - rj = r.json() - if "hits" not in rj or not len(rj["hits"]) > 0: - return None - - data = rj["hits"] - data = data[random.randint(1, min(5, len(data) - 1))] if len(data) > 1 else data[0] - - if "recipe" not in data or "image" not in data["recipe"]: - return None - - image = data["recipe"]["image"] - return image - except Exception as e: - return None diff --git a/spaces/florim/MedGPT/autogpt/json_utils/__init__.py b/spaces/florim/MedGPT/autogpt/json_utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/flowerpixel/tashachan28-ranma_diffusion/app.py b/spaces/flowerpixel/tashachan28-ranma_diffusion/app.py deleted file mode 100644 index 0090647164922fa5133e984d13aee14e825931d9..0000000000000000000000000000000000000000 --- a/spaces/flowerpixel/tashachan28-ranma_diffusion/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/tashachan28/ranma_diffusion").launch() \ No newline at end of file diff --git a/spaces/freddyaboulton/gradio_folium/src/backend/gradio_folium/folium.py b/spaces/freddyaboulton/gradio_folium/src/backend/gradio_folium/folium.py deleted file mode 100644 index a23eba5d640413c8a8630e6f5c2675282f8337f3..0000000000000000000000000000000000000000 --- a/spaces/freddyaboulton/gradio_folium/src/backend/gradio_folium/folium.py +++ /dev/null @@ -1,48 +0,0 @@ -from __future__ import annotations - -from typing import Any, Callable -from gradio.components.base import Component -from folium import 
Map -from gradio.data_classes import FileData -from tempfile import NamedTemporaryFile - -class Folium(Component): - data_model = FileData - - def __init__(self, value: Any = None, - *, - height: int | None = None, - label: str | None = None, - container: bool = True, - scale: int | None = None, - min_width: int | None = None, - visible: bool = True, - elem_id: str | None = None, - elem_classes: list[str] | str | None = None, - render: bool = True, - root_url: str | None = None, - _skip_init_processing: bool = False, - load_fn: Callable[..., Any] | None = None, - every: float | None = None): - super().__init__(value, label=label, info=None, show_label=True, - container=container, scale=scale, min_width=min_width, - visible=visible, elem_id=elem_id, elem_classes=elem_classes, - render=render, root_url=root_url, - _skip_init_processing=_skip_init_processing, - load_fn=load_fn, every=every) - self.height = height - def preprocess(self, x): - return x - - def postprocess(self, x: Map): - if not x: - return None - with NamedTemporaryFile(suffix=".html", delete=False) as tmp: - x.save(tmp.name) - return FileData(name=tmp.name, is_file=True) - - def example_inputs(self): - return {"info": "Do not use as input"} - - def api_info(self): - return {"type": {}, "description": "any valid json"} diff --git a/spaces/fuckyoudeki/AutoGPT/autogpt/memory/local.py b/spaces/fuckyoudeki/AutoGPT/autogpt/memory/local.py deleted file mode 100644 index 803b6dc6ebb430285f423cda592fa3e902e9a4a6..0000000000000000000000000000000000000000 --- a/spaces/fuckyoudeki/AutoGPT/autogpt/memory/local.py +++ /dev/null @@ -1,136 +0,0 @@ -from __future__ import annotations - -import dataclasses -import os -from typing import Any, List - -import numpy as np -import orjson - -from autogpt.llm_utils import create_embedding_with_ada -from autogpt.memory.base import MemoryProviderSingleton - -EMBED_DIM = 1536 -SAVE_OPTIONS = orjson.OPT_SERIALIZE_NUMPY | orjson.OPT_SERIALIZE_DATACLASS - - -def create_default_embeddings(): - return np.zeros((0, EMBED_DIM)).astype(np.float32) - - -@dataclasses.dataclass -class CacheContent: - texts: List[str] = dataclasses.field(default_factory=list) - embeddings: np.ndarray = dataclasses.field( - default_factory=create_default_embeddings - ) - - -class LocalCache(MemoryProviderSingleton): - """A class that stores the memory in a local file""" - - def __init__(self, cfg) -> None: - """Initialize a class instance - - Args: - cfg: Config object - - Returns: - None - """ - self.filename = f"{cfg.memory_index}.json" - if os.path.exists(self.filename): - try: - with open(self.filename, "w+b") as f: - file_content = f.read() - if not file_content.strip(): - file_content = b"{}" - f.write(file_content) - - loaded = orjson.loads(file_content) - self.data = CacheContent(**loaded) - except orjson.JSONDecodeError: - print(f"Error: The file '{self.filename}' is not in JSON format.") - self.data = CacheContent() - else: - print( - f"Warning: The file '{self.filename}' does not exist. " - "Local memory would not be saved to a file." 
- ) - self.data = CacheContent() - - def add(self, text: str): - """ - Add text to our list of texts, add embedding as row to our - embeddings-matrix - - Args: - text: str - - Returns: None - """ - if "Command Error:" in text: - return "" - self.data.texts.append(text) - - embedding = create_embedding_with_ada(text) - - vector = np.array(embedding).astype(np.float32) - vector = vector[np.newaxis, :] - self.data.embeddings = np.concatenate( - [ - self.data.embeddings, - vector, - ], - axis=0, - ) - - with open(self.filename, "wb") as f: - out = orjson.dumps(self.data, option=SAVE_OPTIONS) - f.write(out) - return text - - def clear(self) -> str: - """ - Clears the redis server. - - Returns: A message indicating that the memory has been cleared. - """ - self.data = CacheContent() - return "Obliviated" - - def get(self, data: str) -> list[Any] | None: - """ - Gets the data from the memory that is most relevant to the given data. - - Args: - data: The data to compare to. - - Returns: The most relevant data. - """ - return self.get_relevant(data, 1) - - def get_relevant(self, text: str, k: int) -> list[Any]: - """ " - matrix-vector mult to find score-for-each-row-of-matrix - get indices for top-k winning scores - return texts for those indices - Args: - text: str - k: int - - Returns: List[str] - """ - embedding = create_embedding_with_ada(text) - - scores = np.dot(self.data.embeddings, embedding) - - top_k_indices = np.argsort(scores)[-k:][::-1] - - return [self.data.texts[i] for i in top_k_indices] - - def get_stats(self) -> tuple[int, tuple[int, ...]]: - """ - Returns: The stats of the local cache. - """ - return len(self.data.texts), self.data.embeddings.shape diff --git a/spaces/fun-research/FC-CLIP/fcclip/modeling/pixel_decoder/ops/src/cuda/ms_deform_attn_cuda.h b/spaces/fun-research/FC-CLIP/fcclip/modeling/pixel_decoder/ops/src/cuda/ms_deform_attn_cuda.h deleted file mode 100644 index 4f0658e8668a11f0e7d71deff9adac71884f2e87..0000000000000000000000000000000000000000 --- a/spaces/fun-research/FC-CLIP/fcclip/modeling/pixel_decoder/ops/src/cuda/ms_deform_attn_cuda.h +++ /dev/null @@ -1,35 +0,0 @@ -/*! -************************************************************************************************** -* Deformable DETR -* Copyright (c) 2020 SenseTime. All Rights Reserved. -* Licensed under the Apache License, Version 2.0 [see LICENSE for details] -************************************************************************************************** -* Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0 -************************************************************************************************** -*/ - -/*! -* Copyright (c) Facebook, Inc. and its affiliates. -* Modified by Bowen Cheng from https://github.com/fundamentalvision/Deformable-DETR -*/ - -#pragma once -#include - -at::Tensor ms_deform_attn_cuda_forward( - const at::Tensor &value, - const at::Tensor &spatial_shapes, - const at::Tensor &level_start_index, - const at::Tensor &sampling_loc, - const at::Tensor &attn_weight, - const int im2col_step); - -std::vector ms_deform_attn_cuda_backward( - const at::Tensor &value, - const at::Tensor &spatial_shapes, - const at::Tensor &level_start_index, - const at::Tensor &sampling_loc, - const at::Tensor &attn_weight, - const at::Tensor &grad_output, - const int im2col_step); - diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Feeding Frenzy 2 Deluxe Download !!BETTER!! 
For Pc [Crack Serial Key.md b/spaces/gotiQspiryo/whisper-ui/examples/Feeding Frenzy 2 Deluxe Download !!BETTER!! For Pc [Crack Serial Key.md deleted file mode 100644 index da1c974ad357c67b4b565955e7e6d220f33d2cb5..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Feeding Frenzy 2 Deluxe Download !!BETTER!! For Pc [Crack Serial Key.md +++ /dev/null @@ -1,6 +0,0 @@ -

-Feeding Frenzy 2 Deluxe download for pc [Crack Serial Key
-Download Zip >> https://urlgoal.com/2uyLZU
- 3cee63e6c2

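The `LocalCache` memory shown earlier (in `autogpt/memory/local.py`) ranks stored texts by a matrix-vector dot product between its embedding matrix and the query embedding, then returns the top-k rows. Below is a minimal, self-contained sketch of that retrieval step; `embed` is only a placeholder for `create_embedding_with_ada` (which calls the OpenAI API), so the scores illustrate the mechanics rather than semantic quality, and the sample texts are invented.

```python
import numpy as np

EMBED_DIM = 1536

def embed(text: str) -> np.ndarray:
    # placeholder for create_embedding_with_ada: a random unit vector seeded from the text
    rng = np.random.default_rng(abs(hash(text)) % (2 ** 32))
    v = rng.standard_normal(EMBED_DIM).astype(np.float32)
    return v / np.linalg.norm(v)

texts = ["reminder: back up the workspace", "the API key lives in .env", "results of the last crawl"]
embeddings = np.stack([embed(t) for t in texts])   # shape (N, EMBED_DIM), like CacheContent.embeddings

query = embed("where is the API key stored?")
scores = embeddings @ query                        # matrix-vector multiply: one score per stored text
k = 2
top_k_indices = np.argsort(scores)[-k:][::-1]      # indices of the k highest scores, best first
print([texts[i] for i in top_k_indices])
```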
                diff --git a/spaces/gpecile/encrypted-image-recognition/server.py b/spaces/gpecile/encrypted-image-recognition/server.py deleted file mode 100644 index 947170e5d081eeaee8f89b4550fe58b34e750915..0000000000000000000000000000000000000000 --- a/spaces/gpecile/encrypted-image-recognition/server.py +++ /dev/null @@ -1,106 +0,0 @@ -"""Server that will listen for GET and POST requests from the client.""" - -import time -from typing import List -from fastapi import FastAPI, File, Form, UploadFile -from fastapi.responses import JSONResponse, Response - -from common import FILTERS_PATH, SERVER_TMP_PATH, AVAILABLE_FILTERS -from client_server_interface import FHEServer - -# Load the server objects related to all currently available filters once and for all -FHE_SERVERS = { - filter: FHEServer(FILTERS_PATH / f"{filter}/deployment") for filter in AVAILABLE_FILTERS -} - -def get_server_file_path(name, user_id, filter_name): - """Get the correct temporary file path for the server. - - Args: - name (str): The desired file name. - user_id (int): The current user's ID. - filter_name (str): The filter chosen by the user - - Returns: - pathlib.Path: The file path. - """ - return SERVER_TMP_PATH / f"{name}_{filter_name}_{user_id}" - - -# Initialize an instance of FastAPI -app = FastAPI() - -# Define the default route -@app.get("/") -def root(): - return {"message": "Welcome to Your Image FHE Filter Server!"} - - -@app.post("/send_input") -def send_input( - user_id: str = Form(), - filter: str = Form(), - files: List[UploadFile] = File(), -): - """Send the inputs to the server.""" - # Retrieve the encrypted input image and the evaluation key paths - encrypted_image_path = get_server_file_path("encrypted_image", user_id, filter) - evaluation_key_path = get_server_file_path("evaluation_key", user_id, filter) - - # Write the files using the above paths - with encrypted_image_path.open("wb") as encrypted_image, evaluation_key_path.open( - "wb" - ) as evaluation_key: - encrypted_image.write(files[0].file.read()) - evaluation_key.write(files[1].file.read()) - - -@app.post("/run_fhe") -def run_fhe( - user_id: str = Form(), - filter: str = Form(), -): - """Execute the filter on the encrypted input image using FHE.""" - # Retrieve the encrypted input image and the evaluation key paths - encrypted_image_path = get_server_file_path("encrypted_image", user_id, filter) - evaluation_key_path = get_server_file_path("evaluation_key", user_id, filter) - - # Read the files using the above paths - with encrypted_image_path.open("rb") as encrypted_image_file, evaluation_key_path.open( - "rb" - ) as evaluation_key_file: - encrypted_image = encrypted_image_file.read() - evaluation_key = evaluation_key_file.read() - - # Load the FHE server related to the chosen filter - fhe_server = FHE_SERVERS[filter] - - # Run the FHE execution - start = time.time() - encrypted_output_image = fhe_server.run(encrypted_image, evaluation_key) - fhe_execution_time = round(time.time() - start, 2) - - # Retrieve the encrypted output image path - encrypted_output_path = get_server_file_path("encrypted_output", user_id, filter) - - # Write the file using the above path - with encrypted_output_path.open("wb") as encrypted_output: - encrypted_output.write(encrypted_output_image) - - return JSONResponse(content=fhe_execution_time) - - -@app.post("/get_output") -def get_output( - user_id: str = Form(), - filter: str = Form(), -): - """Retrieve the encrypted output image.""" - # Retrieve the encrypted output image path - encrypted_output_path = 
get_server_file_path("encrypted_output", user_id, filter) - - # Read the file using the above path - with encrypted_output_path.open("rb") as encrypted_output_file: - encrypted_output = encrypted_output_file.read() - - return Response(encrypted_output) diff --git a/spaces/gradio/HuBERT/examples/backtranslation/tokenized_bleu.sh b/spaces/gradio/HuBERT/examples/backtranslation/tokenized_bleu.sh deleted file mode 100644 index c6d6aaa193f6059299bc98909324fe4b9b060372..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/examples/backtranslation/tokenized_bleu.sh +++ /dev/null @@ -1,46 +0,0 @@ -#!/bin/bash - -if [ $# -ne 5 ]; then - echo "usage: $0 [dataset=wmt14/full] [langpair=en-de] [databin] [bpecode] [model]" - exit -fi - - -DATASET=$1 -LANGPAIR=$2 -DATABIN=$3 -BPECODE=$4 -MODEL=$5 - -SRCLANG=$(echo $LANGPAIR | cut -d '-' -f 1) -TGTLANG=$(echo $LANGPAIR | cut -d '-' -f 2) - - -BPEROOT=examples/backtranslation/subword-nmt/subword_nmt -if [ ! -e $BPEROOT ]; then - BPEROOT=subword-nmt/subword_nmt - if [ ! -e $BPEROOT ]; then - echo 'Cloning Subword NMT repository (for BPE pre-processing)...' - git clone https://github.com/rsennrich/subword-nmt.git - fi -fi - - -TMP_REF=$(mktemp) - -sacrebleu -t $DATASET -l $LANGPAIR --echo ref -q \ -| sacremoses normalize -l $TGTLANG -q \ -| sacremoses tokenize -a -l $TGTLANG -q \ -> $TMP_REF - -sacrebleu -t $DATASET -l $LANGPAIR --echo src -q \ -| sacremoses normalize -l $SRCLANG -q \ -| sacremoses tokenize -a -l $SRCLANG -q \ -| python $BPEROOT/apply_bpe.py -c $BPECODE \ -| fairseq-interactive $DATABIN --path $MODEL \ - -s $SRCLANG -t $TGTLANG \ - --beam 5 --remove-bpe --buffer-size 1024 --max-tokens 8000 \ -| grep ^H- | cut -f 3- \ -| fairseq-score --ref $TMP_REF - -rm -f $TMP_REF diff --git a/spaces/gradio/HuBERT/tests/distributed/utils.py b/spaces/gradio/HuBERT/tests/distributed/utils.py deleted file mode 100644 index c8040392a8e27eb4c3a74032c702643a91d11a3e..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/tests/distributed/utils.py +++ /dev/null @@ -1,62 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
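For the FastAPI image-filter server shown above (`server.py` from the encrypted-image-recognition space), a client drives the three endpoints in order: `/send_input` with the encrypted image and the evaluation key, then `/run_fhe`, then `/get_output`. The following is a minimal client sketch under stated assumptions: the server address, the file names, and the `sharpen` filter name are placeholders, not values taken from the space.

```python
import requests

SERVER = "http://localhost:8000"          # assumed server address
user_id, filter_name = "42", "sharpen"    # placeholder values

# 1) upload the encrypted image and the evaluation key (image first, key second)
with open("encrypted_image.bin", "rb") as image, open("evaluation_key.bin", "rb") as key:
    requests.post(
        f"{SERVER}/send_input",
        data={"user_id": user_id, "filter": filter_name},
        files=[("files", image), ("files", key)],
    )

# 2) run the filter under FHE; the JSON body is the execution time in seconds
fhe_time = requests.post(
    f"{SERVER}/run_fhe", data={"user_id": user_id, "filter": filter_name}
).json()

# 3) download the encrypted output for client-side decryption
encrypted_output = requests.post(
    f"{SERVER}/get_output", data={"user_id": user_id, "filter": filter_name}
).content
print(f"FHE execution took {fhe_time}s, got {len(encrypted_output)} encrypted bytes")
```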
- -import functools -import tempfile - -import torch - - -def spawn_and_init(fn, world_size, args=None): - if args is None: - args = () - with tempfile.NamedTemporaryFile(delete=False) as tmp_file: - torch.multiprocessing.spawn( - fn=functools.partial(init_and_run, fn, args), - args=(world_size, tmp_file.name,), - nprocs=world_size, - join=True, - ) - - -def distributed_init(rank, world_size, tmp_file): - torch.distributed.init_process_group( - backend="nccl", - init_method="file://{}".format(tmp_file), - world_size=world_size, - rank=rank, - ) - torch.cuda.set_device(rank) - - -def init_and_run(fn, args, rank, world_size, tmp_file): - distributed_init(rank, world_size, tmp_file) - group = torch.distributed.new_group() - fn(rank, group, *args) - - -def objects_are_equal(a, b) -> bool: - if type(a) is not type(b): - return False - if isinstance(a, dict): - if set(a.keys()) != set(b.keys()): - return False - for k in a.keys(): - if not objects_are_equal(a[k], b[k]): - return False - return True - elif isinstance(a, (list, tuple, set)): - if len(a) != len(b): - return False - return all(objects_are_equal(x, y) for x, y in zip(a, b)) - elif torch.is_tensor(a): - return ( - a.size() == b.size() - and a.dtype == b.dtype - and a.device == b.device - and torch.all(a == b) - ) - else: - return a == b diff --git a/spaces/gradio/image_classification/DESCRIPTION.md b/spaces/gradio/image_classification/DESCRIPTION.md deleted file mode 100644 index 7b0ce5034235b310b439e8317614e4c52ee81d1c..0000000000000000000000000000000000000000 --- a/spaces/gradio/image_classification/DESCRIPTION.md +++ /dev/null @@ -1 +0,0 @@ -Simple image classification in Pytorch with Gradio's Image input and Label output. \ No newline at end of file diff --git a/spaces/gsaivinay/Llama-2-13B-GGML-UI/components/Chatbar/components/ChatbarSettings.tsx b/spaces/gsaivinay/Llama-2-13B-GGML-UI/components/Chatbar/components/ChatbarSettings.tsx deleted file mode 100644 index 4a7f9fb9b12e73e2f9981a48b4d1b1cb8036e4a2..0000000000000000000000000000000000000000 --- a/spaces/gsaivinay/Llama-2-13B-GGML-UI/components/Chatbar/components/ChatbarSettings.tsx +++ /dev/null @@ -1,73 +0,0 @@ -import { IconFileExport, IconSettings } from '@tabler/icons-react'; -import { useContext, useState } from 'react'; - -import { useTranslation } from 'next-i18next'; - -import HomeContext from '@/pages/api/home/home.context'; - -import { SettingDialog } from '@/components/Settings/SettingDialog'; - -import { Import } from '../../Settings/Import'; -import { Key } from '../../Settings/Key'; -import { SidebarButton } from '../../Sidebar/SidebarButton'; -import ChatbarContext from '../Chatbar.context'; -import { ClearConversations } from './ClearConversations'; -import { PluginKeys } from './PluginKeys'; - -export const ChatbarSettings = () => { - const { t } = useTranslation('sidebar'); - const [isSettingDialogOpen, setIsSettingDialog] = useState(false); - - const { - state: { - apiKey, - lightMode, - serverSideApiKeyIsSet, - serverSidePluginKeysSet, - conversations, - }, - dispatch: homeDispatch, - } = useContext(HomeContext); - - const { - handleClearConversations, - handleImportConversations, - handleExportData, - handleApiKeyChange, - } = useContext(ChatbarContext); - - return ( -
                - {conversations.length > 0 ? ( - - ) : null} - - - - } - onClick={() => handleExportData()} - /> - - } - onClick={() => setIsSettingDialog(true)} - /> - - {!serverSideApiKeyIsSet ? ( - - ) : null} - - {!serverSidePluginKeysSet ? : null} - - { - setIsSettingDialog(false); - }} - /> -
                - ); -}; diff --git a/spaces/gwang-kim/DATID-3D/pose_estimation/nvdiffrast/nvdiffrast/__init__.py b/spaces/gwang-kim/DATID-3D/pose_estimation/nvdiffrast/nvdiffrast/__init__.py deleted file mode 100644 index 3678b790f5e025f8943eee49e9dafa2489dce867..0000000000000000000000000000000000000000 --- a/spaces/gwang-kim/DATID-3D/pose_estimation/nvdiffrast/nvdiffrast/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -__version__ = '0.2.5' diff --git a/spaces/haakohu/deep_privacy2/dp2/utils/bufferless_video_capture.py b/spaces/haakohu/deep_privacy2/dp2/utils/bufferless_video_capture.py deleted file mode 100644 index dd5e1006057706f32c6adaeb812bf4834bbdfd28..0000000000000000000000000000000000000000 --- a/spaces/haakohu/deep_privacy2/dp2/utils/bufferless_video_capture.py +++ /dev/null @@ -1,32 +0,0 @@ -import queue -import threading -import cv2 - - -class BufferlessVideoCapture: - - def __init__(self, name, width=None, height=None): - self.cap = cv2.VideoCapture(name) - if width is not None and height is not None: - self.cap.set(cv2.CAP_PROP_FRAME_WIDTH, width) - self.cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height) - self.q = queue.Queue() - t = threading.Thread(target=self._reader) - t.daemon = True - t.start() - - # read frames as soon as they are available, keeping only most recent one - def _reader(self): - while True: - ret, frame = self.cap.read() - if not ret: - break - if not self.q.empty(): - try: - self.q.get_nowait() # discard previous (unprocessed) frame - except queue.Empty: - pass - self.q.put((ret, frame)) - - def read(self): - return self.q.get() diff --git "a/spaces/hands012/gpt-academic/crazy_functions/\346\211\271\351\207\217\346\200\273\347\273\223PDF\346\226\207\346\241\243.py" "b/spaces/hands012/gpt-academic/crazy_functions/\346\211\271\351\207\217\346\200\273\347\273\223PDF\346\226\207\346\241\243.py" deleted file mode 100644 index cbda23b83d759e6a3a4da5847c37ddff662daab2..0000000000000000000000000000000000000000 --- "a/spaces/hands012/gpt-academic/crazy_functions/\346\211\271\351\207\217\346\200\273\347\273\223PDF\346\226\207\346\241\243.py" +++ /dev/null @@ -1,166 +0,0 @@ -from toolbox import update_ui -from toolbox import CatchException, report_execption, write_results_to_file -import re -import unicodedata -fast_debug = False -from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive - -def is_paragraph_break(match): - """ - 根据给定的匹配结果来判断换行符是否表示段落分隔。 - 如果换行符前为句子结束标志(句号,感叹号,问号),且下一个字符为大写字母,则换行符更有可能表示段落分隔。 - 也可以根据之前的内容长度来判断段落是否已经足够长。 - """ - prev_char, next_char = match.groups() - - # 句子结束标志 - sentence_endings = ".!?" 
- - # 设定一个最小段落长度阈值 - min_paragraph_length = 140 - - if prev_char in sentence_endings and next_char.isupper() and len(match.string[:match.start(1)]) > min_paragraph_length: - return "\n\n" - else: - return " " - -def normalize_text(text): - """ - 通过把连字(ligatures)等文本特殊符号转换为其基本形式来对文本进行归一化处理。 - 例如,将连字 "fi" 转换为 "f" 和 "i"。 - """ - # 对文本进行归一化处理,分解连字 - normalized_text = unicodedata.normalize("NFKD", text) - - # 替换其他特殊字符 - cleaned_text = re.sub(r'[^\x00-\x7F]+', '', normalized_text) - - return cleaned_text - -def clean_text(raw_text): - """ - 对从 PDF 提取出的原始文本进行清洗和格式化处理。 - 1. 对原始文本进行归一化处理。 - 2. 替换跨行的连词 - 3. 根据 heuristic 规则判断换行符是否是段落分隔,并相应地进行替换 - """ - # 对文本进行归一化处理 - normalized_text = normalize_text(raw_text) - - # 替换跨行的连词 - text = re.sub(r'(\w+-\n\w+)', lambda m: m.group(1).replace('-\n', ''), normalized_text) - - # 根据前后相邻字符的特点,找到原文本中的换行符 - newlines = re.compile(r'(\S)\n(\S)') - - # 根据 heuristic 规则,用空格或段落分隔符替换原换行符 - final_text = re.sub(newlines, lambda m: m.group(1) + is_paragraph_break(m) + m.group(2), text) - - return final_text.strip() - -def 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt): - import time, glob, os, fitz - print('begin analysis on:', file_manifest) - for index, fp in enumerate(file_manifest): - with fitz.open(fp) as doc: - file_content = "" - for page in doc: - file_content += page.get_text() - file_content = clean_text(file_content) - print(file_content) - - prefix = "接下来请你逐文件分析下面的论文文件,概括其内容" if index==0 else "" - i_say = prefix + f'请对下面的文章片段用中文做一个概述,文件名是{os.path.relpath(fp, project_folder)},文章内容是 ```{file_content}```' - i_say_show_user = prefix + f'[{index}/{len(file_manifest)}] 请对下面的文章片段做一个概述: {os.path.abspath(fp)}' - chatbot.append((i_say_show_user, "[Local Message] waiting gpt response.")) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - if not fast_debug: - msg = '正常' - # ** gpt request ** - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=i_say, - inputs_show_user=i_say_show_user, - llm_kwargs=llm_kwargs, - chatbot=chatbot, - history=[], - sys_prompt="总结文章。" - ) # 带超时倒计时 - - - chatbot[-1] = (i_say_show_user, gpt_say) - history.append(i_say_show_user); history.append(gpt_say) - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 - if not fast_debug: time.sleep(2) - - all_file = ', '.join([os.path.relpath(fp, project_folder) for index, fp in enumerate(file_manifest)]) - i_say = f'根据以上你自己的分析,对全文进行概括,用学术性语言写一段中文摘要,然后再写一段英文摘要(包括{all_file})。' - chatbot.append((i_say, "[Local Message] waiting gpt response.")) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - if not fast_debug: - msg = '正常' - # ** gpt request ** - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=i_say, - inputs_show_user=i_say, - llm_kwargs=llm_kwargs, - chatbot=chatbot, - history=history, - sys_prompt="总结文章。" - ) # 带超时倒计时 - - chatbot[-1] = (i_say, gpt_say) - history.append(i_say); history.append(gpt_say) - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 - res = write_results_to_file(history) - chatbot.append(("完成了吗?", res)) - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 - - -@CatchException -def 批量总结PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - import glob, os - - # 基本信息:功能、贡献者 - chatbot.append([ - "函数插件功能?", - "批量总结PDF文档。函数插件贡献者: ValeriaWong,Eralien"]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # 尝试导入依赖,如果缺少依赖,则给出安装建议 - try: - import fitz - 
except: - report_execption(chatbot, history, - a = f"解析项目: {txt}", - b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pymupdf```。") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - # 清空历史,以免输入溢出 - history = [] - - # 检测输入参数,如没有给定输入参数,直接退出 - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - # 搜索需要处理的文件清单 - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.pdf', recursive=True)] # + \ - # [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)] + \ - # [f for f in glob.glob(f'{project_folder}/**/*.cpp', recursive=True)] + \ - # [f for f in glob.glob(f'{project_folder}/**/*.c', recursive=True)] - - # 如果没找到任何文件 - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex或.pdf文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - # 开始正式执行任务 - yield from 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt) diff --git a/spaces/harpreetsahota/RAQA-with-LlamaIndex-and-a-fine-tuned-GPT-35/chainlit.md b/spaces/harpreetsahota/RAQA-with-LlamaIndex-and-a-fine-tuned-GPT-35/chainlit.md deleted file mode 100644 index 78b573aa6a8c31b305db78c7e8849842daeeb7e8..0000000000000000000000000000000000000000 --- a/spaces/harpreetsahota/RAQA-with-LlamaIndex-and-a-fine-tuned-GPT-35/chainlit.md +++ /dev/null @@ -1,11 +0,0 @@ -# Assignment Part 2: Deploying Your Model to a Hugging Face Space - -Now that you've done the hard work of setting up the RetrievalQA chain and sourcing your documents - let's tie it together in a ChainLit application. - -### Duplicating the Space - -Since this is our first assignment, all you'll need to do is duplicate this space and add your own `OPENAI_API_KEY` as a secret in the space. - -### Conclusion - -Now that you've shipped an LLM-powered application, it's time to share! 🚀 diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/utils/visualizer.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/utils/visualizer.py deleted file mode 100644 index 3ffcbdbd19518bce877a776582a7caeddc18108e..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/utils/visualizer.py +++ /dev/null @@ -1,1143 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved -import colorsys -import logging -import math -import numpy as np -from enum import Enum, unique -import cv2 -import matplotlib as mpl -import matplotlib.colors as mplc -import matplotlib.figure as mplfigure -import pycocotools.mask as mask_util -import torch -from fvcore.common.file_io import PathManager -from matplotlib.backends.backend_agg import FigureCanvasAgg -from PIL import Image - -from detectron2.structures import BitMasks, Boxes, BoxMode, Keypoints, PolygonMasks, RotatedBoxes - -from .colormap import random_color - -logger = logging.getLogger(__name__) - -__all__ = ["ColorMode", "VisImage", "Visualizer"] - - -_SMALL_OBJECT_AREA_THRESH = 1000 -_LARGE_MASK_AREA_THRESH = 120000 -_OFF_WHITE = (1.0, 1.0, 240.0 / 255) -_BLACK = (0, 0, 0) -_RED = (1.0, 0, 0) - -_KEYPOINT_THRESHOLD = 0.05 - - -@unique -class ColorMode(Enum): - """ - Enum of different color modes to use for instance visualizations. - """ - - IMAGE = 0 - """ - Picks a random color for every instance and overlay segmentations with low opacity. - """ - SEGMENTATION = 1 - """ - Let instances of the same category have similar colors - (from metadata.thing_colors), and overlay them with - high opacity. This provides more attention on the quality of segmentation. - """ - IMAGE_BW = 2 - """ - Same as IMAGE, but convert all areas without masks to gray-scale. - Only available for drawing per-instance mask predictions. - """ - - -class GenericMask: - """ - Attribute: - polygons (list[ndarray]): list[ndarray]: polygons for this mask. - Each ndarray has format [x, y, x, y, ...] - mask (ndarray): a binary mask - """ - - def __init__(self, mask_or_polygons, height, width): - self._mask = self._polygons = self._has_holes = None - self.height = height - self.width = width - - m = mask_or_polygons - if isinstance(m, dict): - # RLEs - assert "counts" in m and "size" in m - if isinstance(m["counts"], list): # uncompressed RLEs - h, w = m["size"] - assert h == height and w == width - m = mask_util.frPyObjects(m, h, w) - self._mask = mask_util.decode(m)[:, :] - return - - if isinstance(m, list): # list[ndarray] - self._polygons = [np.asarray(x).reshape(-1) for x in m] - return - - if isinstance(m, np.ndarray): # assumed to be a binary mask - assert m.shape[1] != 2, m.shape - assert m.shape == (height, width), m.shape - self._mask = m.astype("uint8") - return - - raise ValueError("GenericMask cannot handle object {} of type '{}'".format(m, type(m))) - - @property - def mask(self): - if self._mask is None: - self._mask = self.polygons_to_mask(self._polygons) - return self._mask - - @property - def polygons(self): - if self._polygons is None: - self._polygons, self._has_holes = self.mask_to_polygons(self._mask) - return self._polygons - - @property - def has_holes(self): - if self._has_holes is None: - if self._mask is not None: - self._polygons, self._has_holes = self.mask_to_polygons(self._mask) - else: - self._has_holes = False # if original format is polygon, does not have holes - return self._has_holes - - def mask_to_polygons(self, mask): - # cv2.RETR_CCOMP flag retrieves all the contours and arranges them to a 2-level - # hierarchy. External contours (boundary) of the object are placed in hierarchy-1. - # Internal contours (holes) are placed in hierarchy-2. - # cv2.CHAIN_APPROX_NONE flag gets vertices of polygons from contours. 
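To make the hierarchy convention described in the comments above concrete, here is a small, self-contained check (not part of the original file) that builds a mask with one hole and inspects what `cv2.findContours` returns under `cv2.RETR_CCOMP`:

```python
import cv2
import numpy as np

# a filled square with a smaller square hole punched in its middle
mask = np.zeros((32, 32), dtype=np.uint8)
mask[4:28, 4:28] = 1
mask[12:20, 12:20] = 0

res = cv2.findContours(mask, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE)
contours, hierarchy = res[-2], res[-1]      # same unpacking as mask_to_polygons uses below

# each hierarchy row is [next, previous, first_child, parent];
# a parent index >= 0 marks an inner contour, i.e. a hole
has_holes = (hierarchy.reshape(-1, 4)[:, 3] >= 0).sum() > 0
print(len(contours), has_holes)             # expect 2 contours and has_holes == True
```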
- mask = np.ascontiguousarray(mask) # some versions of cv2 does not support incontiguous arr - res = cv2.findContours(mask.astype("uint8"), cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE) - hierarchy = res[-1] - if hierarchy is None: # empty mask - return [], False - has_holes = (hierarchy.reshape(-1, 4)[:, 3] >= 0).sum() > 0 - res = res[-2] - res = [x.flatten() for x in res] - res = [x for x in res if len(x) >= 6] - return res, has_holes - - def polygons_to_mask(self, polygons): - rle = mask_util.frPyObjects(polygons, self.height, self.width) - rle = mask_util.merge(rle) - return mask_util.decode(rle)[:, :] - - def area(self): - return self.mask.sum() - - def bbox(self): - p = mask_util.frPyObjects(self.polygons, self.height, self.width) - p = mask_util.merge(p) - bbox = mask_util.toBbox(p) - bbox[2] += bbox[0] - bbox[3] += bbox[1] - return bbox - - -class _PanopticPrediction: - def __init__(self, panoptic_seg, segments_info): - self._seg = panoptic_seg - - self._sinfo = {s["id"]: s for s in segments_info} # seg id -> seg info - segment_ids, areas = torch.unique(panoptic_seg, sorted=True, return_counts=True) - areas = areas.numpy() - sorted_idxs = np.argsort(-areas) - self._seg_ids, self._seg_areas = segment_ids[sorted_idxs], areas[sorted_idxs] - self._seg_ids = self._seg_ids.tolist() - for sid, area in zip(self._seg_ids, self._seg_areas): - if sid in self._sinfo: - self._sinfo[sid]["area"] = float(area) - - def non_empty_mask(self): - """ - Returns: - (H, W) array, a mask for all pixels that have a prediction - """ - empty_ids = [] - for id in self._seg_ids: - if id not in self._sinfo: - empty_ids.append(id) - if len(empty_ids) == 0: - return np.zeros(self._seg.shape, dtype=np.uint8) - assert ( - len(empty_ids) == 1 - ), ">1 ids corresponds to no labels. This is currently not supported" - return (self._seg != empty_ids[0]).numpy().astype(np.bool) - - def semantic_masks(self): - for sid in self._seg_ids: - sinfo = self._sinfo.get(sid) - if sinfo is None or sinfo["isthing"]: - # Some pixels (e.g. id 0 in PanopticFPN) have no instance or semantic predictions. - continue - yield (self._seg == sid).numpy().astype(np.bool), sinfo - - def instance_masks(self): - for sid in self._seg_ids: - sinfo = self._sinfo.get(sid) - if sinfo is None or not sinfo["isthing"]: - continue - mask = (self._seg == sid).numpy().astype(np.bool) - if mask.sum() > 0: - yield mask, sinfo - - -def _create_text_labels(classes, scores, class_names): - """ - Args: - classes (list[int] or None): - scores (list[float] or None): - class_names (list[str] or None): - - Returns: - list[str] or None - """ - labels = None - if classes is not None and class_names is not None and len(class_names) > 1: - labels = [class_names[i] for i in classes] - if scores is not None: - if labels is None: - labels = ["{:.0f}%".format(s * 100) for s in scores] - else: - labels = ["{} {:.0f}%".format(l, s * 100) for l, s in zip(labels, scores)] - return labels - - -class VisImage: - def __init__(self, img, scale=1.0): - """ - Args: - img (ndarray): an RGB image of shape (H, W, 3). - scale (float): scale the input image - """ - self.img = img - self.scale = scale - self.width, self.height = img.shape[1], img.shape[0] - self._setup_figure(img) - - def _setup_figure(self, img): - """ - Args: - Same as in :meth:`__init__()`. - - Returns: - fig (matplotlib.pyplot.figure): top level container for all the image plot elements. - ax (matplotlib.pyplot.Axes): contains figure elements and sets the coordinate system. 
- """ - fig = mplfigure.Figure(frameon=False) - self.dpi = fig.get_dpi() - # add a small 1e-2 to avoid precision lost due to matplotlib's truncation - # (https://github.com/matplotlib/matplotlib/issues/15363) - fig.set_size_inches( - (self.width * self.scale + 1e-2) / self.dpi, - (self.height * self.scale + 1e-2) / self.dpi, - ) - self.canvas = FigureCanvasAgg(fig) - # self.canvas = mpl.backends.backend_cairo.FigureCanvasCairo(fig) - ax = fig.add_axes([0.0, 0.0, 1.0, 1.0]) - ax.axis("off") - ax.set_xlim(0.0, self.width) - ax.set_ylim(self.height) - - self.fig = fig - self.ax = ax - - def save(self, filepath): - """ - Args: - filepath (str): a string that contains the absolute path, including the file name, where - the visualized image will be saved. - """ - if filepath.lower().endswith(".jpg") or filepath.lower().endswith(".png"): - # faster than matplotlib's imshow - cv2.imwrite(filepath, self.get_image()[:, :, ::-1]) - else: - # support general formats (e.g. pdf) - self.ax.imshow(self.img, interpolation="nearest") - self.fig.savefig(filepath) - - def get_image(self): - """ - Returns: - ndarray: - the visualized image of shape (H, W, 3) (RGB) in uint8 type. - The shape is scaled w.r.t the input image using the given `scale` argument. - """ - canvas = self.canvas - s, (width, height) = canvas.print_to_buffer() - if (self.width, self.height) != (width, height): - img = cv2.resize(self.img, (width, height)) - else: - img = self.img - - # buf = io.BytesIO() # works for cairo backend - # canvas.print_rgba(buf) - # width, height = self.width, self.height - # s = buf.getvalue() - - buffer = np.frombuffer(s, dtype="uint8") - - # imshow is slow. blend manually (still quite slow) - img_rgba = buffer.reshape(height, width, 4) - rgb, alpha = np.split(img_rgba, [3], axis=2) - - try: - import numexpr as ne # fuse them with numexpr - - visualized_image = ne.evaluate("demo * (1 - alpha / 255.0) + rgb * (alpha / 255.0)") - except ImportError: - alpha = alpha.astype("float32") / 255.0 - visualized_image = img * (1 - alpha) + rgb * alpha - - visualized_image = visualized_image.astype("uint8") - - return visualized_image - - -class Visualizer: - def __init__(self, img_rgb, metadata, scale=1.0, instance_mode=ColorMode.IMAGE): - """ - Args: - img_rgb: a numpy array of shape (H, W, C), where H and W correspond to - the height and width of the image respectively. C is the number of - color channels. The image is required to be in RGB format since that - is a requirement of the Matplotlib library. The image is also expected - to be in the range [0, 255]. - metadata (MetadataCatalog): image metadata. - """ - self.img = np.asarray(img_rgb).clip(0, 255).astype(np.uint8) - self.metadata = metadata - self.output = VisImage(self.img, scale=scale) - self.cpu_device = torch.device("cpu") - - # too small texts are useless, therefore clamp to 9 - self._default_font_size = max( - np.sqrt(self.output.height * self.output.width) // 90, 10 // scale - ) - self._instance_mode = instance_mode - - def draw_instance_predictions(self, predictions): - """ - Draw instance-level prediction results on an image. - - Args: - predictions (Instances): the output of an instance detection/segmentation - model. Following fields will be used to draw: - "pred_boxes", "pred_classes", "scores", "pred_masks" (or "pred_masks_rle"). - - Returns: - output (VisImage): image object with visualizations. 
- """ - boxes = predictions.pred_boxes if predictions.has("pred_boxes") else None - scores = predictions.scores if predictions.has("scores") else None - classes = predictions.pred_classes if predictions.has("pred_classes") else None - labels = _create_text_labels(classes, scores, self.metadata.get("thing_classes", None)) - keypoints = predictions.pred_keypoints if predictions.has("pred_keypoints") else None - - if predictions.has("pred_masks"): - masks = np.asarray(predictions.pred_masks) - masks = [GenericMask(x, self.output.height, self.output.width) for x in masks] - else: - masks = None - - if self._instance_mode == ColorMode.SEGMENTATION and self.metadata.get("thing_colors"): - colors = [ - self._jitter([x / 255 for x in self.metadata.thing_colors[c]]) for c in classes - ] - alpha = 0.8 - else: - colors = None - alpha = 0.5 - - if self._instance_mode == ColorMode.IMAGE_BW: - self.output.img = self._create_grayscale_image( - (predictions.pred_masks.any(dim=0) > 0).numpy() - ) - alpha = 0.3 - - self.overlay_instances( - masks=masks, - boxes=boxes, - labels=labels, - keypoints=keypoints, - assigned_colors=colors, - alpha=alpha, - ) - return self.output - - def draw_sem_seg(self, sem_seg, area_threshold=None, alpha=0.8): - """ - Draw semantic segmentation predictions/labels. - - Args: - sem_seg (Tensor or ndarray): the segmentation of shape (H, W). - Each value is the integer label of the pixel. - area_threshold (int): segments with less than `area_threshold` are not drawn. - alpha (float): the larger it is, the more opaque the segmentations are. - - Returns: - output (VisImage): image object with visualizations. - """ - if isinstance(sem_seg, torch.Tensor): - sem_seg = sem_seg.numpy() - labels, areas = np.unique(sem_seg, return_counts=True) - sorted_idxs = np.argsort(-areas).tolist() - labels = labels[sorted_idxs] - for label in filter(lambda l: l < len(self.metadata.stuff_classes), labels): - try: - mask_color = [x / 255 for x in self.metadata.stuff_colors[label]] - except (AttributeError, IndexError): - mask_color = None - - binary_mask = (sem_seg == label).astype(np.uint8) - text = self.metadata.stuff_classes[label] - self.draw_binary_mask( - binary_mask, - color=mask_color, - edge_color=_OFF_WHITE, - text=text, - alpha=alpha, - area_threshold=area_threshold, - ) - return self.output - - def draw_panoptic_seg_predictions( - self, panoptic_seg, segments_info, area_threshold=None, alpha=0.7 - ): - """ - Draw panoptic prediction results on an image. - - Args: - panoptic_seg (Tensor): of shape (height, width) where the values are ids for each - segment. - segments_info (list[dict]): Describe each segment in `panoptic_seg`. - Each dict contains keys "id", "category_id", "isthing". - area_threshold (int): stuff segments with less than `area_threshold` are not drawn. - - Returns: - output (VisImage): image object with visualizations. - """ - pred = _PanopticPrediction(panoptic_seg, segments_info) - - if self._instance_mode == ColorMode.IMAGE_BW: - self.output.img = self._create_grayscale_image(pred.non_empty_mask()) - - # draw mask for all semantic segments first i.e. 
"stuff" - for mask, sinfo in pred.semantic_masks(): - category_idx = sinfo["category_id"] - try: - mask_color = [x / 255 for x in self.metadata.stuff_colors[category_idx]] - except AttributeError: - mask_color = None - - text = self.metadata.stuff_classes[category_idx] - self.draw_binary_mask( - mask, - color=mask_color, - edge_color=_OFF_WHITE, - text=text, - alpha=alpha, - area_threshold=area_threshold, - ) - - # draw mask for all instances second - all_instances = list(pred.instance_masks()) - if len(all_instances) == 0: - return self.output - masks, sinfo = list(zip(*all_instances)) - category_ids = [x["category_id"] for x in sinfo] - - try: - scores = [x["score"] for x in sinfo] - except KeyError: - scores = None - labels = _create_text_labels(category_ids, scores, self.metadata.thing_classes) - - try: - colors = [random_color(rgb=True, maximum=1) for k in category_ids] - except AttributeError: - colors = None - self.overlay_instances(masks=masks, labels=labels, assigned_colors=colors, alpha=alpha) - - return self.output - - def draw_dataset_dict(self, dic): - """ - Draw annotations/segmentaions in Detectron2 Dataset format. - - Args: - dic (dict): annotation/segmentation data of one image, in Detectron2 Dataset format. - - Returns: - output (VisImage): image object with visualizations. - """ - annos = dic.get("annotations", None) - if annos: - if "segmentation" in annos[0]: - masks = [x["segmentation"] for x in annos] - else: - masks = None - if "keypoints" in annos[0]: - keypts = [x["keypoints"] for x in annos] - keypts = np.array(keypts).reshape(len(annos), -1, 3) - else: - keypts = None - - boxes = [BoxMode.convert(x["bbox"], x["bbox_mode"], BoxMode.XYXY_ABS) for x in annos] - - labels = [x["category_id"] for x in annos] - colors = None - if self._instance_mode == ColorMode.SEGMENTATION and self.metadata.get("thing_colors"): - colors = [ - self._jitter([x / 255 for x in self.metadata.thing_colors[c]]) for c in labels - ] - names = self.metadata.get("thing_classes", None) - if names: - labels = [names[i] for i in labels] - labels = [ - "{}".format(i) + ("|crowd" if a.get("iscrowd", 0) else "") - for i, a in zip(labels, annos) - ] - self.overlay_instances( - labels=labels, boxes=boxes, masks=masks, keypoints=keypts, assigned_colors=colors - ) - - sem_seg = dic.get("sem_seg", None) - if sem_seg is None and "sem_seg_file_name" in dic: - with PathManager.open(dic["sem_seg_file_name"], "rb") as f: - sem_seg = Image.open(f) - sem_seg = np.asarray(sem_seg, dtype="uint8") - if sem_seg is not None: - self.draw_sem_seg(sem_seg, area_threshold=0, alpha=0.5) - return self.output - - def overlay_instances( - self, - *, - boxes=None, - labels=None, - masks=None, - keypoints=None, - assigned_colors=None, - alpha=0.5 - ): - """ - Args: - boxes (Boxes, RotatedBoxes or ndarray): either a :class:`Boxes`, - or an Nx4 numpy array of XYXY_ABS format for the N objects in a single image, - or a :class:`RotatedBoxes`, - or an Nx5 numpy array of (x_center, y_center, width, height, angle_degrees) format - for the N objects in a single image, - labels (list[str]): the text to be displayed for each instance. - masks (masks-like object): Supported types are: - - * :class:`detectron2.structures.PolygonMasks`, - :class:`detectron2.structures.BitMasks`. - * list[list[ndarray]]: contains the segmentation masks for all objects in one image. - The first level of the list corresponds to individual instances. 
The second - level to all the polygon that compose the instance, and the third level - to the polygon coordinates. The third level should have the format of - [x0, y0, x1, y1, ..., xn, yn] (n >= 3). - * list[ndarray]: each ndarray is a binary mask of shape (H, W). - * list[dict]: each dict is a COCO-style RLE. - keypoints (Keypoint or array like): an array-like object of shape (N, K, 3), - where the N is the number of instances and K is the number of keypoints. - The last dimension corresponds to (x, y, visibility or score). - assigned_colors (list[matplotlib.colors]): a list of colors, where each color - corresponds to each mask or box in the image. Refer to 'matplotlib.colors' - for full list of formats that the colors are accepted in. - - Returns: - output (VisImage): image object with visualizations. - """ - num_instances = None - if boxes is not None: - boxes = self._convert_boxes(boxes) - num_instances = len(boxes) - if masks is not None: - masks = self._convert_masks(masks) - if num_instances: - assert len(masks) == num_instances - else: - num_instances = len(masks) - if keypoints is not None: - if num_instances: - assert len(keypoints) == num_instances - else: - num_instances = len(keypoints) - keypoints = self._convert_keypoints(keypoints) - if labels is not None: - assert len(labels) == num_instances - if assigned_colors is None: - assigned_colors = [random_color(rgb=True, maximum=1) for _ in range(num_instances)] - if num_instances == 0: - return self.output - if boxes is not None and boxes.shape[1] == 5: - return self.overlay_rotated_instances( - boxes=boxes, labels=labels, assigned_colors=assigned_colors - ) - - # Display in largest to smallest order to reduce occlusion. - areas = None - if boxes is not None: - areas = np.prod(boxes[:, 2:] - boxes[:, :2], axis=1) - elif masks is not None: - areas = np.asarray([x.area() for x in masks]) - - if areas is not None: - sorted_idxs = np.argsort(-areas).tolist() - # Re-order overlapped instances in descending order. - boxes = boxes[sorted_idxs] if boxes is not None else None - labels = [labels[k] for k in sorted_idxs] if labels is not None else None - masks = [masks[idx] for idx in sorted_idxs] if masks is not None else None - assigned_colors = [assigned_colors[idx] for idx in sorted_idxs] - keypoints = keypoints[sorted_idxs] if keypoints is not None else None - - for i in range(num_instances): - color = assigned_colors[i] - if boxes is not None: - self.draw_box(boxes[i], edge_color=color) - - if masks is not None: - for segment in masks[i].polygons: - self.draw_polygon(segment.reshape(-1, 2), color, alpha=alpha) - - if labels is not None: - # first get a box - if boxes is not None: - x0, y0, x1, y1 = boxes[i] - text_pos = (x0, y0) # if drawing boxes, put text on the box corner. - horiz_align = "left" - elif masks is not None: - x0, y0, x1, y1 = masks[i].bbox() - - # draw text in the center (defined by median) when box is not drawn - # median is less sensitive to outliers. - text_pos = np.median(masks[i].mask.nonzero(), axis=1)[::-1] - horiz_align = "center" - else: - continue # drawing the box confidence for keypoints isn't very useful. 
- # for small objects, draw text at the side to avoid occlusion - instance_area = (y1 - y0) * (x1 - x0) - if ( - instance_area < _SMALL_OBJECT_AREA_THRESH * self.output.scale - or y1 - y0 < 40 * self.output.scale - ): - if y1 >= self.output.height - 5: - text_pos = (x1, y0) - else: - text_pos = (x0, y1) - - height_ratio = (y1 - y0) / np.sqrt(self.output.height * self.output.width) - lighter_color = self._change_color_brightness(color, brightness_factor=0.7) - font_size = ( - np.clip((height_ratio - 0.02) / 0.08 + 1, 1.2, 2) - * 0.5 - * self._default_font_size - ) - self.draw_text( - labels[i], - text_pos, - color=lighter_color, - horizontal_alignment=horiz_align, - font_size=font_size, - ) - - # draw keypoints - if keypoints is not None: - for keypoints_per_instance in keypoints: - self.draw_and_connect_keypoints(keypoints_per_instance) - - return self.output - - def overlay_rotated_instances(self, boxes=None, labels=None, assigned_colors=None): - """ - Args: - boxes (ndarray): an Nx5 numpy array of - (x_center, y_center, width, height, angle_degrees) format - for the N objects in a single image. - labels (list[str]): the text to be displayed for each instance. - assigned_colors (list[matplotlib.colors]): a list of colors, where each color - corresponds to each mask or box in the image. Refer to 'matplotlib.colors' - for full list of formats that the colors are accepted in. - - Returns: - output (VisImage): image object with visualizations. - """ - - num_instances = len(boxes) - - if assigned_colors is None: - assigned_colors = [random_color(rgb=True, maximum=1) for _ in range(num_instances)] - if num_instances == 0: - return self.output - - # Display in largest to smallest order to reduce occlusion. - if boxes is not None: - areas = boxes[:, 2] * boxes[:, 3] - - sorted_idxs = np.argsort(-areas).tolist() - # Re-order overlapped instances in descending order. - boxes = boxes[sorted_idxs] - labels = [labels[k] for k in sorted_idxs] if labels is not None else None - colors = [assigned_colors[idx] for idx in sorted_idxs] - - for i in range(num_instances): - self.draw_rotated_box_with_label( - boxes[i], edge_color=colors[i], label=labels[i] if labels is not None else None - ) - - return self.output - - def draw_and_connect_keypoints(self, keypoints): - """ - Draws keypoints of an instance and follows the rules for keypoint connections - to draw lines between appropriate keypoints. This follows color heuristics for - line color. - - Args: - keypoints (Tensor): a tensor of shape (K, 3), where K is the number of keypoints - and the last dimension corresponds to (x, y, probability). - - Returns: - output (VisImage): image object with visualizations. - """ - visible = {} - keypoint_names = self.metadata.get("keypoint_names") - for idx, keypoint in enumerate(keypoints): - # draw keypoint - x, y, prob = keypoint - if prob > _KEYPOINT_THRESHOLD: - self.draw_circle((x, y), color=_RED) - if keypoint_names: - keypoint_name = keypoint_names[idx] - visible[keypoint_name] = (x, y) - - if self.metadata.get("keypoint_connection_rules"): - for kp0, kp1, color in self.metadata.keypoint_connection_rules: - if kp0 in visible and kp1 in visible: - x0, y0 = visible[kp0] - x1, y1 = visible[kp1] - color = tuple(x / 255.0 for x in color) - self.draw_line([x0, x1], [y0, y1], color=color) - - # draw lines from nose to mid-shoulder and mid-shoulder to mid-hip - # Note that this strategy is specific to person keypoints. 
- # For other keypoints, it should just do nothing - try: - ls_x, ls_y = visible["left_shoulder"] - rs_x, rs_y = visible["right_shoulder"] - mid_shoulder_x, mid_shoulder_y = (ls_x + rs_x) / 2, (ls_y + rs_y) / 2 - except KeyError: - pass - else: - # draw line from nose to mid-shoulder - nose_x, nose_y = visible.get("nose", (None, None)) - if nose_x is not None: - self.draw_line([nose_x, mid_shoulder_x], [nose_y, mid_shoulder_y], color=_RED) - - try: - # draw line from mid-shoulder to mid-hip - lh_x, lh_y = visible["left_hip"] - rh_x, rh_y = visible["right_hip"] - except KeyError: - pass - else: - mid_hip_x, mid_hip_y = (lh_x + rh_x) / 2, (lh_y + rh_y) / 2 - self.draw_line([mid_hip_x, mid_shoulder_x], [mid_hip_y, mid_shoulder_y], color=_RED) - return self.output - - """ - Primitive drawing functions: - """ - - def draw_text( - self, - text, - position, - *, - font_size=None, - color="g", - horizontal_alignment="center", - rotation=0 - ): - """ - Args: - text (str): class label - position (tuple): a tuple of the x and y coordinates to place text on image. - font_size (int, optional): font of the text. If not provided, a font size - proportional to the image width is calculated and used. - color: color of the text. Refer to `matplotlib.colors` for full list - of formats that are accepted. - horizontal_alignment (str): see `matplotlib.text.Text` - rotation: rotation angle in degrees CCW - - Returns: - output (VisImage): image object with text drawn. - """ - if not font_size: - font_size = self._default_font_size - - # since the text background is dark, we don't want the text to be dark - color = np.maximum(list(mplc.to_rgb(color)), 0.2) - color[np.argmax(color)] = max(0.8, np.max(color)) - - x, y = position - self.output.ax.text( - x, - y, - text, - size=font_size * self.output.scale, - family="sans-serif", - bbox={"facecolor": "black", "alpha": 0.8, "pad": 0.7, "edgecolor": "none"}, - verticalalignment="top", - horizontalalignment=horizontal_alignment, - color=color, - zorder=10, - rotation=rotation, - ) - return self.output - - def draw_box(self, box_coord, alpha=0.5, edge_color="g", line_style="-"): - """ - Args: - box_coord (tuple): a tuple containing x0, y0, x1, y1 coordinates, where x0 and y0 - are the coordinates of the image's top left corner. x1 and y1 are the - coordinates of the image's bottom right corner. - alpha (float): blending efficient. Smaller values lead to more transparent masks. - edge_color: color of the outline of the box. Refer to `matplotlib.colors` - for full list of formats that are accepted. - line_style (string): the string to use to create the outline of the boxes. - - Returns: - output (VisImage): image object with box drawn. - """ - x0, y0, x1, y1 = box_coord - width = x1 - x0 - height = y1 - y0 - - linewidth = max(self._default_font_size / 4, 1) - - self.output.ax.add_patch( - mpl.patches.Rectangle( - (x0, y0), - width, - height, - fill=False, - edgecolor=edge_color, - linewidth=linewidth * self.output.scale, - alpha=alpha, - linestyle=line_style, - ) - ) - return self.output - - def draw_rotated_box_with_label( - self, rotated_box, alpha=0.5, edge_color="g", line_style="-", label=None - ): - """ - Args: - rotated_box (tuple): a tuple containing (cnt_x, cnt_y, w, h, angle), - where cnt_x and cnt_y are the center coordinates of the box. - w and h are the width and height of the box. angle represents how - many degrees the box is rotated CCW with regard to the 0-degree box. - alpha (float): blending efficient. Smaller values lead to more transparent masks. 
- edge_color: color of the outline of the box. Refer to `matplotlib.colors` - for full list of formats that are accepted. - line_style (string): the string to use to create the outline of the boxes. - label (string): label for rotated box. It will not be rendered when set to None. - - Returns: - output (VisImage): image object with box drawn. - """ - cnt_x, cnt_y, w, h, angle = rotated_box - area = w * h - # use thinner lines when the box is small - linewidth = self._default_font_size / ( - 6 if area < _SMALL_OBJECT_AREA_THRESH * self.output.scale else 3 - ) - - theta = angle * math.pi / 180.0 - c = math.cos(theta) - s = math.sin(theta) - rect = [(-w / 2, h / 2), (-w / 2, -h / 2), (w / 2, -h / 2), (w / 2, h / 2)] - # x: left->right ; y: top->down - rotated_rect = [(s * yy + c * xx + cnt_x, c * yy - s * xx + cnt_y) for (xx, yy) in rect] - for k in range(4): - j = (k + 1) % 4 - self.draw_line( - [rotated_rect[k][0], rotated_rect[j][0]], - [rotated_rect[k][1], rotated_rect[j][1]], - color=edge_color, - linestyle="--" if k == 1 else line_style, - linewidth=linewidth, - ) - - if label is not None: - text_pos = rotated_rect[1] # topleft corner - - height_ratio = h / np.sqrt(self.output.height * self.output.width) - label_color = self._change_color_brightness(edge_color, brightness_factor=0.7) - font_size = ( - np.clip((height_ratio - 0.02) / 0.08 + 1, 1.2, 2) * 0.5 * self._default_font_size - ) - self.draw_text(label, text_pos, color=label_color, font_size=font_size, rotation=angle) - - return self.output - - def draw_circle(self, circle_coord, color, radius=3): - """ - Args: - circle_coord (list(int) or tuple(int)): contains the x and y coordinates - of the center of the circle. - color: color of the polygon. Refer to `matplotlib.colors` for a full list of - formats that are accepted. - radius (int): radius of the circle. - - Returns: - output (VisImage): image object with box drawn. - """ - x, y = circle_coord - self.output.ax.add_patch( - mpl.patches.Circle(circle_coord, radius=radius, fill=True, color=color) - ) - return self.output - - def draw_line(self, x_data, y_data, color, linestyle="-", linewidth=None): - """ - Args: - x_data (list[int]): a list containing x values of all the points being drawn. - Length of list should match the length of y_data. - y_data (list[int]): a list containing y values of all the points being drawn. - Length of list should match the length of x_data. - color: color of the line. Refer to `matplotlib.colors` for a full list of - formats that are accepted. - linestyle: style of the line. Refer to `matplotlib.lines.Line2D` - for a full list of formats that are accepted. - linewidth (float or None): width of the line. When it's None, - a default value will be computed and used. - - Returns: - output (VisImage): image object with line drawn. - """ - if linewidth is None: - linewidth = self._default_font_size / 3 - linewidth = max(linewidth, 1) - self.output.ax.add_line( - mpl.lines.Line2D( - x_data, - y_data, - linewidth=linewidth * self.output.scale, - color=color, - linestyle=linestyle, - ) - ) - return self.output - - def draw_binary_mask( - self, binary_mask, color=None, *, edge_color=None, text=None, alpha=0.5, area_threshold=4096 - ): - """ - Args: - binary_mask (ndarray): numpy array of shape (H, W), where H is the image height and - W is the image width. Each value in the array is either a 0 or 1 value of uint8 - type. - color: color of the mask. Refer to `matplotlib.colors` for a full list of - formats that are accepted. If None, will pick a random color. 
- edge_color: color of the polygon edges. Refer to `matplotlib.colors` for a - full list of formats that are accepted. - text (str): if None, will be drawn in the object's center of mass. - alpha (float): blending efficient. Smaller values lead to more transparent masks. - area_threshold (float): a connected component small than this will not be shown. - - Returns: - output (VisImage): image object with mask drawn. - """ - if color is None: - color = random_color(rgb=True, maximum=1) - if area_threshold is None: - area_threshold = 4096 - - has_valid_segment = False - binary_mask = binary_mask.astype("uint8") # opencv needs uint8 - mask = GenericMask(binary_mask, self.output.height, self.output.width) - shape2d = (binary_mask.shape[0], binary_mask.shape[1]) - - if not mask.has_holes: - # draw polygons for regular masks - for segment in mask.polygons: - area = mask_util.area(mask_util.frPyObjects([segment], shape2d[0], shape2d[1])) - if area < area_threshold: - continue - has_valid_segment = True - segment = segment.reshape(-1, 2) - self.draw_polygon(segment, color=color, edge_color=edge_color, alpha=alpha) - else: - rgba = np.zeros(shape2d + (4,), dtype="float32") - rgba[:, :, :3] = color - rgba[:, :, 3] = (mask.mask == 1).astype("float32") * alpha - has_valid_segment = True - self.output.ax.imshow(rgba) - - if text is not None and has_valid_segment: - # TODO sometimes drawn on wrong objects. the heuristics here can improve. - lighter_color = self._change_color_brightness(color, brightness_factor=0.7) - _num_cc, cc_labels, stats, centroids = cv2.connectedComponentsWithStats(binary_mask, 8) - largest_component_id = np.argmax(stats[1:, -1]) + 1 - - # draw text on the largest component, as well as other very large components. - for cid in range(1, _num_cc): - if cid == largest_component_id or stats[cid, -1] > _LARGE_MASK_AREA_THRESH: - # median is more stable than centroid - # center = centroids[largest_component_id] - center = np.median((cc_labels == cid).nonzero(), axis=1)[::-1] - self.draw_text(text, center, color=lighter_color) - return self.output - - def draw_polygon(self, segment, color, edge_color=None, alpha=0.5): - """ - Args: - segment: numpy array of shape Nx2, containing all the points in the polygon. - color: color of the polygon. Refer to `matplotlib.colors` for a full list of - formats that are accepted. - edge_color: color of the polygon edges. Refer to `matplotlib.colors` for a - full list of formats that are accepted. If not provided, a darker shade - of the polygon color will be used instead. - alpha (float): blending efficient. Smaller values lead to more transparent masks. - - Returns: - output (VisImage): image object with polygon drawn. - """ - if edge_color is None: - # make edge color darker than the polygon color - if alpha > 0.8: - edge_color = self._change_color_brightness(color, brightness_factor=-0.7) - else: - edge_color = color - edge_color = mplc.to_rgb(edge_color) + (1,) - - polygon = mpl.patches.Polygon( - segment, - fill=True, - facecolor=mplc.to_rgb(color) + (alpha,), - edgecolor=edge_color, - linewidth=max(self._default_font_size // 15 * self.output.scale, 1), - ) - self.output.ax.add_patch(polygon) - return self.output - - """ - Internal methods: - """ - - def _jitter(self, color): - """ - Randomly modifies given color to produce a slightly different color than the color given. - - Args: - color (tuple[double]): a tuple of 3 elements, containing the RGB values of the color - picked. The values in the list are in the [0.0, 1.0] range. 
- - Returns: - jittered_color (tuple[double]): a tuple of 3 elements, containing the RGB values of the - color after being jittered. The values in the list are in the [0.0, 1.0] range. - """ - color = mplc.to_rgb(color) - vec = np.random.rand(3) - # better to do it in another color space - vec = vec / np.linalg.norm(vec) * 0.5 - res = np.clip(vec + color, 0, 1) - return tuple(res) - - def _create_grayscale_image(self, mask=None): - """ - Create a grayscale version of the original image. - The colors in masked area, if given, will be kept. - """ - img_bw = self.img.astype("f4").mean(axis=2) - img_bw = np.stack([img_bw] * 3, axis=2) - if mask is not None: - img_bw[mask] = self.img[mask] - return img_bw - - def _change_color_brightness(self, color, brightness_factor): - """ - Depending on the brightness_factor, gives a lighter or darker color i.e. a color with - less or more saturation than the original color. - - Args: - color: color of the polygon. Refer to `matplotlib.colors` for a full list of - formats that are accepted. - brightness_factor (float): a value in [-1.0, 1.0] range. A lightness factor of - 0 will correspond to no change, a factor in [-1.0, 0) range will result in - a darker color and a factor in (0, 1.0] range will result in a lighter color. - - Returns: - modified_color (tuple[double]): a tuple containing the RGB values of the - modified color. Each value in the tuple is in the [0.0, 1.0] range. - """ - assert brightness_factor >= -1.0 and brightness_factor <= 1.0 - color = mplc.to_rgb(color) - polygon_color = colorsys.rgb_to_hls(*mplc.to_rgb(color)) - modified_lightness = polygon_color[1] + (brightness_factor * polygon_color[1]) - modified_lightness = 0.0 if modified_lightness < 0.0 else modified_lightness - modified_lightness = 1.0 if modified_lightness > 1.0 else modified_lightness - modified_color = colorsys.hls_to_rgb(polygon_color[0], modified_lightness, polygon_color[2]) - return modified_color - - def _convert_boxes(self, boxes): - """ - Convert different format of boxes to an NxB array, where B = 4 or 5 is the box dimension. - """ - if isinstance(boxes, Boxes) or isinstance(boxes, RotatedBoxes): - return boxes.tensor.numpy() - else: - return np.asarray(boxes) - - def _convert_masks(self, masks_or_polygons): - """ - Convert different format of masks or polygons to a tuple of masks and polygons. - - Returns: - list[GenericMask]: - """ - - m = masks_or_polygons - if isinstance(m, PolygonMasks): - m = m.polygons - if isinstance(m, BitMasks): - m = m.tensor.numpy() - if isinstance(m, torch.Tensor): - m = m.numpy() - ret = [] - for x in m: - if isinstance(x, GenericMask): - ret.append(x) - else: - ret.append(GenericMask(x, self.output.height, self.output.width)) - return ret - - def _convert_keypoints(self, keypoints): - if isinstance(keypoints, Keypoints): - keypoints = keypoints.tensor - keypoints = np.asarray(keypoints) - return keypoints - - def get_output(self): - """ - Returns: - output (VisImage): the image output containing the visualizations added - to the image. 
- """ - return self.output diff --git a/spaces/huggan/BigGAN/app.py b/spaces/huggan/BigGAN/app.py deleted file mode 100644 index 5ede9a5e1854f0f01c6151ed5b9d3cd0df8ab610..0000000000000000000000000000000000000000 --- a/spaces/huggan/BigGAN/app.py +++ /dev/null @@ -1,85 +0,0 @@ -import torch -import gradio as gr -import numpy as np -import nltk -nltk.download('wordnet') -nltk.download('omw-1.4') -from PIL import Image -from pytorch_pretrained_biggan import (BigGAN, one_hot_from_names, truncated_noise_sample, - save_as_images, display_in_terminal) -initial_archi = 'biggan-deep-128' #@param ['biggan-deep-128', 'biggan-deep-256', 'biggan-deep-512'] {allow-input: true} -initial_class = 'dog' - -gan_model = BigGAN.from_pretrained(initial_archi) - -def generate_images (initial_archi, initial_class, batch_size): - truncation = 0.4 - class_vector = one_hot_from_names(initial_class, batch_size=batch_size) - noise_vector = truncated_noise_sample(truncation=truncation, batch_size=batch_size) - - # All in tensors - noise_vector = torch.from_numpy(noise_vector) - class_vector = torch.from_numpy(class_vector) - - # If you have a GPU, put everything on cuda - #noise_vector = noise_vector.to('cuda') - #class_vector = class_vector.to('cuda') - #gan_model.to('cuda') - - # Generate an image - with torch.no_grad(): - output = gan_model(noise_vector, class_vector, truncation) - - # If you have a GPU put back on CPU - output = output.to('cpu') - save_as_images(output) - return output - -def convert_to_images(obj): - """ Convert an output tensor from BigGAN in a list of images. - Params: - obj: tensor or numpy array of shape (batch_size, channels, height, width) - Output: - list of Pillow Images of size (height, width) - """ - try: - import PIL - except ImportError: - raise ImportError("Please install Pillow to use images: pip install Pillow") - - if not isinstance(obj, np.ndarray): - obj = obj.detach().numpy() - - obj = obj.transpose((0, 2, 3, 1)) - obj = np.clip(((obj + 1) / 2.0) * 256, 0, 255) - - img = [] - for i, out in enumerate(obj): - out_array = np.asarray(np.uint8(out), dtype=np.uint8) - img.append(PIL.Image.fromarray(out_array)) - return img - -def inference(initial_archi, initial_class): - output = generate_images (initial_archi, initial_class, 1) - PIL_output = convert_to_images(output) - return PIL_output[0] - - - -title = "BigGAN" -description = "BigGAN using various architecture models to generate images." 
-article="Coming soon" - -examples = [ - ["biggan-deep-128", "dog"], - ["biggan-deep-256", 'dog'], - ["biggan-deep-512", 'dog'] -] - -gr.Interface(inference, - inputs=[gr.inputs.Dropdown(["biggan-deep-128", "biggan-deep-256", "biggan-deep-512"]), "text"], - outputs= [gr.outputs.Image(type="pil",label="output")], - examples=examples, - title=title, - description=description, - article=article).launch( debug=True) \ No newline at end of file diff --git a/spaces/huggingface-course/audio-course-u7-assessment/app.py b/spaces/huggingface-course/audio-course-u7-assessment/app.py deleted file mode 100644 index 8aab31ed928b22bfdd41d702d544fea16b882772..0000000000000000000000000000000000000000 --- a/spaces/huggingface-course/audio-course-u7-assessment/app.py +++ /dev/null @@ -1,120 +0,0 @@ -import os - -import gradio as gr -import soundfile as sf -import torch -from gradio_client import Client -from huggingface_hub import Repository -from pandas import read_csv - -from transformers import pipeline - - -# load the results file from the private repo -USERNAMES_DATASET_ID = "huggingface-course/audio-course-u7-hands-on" -HF_TOKEN = os.environ.get("HF_TOKEN") - -usernames_url = os.path.join("https://huggingface.co/datasets", USERNAMES_DATASET_ID) - -usernames_repo = Repository(local_dir="usernames", clone_from=usernames_url, use_auth_token=HF_TOKEN) -usernames_repo.git_pull() - -CSV_RESULTS_FILE = os.path.join("usernames", "usernames.csv") -all_results = read_csv(CSV_RESULTS_FILE) - -# load the LID checkpoint -device = "cuda:0" if torch.cuda.is_available() else "cpu" -pipe = pipeline("audio-classification", model="facebook/mms-lid-126", device=device) - -# define some constants -TITLE = "🤗 Audio Transformers Course: Unit 7 Assessment" -DESCRIPTION = """ -Check that you have successfully completed the hands-on exercise for Unit 7 of the 🤗 Audio Transformers Course by submitting your demo to this Space. - -As a reminder, you should start with the template Space provided at [`course-demos/speech-to-speech-translation`](https://huggingface.co/spaces/course-demos/speech-to-speech-translation), -and update the Space to translate from any language X to a **non-English** language Y. Your demo should take as input an audio file, and return as output another audio file, -matching the signature of the [`speech_to_speech_translation`](https://huggingface.co/spaces/course-demos/speech-to-speech-translation/blob/3946ba6705a6632a63de8672ac52a482ab74b3fc/app.py#L35) -function in the template demo. - -To submit your demo for assessment, give the repo id or URL to your demo. For the template demo, this would be `course-demos/speech-to-speech-translation`. -You should ensure that the visibility of your demo is set to **public**. This Space will submit a test file to your demo, and check that the output is -non-English audio. If your demo successfully returns an audio file, and this audio file is classified as being non-English, you will pass the Unit and -get a green tick next to your name on the overall [course progress space](https://huggingface.co/spaces/MariaK/Check-my-progress-Audio-Course) ✅ - -If you experience any issues with using this checker, [open an issue](https://huggingface.co/spaces/huggingface-course/audio-course-u7-assessment/discussions/new) -on this Space and tag [`@sanchit-gandhi`](https://huggingface.co/sanchit-gandhi). -""" -THRESHOLD = 0.5 -PASS_MESSAGE = "Congratulations USER! Your demo passed the assessment!" 
- - -def verify_demo(repo_id): - if "/" not in repo_id: - raise gr.Error(f"Ensure you pass a valid repo id to the assessor, got `{repo_id}`") - - split_repo_id = repo_id.split("/") - user_name = split_repo_id[-2] - - if len(split_repo_id) > 2: - repo_id = "/".join(split_repo_id[-2:]) - - if (all_results["username"] == user_name).any(): - raise gr.Error(f"Username {user_name} has already passed the assessment!") - - try: - client = Client(repo_id, hf_token=HF_TOKEN) - except Exception as e: - raise gr.Error("Error with loading Space. First check that your Space has been built and is running." - "Then check that your Space takes an audio file as input and returns an audio as output. If it is working" - f"as expected and the error persists, open an issue on this Space. Error: {e}" - ) - - try: - audio_file = client.predict("test_short.wav", api_name="/predict") - except Exception as e: - raise gr.Error( - f"Error with querying Space, check that your Space takes an audio file as input and returns an audio as output: {e}" - ) - - audio, sampling_rate = sf.read(audio_file) - - language_prediction = pipe({"array": audio, "sampling_rate": sampling_rate}) - - label_outputs = {} - for pred in language_prediction: - label_outputs[pred["label"]] = pred["score"] - - top_prediction = language_prediction[0] - - if top_prediction["score"] < THRESHOLD: - raise gr.Error( - f"Model made random predictions - predicted {top_prediction['label']} with probability {top_prediction['score']}" - ) - elif top_prediction["label"] == "eng": - raise gr.Error( - "Model generated an English audio - ensure the model is set to generate audio in a non-English langauge, e.g. Dutch" - ) - - # save and upload new evaluated usernames - all_results.loc[len(all_results)] = {"username": user_name} - all_results.to_csv(CSV_RESULTS_FILE, index=False) - usernames_repo.push_to_hub() - - message = PASS_MESSAGE.replace("USER", user_name) - - return message, "test_short.wav", (sampling_rate, audio), label_outputs - - -demo = gr.Interface( - fn=verify_demo, - inputs=gr.Textbox(placeholder="course-demos/speech-to-speech-translation", label="Repo id or URL of your demo"), - outputs=[ - gr.Textbox(label="Status"), - gr.Audio(label="Source Speech", type="filepath"), - gr.Audio(label="Generated Speech", type="numpy"), - gr.Label(label="Language prediction"), - ], - title=TITLE, - description=DESCRIPTION, -) -demo.launch() diff --git a/spaces/huggingface-projects/stable-diffusion-multiplayer/frontend/src/app.css b/spaces/huggingface-projects/stable-diffusion-multiplayer/frontend/src/app.css deleted file mode 100644 index 2a426e9f12c93e5a53be15ac59c24639845f0552..0000000000000000000000000000000000000000 --- a/spaces/huggingface-projects/stable-diffusion-multiplayer/frontend/src/app.css +++ /dev/null @@ -1,25 +0,0 @@ -@tailwind base; -@tailwind components; -@tailwind utilities; - -/* Firefox */ -.x-scroll { - scrollbar-width: thin; - scrollbar-color: white #2F6DCB; -} - -/* Chrome, Edge, and Safari */ -.x-scroll::-webkit-scrollbar { - width: 4px; -} - -.x-scroll::-webkit-scrollbar-track { - background: white; - border-radius: 100px; -} - -.x-scroll::-webkit-scrollbar-thumb { - background-color: #2F6DCB; - border-radius: 100px; - border: 2px solid #2F6DCB; -} \ No newline at end of file diff --git a/spaces/hugginglearners/kvasir-seg/app.py b/spaces/hugginglearners/kvasir-seg/app.py deleted file mode 100644 index 364c694a70390fbb571b118c8e16ae528eb2d0a4..0000000000000000000000000000000000000000 --- a/spaces/hugginglearners/kvasir-seg/app.py +++ 
/dev/null @@ -1,40 +0,0 @@ -import gradio as gr -from fastai.vision.all import * -from huggingface_hub import from_pretrained_fastai - -def label_func(fn): return path/'masks1b-binary'/f'{fn.stem}.png' - -repo_id = "hugginglearners/kvasir-seg" -learn = from_pretrained_fastai(repo_id) - -def predict(img): - img = PILImage.create(img) - pred, _, _ = learn.predict(img) - return PILMask.create(pred*255) - -interface_options = { - "title": "kvasir-seg fastai segmentation", - "description": "Demonstration of segmentation of gastrointestinal polyp images. This app is for reference only. It should not be used for medical diagnosis. Model was trained on Kvasir SEG dataset (https://datasets.simula.no/kvasir-seg/)", - "layout": "horizontal", - "examples": [ - "cju5eftctcdbj08712gdp989f.jpg", - "cju42qet0lsq90871e50xbnuv.jpg", - "cju8b0jr0r2oi0801jiquetd5.jpg" - ], - "allow_flagging": "never" -} - -demo = gr.Interface( - fn=predict, - inputs=gr.Image(shape=(224, 224)), - outputs=gr.Image(shape=(224, 224)), - cache_examples=False, - **interface_options, -) - -launch_options = { - "enable_queue": True, - "share": False, -} - -demo.launch(**launch_options) \ No newline at end of file diff --git a/spaces/hysts/mistral-7b/README.md b/spaces/hysts/mistral-7b/README.md deleted file mode 100644 index 4f27308922118ed3265204cf2e40abb316d9e9b4..0000000000000000000000000000000000000000 --- a/spaces/hysts/mistral-7b/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Mistral-7B -emoji: 🐨 -colorFrom: pink -colorTo: blue -sdk: gradio -sdk_version: 4.1.1 -app_file: app.py -pinned: false -license: mit -suggested_hardware: t4-small ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/hzy123/bingo/src/pages/api/create.ts b/spaces/hzy123/bingo/src/pages/api/create.ts deleted file mode 100644 index 508fa97ef609cbb215a61085711638e116235ebe..0000000000000000000000000000000000000000 --- a/spaces/hzy123/bingo/src/pages/api/create.ts +++ /dev/null @@ -1,31 +0,0 @@ -'use server' - -import { NextApiRequest, NextApiResponse } from 'next' -import { fetch, debug } from '@/lib/isomorphic' -import { createHeaders } from '@/lib/utils' - -// const API_ENDPOINT = 'https://www.bing.com/turing/conversation/create' -const API_ENDPOINT = 'https://edgeservices.bing.com/edgesvc/turing/conversation/create'; - -export default async function handler(req: NextApiRequest, res: NextApiResponse) { - try { - const headers = createHeaders(req.cookies) - - res.writeHead(200, { - 'Content-Type': 'application/json', - }) - - debug('headers', headers) - const response = await fetch(API_ENDPOINT, { method: 'GET', headers }) - .then((res) => res.text()) - - res.end(response) - } catch (e) { - return res.end(JSON.stringify({ - result: { - value: 'UnauthorizedRequest', - message: `${e}` - } - })) - } -} diff --git a/spaces/idosal/oai-proxy/src/proxy/openai.ts b/spaces/idosal/oai-proxy/src/proxy/openai.ts deleted file mode 100644 index 8e0d134802305ff21421d7d81ebd30266c06d0e6..0000000000000000000000000000000000000000 --- a/spaces/idosal/oai-proxy/src/proxy/openai.ts +++ /dev/null @@ -1,58 +0,0 @@ -import { Request, Router } from "express"; -import * as http from "http"; -import { createProxyMiddleware, fixRequestBody } from "http-proxy-middleware"; -import { logger } from "../logger"; -import { Key, keys } from "../keys"; -import { handleResponse, onError } from "./common"; - -/** - * Modifies the request body to add a randomly selected API key. 
- */ -const rewriteRequest = (proxyReq: http.ClientRequest, req: Request) => { - let key: Key; - - try { - key = keys.get(req.body?.model || "gpt-3.5")!; - } catch (err) { - proxyReq.destroy(err as any); - return; - } - - req.key = key; - proxyReq.setHeader("Authorization", `Bearer ${key.key}`); - - if (req.method === "POST" && req.body) { - if (req.body?.stream) { - req.body.stream = false; - const updatedBody = JSON.stringify(req.body); - proxyReq.setHeader("Content-Length", Buffer.byteLength(updatedBody)); - (req as any).rawBody = Buffer.from(updatedBody); - } - - // body-parser and http-proxy-middleware don't play nice together - fixRequestBody(proxyReq, req); - } -}; - -const openaiProxy = createProxyMiddleware({ - target: "https://api.openai.com", - changeOrigin: true, - on: { - proxyReq: rewriteRequest, - proxyRes: handleResponse, - error: onError, - }, - selfHandleResponse: true, - logger, -}); - -const openaiRouter = Router(); -openaiRouter.post("/v1/chat/completions", openaiProxy); -// openaiRouter.post("/v1/completions", openaiProxy); // TODO: Implement Davinci -openaiRouter.get("/v1/models", openaiProxy); -openaiRouter.use((req, res) => { - logger.warn(`Blocked openai proxy request: ${req.method} ${req.path}`); - res.status(404).json({ error: "Not found" }); -}); - -export const openai = openaiRouter; diff --git a/spaces/innnky/vits-nyaru/text/symbols.py b/spaces/innnky/vits-nyaru/text/symbols.py deleted file mode 100644 index 149fe0acb988d845b5699a62e22751a2fc2f46e3..0000000000000000000000000000000000000000 --- a/spaces/innnky/vits-nyaru/text/symbols.py +++ /dev/null @@ -1,33 +0,0 @@ -''' -Defines the set of symbols used in text input to the model. -''' - -# japanese_cleaners -_pad = '_' -_punctuation = ',.!?-' -_letters = 'AEINOQUabdefghijkmnoprstuvwyzʃʧ↓↑ ' - - -# # japanese_cleaners2 -# _pad = '_' -# _punctuation = ',.!?-~…' -# _letters = 'AEINOQUabdefghijkmnoprstuvwyzʃʧʦ↓↑ ' - - -'''# korean_cleaners -_pad = '_' -_punctuation = ',.!?…~' -_letters = 'ㄱㄴㄷㄹㅁㅂㅅㅇㅈㅊㅋㅌㅍㅎㄲㄸㅃㅆㅉㅏㅓㅗㅜㅡㅣㅐㅔ ' -''' - -'''# chinese_cleaners -_pad = '_' -_punctuation = ',。!?—…' -_letters = 'ㄅㄆㄇㄈㄉㄊㄋㄌㄍㄎㄏㄐㄑㄒㄓㄔㄕㄖㄗㄘㄙㄚㄛㄜㄝㄞㄟㄠㄡㄢㄣㄤㄥㄦㄧㄨㄩˉˊˇˋ˙ ' -''' - -# Export all symbols: -symbols = [_pad] + list(_punctuation) + list(_letters) - -# Special symbol ids -SPACE_ID = symbols.index(" ") diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Licence Recovery My Files V5.2.1 1964.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Licence Recovery My Files V5.2.1 1964.md deleted file mode 100644 index 4b2da1da3260afd1c20c4ad397774dffabfcd01e..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Licence Recovery My Files V5.2.1 1964.md +++ /dev/null @@ -1,36 +0,0 @@ -

                Licence recovery my files v5.2.1 1964


DOWNLOAD: https://urlin.us/2uEwmT



                - -: 2.74 MB - -licence recovery my files v5.1.5 1997: 1.18 MB - -licence recovery my files v4.20 beta 2002: 0.99 MB - -licence recovery my files v4.20 beta 2002: 0.98 MB - -licence recovery my files v4.0.5.2002: 0.96 MB - -licence recovery my files v4.0.5.1999: 0.95 MB - -licence recovery my files v3.2.0.2002: 0.94 MB - -licence recovery my files v3.2.0.2000: 0.93 MB - -licence recovery my files v3.2.0.1999: 0.93 MB - -licence recovery my files v3.2.0.1996: 0.92 MB - -licence recovery my files v3.1.0.2000: 0.91 MB - -licence recovery my files v3.1.0.1996: 0.90 MB - -licence recovery my files v3.1.0.1994: 0.90 MB - -licence recovery my files v3.0.1.1996: 0.90 MB - -licence recovery my files v3.0.0.1994: 0.89 MB - -licence recovery my files v3.0. 4fefd39f24
                -
                -
                -

                diff --git a/spaces/inreVtussa/clothingai/Examples/CRACK Icecream Screen Recorder Pro 5.76 Activator [CracksMind] 2021.md b/spaces/inreVtussa/clothingai/Examples/CRACK Icecream Screen Recorder Pro 5.76 Activator [CracksMind] 2021.md deleted file mode 100644 index 27896403ea6e2f27b65ecb221fb689179b4ec62b..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/CRACK Icecream Screen Recorder Pro 5.76 Activator [CracksMind] 2021.md +++ /dev/null @@ -1,6 +0,0 @@ -

                CRACK Icecream Screen Recorder Pro 5.76 Activator [CracksMind]


Download File: https://tiurll.com/2uCl4R



                -
                -CRACK Icecream Screen Recorder Pro 5.76 Activator [CracksMind] · Torrent Alpha Blondy Dernier Album 2013 · Jai Maa Vaishanav Devi 720p ... 1fdad05405
                -
                -
                -

                diff --git a/spaces/instantnoodle/Fruits-classifier/README.md b/spaces/instantnoodle/Fruits-classifier/README.md deleted file mode 100644 index 2cba7c192a23333b043873985d34c5a525f160e9..0000000000000000000000000000000000000000 --- a/spaces/instantnoodle/Fruits-classifier/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Fruits Classifier -emoji: 🏃 -colorFrom: red -colorTo: pink -sdk: gradio -sdk_version: 3.1.4 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/jbilcke-hf/LifeSim/src/components/ui/badge.tsx b/spaces/jbilcke-hf/LifeSim/src/components/ui/badge.tsx deleted file mode 100644 index 8a05c5e844f6551efb3b35a0a23c748a9a6639b4..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/LifeSim/src/components/ui/badge.tsx +++ /dev/null @@ -1,36 +0,0 @@ -import * as React from "react" -import { cva, type VariantProps } from "class-variance-authority" - -import { cn } from "@/lib/utils" - -const badgeVariants = cva( - "inline-flex items-center rounded-full border border-stone-200 px-2.5 py-0.5 text-xs font-semibold transition-colors focus:outline-none focus:ring-2 focus:ring-stone-400 focus:ring-offset-2 dark:border-stone-800 dark:focus:ring-stone-800", - { - variants: { - variant: { - default: - "border-transparent bg-stone-900 text-stone-50 hover:bg-stone-900/80 dark:bg-stone-50 dark:text-stone-900 dark:hover:bg-stone-50/80", - secondary: - "border-transparent bg-stone-100 text-stone-900 hover:bg-stone-100/80 dark:bg-stone-800 dark:text-stone-50 dark:hover:bg-stone-800/80", - destructive: - "border-transparent bg-red-500 text-stone-50 hover:bg-red-500/80 dark:bg-red-900 dark:text-red-50 dark:hover:bg-red-900/80", - outline: "text-stone-950 dark:text-stone-50", - }, - }, - defaultVariants: { - variant: "default", - }, - } -) - -export interface BadgeProps - extends React.HTMLAttributes, - VariantProps {} - -function Badge({ className, variant, ...props }: BadgeProps) { - return ( -
                - ) -} - -export { Badge, badgeVariants } diff --git a/spaces/jbilcke-hf/speech-recognition-server-1/app.py b/spaces/jbilcke-hf/speech-recognition-server-1/app.py deleted file mode 100644 index d5b1a9c64b3ef4b6adba7e456822347a10ebec15..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/speech-recognition-server-1/app.py +++ /dev/null @@ -1,202 +0,0 @@ -import os -os.system("pip install git+https://github.com/openai/whisper.git") -import gradio as gr -import whisper - -from share_btn import community_icon_html, loading_icon_html, share_js - -model = whisper.load_model("small") - - - -def inference(audio): - audio = whisper.load_audio(audio) - audio = whisper.pad_or_trim(audio) - - mel = whisper.log_mel_spectrogram(audio).to(model.device) - - _, probs = model.detect_language(mel) - - options = whisper.DecodingOptions(fp16 = False) - result = whisper.decode(model, mel, options) - - print(result.text) - return result.text, gr.update(visible=True), gr.update(visible=True), gr.update(visible=True) - - - - -css = """ - .gradio-container { - font-family: 'IBM Plex Sans', sans-serif; - } - .gr-button { - color: white; - border-color: black; - background: black; - } - input[type='range'] { - accent-color: black; - } - .dark input[type='range'] { - accent-color: #dfdfdf; - } - .container { - max-width: 730px; - margin: auto; - padding-top: 1.5rem; - } - - .details:hover { - text-decoration: underline; - } - .gr-button { - white-space: nowrap; - } - .gr-button:focus { - border-color: rgb(147 197 253 / var(--tw-border-opacity)); - outline: none; - box-shadow: var(--tw-ring-offset-shadow), var(--tw-ring-shadow), var(--tw-shadow, 0 0 #0000); - --tw-border-opacity: 1; - --tw-ring-offset-shadow: var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color); - --tw-ring-shadow: var(--tw-ring-inset) 0 0 0 calc(3px var(--tw-ring-offset-width)) var(--tw-ring-color); - --tw-ring-color: rgb(191 219 254 / var(--tw-ring-opacity)); - --tw-ring-opacity: .5; - } - .footer { - margin-bottom: 45px; - margin-top: 35px; - text-align: center; - border-bottom: 1px solid #e5e5e5; - } - .footer>p { - font-size: .8rem; - display: inline-block; - padding: 0 10px; - transform: translateY(10px); - background: white; - } - .dark .footer { - border-color: #303030; - } - .dark .footer>p { - background: #0b0f19; - } - .prompt h4{ - margin: 1.25em 0 .25em 0; - font-weight: bold; - font-size: 115%; - } - .animate-spin { - animation: spin 1s linear infinite; - } - @keyframes spin { - from { - transform: rotate(0deg); - } - to { - transform: rotate(360deg); - } - } - #share-btn-container { - display: flex; margin-top: 1.5rem !important; padding-left: 0.5rem !important; padding-right: 0.5rem !important; background-color: #000000; justify-content: center; align-items: center; border-radius: 9999px !important; width: 13rem; - } - #share-btn { - all: initial; color: #ffffff;font-weight: 600; cursor:pointer; font-family: 'IBM Plex Sans', sans-serif; margin-left: 0.5rem !important; padding-top: 0.25rem !important; padding-bottom: 0.25rem !important; - } - #share-btn * { - all: unset; - } -""" - -block = gr.Blocks(css=css) - - - -with block: - gr.HTML( - """ -
                -

                - Whisper -

                -
                -

                - Whisper is a general-purpose speech recognition model. It is trained on a large dataset of diverse audio and is also a multi-task model that can perform multilingual speech recognition as well as speech translation and language identification. This demo cuts audio after around 30 secs. -

                -

                You can skip the queue by using google colab for the space: Open In Colab

                -
                - """ - ) - with gr.Group(): - with gr.Box(): - with gr.Row().style(mobile_collapse=False, equal_height=True): - audio = gr.Audio( - label="Input Audio", - show_label=False, - source="microphone", - type="filepath" - ) - - btn = gr.Button("Transcribe") - text = gr.Textbox(show_label=False, elem_id="result-textarea") - with gr.Group(elem_id="share-btn-container"): - community_icon = gr.HTML(community_icon_html, visible=False) - loading_icon = gr.HTML(loading_icon_html, visible=False) - share_button = gr.Button("Share to community", elem_id="share-btn", visible=False) - - - - - btn.click(inference, inputs=[audio], outputs=[text, community_icon, loading_icon, share_button], api_name="transcribe") - share_button.click(None, [], [], _js=share_js) - - gr.HTML(''' - - ''') - -block.launch() \ No newline at end of file diff --git a/spaces/jcenaa/Segment-Any-RGBD/open_vocab_seg/modeling/heads/mask_former_head.py b/spaces/jcenaa/Segment-Any-RGBD/open_vocab_seg/modeling/heads/mask_former_head.py deleted file mode 100644 index 5f592662f92d1b0862a3ef76304e7b28b46ecf80..0000000000000000000000000000000000000000 --- a/spaces/jcenaa/Segment-Any-RGBD/open_vocab_seg/modeling/heads/mask_former_head.py +++ /dev/null @@ -1,135 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# Copyright (c) Meta Platforms, Inc. All Rights Reserved - -import logging -from copy import deepcopy -from typing import Callable, Dict, List, Optional, Tuple, Union - -import fvcore.nn.weight_init as weight_init -from torch import nn -from torch.nn import functional as F - -from detectron2.config import configurable -from detectron2.layers import Conv2d, ShapeSpec, get_norm -from detectron2.modeling import SEM_SEG_HEADS_REGISTRY - -from ..transformer.transformer_predictor import TransformerPredictor -from .pixel_decoder import build_pixel_decoder - - -@SEM_SEG_HEADS_REGISTRY.register() -class MaskFormerHead(nn.Module): - - _version = 2 - - def _load_from_state_dict( - self, - state_dict, - prefix, - local_metadata, - strict, - missing_keys, - unexpected_keys, - error_msgs, - ): - version = local_metadata.get("version", None) - if version is None or version < 2: - # Do not warn if train from scratch - scratch = True - logger = logging.getLogger(__name__) - for k in list(state_dict.keys()): - newk = k - if "sem_seg_head" in k and not k.startswith(prefix + "predictor"): - newk = k.replace(prefix, prefix + "pixel_decoder.") - # logger.debug(f"{k} ==> {newk}") - if newk != k: - state_dict[newk] = state_dict[k] - del state_dict[k] - scratch = False - - if not scratch: - logger.warning( - f"Weight format of {self.__class__.__name__} have changed! " - "Please upgrade your models. Applying automatic conversion now ..." - ) - - @configurable - def __init__( - self, - input_shape: Dict[str, ShapeSpec], - *, - num_classes: int, - pixel_decoder: nn.Module, - loss_weight: float = 1.0, - ignore_value: int = -1, - # extra parameters - transformer_predictor: nn.Module, - transformer_in_feature: str, - ): - """ - NOTE: this interface is experimental. - Args: - input_shape: shapes (channels and stride) of the input features - num_classes: number of classes to predict - pixel_decoder: the pixel decoder module - loss_weight: loss weight - ignore_value: category id to be ignored during training. 
- transformer_predictor: the transformer decoder that makes prediction - transformer_in_feature: input feature name to the transformer_predictor - """ - super().__init__() - input_shape = sorted(input_shape.items(), key=lambda x: x[1].stride) - self.in_features = [k for k, v in input_shape] - feature_strides = [v.stride for k, v in input_shape] - feature_channels = [v.channels for k, v in input_shape] - - self.ignore_value = ignore_value - self.common_stride = 4 - self.loss_weight = loss_weight - - self.pixel_decoder = pixel_decoder - self.predictor = transformer_predictor - self.transformer_in_feature = transformer_in_feature - - self.num_classes = num_classes - - @classmethod - def from_config(cls, cfg, input_shape: Dict[str, ShapeSpec]): - return { - "input_shape": { - k: v - for k, v in input_shape.items() - if k in cfg.MODEL.SEM_SEG_HEAD.IN_FEATURES - }, - "ignore_value": cfg.MODEL.SEM_SEG_HEAD.IGNORE_VALUE, - "num_classes": cfg.MODEL.SEM_SEG_HEAD.NUM_CLASSES, - "pixel_decoder": build_pixel_decoder(cfg, input_shape), - "loss_weight": cfg.MODEL.SEM_SEG_HEAD.LOSS_WEIGHT, - "transformer_in_feature": cfg.MODEL.MASK_FORMER.TRANSFORMER_IN_FEATURE, - "transformer_predictor": TransformerPredictor( - cfg, - cfg.MODEL.SEM_SEG_HEAD.CONVS_DIM - if cfg.MODEL.MASK_FORMER.TRANSFORMER_IN_FEATURE == "transformer_encoder" - else input_shape[cfg.MODEL.MASK_FORMER.TRANSFORMER_IN_FEATURE].channels, - mask_classification=True, - ), - } - - def forward(self, features): - return self.layers(features) - - def layers(self, features): - ( - mask_features, - transformer_encoder_features, - ) = self.pixel_decoder.forward_features(features) - if self.transformer_in_feature == "transformer_encoder": - assert ( - transformer_encoder_features is not None - ), "Please use the TransformerEncoderPixelDecoder." - predictions = self.predictor(transformer_encoder_features, mask_features) - else: - predictions = self.predictor( - features[self.transformer_in_feature], mask_features - ) - return predictions diff --git a/spaces/jeffrymahbuubi/bert-advanced-cnn-hate-speech-classification/README.md b/spaces/jeffrymahbuubi/bert-advanced-cnn-hate-speech-classification/README.md deleted file mode 100644 index 68082a484185912887992b49dfe6fa1d7382b5c7..0000000000000000000000000000000000000000 --- a/spaces/jeffrymahbuubi/bert-advanced-cnn-hate-speech-classification/README.md +++ /dev/null @@ -1,26 +0,0 @@ ---- -title: BERT Advanced CNN Hate Speech Detection -license: mit -emoji: 😠 -app_file: app.py -sdk: gradio -colorFrom: yellow -colorTo: gray ---- - -# BERT + Advanced 5-layer CNN for Hate Speech Classification - -Bert Hate Speech Classification is a project that aims to classify hate speech from [Davidson Dataset](https://github.com/t-davidson/hate-speech-and-offensive-language). The project is built using BERT and adding Advanced 5-Layer CNN to improve the performance of the model. - -This project was the final class project for the Data Mining course offered by National Cheng Kung University and taught by Professor [Eric Hsueh-Chan Lu (呂學展)](https://www.geomatics.ncku.edu.tw/laboratory.php?tpl=19) - -## Dataset - -The Davidson Dataset consist of three different labels, which are: Hate Speech (0), Offensive Language (1), and Neither (2). The dataset is unbalanced, with the majority of the data is labeled as Offensive Language. The dataset is also noisy, with some of the data is mislabeled. The maximum word length of the dataset is 87 words. 
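As a hedged sketch of the preprocessing such a classifier needs, the snippet below tokenizes a tweet with a stock BERT tokenizer and lists the three Davidson label names; `bert-base-uncased` and the 128-token cap are assumptions for illustration, and this is not the Space's actual BERT + 5-layer CNN model.

```python
# Sketch only: `bert-base-uncased` and max_length=128 are illustrative assumptions,
# not the Space's actual BERT + 5-layer CNN classifier.
from transformers import AutoTokenizer

LABELS = {0: "Hate Speech", 1: "Offensive Language", 2: "Neither"}

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# Tweets in the dataset stay under 87 words, so 128 tokens is a comfortable cap.
batch = tokenizer(["example tweet text"], truncation=True, max_length=128, return_tensors="pt")
print(batch["input_ids"].shape, LABELS)
```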
- -## Contributors - -| Name | Role | The Worked Distribution | Deployment | -| ---------------------- | --------------- | ----------------------- | -------------------------------------------------------- | -| Cendra Deyana Putra | Model Developer | `Model Builder` | [@data_mining/cendra](https://github.com/Cendra123) | -| Aunuun Jeffry Mahbuubi | Model Deployer | `Model Deployer` | [@data_mining/jeffry](https://github.com/jeffrymahbuubi) | \ No newline at end of file diff --git a/spaces/jgurzoni/image_background_swapper/saicinpainting/training/trainers/__init__.py b/spaces/jgurzoni/image_background_swapper/saicinpainting/training/trainers/__init__.py deleted file mode 100644 index c59241f553efe4e2dd6b198e2e5656a2b1488857..0000000000000000000000000000000000000000 --- a/spaces/jgurzoni/image_background_swapper/saicinpainting/training/trainers/__init__.py +++ /dev/null @@ -1,30 +0,0 @@ -import logging -import torch -from saicinpainting.training.trainers.default import DefaultInpaintingTrainingModule - - -def get_training_model_class(kind): - if kind == 'default': - return DefaultInpaintingTrainingModule - - raise ValueError(f'Unknown trainer module {kind}') - - -def make_training_model(config): - kind = config.training_model.kind - kwargs = dict(config.training_model) - kwargs.pop('kind') - kwargs['use_ddp'] = config.trainer.kwargs.get('accelerator', None) == 'ddp' - - logging.info(f'Make training model {kind}') - - cls = get_training_model_class(kind) - return cls(config, **kwargs) - - -def load_checkpoint(train_config, path, map_location='cuda', strict=True): - model: torch.nn.Module = make_training_model(train_config) - state = torch.load(path, map_location=map_location) - model.load_state_dict(state['state_dict'], strict=strict) - model.on_load_checkpoint(state) - return model diff --git a/spaces/jjeamin/ArcaneStyleTransfer/app.py b/spaces/jjeamin/ArcaneStyleTransfer/app.py deleted file mode 100644 index 7b4dec4856a48b04212558032e5d6edecb4ed21d..0000000000000000000000000000000000000000 --- a/spaces/jjeamin/ArcaneStyleTransfer/app.py +++ /dev/null @@ -1,74 +0,0 @@ -import os -os.system("pip freeze") - -import torch -import PIL -import gradio as gr -import torch -from utils import align_face -from torchvision import transforms -from huggingface_hub import hf_hub_download - -device = "cuda:0" if torch.cuda.is_available() else "cpu" - -image_size = 512 -transform_size = 1024 - -means = [0.5, 0.5, 0.5] -stds = [0.5, 0.5, 0.5] - -img_transforms = transforms.Compose([ - transforms.ToTensor(), - transforms.Normalize(means, stds)]) - -model_path = hf_hub_download(repo_id="jjeamin/ArcaneStyleTransfer", filename="pytorch_model.bin") - -if 'cuda' in device: - style_transfer = torch.jit.load(model_path).eval().cuda().half() - t_stds = torch.tensor(stds).cuda().half()[:,None,None] - t_means = torch.tensor(means).cuda().half()[:,None,None] -else: - style_transfer = torch.jit.load(model_path).eval().cpu() - t_stds = torch.tensor(stds).cpu()[:,None,None] - t_means = torch.tensor(means).cpu()[:,None,None] - -def tensor2im(var): - return var.mul(t_stds).add(t_means).mul(255.).clamp(0,255).permute(1,2,0) - -def proc_pil_img(input_image): - if 'cuda' in device: - transformed_image = img_transforms(input_image)[None,...].cuda().half() - else: - transformed_image = img_transforms(input_image)[None,...].cpu() - - with torch.no_grad(): - result_image = style_transfer(transformed_image)[0] - output_image = tensor2im(result_image) - output_image = output_image.detach().cpu().numpy().astype('uint8') - output_image 
= PIL.Image.fromarray(output_image) - return output_image - -def process(im, is_align): - im = PIL.ImageOps.exif_transpose(im) - - if is_align == 'True': - im = align_face(im, output_size=image_size, transform_size=transform_size) - else: - pass - - res = proc_pil_img(im) - - return res - -gr.Interface( - process, - inputs=[gr.inputs.Image(type="pil", label="Input", shape=(image_size, image_size)), gr.inputs.Radio(['True','False'], type="value", default='True', label='face align')], - outputs=gr.outputs.Image(type="pil", label="Output"), - title="Arcane Style Transfer", - description="Gradio demo for Arcane Style Transfer", - article = "

                GitHub Repo by jjeamin

                visitor badge

                ", - examples=[['billie.png', 'True'], ['gongyoo.jpeg', 'True'], ['IU.png', 'True'], ['elon.png', 'True']], - enable_queue=True, - allow_flagging=False, - allow_screenshot=False - ).launch(enable_queue=True) diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/SelfTest/PublicKey/test_import_DSA.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/SelfTest/PublicKey/test_import_DSA.py deleted file mode 100644 index 266b46f011bbd3e0adec375928ad600f592ecc4f..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/SelfTest/PublicKey/test_import_DSA.py +++ /dev/null @@ -1,554 +0,0 @@ -# -*- coding: utf-8 -*- -# -# SelfTest/PublicKey/test_import_DSA.py: Self-test for importing DSA keys -# -# =================================================================== -# The contents of this file are dedicated to the public domain. To -# the extent that dedication to the public domain is not available, -# everyone is granted a worldwide, perpetual, royalty-free, -# non-exclusive license to exercise all rights associated with the -# contents of this file for any purpose whatsoever. -# No rights are reserved. -# -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS -# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN -# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN -# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. -# =================================================================== - -import unittest -import re - -from Crypto.PublicKey import DSA -from Crypto.SelfTest.st_common import * -from Crypto.Util.py3compat import * - -from binascii import unhexlify - -class ImportKeyTests(unittest.TestCase): - - y = 92137165128186062214622779787483327510946462589285775188003362705875131352591574106484271700740858696583623951844732128165434284507709057439633739849986759064015013893156866539696757799934634945787496920169462601722830899660681779448742875054459716726855443681559131362852474817534616736104831095601710736729 - p = 162452170958135306109773853318304545923250830605675936228618290525164105310663722368377131295055868997377338797580997938253236213714988311430600065853662861806894003694743806769284131194035848116051021923956699231855223389086646903420682639786976554552864568460372266462812137447840653688476258666833303658691 - q = 988791743931120302950649732173330531512663554851 - g = 85583152299197514738065570254868711517748965097380456700369348466136657764813442044039878840094809620913085570225318356734366886985903212775602770761953571967834823306046501307810937486758039063386311593890777319935391363872375452381836756832784184928202587843258855704771836753434368484556809100537243908232 - x = 540873410045082450874416847965843801027716145253 - - def setUp(self): - - # It is easier to write test vectors in text form, - # and convert them to byte strigs dynamically here - for mname, mvalue in ImportKeyTests.__dict__.items(): - if mname[:4] in ('der_', 'pem_', 'ssh_'): - if mname[:4] == 'der_': - mvalue = unhexlify(tobytes(mvalue)) - mvalue = tobytes(mvalue) - setattr(self, mname, mvalue) - - # 1. 
SubjectPublicKeyInfo - der_public=\ - '308201b73082012b06072a8648ce3804013082011e02818100e756ee1717f4b6'+\ - '794c7c214724a19763742c45572b4b3f8ff3b44f3be9f44ce039a2757695ec91'+\ - '5697da74ef914fcd1b05660e2419c761d639f45d2d79b802dbd23e7ab8b81b47'+\ - '9a380e1f30932584ba2a0b955032342ebc83cb5ca906e7b0d7cd6fe656cecb4c'+\ - '8b5a77123a8c6750a481e3b06057aff6aa6eba620b832d60c3021500ad32f48c'+\ - 'd3ae0c45a198a61fa4b5e20320763b2302818079dfdc3d614fe635fceb7eaeae'+\ - '3718dc2efefb45282993ac6749dc83c223d8c1887296316b3b0b54466cf444f3'+\ - '4b82e3554d0b90a778faaf1306f025dae6a3e36c7f93dd5bac4052b92370040a'+\ - 'ca70b8d5820599711900efbc961812c355dd9beffe0981da85c5548074b41c56'+\ - 'ae43fd300d89262e4efd89943f99a651b03888038185000281810083352a69a1'+\ - '32f34843d2a0eb995bff4e2f083a73f0049d2c91ea2f0ce43d144abda48199e4'+\ - 'b003c570a8af83303d45105f606c5c48d925a40ed9c2630c2fa4cdbf838539de'+\ - 'b9a29f919085f2046369f627ca84b2cb1e2c7940564b670f963ab1164d4e2ca2'+\ - 'bf6ffd39f12f548928bf4d2d1b5e6980b4f1be4c92a91986fba559' - - def testImportKey1(self): - key_obj = DSA.importKey(self.der_public) - self.assertFalse(key_obj.has_private()) - self.assertEqual(self.y, key_obj.y) - self.assertEqual(self.p, key_obj.p) - self.assertEqual(self.q, key_obj.q) - self.assertEqual(self.g, key_obj.g) - - def testExportKey1(self): - tup = (self.y, self.g, self.p, self.q) - key = DSA.construct(tup) - encoded = key.export_key('DER') - self.assertEqual(self.der_public, encoded) - - # 2. - pem_public="""\ ------BEGIN PUBLIC KEY----- -MIIBtzCCASsGByqGSM44BAEwggEeAoGBAOdW7hcX9LZ5THwhRyShl2N0LEVXK0s/ -j/O0Tzvp9EzgOaJ1dpXskVaX2nTvkU/NGwVmDiQZx2HWOfRdLXm4AtvSPnq4uBtH -mjgOHzCTJYS6KguVUDI0LryDy1ypBuew181v5lbOy0yLWncSOoxnUKSB47BgV6/2 -qm66YguDLWDDAhUArTL0jNOuDEWhmKYfpLXiAyB2OyMCgYB539w9YU/mNfzrfq6u -NxjcLv77RSgpk6xnSdyDwiPYwYhyljFrOwtURmz0RPNLguNVTQuQp3j6rxMG8CXa -5qPjbH+T3VusQFK5I3AECspwuNWCBZlxGQDvvJYYEsNV3Zvv/gmB2oXFVIB0tBxW -rkP9MA2JJi5O/YmUP5mmUbA4iAOBhQACgYEAgzUqaaEy80hD0qDrmVv/Ti8IOnPw -BJ0skeovDOQ9FEq9pIGZ5LADxXCor4MwPUUQX2BsXEjZJaQO2cJjDC+kzb+DhTne -uaKfkZCF8gRjafYnyoSyyx4seUBWS2cPljqxFk1OLKK/b/058S9UiSi/TS0bXmmA -tPG+TJKpGYb7pVk= ------END PUBLIC KEY-----""" - - def testImportKey2(self): - for pem in (self.pem_public, tostr(self.pem_public)): - key_obj = DSA.importKey(pem) - self.assertFalse(key_obj.has_private()) - self.assertEqual(self.y, key_obj.y) - self.assertEqual(self.p, key_obj.p) - self.assertEqual(self.q, key_obj.q) - self.assertEqual(self.g, key_obj.g) - - def testExportKey2(self): - tup = (self.y, self.g, self.p, self.q) - key = DSA.construct(tup) - encoded = key.export_key('PEM') - self.assertEqual(self.pem_public, encoded) - - # 3. 
OpenSSL/OpenSSH format - der_private=\ - '308201bb02010002818100e756ee1717f4b6794c7c214724a19763742c45572b'+\ - '4b3f8ff3b44f3be9f44ce039a2757695ec915697da74ef914fcd1b05660e2419'+\ - 'c761d639f45d2d79b802dbd23e7ab8b81b479a380e1f30932584ba2a0b955032'+\ - '342ebc83cb5ca906e7b0d7cd6fe656cecb4c8b5a77123a8c6750a481e3b06057'+\ - 'aff6aa6eba620b832d60c3021500ad32f48cd3ae0c45a198a61fa4b5e2032076'+\ - '3b2302818079dfdc3d614fe635fceb7eaeae3718dc2efefb45282993ac6749dc'+\ - '83c223d8c1887296316b3b0b54466cf444f34b82e3554d0b90a778faaf1306f0'+\ - '25dae6a3e36c7f93dd5bac4052b92370040aca70b8d5820599711900efbc9618'+\ - '12c355dd9beffe0981da85c5548074b41c56ae43fd300d89262e4efd89943f99'+\ - 'a651b038880281810083352a69a132f34843d2a0eb995bff4e2f083a73f0049d'+\ - '2c91ea2f0ce43d144abda48199e4b003c570a8af83303d45105f606c5c48d925'+\ - 'a40ed9c2630c2fa4cdbf838539deb9a29f919085f2046369f627ca84b2cb1e2c'+\ - '7940564b670f963ab1164d4e2ca2bf6ffd39f12f548928bf4d2d1b5e6980b4f1'+\ - 'be4c92a91986fba55902145ebd9a3f0b82069d98420986b314215025756065' - - def testImportKey3(self): - key_obj = DSA.importKey(self.der_private) - self.assertTrue(key_obj.has_private()) - self.assertEqual(self.y, key_obj.y) - self.assertEqual(self.p, key_obj.p) - self.assertEqual(self.q, key_obj.q) - self.assertEqual(self.g, key_obj.g) - self.assertEqual(self.x, key_obj.x) - - def testExportKey3(self): - tup = (self.y, self.g, self.p, self.q, self.x) - key = DSA.construct(tup) - encoded = key.export_key('DER', pkcs8=False) - self.assertEqual(self.der_private, encoded) - - # 4. - pem_private="""\ ------BEGIN DSA PRIVATE KEY----- -MIIBuwIBAAKBgQDnVu4XF/S2eUx8IUckoZdjdCxFVytLP4/ztE876fRM4DmidXaV -7JFWl9p075FPzRsFZg4kGcdh1jn0XS15uALb0j56uLgbR5o4Dh8wkyWEuioLlVAy -NC68g8tcqQbnsNfNb+ZWzstMi1p3EjqMZ1CkgeOwYFev9qpuumILgy1gwwIVAK0y -9IzTrgxFoZimH6S14gMgdjsjAoGAed/cPWFP5jX8636urjcY3C7++0UoKZOsZ0nc -g8Ij2MGIcpYxazsLVEZs9ETzS4LjVU0LkKd4+q8TBvAl2uaj42x/k91brEBSuSNw -BArKcLjVggWZcRkA77yWGBLDVd2b7/4JgdqFxVSAdLQcVq5D/TANiSYuTv2JlD+Z -plGwOIgCgYEAgzUqaaEy80hD0qDrmVv/Ti8IOnPwBJ0skeovDOQ9FEq9pIGZ5LAD -xXCor4MwPUUQX2BsXEjZJaQO2cJjDC+kzb+DhTneuaKfkZCF8gRjafYnyoSyyx4s -eUBWS2cPljqxFk1OLKK/b/058S9UiSi/TS0bXmmAtPG+TJKpGYb7pVkCFF69mj8L -ggadmEIJhrMUIVAldWBl ------END DSA PRIVATE KEY-----""" - - def testImportKey4(self): - for pem in (self.pem_private, tostr(self.pem_private)): - key_obj = DSA.importKey(pem) - self.assertTrue(key_obj.has_private()) - self.assertEqual(self.y, key_obj.y) - self.assertEqual(self.p, key_obj.p) - self.assertEqual(self.q, key_obj.q) - self.assertEqual(self.g, key_obj.g) - self.assertEqual(self.x, key_obj.x) - - def testExportKey4(self): - tup = (self.y, self.g, self.p, self.q, self.x) - key = DSA.construct(tup) - encoded = key.export_key('PEM', pkcs8=False) - self.assertEqual(self.pem_private, encoded) - - # 5. 
PKCS8 (unencrypted) - der_pkcs8=\ - '3082014a0201003082012b06072a8648ce3804013082011e02818100e756ee17'+\ - '17f4b6794c7c214724a19763742c45572b4b3f8ff3b44f3be9f44ce039a27576'+\ - '95ec915697da74ef914fcd1b05660e2419c761d639f45d2d79b802dbd23e7ab8'+\ - 'b81b479a380e1f30932584ba2a0b955032342ebc83cb5ca906e7b0d7cd6fe656'+\ - 'cecb4c8b5a77123a8c6750a481e3b06057aff6aa6eba620b832d60c3021500ad'+\ - '32f48cd3ae0c45a198a61fa4b5e20320763b2302818079dfdc3d614fe635fceb'+\ - '7eaeae3718dc2efefb45282993ac6749dc83c223d8c1887296316b3b0b54466c'+\ - 'f444f34b82e3554d0b90a778faaf1306f025dae6a3e36c7f93dd5bac4052b923'+\ - '70040aca70b8d5820599711900efbc961812c355dd9beffe0981da85c5548074'+\ - 'b41c56ae43fd300d89262e4efd89943f99a651b03888041602145ebd9a3f0b82'+\ - '069d98420986b314215025756065' - - def testImportKey5(self): - key_obj = DSA.importKey(self.der_pkcs8) - self.assertTrue(key_obj.has_private()) - self.assertEqual(self.y, key_obj.y) - self.assertEqual(self.p, key_obj.p) - self.assertEqual(self.q, key_obj.q) - self.assertEqual(self.g, key_obj.g) - self.assertEqual(self.x, key_obj.x) - - def testExportKey5(self): - tup = (self.y, self.g, self.p, self.q, self.x) - key = DSA.construct(tup) - encoded = key.export_key('DER') - self.assertEqual(self.der_pkcs8, encoded) - encoded = key.export_key('DER', pkcs8=True) - self.assertEqual(self.der_pkcs8, encoded) - - # 6. - pem_pkcs8="""\ ------BEGIN PRIVATE KEY----- -MIIBSgIBADCCASsGByqGSM44BAEwggEeAoGBAOdW7hcX9LZ5THwhRyShl2N0LEVX -K0s/j/O0Tzvp9EzgOaJ1dpXskVaX2nTvkU/NGwVmDiQZx2HWOfRdLXm4AtvSPnq4 -uBtHmjgOHzCTJYS6KguVUDI0LryDy1ypBuew181v5lbOy0yLWncSOoxnUKSB47Bg -V6/2qm66YguDLWDDAhUArTL0jNOuDEWhmKYfpLXiAyB2OyMCgYB539w9YU/mNfzr -fq6uNxjcLv77RSgpk6xnSdyDwiPYwYhyljFrOwtURmz0RPNLguNVTQuQp3j6rxMG -8CXa5qPjbH+T3VusQFK5I3AECspwuNWCBZlxGQDvvJYYEsNV3Zvv/gmB2oXFVIB0 -tBxWrkP9MA2JJi5O/YmUP5mmUbA4iAQWAhRevZo/C4IGnZhCCYazFCFQJXVgZQ== ------END PRIVATE KEY-----""" - - def testImportKey6(self): - for pem in (self.pem_pkcs8, tostr(self.pem_pkcs8)): - key_obj = DSA.importKey(pem) - self.assertTrue(key_obj.has_private()) - self.assertEqual(self.y, key_obj.y) - self.assertEqual(self.p, key_obj.p) - self.assertEqual(self.q, key_obj.q) - self.assertEqual(self.g, key_obj.g) - self.assertEqual(self.x, key_obj.x) - - def testExportKey6(self): - tup = (self.y, self.g, self.p, self.q, self.x) - key = DSA.construct(tup) - encoded = key.export_key('PEM') - self.assertEqual(self.pem_pkcs8, encoded) - encoded = key.export_key('PEM', pkcs8=True) - self.assertEqual(self.pem_pkcs8, encoded) - - # 7. 
OpenSSH/RFC4253 - ssh_pub="""ssh-dss AAAAB3NzaC1kc3MAAACBAOdW7hcX9LZ5THwhRyShl2N0LEVXK0s/j/O0Tzvp9EzgOaJ1dpXskVaX2nTvkU/NGwVmDiQZx2HWOfRdLXm4AtvSPnq4uBtHmjgOHzCTJYS6KguVUDI0LryDy1ypBuew181v5lbOy0yLWncSOoxnUKSB47BgV6/2qm66YguDLWDDAAAAFQCtMvSM064MRaGYph+kteIDIHY7IwAAAIB539w9YU/mNfzrfq6uNxjcLv77RSgpk6xnSdyDwiPYwYhyljFrOwtURmz0RPNLguNVTQuQp3j6rxMG8CXa5qPjbH+T3VusQFK5I3AECspwuNWCBZlxGQDvvJYYEsNV3Zvv/gmB2oXFVIB0tBxWrkP9MA2JJi5O/YmUP5mmUbA4iAAAAIEAgzUqaaEy80hD0qDrmVv/Ti8IOnPwBJ0skeovDOQ9FEq9pIGZ5LADxXCor4MwPUUQX2BsXEjZJaQO2cJjDC+kzb+DhTneuaKfkZCF8gRjafYnyoSyyx4seUBWS2cPljqxFk1OLKK/b/058S9UiSi/TS0bXmmAtPG+TJKpGYb7pVk=""" - - def testImportKey7(self): - for ssh in (self.ssh_pub, tostr(self.ssh_pub)): - key_obj = DSA.importKey(ssh) - self.assertFalse(key_obj.has_private()) - self.assertEqual(self.y, key_obj.y) - self.assertEqual(self.p, key_obj.p) - self.assertEqual(self.q, key_obj.q) - self.assertEqual(self.g, key_obj.g) - - def testExportKey7(self): - tup = (self.y, self.g, self.p, self.q) - key = DSA.construct(tup) - encoded = key.export_key('OpenSSH') - self.assertEqual(self.ssh_pub, encoded) - - # 8. Encrypted OpenSSL/OpenSSH - pem_private_encrypted="""\ ------BEGIN DSA PRIVATE KEY----- -Proc-Type: 4,ENCRYPTED -DEK-Info: AES-128-CBC,70B6908939D65E9F2EB999E8729788CE - -4V6GHRDpCrdZ8MBjbyp5AlGUrjvr2Pn2e2zVxy5RBt4FBj9/pa0ae0nnyUPMLSUU -kKyOR0topRYTVRLElm4qVrb5uNZ3hRwfbklr+pSrB7O9eHz9V5sfOQxyODS07JxK -k1OdOs70/ouMXLF9EWfAZOmWUccZKHNblUwg1p1UrZIz5jXw4dUE/zqhvXh6d+iC -ADsICaBCjCrRQJKDp50h3+ndQjkYBKVH+pj8TiQ79U7lAvdp3+iMghQN6YXs9mdI -gFpWw/f97oWM4GHZFqHJ+VSMNFjBiFhAvYV587d7Lk4dhD8sCfbxj42PnfRgUItc -nnPqHxmhMQozBWzYM4mQuo3XbF2WlsNFbOzFVyGhw1Bx1s91qvXBVWJh2ozrW0s6 -HYDV7ZkcTml/4kjA/d+mve6LZ8kuuR1qCiZx6rkffhh1gDN/1Xz3HVvIy/dQ+h9s -5zp7PwUoWbhqp3WCOr156P6gR8qo7OlT6wMh33FSXK/mxikHK136fV2shwTKQVII -rJBvXpj8nACUmi7scKuTWGeUoXa+dwTZVVe+b+L2U1ZM7+h/neTJiXn7u99PFUwu -xVJtxaV37m3aXxtCsPnbBg== ------END DSA PRIVATE KEY-----""" - - def testImportKey8(self): - for pem in (self.pem_private_encrypted, tostr(self.pem_private_encrypted)): - key_obj = DSA.importKey(pem, "PWDTEST") - self.assertTrue(key_obj.has_private()) - self.assertEqual(self.y, key_obj.y) - self.assertEqual(self.p, key_obj.p) - self.assertEqual(self.q, key_obj.q) - self.assertEqual(self.g, key_obj.g) - self.assertEqual(self.x, key_obj.x) - - def testExportKey8(self): - tup = (self.y, self.g, self.p, self.q, self.x) - key = DSA.construct(tup) - encoded = key.export_key('PEM', pkcs8=False, passphrase="PWDTEST") - key = DSA.importKey(encoded, "PWDTEST") - self.assertEqual(self.y, key.y) - self.assertEqual(self.p, key.p) - self.assertEqual(self.q, key.q) - self.assertEqual(self.g, key.g) - self.assertEqual(self.x, key.x) - - # 9. 
Encrypted PKCS8 - # pbeWithMD5AndDES-CBC - pem_pkcs8_encrypted="""\ ------BEGIN ENCRYPTED PRIVATE KEY----- -MIIBcTAbBgkqhkiG9w0BBQMwDgQI0GC3BJ/jSw8CAggABIIBUHc1cXZpExIE9tC7 -7ryiW+5ihtF2Ekurq3e408GYSAu5smJjN2bvQXmzRFBz8W38K8eMf1sbWroZ4+zn -kZSbb9nSm5kAa8lR2+oF2k+WRswMR/PTC3f/D9STO2X0QxdrzKgIHEcSGSHp5jTx -aVvbkCDHo9vhBTl6S3ogZ48As/MEro76+9igUwJ1jNhIQZPJ7e20QH5qDpQFFJN4 -CKl2ENSEuwGiqBszItFy4dqH0g63ZGZV/xt9wSO9Rd7SK/EbA/dklOxBa5Y/VItM -gnIhs9XDMoGYyn6F023EicNJm6g/bVQk81BTTma4tm+12TKGdYm+QkeZvCOMZylr -Wv67cKwO3cAXt5C3QXMDgYR64XvuaT5h7C0igMp2afSXJlnbHEbFxQVJlv83T4FM -eZ4k+NQDbEL8GiHmFxzDWQAuPPZKJWEEEV2p/To+WOh+kSDHQw== ------END ENCRYPTED PRIVATE KEY-----""" - - def testImportKey9(self): - for pem in (self.pem_pkcs8_encrypted, tostr(self.pem_pkcs8_encrypted)): - key_obj = DSA.importKey(pem, "PWDTEST") - self.assertTrue(key_obj.has_private()) - self.assertEqual(self.y, key_obj.y) - self.assertEqual(self.p, key_obj.p) - self.assertEqual(self.q, key_obj.q) - self.assertEqual(self.g, key_obj.g) - self.assertEqual(self.x, key_obj.x) - - # 10. Encrypted PKCS8 - # pkcs5PBES2 / - # pkcs5PBKDF2 (rounds=1000, salt=D725BF1B6B8239F4) / - # des-EDE3-CBC (iv=27A1C66C42AFEECE) - # - der_pkcs8_encrypted=\ - '30820196304006092a864886f70d01050d3033301b06092a864886f70d01050c'+\ - '300e0408d725bf1b6b8239f4020203e8301406082a864886f70d0307040827a1'+\ - 'c66c42afeece048201505cacfde7bf8edabb3e0d387950dc872662ea7e9b1ed4'+\ - '400d2e7e6186284b64668d8d0328c33a9d9397e6f03df7cb68268b0a06b4e22f'+\ - '7d132821449ecf998a8b696dbc6dd2b19e66d7eb2edfeb4153c1771d49702395'+\ - '4f36072868b5fcccf93413a5ac4b2eb47d4b3f681c6bd67ae363ed776f45ae47'+\ - '174a00098a7c930a50f820b227ddf50f9742d8e950d02586ff2dac0e3c372248'+\ - 'e5f9b6a7a02f4004f20c87913e0f7b52bccc209b95d478256a890b31d4c9adec'+\ - '21a4d157a179a93a3dad06f94f3ce486b46dfa7fc15fd852dd7680bbb2f17478'+\ - '7e71bd8dbaf81eca7518d76c1d26256e95424864ba45ca5d47d7c5a421be02fa'+\ - 'b94ab01e18593f66cf9094eb5c94b9ecf3aa08b854a195cf87612fbe5e96c426'+\ - '2b0d573e52dc71ba3f5e468c601e816c49b7d32c698b22175e89aaef0c443770'+\ - '5ef2f88a116d99d8e2869a4fd09a771b84b49e4ccb79aadcb1c9' - - def testImportKey10(self): - key_obj = DSA.importKey(self.der_pkcs8_encrypted, "PWDTEST") - self.assertTrue(key_obj.has_private()) - self.assertEqual(self.y, key_obj.y) - self.assertEqual(self.p, key_obj.p) - self.assertEqual(self.q, key_obj.q) - self.assertEqual(self.g, key_obj.g) - self.assertEqual(self.x, key_obj.x) - - def testExportKey10(self): - tup = (self.y, self.g, self.p, self.q, self.x) - key = DSA.construct(tup) - randfunc = BytesIO(unhexlify(b("27A1C66C42AFEECE") + b("D725BF1B6B8239F4"))).read - encoded = key.export_key('DER', pkcs8=True, passphrase="PWDTEST", randfunc=randfunc) - self.assertEqual(self.der_pkcs8_encrypted, encoded) - - # ---- - - def testImportError1(self): - self.assertRaises(ValueError, DSA.importKey, self.der_pkcs8_encrypted, "wrongpwd") - - def testExportError2(self): - tup = (self.y, self.g, self.p, self.q, self.x) - key = DSA.construct(tup) - self.assertRaises(ValueError, key.export_key, 'DER', pkcs8=False, passphrase="PWDTEST") - - def test_import_key(self): - """Verify importKey is an alias to import_key""" - - key_obj = DSA.import_key(self.der_public) - self.assertFalse(key_obj.has_private()) - self.assertEqual(self.y, key_obj.y) - self.assertEqual(self.p, key_obj.p) - self.assertEqual(self.q, key_obj.q) - self.assertEqual(self.g, key_obj.g) - - def test_exportKey(self): - tup = (self.y, self.g, self.p, self.q, self.x) - key = DSA.construct(tup) - 
self.assertEqual(key.exportKey(), key.export_key()) - - - def test_import_empty(self): - self.assertRaises(ValueError, DSA.import_key, b'') - - -class ImportKeyFromX509Cert(unittest.TestCase): - - def test_x509v1(self): - - # Sample V1 certificate with a 1024 bit DSA key - x509_v1_cert = """ ------BEGIN CERTIFICATE----- -MIIDUjCCArsCAQIwDQYJKoZIhvcNAQEFBQAwfjENMAsGA1UEChMEQWNtZTELMAkG -A1UECxMCUkQxHDAaBgkqhkiG9w0BCQEWDXNwYW1AYWNtZS5vcmcxEzARBgNVBAcT -Ck1ldHJvcG9saXMxETAPBgNVBAgTCE5ldyBZb3JrMQswCQYDVQQGEwJVUzENMAsG -A1UEAxMEdGVzdDAeFw0xNDA3MTEyMDM4NDNaFw0xNzA0MDYyMDM4NDNaME0xCzAJ -BgNVBAYTAlVTMREwDwYDVQQIEwhOZXcgWW9yazENMAsGA1UEChMEQWNtZTELMAkG -A1UECxMCUkQxDzANBgNVBAMTBnBvbGFuZDCCAbYwggErBgcqhkjOOAQBMIIBHgKB -gQDOrN4Ox4+t3T6wKeHfhzArhcrNEFMQ4Ss+4PIKyimDy9Bn64WPkL1B/9dvYIga -23GLu6tVJmXo6EdJnVOHEMhr99EeOwuDWWeP7Awq7RSlKEejokr4BEzMTW/tExSD -cO6/GI7xzh0eTH+VTTPDfyrJMYCkh0rJAfCP+5xrmPNetwIVALtXYOV1yoRrzJ2Q -M5uEjidH6GiZAoGAfUqA1SAm5g5U68SILMVX9l5rq0OpB0waBMpJQ31/R/yXNDqo -c3gGWZTOJFU4IzwNpGhrGNADUByz/lc1SAOAdEJIr0JVrhbGewQjB4pWqoLGbBKz -RoavTNDc/zD7SYa12evWDHADwvlXoeQg+lWop1zS8OqaDC7aLGKpWN3/m8kDgYQA -AoGAKoirPAfcp1rbbl4y2FFAIktfW8f4+T7d2iKSg73aiVfujhNOt1Zz1lfC0NI2 -eonLWO3tAM4XGKf1TLjb5UXngGn40okPsaA81YE6ZIKm20ywjlOY3QkAEdMaLVY3 -9PJvM8RGB9m7pLKxyHfGMfF40MVN4222zKeGp7xhM0CNiCUwDQYJKoZIhvcNAQEF -BQADgYEAfbNZfpYa2KlALEM1FZnwvQDvJHntHz8LdeJ4WM7CXDlKi67wY2HKM30w -s2xej75imkVOFd1kF2d0A8sjfriXLVIt1Hwq9ANZomhu4Edx0xpH8tqdh/bDtnM2 -TmduZNY9OWkb07h0CtWD6Zt8fhRllVsSSrlWd/2or7FXNC5weFQ= ------END CERTIFICATE----- - """.strip() - - # DSA public key as dumped by openssl - y_str = """ -2a:88:ab:3c:07:dc:a7:5a:db:6e:5e:32:d8:51:40: -22:4b:5f:5b:c7:f8:f9:3e:dd:da:22:92:83:bd:da: -89:57:ee:8e:13:4e:b7:56:73:d6:57:c2:d0:d2:36: -7a:89:cb:58:ed:ed:00:ce:17:18:a7:f5:4c:b8:db: -e5:45:e7:80:69:f8:d2:89:0f:b1:a0:3c:d5:81:3a: -64:82:a6:db:4c:b0:8e:53:98:dd:09:00:11:d3:1a: -2d:56:37:f4:f2:6f:33:c4:46:07:d9:bb:a4:b2:b1: -c8:77:c6:31:f1:78:d0:c5:4d:e3:6d:b6:cc:a7:86: -a7:bc:61:33:40:8d:88:25 - """ - p_str = """ -00:ce:ac:de:0e:c7:8f:ad:dd:3e:b0:29:e1:df:87: -30:2b:85:ca:cd:10:53:10:e1:2b:3e:e0:f2:0a:ca: -29:83:cb:d0:67:eb:85:8f:90:bd:41:ff:d7:6f:60: -88:1a:db:71:8b:bb:ab:55:26:65:e8:e8:47:49:9d: -53:87:10:c8:6b:f7:d1:1e:3b:0b:83:59:67:8f:ec: -0c:2a:ed:14:a5:28:47:a3:a2:4a:f8:04:4c:cc:4d: -6f:ed:13:14:83:70:ee:bf:18:8e:f1:ce:1d:1e:4c: -7f:95:4d:33:c3:7f:2a:c9:31:80:a4:87:4a:c9:01: -f0:8f:fb:9c:6b:98:f3:5e:b7 - """ - q_str = """ -00:bb:57:60:e5:75:ca:84:6b:cc:9d:90:33:9b:84: -8e:27:47:e8:68:99 - """ - g_str = """ -7d:4a:80:d5:20:26:e6:0e:54:eb:c4:88:2c:c5:57: -f6:5e:6b:ab:43:a9:07:4c:1a:04:ca:49:43:7d:7f: -47:fc:97:34:3a:a8:73:78:06:59:94:ce:24:55:38: -23:3c:0d:a4:68:6b:18:d0:03:50:1c:b3:fe:57:35: -48:03:80:74:42:48:af:42:55:ae:16:c6:7b:04:23: -07:8a:56:aa:82:c6:6c:12:b3:46:86:af:4c:d0:dc: -ff:30:fb:49:86:b5:d9:eb:d6:0c:70:03:c2:f9:57: -a1:e4:20:fa:55:a8:a7:5c:d2:f0:ea:9a:0c:2e:da: -2c:62:a9:58:dd:ff:9b:c9 - """ - - key = DSA.importKey(x509_v1_cert) - for comp_name in ('y', 'p', 'q', 'g'): - comp_str = locals()[comp_name + "_str"] - comp = int(re.sub("[^0-9a-f]", "", comp_str), 16) - self.assertEqual(getattr(key, comp_name), comp) - self.assertFalse(key.has_private()) - - def test_x509v3(self): - - # Sample V3 certificate with a 1024 bit DSA key - x509_v3_cert = """ ------BEGIN CERTIFICATE----- -MIIFhjCCA26gAwIBAgIBAzANBgkqhkiG9w0BAQsFADBhMQswCQYDVQQGEwJVUzEL -MAkGA1UECAwCTUQxEjAQBgNVBAcMCUJhbHRpbW9yZTEQMA4GA1UEAwwHVGVzdCBD -QTEfMB0GCSqGSIb3DQEJARYQdGVzdEBleGFtcGxlLmNvbTAeFw0xNDA3MTMyMDUz 
-MjBaFw0xNzA0MDgyMDUzMjBaMEAxCzAJBgNVBAYTAlVTMQswCQYDVQQIDAJNRDES -MBAGA1UEBwwJQmFsdGltb3JlMRAwDgYDVQQDDAdhdXN0cmlhMIIBtjCCASsGByqG -SM44BAEwggEeAoGBALfd8gyEpVPA0ZI69Kp3nyJcu5N0ZZ3K1K9hleQLNqKEcZOh -7a/C2J1TPdmHTLJ0rAwBZ1nWxnARSgRphziGDFspKCYQwYcSMz8KoFgvXbXpuchy -oFACiQ2LqZnc5MakuLQtLcQciSYGYj3zmZdYMoa904F1aDWr+DxQI6DVC3/bAhUA -hqXMCJ6fQK3G2O9S3/CC/yVZXCsCgYBRXROl3R2khX7l10LQjDEgo3B1IzjXU/jP -McMBl6XO+nBJXxr/scbq8Ajiv7LTnGpSjgryHtvfj887kfvo8QbSS3kp3vq5uSqI -ui7E7r3jguWaLj616AG1HWOctXJUjqsiabZwsp2h09gHTzmHEXBOmiARu8xFxKAH -xsuo7onAbwOBhAACgYBylWjWSnKHE8mHx1A5m/0GQx6xnhWIe3+MJAnEhRGxA2J4 -SCsfWU0OwglIQToh1z5uUU9oDi9cYgNPBevOFRnDhc2yaJY6VAYnI+D+6J5IU6Yd -0iaG/iSc4sV4bFr0axcPpse3SN0XaQxiKeSFBfFnoMqL+dd9Gb3QPZSllBcVD6OB -1TCB0jAdBgNVHQ4EFgQUx5wN0Puotv388M9Tp/fsPbZpzAUwHwYDVR0jBBgwFoAU -a0hkif3RMaraiWtsOOZZlLu9wJwwCQYDVR0TBAIwADALBgNVHQ8EBAMCBeAwSgYD -VR0RBEMwQYILZXhhbXBsZS5jb22CD3d3dy5leGFtcGxlLmNvbYIQbWFpbC5leGFt -cGxlLmNvbYIPZnRwLmV4YW1wbGUuY29tMCwGCWCGSAGG+EIBDQQfFh1PcGVuU1NM -IEdlbmVyYXRlZCBDZXJ0aWZpY2F0ZTANBgkqhkiG9w0BAQsFAAOCAgEAyWf1TiJI -aNEIA9o/PG8/JiGASTS2/HBVTJbkq03k6NkJVk/GxC1DPziTUJ+CdWlHWcAi1EOW -Ach3QxNDRrVfCOfCMDgElIO1094/reJgdFYG00LRi8QkRJuxANV7YS4tLudhyHJC -kR2lhdMNmEuzWK+s2y+5cLrdm7qdvdENQCcV67uvGPx4sc+EaE7x13SczKjWBtbo -QCs6JTOW+EkPRl4Zo27K4OIZ43/J+GxvwU9QUVH3wPVdbbLNw+QeTFBYMTEcxyc4 -kv50HPBFaithziXBFyvdIs19FjkFzu0Uz/e0zb1+vMzQlJMD94HVOrMnIj5Sb2cL -KKdYXS4uhxFJmdV091Xur5JkYYwEzuaGav7J3zOzYutrIGTgDluLCvA+VQkRcTsy -jZ065SkY/v+38QHp+cmm8WRluupJTs8wYzVp6Fu0iFaaK7ztFmaZmHpiPIfDFjva -aCIgzzT5NweJd/b71A2SyzHXJ14zBXsr1PMylMp2TpHIidhuuNuQL6I0HaollB4M -Z3FsVBMhVDw4Z76qnFPr8mZE2tar33hSlJI/3pS/bBiukuBk8U7VB0X8OqaUnP3C -7b2Z4G8GtqDVcKGMzkvMjT4n9rKd/Le+qHSsQOGO9W/0LB7UDAZSwUsfAPnoBgdS -5t9tIomLCOstByXi+gGZue1TcdCa3Ph4kO0= ------END CERTIFICATE----- - """.strip() - - # DSA public key as dumped by openssl - y_str = """ -72:95:68:d6:4a:72:87:13:c9:87:c7:50:39:9b:fd: -06:43:1e:b1:9e:15:88:7b:7f:8c:24:09:c4:85:11: -b1:03:62:78:48:2b:1f:59:4d:0e:c2:09:48:41:3a: -21:d7:3e:6e:51:4f:68:0e:2f:5c:62:03:4f:05:eb: -ce:15:19:c3:85:cd:b2:68:96:3a:54:06:27:23:e0: -fe:e8:9e:48:53:a6:1d:d2:26:86:fe:24:9c:e2:c5: -78:6c:5a:f4:6b:17:0f:a6:c7:b7:48:dd:17:69:0c: -62:29:e4:85:05:f1:67:a0:ca:8b:f9:d7:7d:19:bd: -d0:3d:94:a5:94:17:15:0f - """ - p_str = """ -00:b7:dd:f2:0c:84:a5:53:c0:d1:92:3a:f4:aa:77: -9f:22:5c:bb:93:74:65:9d:ca:d4:af:61:95:e4:0b: -36:a2:84:71:93:a1:ed:af:c2:d8:9d:53:3d:d9:87: -4c:b2:74:ac:0c:01:67:59:d6:c6:70:11:4a:04:69: -87:38:86:0c:5b:29:28:26:10:c1:87:12:33:3f:0a: -a0:58:2f:5d:b5:e9:b9:c8:72:a0:50:02:89:0d:8b: -a9:99:dc:e4:c6:a4:b8:b4:2d:2d:c4:1c:89:26:06: -62:3d:f3:99:97:58:32:86:bd:d3:81:75:68:35:ab: -f8:3c:50:23:a0:d5:0b:7f:db - """ - q_str = """ -00:86:a5:cc:08:9e:9f:40:ad:c6:d8:ef:52:df:f0: -82:ff:25:59:5c:2b - """ - g_str = """ -51:5d:13:a5:dd:1d:a4:85:7e:e5:d7:42:d0:8c:31: -20:a3:70:75:23:38:d7:53:f8:cf:31:c3:01:97:a5: -ce:fa:70:49:5f:1a:ff:b1:c6:ea:f0:08:e2:bf:b2: -d3:9c:6a:52:8e:0a:f2:1e:db:df:8f:cf:3b:91:fb: -e8:f1:06:d2:4b:79:29:de:fa:b9:b9:2a:88:ba:2e: -c4:ee:bd:e3:82:e5:9a:2e:3e:b5:e8:01:b5:1d:63: -9c:b5:72:54:8e:ab:22:69:b6:70:b2:9d:a1:d3:d8: -07:4f:39:87:11:70:4e:9a:20:11:bb:cc:45:c4:a0: -07:c6:cb:a8:ee:89:c0:6f - """ - - key = DSA.importKey(x509_v3_cert) - for comp_name in ('y', 'p', 'q', 'g'): - comp_str = locals()[comp_name + "_str"] - comp = int(re.sub("[^0-9a-f]", "", comp_str), 16) - self.assertEqual(getattr(key, comp_name), comp) - self.assertFalse(key.has_private()) - - -if __name__ == '__main__': - unittest.main() - -def get_tests(config={}): - tests = [] - tests += 
list_test_cases(ImportKeyTests) - tests += list_test_cases(ImportKeyFromX509Cert) - return tests - -if __name__ == '__main__': - suite = lambda: unittest.TestSuite(get_tests()) - unittest.main(defaultTest='suite') - diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/varLib/plot.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/varLib/plot.py deleted file mode 100644 index e0a7ca50d3f317d7c3219b77ff84f0f8bb310c6d..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/varLib/plot.py +++ /dev/null @@ -1,238 +0,0 @@ -"""Visualize DesignSpaceDocument and resulting VariationModel.""" - -from fontTools.varLib.models import VariationModel, supportScalar -from fontTools.designspaceLib import DesignSpaceDocument -from matplotlib import pyplot -from mpl_toolkits.mplot3d import axes3d -from itertools import cycle -import math -import logging -import sys - -log = logging.getLogger(__name__) - - -def stops(support, count=10): - a, b, c = support - - return ( - [a + (b - a) * i / count for i in range(count)] - + [b + (c - b) * i / count for i in range(count)] - + [c] - ) - - -def _plotLocationsDots(locations, axes, subplot, **kwargs): - for loc, color in zip(locations, cycle(pyplot.cm.Set1.colors)): - if len(axes) == 1: - subplot.plot([loc.get(axes[0], 0)], [1.0], "o", color=color, **kwargs) - elif len(axes) == 2: - subplot.plot( - [loc.get(axes[0], 0)], - [loc.get(axes[1], 0)], - [1.0], - "o", - color=color, - **kwargs, - ) - else: - raise AssertionError(len(axes)) - - -def plotLocations(locations, fig, names=None, **kwargs): - n = len(locations) - cols = math.ceil(n**0.5) - rows = math.ceil(n / cols) - - if names is None: - names = [None] * len(locations) - - model = VariationModel(locations) - names = [names[model.reverseMapping[i]] for i in range(len(names))] - - axes = sorted(locations[0].keys()) - if len(axes) == 1: - _plotLocations2D(model, axes[0], fig, cols, rows, names=names, **kwargs) - elif len(axes) == 2: - _plotLocations3D(model, axes, fig, cols, rows, names=names, **kwargs) - else: - raise ValueError("Only 1 or 2 axes are supported") - - -def _plotLocations2D(model, axis, fig, cols, rows, names, **kwargs): - subplot = fig.add_subplot(111) - for i, (support, color, name) in enumerate( - zip(model.supports, cycle(pyplot.cm.Set1.colors), cycle(names)) - ): - if name is not None: - subplot.set_title(name) - subplot.set_xlabel(axis) - pyplot.xlim(-1.0, +1.0) - - Xs = support.get(axis, (-1.0, 0.0, +1.0)) - X, Y = [], [] - for x in stops(Xs): - y = supportScalar({axis: x}, support) - X.append(x) - Y.append(y) - subplot.plot(X, Y, color=color, **kwargs) - - _plotLocationsDots(model.locations, [axis], subplot) - - -def _plotLocations3D(model, axes, fig, rows, cols, names, **kwargs): - ax1, ax2 = axes - - axis3D = fig.add_subplot(111, projection="3d") - for i, (support, color, name) in enumerate( - zip(model.supports, cycle(pyplot.cm.Set1.colors), cycle(names)) - ): - if name is not None: - axis3D.set_title(name) - axis3D.set_xlabel(ax1) - axis3D.set_ylabel(ax2) - pyplot.xlim(-1.0, +1.0) - pyplot.ylim(-1.0, +1.0) - - Xs = support.get(ax1, (-1.0, 0.0, +1.0)) - Ys = support.get(ax2, (-1.0, 0.0, +1.0)) - for x in stops(Xs): - X, Y, Z = [], [], [] - for y in Ys: - z = supportScalar({ax1: x, ax2: y}, support) - X.append(x) - Y.append(y) - Z.append(z) - axis3D.plot(X, Y, Z, color=color, **kwargs) - for y in stops(Ys): - X, Y, Z = [], [], [] - for x in Xs: - z = 
supportScalar({ax1: x, ax2: y}, support) - X.append(x) - Y.append(y) - Z.append(z) - axis3D.plot(X, Y, Z, color=color, **kwargs) - - _plotLocationsDots(model.locations, [ax1, ax2], axis3D) - - -def plotDocument(doc, fig, **kwargs): - doc.normalize() - locations = [s.location for s in doc.sources] - names = [s.name for s in doc.sources] - plotLocations(locations, fig, names, **kwargs) - - -def _plotModelFromMasters2D(model, masterValues, fig, **kwargs): - assert len(model.axisOrder) == 1 - axis = model.axisOrder[0] - - axis_min = min(loc.get(axis, 0) for loc in model.locations) - axis_max = max(loc.get(axis, 0) for loc in model.locations) - - import numpy as np - - X = np.arange(axis_min, axis_max, (axis_max - axis_min) / 100) - Y = [] - - for x in X: - loc = {axis: x} - v = model.interpolateFromMasters(loc, masterValues) - Y.append(v) - - subplot = fig.add_subplot(111) - subplot.plot(X, Y, "-", **kwargs) - - -def _plotModelFromMasters3D(model, masterValues, fig, **kwargs): - assert len(model.axisOrder) == 2 - axis1, axis2 = model.axisOrder[0], model.axisOrder[1] - - axis1_min = min(loc.get(axis1, 0) for loc in model.locations) - axis1_max = max(loc.get(axis1, 0) for loc in model.locations) - axis2_min = min(loc.get(axis2, 0) for loc in model.locations) - axis2_max = max(loc.get(axis2, 0) for loc in model.locations) - - import numpy as np - - X = np.arange(axis1_min, axis1_max, (axis1_max - axis1_min) / 100) - Y = np.arange(axis2_min, axis2_max, (axis2_max - axis2_min) / 100) - X, Y = np.meshgrid(X, Y) - Z = [] - - for row_x, row_y in zip(X, Y): - z_row = [] - Z.append(z_row) - for x, y in zip(row_x, row_y): - loc = {axis1: x, axis2: y} - v = model.interpolateFromMasters(loc, masterValues) - z_row.append(v) - Z = np.array(Z) - - axis3D = fig.add_subplot(111, projection="3d") - axis3D.plot_surface(X, Y, Z, **kwargs) - - -def plotModelFromMasters(model, masterValues, fig, **kwargs): - """Plot a variation model and set of master values corresponding - to the locations to the model into a pyplot figure. 
Variation - model must have axisOrder of size 1 or 2.""" - if len(model.axisOrder) == 1: - _plotModelFromMasters2D(model, masterValues, fig, **kwargs) - elif len(model.axisOrder) == 2: - _plotModelFromMasters3D(model, masterValues, fig, **kwargs) - else: - raise ValueError("Only 1 or 2 axes are supported") - - -def main(args=None): - from fontTools import configLogger - - if args is None: - args = sys.argv[1:] - - # configure the library logger (for >= WARNING) - configLogger() - # comment this out to enable debug messages from logger - # log.setLevel(logging.DEBUG) - - if len(args) < 1: - print("usage: fonttools varLib.plot source.designspace", file=sys.stderr) - print(" or") - print("usage: fonttools varLib.plot location1 location2 ...", file=sys.stderr) - print(" or") - print( - "usage: fonttools varLib.plot location1=value1 location2=value2 ...", - file=sys.stderr, - ) - sys.exit(1) - - fig = pyplot.figure() - fig.set_tight_layout(True) - - if len(args) == 1 and args[0].endswith(".designspace"): - doc = DesignSpaceDocument() - doc.read(args[0]) - plotDocument(doc, fig) - else: - axes = [chr(c) for c in range(ord("A"), ord("Z") + 1)] - if "=" not in args[0]: - locs = [dict(zip(axes, (float(v) for v in s.split(",")))) for s in args] - plotLocations(locs, fig) - else: - locations = [] - masterValues = [] - for arg in args: - loc, v = arg.split("=") - locations.append(dict(zip(axes, (float(v) for v in loc.split(","))))) - masterValues.append(float(v)) - model = VariationModel(locations, axes[: len(locations[0])]) - plotModelFromMasters(model, masterValues, fig) - - pyplot.show() - - -if __name__ == "__main__": - import sys - - sys.exit(main()) diff --git a/spaces/jsr90/laMoinsChere/README.md b/spaces/jsr90/laMoinsChere/README.md deleted file mode 100644 index c925c9c37dbe92fc2cf0c2dba5785e5e31db044b..0000000000000000000000000000000000000000 --- a/spaces/jsr90/laMoinsChere/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: LaMoinsChere -emoji: ⛽ -colorFrom: red -colorTo: yellow -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false ---- - -This application shows a real time list of gas prices in french stations, given a department and a fuel type. 
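As a rough sketch of the selection step this README describes (filter stations by department and fuel type, then rank by price), the snippet below uses hypothetical records and field names; it is not the Space's actual data source or code.

```python
# Rough sketch of the README's selection step; records, field names, and values
# here are hypothetical, not the Space's actual data source or schema.
stations = [
    {"department": "75", "fuel": "Gazole", "price_eur": 1.87, "station": "A"},
    {"department": "75", "fuel": "Gazole", "price_eur": 1.79, "station": "B"},
]

def cheapest(records, department, fuel):
    # Keep matching stations and sort them from least to most expensive.
    matching = [r for r in records if r["department"] == department and r["fuel"] == fuel]
    return sorted(matching, key=lambda r: r["price_eur"])

print(cheapest(stations, "75", "Gazole")[0]["station"])  # -> "B"
```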
diff --git a/spaces/justest/gpt4free/testing/aiservice/AiService.py b/spaces/justest/gpt4free/testing/aiservice/AiService.py deleted file mode 100644 index 287a39ef68f209a426c2381e2b7806c06148bb09..0000000000000000000000000000000000000000 --- a/spaces/justest/gpt4free/testing/aiservice/AiService.py +++ /dev/null @@ -1,62 +0,0 @@ -import os,sys -import requests -# from ...typing import get_type_hints - -url = "https://aiservice.vercel.app/api/chat/answer" -model = ['gpt-3.5-turbo'] -supports_stream = False -needs_auth = False - - -def _create_completion(model: str, messages: list, stream: bool, **kwargs): - base = '' - for message in messages: - base += '%s: %s\n' % (message['role'], message['content']) - base += 'assistant:' - - headers = { - "accept": "*/*", - "content-type": "text/plain;charset=UTF-8", - "sec-fetch-dest": "empty", - "sec-fetch-mode": "cors", - "sec-fetch-site": "same-origin", - "Referer": "https://aiservice.vercel.app/chat", - } - data = { - "input": base - } - response = requests.post(url, headers=headers, json=data) - if response.status_code == 200: - _json = response.json() - yield _json['data'] - else: - print(f"Error Occurred::{response.status_code}") - return None - - - -# params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \ -# '(%s)' % ', '.join( -# [f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) - - -# Temporary For ChatCompletion Class -class ChatCompletion: - @staticmethod - def create(model: str, messages: list, provider: None or str, stream: bool = False, auth: str = False, **kwargs): - kwargs['auth'] = auth - - if provider and needs_auth and not auth: - print( - f'ValueError: {provider} requires authentication (use auth="cookie or token or jwt ..." param)', file=sys.stderr) - sys.exit(1) - - try: - return (_create_completion(model, messages, stream, **kwargs) - if stream else ''.join(_create_completion(model, messages, stream, **kwargs))) - except TypeError as e: - print(e) - arg: str = str(e).split("'")[1] - print( - f"ValueError: {provider} does not support '{arg}' argument", file=sys.stderr) - sys.exit(1) \ No newline at end of file diff --git a/spaces/kazuk/youtube-whisper-09/README.md b/spaces/kazuk/youtube-whisper-09/README.md deleted file mode 100644 index c3180680339155aaf1d27f629129b68d12cac021..0000000000000000000000000000000000000000 --- a/spaces/kazuk/youtube-whisper-09/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Youtube Whisper -emoji: ⚡ -colorFrom: green -colorTo: red -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: false -license: unknown -duplicated_from: kazuk/youtube-whisper ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/kcagle/AutoGPT/autogpt/config/singleton.py b/spaces/kcagle/AutoGPT/autogpt/config/singleton.py deleted file mode 100644 index 55b2aeea120bbe51ca837265fcb7fbff467e55f2..0000000000000000000000000000000000000000 --- a/spaces/kcagle/AutoGPT/autogpt/config/singleton.py +++ /dev/null @@ -1,24 +0,0 @@ -"""The singleton metaclass for ensuring only one instance of a class.""" -import abc - - -class Singleton(abc.ABCMeta, type): - """ - Singleton metaclass for ensuring only one instance of a class. 
- """ - - _instances = {} - - def __call__(cls, *args, **kwargs): - """Call method for the singleton metaclass.""" - if cls not in cls._instances: - cls._instances[cls] = super(Singleton, cls).__call__(*args, **kwargs) - return cls._instances[cls] - - -class AbstractSingleton(abc.ABC, metaclass=Singleton): - """ - Abstract singleton class for ensuring only one instance of a class. - """ - - pass diff --git a/spaces/kdrkdrkdr/AzusaTTS/text/__init__.py b/spaces/kdrkdrkdr/AzusaTTS/text/__init__.py deleted file mode 100644 index 4e69c354dd24e3243980236eca962cd5945a92fc..0000000000000000000000000000000000000000 --- a/spaces/kdrkdrkdr/AzusaTTS/text/__init__.py +++ /dev/null @@ -1,32 +0,0 @@ -""" from https://github.com/keithito/tacotron """ -from text import cleaners - - -def text_to_sequence(text, symbols, cleaner_names): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - Args: - text: string to convert to a sequence - cleaner_names: names of the cleaner functions to run the text through - Returns: - List of integers corresponding to the symbols in the text - ''' - _symbol_to_id = {s: i for i, s in enumerate(symbols)} - - sequence = [] - - clean_text = _clean_text(text, cleaner_names) - for symbol in clean_text: - if symbol not in _symbol_to_id.keys(): - continue - symbol_id = _symbol_to_id[symbol] - sequence += [symbol_id] - return sequence - - -def _clean_text(text, cleaner_names): - for name in cleaner_names: - cleaner = getattr(cleaners, name) - if not cleaner: - raise Exception('Unknown cleaner: %s' % name) - text = cleaner(text) - return text diff --git a/spaces/kepl/gpt/g4f/Provider/Providers/AiService.py b/spaces/kepl/gpt/g4f/Provider/Providers/AiService.py deleted file mode 100644 index ef8265ff8f5cae4d87fea24369373ae74491d2bc..0000000000000000000000000000000000000000 --- a/spaces/kepl/gpt/g4f/Provider/Providers/AiService.py +++ /dev/null @@ -1,40 +0,0 @@ -import os -import requests -from ...typing import get_type_hints - -url = "https://aiservice.vercel.app/api/chat/answer" -model = ['gpt-3.5-turbo'] -supports_stream = False -needs_auth = False - - -def _create_completion(model: str, messages: list, stream: bool, **kwargs): - base = '' - for message in messages: - base += '%s: %s\n' % (message['role'], message['content']) - base += 'assistant:' - - headers = { - "accept": "*/*", - "content-type": "text/plain;charset=UTF-8", - "sec-fetch-dest": "empty", - "sec-fetch-mode": "cors", - "sec-fetch-site": "same-origin", - "Referer": "https://aiservice.vercel.app/chat", - } - data = { - "input": base - } - response = requests.post(url, headers=headers, json=data) - if response.status_code == 200: - _json = response.json() - yield _json['data'] - else: - print(f"Error Occurred::{response.status_code}") - return None - - - -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \ - '(%s)' % ', '.join( - [f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) \ No newline at end of file diff --git a/spaces/keras-io/cct/app.py b/spaces/keras-io/cct/app.py deleted file mode 100644 index c53cf5f61f0f69f94e7d5c1be87809d08142649d..0000000000000000000000000000000000000000 --- a/spaces/keras-io/cct/app.py +++ /dev/null @@ -1,74 +0,0 @@ -import gradio as gr -from huggingface_hub import from_pretrained_keras -import tensorflow as tf - - -CLASSES = { - 0: "airplane", - 1: "automobile", - 2: "bird", - 3: "cat", - 4: "deer", - 5: "dog", - 6: 
"frog", - 7: "horse", - 8: "ship", - 9: "truck", -} -IMAGE_SIZE = 32 - - -model = from_pretrained_keras("keras-io/cct") - - -def reshape_image(image): - image = tf.convert_to_tensor(image) - image.set_shape([None, None, 3]) - image = tf.image.resize(images=image, size=[IMAGE_SIZE, IMAGE_SIZE]) - image = tf.expand_dims(image, axis=0) - return image - - -def classify_image(input_image): - input_image = reshape_image(input_image) - logits = model.predict(input_image).flatten() - predictions = tf.nn.softmax(logits) - output_labels = {CLASSES[i]: float(predictions[i]) for i in CLASSES.keys()} - return output_labels - - -# Gradio Interface -examples = [["./bird.png"], ["./cat.png"], ["./dog.png"], ["./horse.png"]] -title = "Image Classification using Compact Convolutional Transformer (CCT)" -description = """ -Upload an image or select one from the examples and ask the model to label it! -
                -The model was trained on the CIFAR-10 dataset. Therefore, it is able to recognise these 10 classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck. -
                -
                -

                - Model: https://huggingface.co/keras-io/cct -
                - Keras Example: https://keras.io/examples/vision/cct/ -

                -
                -""" -article = """ -
                - Space by Edoardo Abati -
                - Keras example by Sayak Paul -
                -""" - -interface = gr.Interface( - fn=classify_image, - inputs=gr.inputs.Image(), - outputs=gr.outputs.Label(), - examples=examples, - title=title, - description=description, - article=article, - allow_flagging="never", -) -interface.launch(enable_queue=True) diff --git a/spaces/kevinwang676/SadTalker/src/face3d/util/__init__.py b/spaces/kevinwang676/SadTalker/src/face3d/util/__init__.py deleted file mode 100644 index 04eecb58b62f8c9d11d17606c6241d278a48b9b9..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/SadTalker/src/face3d/util/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -"""This package includes a miscellaneous collection of useful helper functions.""" -from src.face3d.util import * - diff --git a/spaces/king007/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/utils/model_list.py b/spaces/king007/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/utils/model_list.py deleted file mode 100644 index 119a27df498e76f5270bdf30da501730837a212d..0000000000000000000000000000000000000000 --- a/spaces/king007/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/utils/model_list.py +++ /dev/null @@ -1,48 +0,0 @@ -stable_model_list = [ - "runwayml/stable-diffusion-v1-5", - "stabilityai/stable-diffusion-2-1", - "prompthero/openjourney-v4", - "wavymulder/Analog-Diffusion", - "dreamlike-art/dreamlike-diffusion-1.0", - "gsdf/Counterfeit-V2.5", - "dreamlike-art/dreamlike-photoreal-2.0" - - -] - -controlnet_canny_model_list = [ - "lllyasviel/sd-controlnet-canny", - "thibaud/controlnet-sd21-canny-diffusers", -] - -controlnet_depth_model_list = [ - "lllyasviel/sd-controlnet-depth", - "thibaud/controlnet-sd21-depth-diffusers", -] - -controlnet_pose_model_list = [ - "lllyasviel/sd-controlnet-openpose", - "thibaud/controlnet-sd21-openpose-diffusers", -] - -controlnet_hed_model_list = [ - "lllyasviel/sd-controlnet-hed", - "thibaud/controlnet-sd21-hed-diffusers", -] - -controlnet_scribble_model_list = [ - "lllyasviel/sd-controlnet-scribble", - "thibaud/controlnet-sd21-scribble-diffusers", -] -stable_inpiant_model_list = [ - "stabilityai/stable-diffusion-2-inpainting", - "runwayml/stable-diffusion-inpainting", -] - -controlnet_mlsd_model_list = [ - "lllyasviel/sd-controlnet-mlsd", -] - -controlnet_seg_model_list = [ - "lllyasviel/sd-controlnet-seg", -] diff --git a/spaces/krunalss/firstllm/README.md b/spaces/krunalss/firstllm/README.md deleted file mode 100644 index b5da3a9c6ee3df3596c19e5dc994168e9c2f7475..0000000000000000000000000000000000000000 --- a/spaces/krunalss/firstllm/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Firstllm -emoji: 📈 -colorFrom: indigo -colorTo: purple -sdk: streamlit -sdk_version: 1.28.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fastapi/middleware/trustedhost.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fastapi/middleware/trustedhost.py deleted file mode 100644 index 08d7e035315677856fd2cd0be2044689b57619bf..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fastapi/middleware/trustedhost.py +++ /dev/null @@ -1,3 +0,0 @@ -from starlette.middleware.trustedhost import ( # noqa - TrustedHostMiddleware as TrustedHostMiddleware, -) diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/help.py 
b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/help.py deleted file mode 100644 index 4334e5001af3416a256add1ec6d32c422d015c8d..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/help.py +++ /dev/null @@ -1,35 +0,0 @@ -import pkgutil -import sys -import fontTools -import importlib -import os -from pathlib import Path - - -def main(): - """Show this help""" - path = fontTools.__path__ - descriptions = {} - for pkg in sorted( - mod.name - for mod in pkgutil.walk_packages([fontTools.__path__[0]], prefix="fontTools.") - ): - try: - imports = __import__(pkg, globals(), locals(), ["main"]) - except ImportError as e: - continue - try: - description = imports.main.__doc__ - if description: - pkg = pkg.replace("fontTools.", "").replace(".__main__", "") - # show the docstring's first line only - descriptions[pkg] = description.splitlines()[0] - except AttributeError as e: - pass - for pkg, description in descriptions.items(): - print("fonttools %-12s %s" % (pkg, description), file=sys.stderr) - - -if __name__ == "__main__": - print("fonttools v%s\n" % fontTools.__version__, file=sys.stderr) - main() diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/linkify_it/tlds.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/linkify_it/tlds.py deleted file mode 100644 index 7f8053ded999e6da51d64b54f6dbf2b77b26ac95..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/linkify_it/tlds.py +++ /dev/null @@ -1,1517 +0,0 @@ -"""TLDS - -Version 2020110600, Last Updated Fri Nov 6 07:07:02 2020 UTC - -References: - http://data.iana.org/TLD/tlds-alpha-by-domain.txt -""" -TLDS = [ - "AAA", - "AARP", - "ABARTH", - "ABB", - "ABBOTT", - "ABBVIE", - "ABC", - "ABLE", - "ABOGADO", - "ABUDHABI", - "AC", - "ACADEMY", - "ACCENTURE", - "ACCOUNTANT", - "ACCOUNTANTS", - "ACO", - "ACTOR", - "AD", - "ADAC", - "ADS", - "ADULT", - "AE", - "AEG", - "AERO", - "AETNA", - "AF", - "AFAMILYCOMPANY", - "AFL", - "AFRICA", - "AG", - "AGAKHAN", - "AGENCY", - "AI", - "AIG", - "AIRBUS", - "AIRFORCE", - "AIRTEL", - "AKDN", - "AL", - "ALFAROMEO", - "ALIBABA", - "ALIPAY", - "ALLFINANZ", - "ALLSTATE", - "ALLY", - "ALSACE", - "ALSTOM", - "AM", - "AMAZON", - "AMERICANEXPRESS", - "AMERICANFAMILY", - "AMEX", - "AMFAM", - "AMICA", - "AMSTERDAM", - "ANALYTICS", - "ANDROID", - "ANQUAN", - "ANZ", - "AO", - "AOL", - "APARTMENTS", - "APP", - "APPLE", - "AQ", - "AQUARELLE", - "AR", - "ARAB", - "ARAMCO", - "ARCHI", - "ARMY", - "ARPA", - "ART", - "ARTE", - "AS", - "ASDA", - "ASIA", - "ASSOCIATES", - "AT", - "ATHLETA", - "ATTORNEY", - "AU", - "AUCTION", - "AUDI", - "AUDIBLE", - "AUDIO", - "AUSPOST", - "AUTHOR", - "AUTO", - "AUTOS", - "AVIANCA", - "AW", - "AWS", - "AX", - "AXA", - "AZ", - "AZURE", - "BA", - "BABY", - "BAIDU", - "BANAMEX", - "BANANAREPUBLIC", - "BAND", - "BANK", - "BAR", - "BARCELONA", - "BARCLAYCARD", - "BARCLAYS", - "BAREFOOT", - "BARGAINS", - "BASEBALL", - "BASKETBALL", - "BAUHAUS", - "BAYERN", - "BB", - "BBC", - "BBT", - "BBVA", - "BCG", - "BCN", - "BD", - "BE", - "BEATS", - "BEAUTY", - "BEER", - "BENTLEY", - "BERLIN", - "BEST", - "BESTBUY", - "BET", - "BF", - "BG", - "BH", - "BHARTI", - "BI", - "BIBLE", - "BID", - "BIKE", - "BING", - "BINGO", - "BIO", - "BIZ", - "BJ", - "BLACK", - "BLACKFRIDAY", - "BLOCKBUSTER", - "BLOG", - "BLOOMBERG", - "BLUE", - "BM", - "BMS", - "BMW", - "BN", - "BNPPARIBAS", - "BO", - "BOATS", - 
"BOEHRINGER", - "BOFA", - "BOM", - "BOND", - "BOO", - "BOOK", - "BOOKING", - "BOSCH", - "BOSTIK", - "BOSTON", - "BOT", - "BOUTIQUE", - "BOX", - "BR", - "BRADESCO", - "BRIDGESTONE", - "BROADWAY", - "BROKER", - "BROTHER", - "BRUSSELS", - "BS", - "BT", - "BUDAPEST", - "BUGATTI", - "BUILD", - "BUILDERS", - "BUSINESS", - "BUY", - "BUZZ", - "BV", - "BW", - "BY", - "BZ", - "BZH", - "CA", - "CAB", - "CAFE", - "CAL", - "CALL", - "CALVINKLEIN", - "CAM", - "CAMERA", - "CAMP", - "CANCERRESEARCH", - "CANON", - "CAPETOWN", - "CAPITAL", - "CAPITALONE", - "CAR", - "CARAVAN", - "CARDS", - "CARE", - "CAREER", - "CAREERS", - "CARS", - "CASA", - "CASE", - "CASEIH", - "CASH", - "CASINO", - "CAT", - "CATERING", - "CATHOLIC", - "CBA", - "CBN", - "CBRE", - "CBS", - "CC", - "CD", - "CEB", - "CENTER", - "CEO", - "CERN", - "CF", - "CFA", - "CFD", - "CG", - "CH", - "CHANEL", - "CHANNEL", - "CHARITY", - "CHASE", - "CHAT", - "CHEAP", - "CHINTAI", - "CHRISTMAS", - "CHROME", - "CHURCH", - "CI", - "CIPRIANI", - "CIRCLE", - "CISCO", - "CITADEL", - "CITI", - "CITIC", - "CITY", - "CITYEATS", - "CK", - "CL", - "CLAIMS", - "CLEANING", - "CLICK", - "CLINIC", - "CLINIQUE", - "CLOTHING", - "CLOUD", - "CLUB", - "CLUBMED", - "CM", - "CN", - "CO", - "COACH", - "CODES", - "COFFEE", - "COLLEGE", - "COLOGNE", - "COM", - "COMCAST", - "COMMBANK", - "COMMUNITY", - "COMPANY", - "COMPARE", - "COMPUTER", - "COMSEC", - "CONDOS", - "CONSTRUCTION", - "CONSULTING", - "CONTACT", - "CONTRACTORS", - "COOKING", - "COOKINGCHANNEL", - "COOL", - "COOP", - "CORSICA", - "COUNTRY", - "COUPON", - "COUPONS", - "COURSES", - "CPA", - "CR", - "CREDIT", - "CREDITCARD", - "CREDITUNION", - "CRICKET", - "CROWN", - "CRS", - "CRUISE", - "CRUISES", - "CSC", - "CU", - "CUISINELLA", - "CV", - "CW", - "CX", - "CY", - "CYMRU", - "CYOU", - "CZ", - "DABUR", - "DAD", - "DANCE", - "DATA", - "DATE", - "DATING", - "DATSUN", - "DAY", - "DCLK", - "DDS", - "DE", - "DEAL", - "DEALER", - "DEALS", - "DEGREE", - "DELIVERY", - "DELL", - "DELOITTE", - "DELTA", - "DEMOCRAT", - "DENTAL", - "DENTIST", - "DESI", - "DESIGN", - "DEV", - "DHL", - "DIAMONDS", - "DIET", - "DIGITAL", - "DIRECT", - "DIRECTORY", - "DISCOUNT", - "DISCOVER", - "DISH", - "DIY", - "DJ", - "DK", - "DM", - "DNP", - "DO", - "DOCS", - "DOCTOR", - "DOG", - "DOMAINS", - "DOT", - "DOWNLOAD", - "DRIVE", - "DTV", - "DUBAI", - "DUCK", - "DUNLOP", - "DUPONT", - "DURBAN", - "DVAG", - "DVR", - "DZ", - "EARTH", - "EAT", - "EC", - "ECO", - "EDEKA", - "EDU", - "EDUCATION", - "EE", - "EG", - "EMAIL", - "EMERCK", - "ENERGY", - "ENGINEER", - "ENGINEERING", - "ENTERPRISES", - "EPSON", - "EQUIPMENT", - "ER", - "ERICSSON", - "ERNI", - "ES", - "ESQ", - "ESTATE", - "ET", - "ETISALAT", - "EU", - "EUROVISION", - "EUS", - "EVENTS", - "EXCHANGE", - "EXPERT", - "EXPOSED", - "EXPRESS", - "EXTRASPACE", - "FAGE", - "FAIL", - "FAIRWINDS", - "FAITH", - "FAMILY", - "FAN", - "FANS", - "FARM", - "FARMERS", - "FASHION", - "FAST", - "FEDEX", - "FEEDBACK", - "FERRARI", - "FERRERO", - "FI", - "FIAT", - "FIDELITY", - "FIDO", - "FILM", - "FINAL", - "FINANCE", - "FINANCIAL", - "FIRE", - "FIRESTONE", - "FIRMDALE", - "FISH", - "FISHING", - "FIT", - "FITNESS", - "FJ", - "FK", - "FLICKR", - "FLIGHTS", - "FLIR", - "FLORIST", - "FLOWERS", - "FLY", - "FM", - "FO", - "FOO", - "FOOD", - "FOODNETWORK", - "FOOTBALL", - "FORD", - "FOREX", - "FORSALE", - "FORUM", - "FOUNDATION", - "FOX", - "FR", - "FREE", - "FRESENIUS", - "FRL", - "FROGANS", - "FRONTDOOR", - "FRONTIER", - "FTR", - "FUJITSU", - "FUJIXEROX", - "FUN", - "FUND", - "FURNITURE", - "FUTBOL", - "FYI", - "GA", - 
"GAL", - "GALLERY", - "GALLO", - "GALLUP", - "GAME", - "GAMES", - "GAP", - "GARDEN", - "GAY", - "GB", - "GBIZ", - "GD", - "GDN", - "GE", - "GEA", - "GENT", - "GENTING", - "GEORGE", - "GF", - "GG", - "GGEE", - "GH", - "GI", - "GIFT", - "GIFTS", - "GIVES", - "GIVING", - "GL", - "GLADE", - "GLASS", - "GLE", - "GLOBAL", - "GLOBO", - "GM", - "GMAIL", - "GMBH", - "GMO", - "GMX", - "GN", - "GODADDY", - "GOLD", - "GOLDPOINT", - "GOLF", - "GOO", - "GOODYEAR", - "GOOG", - "GOOGLE", - "GOP", - "GOT", - "GOV", - "GP", - "GQ", - "GR", - "GRAINGER", - "GRAPHICS", - "GRATIS", - "GREEN", - "GRIPE", - "GROCERY", - "GROUP", - "GS", - "GT", - "GU", - "GUARDIAN", - "GUCCI", - "GUGE", - "GUIDE", - "GUITARS", - "GURU", - "GW", - "GY", - "HAIR", - "HAMBURG", - "HANGOUT", - "HAUS", - "HBO", - "HDFC", - "HDFCBANK", - "HEALTH", - "HEALTHCARE", - "HELP", - "HELSINKI", - "HERE", - "HERMES", - "HGTV", - "HIPHOP", - "HISAMITSU", - "HITACHI", - "HIV", - "HK", - "HKT", - "HM", - "HN", - "HOCKEY", - "HOLDINGS", - "HOLIDAY", - "HOMEDEPOT", - "HOMEGOODS", - "HOMES", - "HOMESENSE", - "HONDA", - "HORSE", - "HOSPITAL", - "HOST", - "HOSTING", - "HOT", - "HOTELES", - "HOTELS", - "HOTMAIL", - "HOUSE", - "HOW", - "HR", - "HSBC", - "HT", - "HU", - "HUGHES", - "HYATT", - "HYUNDAI", - "IBM", - "ICBC", - "ICE", - "ICU", - "ID", - "IE", - "IEEE", - "IFM", - "IKANO", - "IL", - "IM", - "IMAMAT", - "IMDB", - "IMMO", - "IMMOBILIEN", - "IN", - "INC", - "INDUSTRIES", - "INFINITI", - "INFO", - "ING", - "INK", - "INSTITUTE", - "INSURANCE", - "INSURE", - "INT", - "INTERNATIONAL", - "INTUIT", - "INVESTMENTS", - "IO", - "IPIRANGA", - "IQ", - "IR", - "IRISH", - "IS", - "ISMAILI", - "IST", - "ISTANBUL", - "IT", - "ITAU", - "ITV", - "IVECO", - "JAGUAR", - "JAVA", - "JCB", - "JCP", - "JE", - "JEEP", - "JETZT", - "JEWELRY", - "JIO", - "JLL", - "JM", - "JMP", - "JNJ", - "JO", - "JOBS", - "JOBURG", - "JOT", - "JOY", - "JP", - "JPMORGAN", - "JPRS", - "JUEGOS", - "JUNIPER", - "KAUFEN", - "KDDI", - "KE", - "KERRYHOTELS", - "KERRYLOGISTICS", - "KERRYPROPERTIES", - "KFH", - "KG", - "KH", - "KI", - "KIA", - "KIM", - "KINDER", - "KINDLE", - "KITCHEN", - "KIWI", - "KM", - "KN", - "KOELN", - "KOMATSU", - "KOSHER", - "KP", - "KPMG", - "KPN", - "KR", - "KRD", - "KRED", - "KUOKGROUP", - "KW", - "KY", - "KYOTO", - "KZ", - "LA", - "LACAIXA", - "LAMBORGHINI", - "LAMER", - "LANCASTER", - "LANCIA", - "LAND", - "LANDROVER", - "LANXESS", - "LASALLE", - "LAT", - "LATINO", - "LATROBE", - "LAW", - "LAWYER", - "LB", - "LC", - "LDS", - "LEASE", - "LECLERC", - "LEFRAK", - "LEGAL", - "LEGO", - "LEXUS", - "LGBT", - "LI", - "LIDL", - "LIFE", - "LIFEINSURANCE", - "LIFESTYLE", - "LIGHTING", - "LIKE", - "LILLY", - "LIMITED", - "LIMO", - "LINCOLN", - "LINDE", - "LINK", - "LIPSY", - "LIVE", - "LIVING", - "LIXIL", - "LK", - "LLC", - "LLP", - "LOAN", - "LOANS", - "LOCKER", - "LOCUS", - "LOFT", - "LOL", - "LONDON", - "LOTTE", - "LOTTO", - "LOVE", - "LPL", - "LPLFINANCIAL", - "LR", - "LS", - "LT", - "LTD", - "LTDA", - "LU", - "LUNDBECK", - "LUPIN", - "LUXE", - "LUXURY", - "LV", - "LY", - "MA", - "MACYS", - "MADRID", - "MAIF", - "MAISON", - "MAKEUP", - "MAN", - "MANAGEMENT", - "MANGO", - "MAP", - "MARKET", - "MARKETING", - "MARKETS", - "MARRIOTT", - "MARSHALLS", - "MASERATI", - "MATTEL", - "MBA", - "MC", - "MCKINSEY", - "MD", - "ME", - "MED", - "MEDIA", - "MEET", - "MELBOURNE", - "MEME", - "MEMORIAL", - "MEN", - "MENU", - "MERCKMSD", - "MG", - "MH", - "MIAMI", - "MICROSOFT", - "MIL", - "MINI", - "MINT", - "MIT", - "MITSUBISHI", - "MK", - "ML", - "MLB", - "MLS", - "MM", - "MMA", - "MN", - 
"MO", - "MOBI", - "MOBILE", - "MODA", - "MOE", - "MOI", - "MOM", - "MONASH", - "MONEY", - "MONSTER", - "MORMON", - "MORTGAGE", - "MOSCOW", - "MOTO", - "MOTORCYCLES", - "MOV", - "MOVIE", - "MP", - "MQ", - "MR", - "MS", - "MSD", - "MT", - "MTN", - "MTR", - "MU", - "MUSEUM", - "MUTUAL", - "MV", - "MW", - "MX", - "MY", - "MZ", - "NA", - "NAB", - "NAGOYA", - "NAME", - "NATIONWIDE", - "NATURA", - "NAVY", - "NBA", - "NC", - "NE", - "NEC", - "NET", - "NETBANK", - "NETFLIX", - "NETWORK", - "NEUSTAR", - "NEW", - "NEWHOLLAND", - "NEWS", - "NEXT", - "NEXTDIRECT", - "NEXUS", - "NF", - "NFL", - "NG", - "NGO", - "NHK", - "NI", - "NICO", - "NIKE", - "NIKON", - "NINJA", - "NISSAN", - "NISSAY", - "NL", - "NO", - "NOKIA", - "NORTHWESTERNMUTUAL", - "NORTON", - "NOW", - "NOWRUZ", - "NOWTV", - "NP", - "NR", - "NRA", - "NRW", - "NTT", - "NU", - "NYC", - "NZ", - "OBI", - "OBSERVER", - "OFF", - "OFFICE", - "OKINAWA", - "OLAYAN", - "OLAYANGROUP", - "OLDNAVY", - "OLLO", - "OM", - "OMEGA", - "ONE", - "ONG", - "ONL", - "ONLINE", - "ONYOURSIDE", - "OOO", - "OPEN", - "ORACLE", - "ORANGE", - "ORG", - "ORGANIC", - "ORIGINS", - "OSAKA", - "OTSUKA", - "OTT", - "OVH", - "PA", - "PAGE", - "PANASONIC", - "PARIS", - "PARS", - "PARTNERS", - "PARTS", - "PARTY", - "PASSAGENS", - "PAY", - "PCCW", - "PE", - "PET", - "PF", - "PFIZER", - "PG", - "PH", - "PHARMACY", - "PHD", - "PHILIPS", - "PHONE", - "PHOTO", - "PHOTOGRAPHY", - "PHOTOS", - "PHYSIO", - "PICS", - "PICTET", - "PICTURES", - "PID", - "PIN", - "PING", - "PINK", - "PIONEER", - "PIZZA", - "PK", - "PL", - "PLACE", - "PLAY", - "PLAYSTATION", - "PLUMBING", - "PLUS", - "PM", - "PN", - "PNC", - "POHL", - "POKER", - "POLITIE", - "PORN", - "POST", - "PR", - "PRAMERICA", - "PRAXI", - "PRESS", - "PRIME", - "PRO", - "PROD", - "PRODUCTIONS", - "PROF", - "PROGRESSIVE", - "PROMO", - "PROPERTIES", - "PROPERTY", - "PROTECTION", - "PRU", - "PRUDENTIAL", - "PS", - "PT", - "PUB", - "PW", - "PWC", - "PY", - "QA", - "QPON", - "QUEBEC", - "QUEST", - "QVC", - "RACING", - "RADIO", - "RAID", - "RE", - "READ", - "REALESTATE", - "REALTOR", - "REALTY", - "RECIPES", - "RED", - "REDSTONE", - "REDUMBRELLA", - "REHAB", - "REISE", - "REISEN", - "REIT", - "RELIANCE", - "REN", - "RENT", - "RENTALS", - "REPAIR", - "REPORT", - "REPUBLICAN", - "REST", - "RESTAURANT", - "REVIEW", - "REVIEWS", - "REXROTH", - "RICH", - "RICHARDLI", - "RICOH", - "RIL", - "RIO", - "RIP", - "RMIT", - "RO", - "ROCHER", - "ROCKS", - "RODEO", - "ROGERS", - "ROOM", - "RS", - "RSVP", - "RU", - "RUGBY", - "RUHR", - "RUN", - "RW", - "RWE", - "RYUKYU", - "SA", - "SAARLAND", - "SAFE", - "SAFETY", - "SAKURA", - "SALE", - "SALON", - "SAMSCLUB", - "SAMSUNG", - "SANDVIK", - "SANDVIKCOROMANT", - "SANOFI", - "SAP", - "SARL", - "SAS", - "SAVE", - "SAXO", - "SB", - "SBI", - "SBS", - "SC", - "SCA", - "SCB", - "SCHAEFFLER", - "SCHMIDT", - "SCHOLARSHIPS", - "SCHOOL", - "SCHULE", - "SCHWARZ", - "SCIENCE", - "SCJOHNSON", - "SCOT", - "SD", - "SE", - "SEARCH", - "SEAT", - "SECURE", - "SECURITY", - "SEEK", - "SELECT", - "SENER", - "SERVICES", - "SES", - "SEVEN", - "SEW", - "SEX", - "SEXY", - "SFR", - "SG", - "SH", - "SHANGRILA", - "SHARP", - "SHAW", - "SHELL", - "SHIA", - "SHIKSHA", - "SHOES", - "SHOP", - "SHOPPING", - "SHOUJI", - "SHOW", - "SHOWTIME", - "SHRIRAM", - "SI", - "SILK", - "SINA", - "SINGLES", - "SITE", - "SJ", - "SK", - "SKI", - "SKIN", - "SKY", - "SKYPE", - "SL", - "SLING", - "SM", - "SMART", - "SMILE", - "SN", - "SNCF", - "SO", - "SOCCER", - "SOCIAL", - "SOFTBANK", - "SOFTWARE", - "SOHU", - "SOLAR", - "SOLUTIONS", - "SONG", - "SONY", - "SOY", - 
"SPA", - "SPACE", - "SPORT", - "SPOT", - "SPREADBETTING", - "SR", - "SRL", - "SS", - "ST", - "STADA", - "STAPLES", - "STAR", - "STATEBANK", - "STATEFARM", - "STC", - "STCGROUP", - "STOCKHOLM", - "STORAGE", - "STORE", - "STREAM", - "STUDIO", - "STUDY", - "STYLE", - "SU", - "SUCKS", - "SUPPLIES", - "SUPPLY", - "SUPPORT", - "SURF", - "SURGERY", - "SUZUKI", - "SV", - "SWATCH", - "SWIFTCOVER", - "SWISS", - "SX", - "SY", - "SYDNEY", - "SYSTEMS", - "SZ", - "TAB", - "TAIPEI", - "TALK", - "TAOBAO", - "TARGET", - "TATAMOTORS", - "TATAR", - "TATTOO", - "TAX", - "TAXI", - "TC", - "TCI", - "TD", - "TDK", - "TEAM", - "TECH", - "TECHNOLOGY", - "TEL", - "TEMASEK", - "TENNIS", - "TEVA", - "TF", - "TG", - "TH", - "THD", - "THEATER", - "THEATRE", - "TIAA", - "TICKETS", - "TIENDA", - "TIFFANY", - "TIPS", - "TIRES", - "TIROL", - "TJ", - "TJMAXX", - "TJX", - "TK", - "TKMAXX", - "TL", - "TM", - "TMALL", - "TN", - "TO", - "TODAY", - "TOKYO", - "TOOLS", - "TOP", - "TORAY", - "TOSHIBA", - "TOTAL", - "TOURS", - "TOWN", - "TOYOTA", - "TOYS", - "TR", - "TRADE", - "TRADING", - "TRAINING", - "TRAVEL", - "TRAVELCHANNEL", - "TRAVELERS", - "TRAVELERSINSURANCE", - "TRUST", - "TRV", - "TT", - "TUBE", - "TUI", - "TUNES", - "TUSHU", - "TV", - "TVS", - "TW", - "TZ", - "UA", - "UBANK", - "UBS", - "UG", - "UK", - "UNICOM", - "UNIVERSITY", - "UNO", - "UOL", - "UPS", - "US", - "UY", - "UZ", - "VA", - "VACATIONS", - "VANA", - "VANGUARD", - "VC", - "VE", - "VEGAS", - "VENTURES", - "VERISIGN", - "VERSICHERUNG", - "VET", - "VG", - "VI", - "VIAJES", - "VIDEO", - "VIG", - "VIKING", - "VILLAS", - "VIN", - "VIP", - "VIRGIN", - "VISA", - "VISION", - "VIVA", - "VIVO", - "VLAANDEREN", - "VN", - "VODKA", - "VOLKSWAGEN", - "VOLVO", - "VOTE", - "VOTING", - "VOTO", - "VOYAGE", - "VU", - "VUELOS", - "WALES", - "WALMART", - "WALTER", - "WANG", - "WANGGOU", - "WATCH", - "WATCHES", - "WEATHER", - "WEATHERCHANNEL", - "WEBCAM", - "WEBER", - "WEBSITE", - "WED", - "WEDDING", - "WEIBO", - "WEIR", - "WF", - "WHOSWHO", - "WIEN", - "WIKI", - "WILLIAMHILL", - "WIN", - "WINDOWS", - "WINE", - "WINNERS", - "WME", - "WOLTERSKLUWER", - "WOODSIDE", - "WORK", - "WORKS", - "WORLD", - "WOW", - "WS", - "WTC", - "WTF", - "XBOX", - "XEROX", - "XFINITY", - "XIHUAN", - "XIN", - "XN--11B4C3D", - "XN--1CK2E1B", - "XN--1QQW23A", - "XN--2SCRJ9C", - "XN--30RR7Y", - "XN--3BST00M", - "XN--3DS443G", - "XN--3E0B707E", - "XN--3HCRJ9C", - "XN--3OQ18VL8PN36A", - "XN--3PXU8K", - "XN--42C2D9A", - "XN--45BR5CYL", - "XN--45BRJ9C", - "XN--45Q11C", - "XN--4GBRIM", - "XN--54B7FTA0CC", - "XN--55QW42G", - "XN--55QX5D", - "XN--5SU34J936BGSG", - "XN--5TZM5G", - "XN--6FRZ82G", - "XN--6QQ986B3XL", - "XN--80ADXHKS", - "XN--80AO21A", - "XN--80AQECDR1A", - "XN--80ASEHDB", - "XN--80ASWG", - "XN--8Y0A063A", - "XN--90A3AC", - "XN--90AE", - "XN--90AIS", - "XN--9DBQ2A", - "XN--9ET52U", - "XN--9KRT00A", - "XN--B4W605FERD", - "XN--BCK1B9A5DRE4C", - "XN--C1AVG", - "XN--C2BR7G", - "XN--CCK2B3B", - "XN--CCKWCXETD", - "XN--CG4BKI", - "XN--CLCHC0EA0B2G2A9GCD", - "XN--CZR694B", - "XN--CZRS0T", - "XN--CZRU2D", - "XN--D1ACJ3B", - "XN--D1ALF", - "XN--E1A4C", - "XN--ECKVDTC9D", - "XN--EFVY88H", - "XN--FCT429K", - "XN--FHBEI", - "XN--FIQ228C5HS", - "XN--FIQ64B", - "XN--FIQS8S", - "XN--FIQZ9S", - "XN--FJQ720A", - "XN--FLW351E", - "XN--FPCRJ9C3D", - "XN--FZC2C9E2C", - "XN--FZYS8D69UVGM", - "XN--G2XX48C", - "XN--GCKR3F0F", - "XN--GECRJ9C", - "XN--GK3AT1E", - "XN--H2BREG3EVE", - "XN--H2BRJ9C", - "XN--H2BRJ9C8C", - "XN--HXT814E", - "XN--I1B6B1A6A2E", - "XN--IMR513N", - "XN--IO0A7I", - "XN--J1AEF", - "XN--J1AMH", - 
"XN--J6W193G", - "XN--JLQ480N2RG", - "XN--JLQ61U9W7B", - "XN--JVR189M", - "XN--KCRX77D1X4A", - "XN--KPRW13D", - "XN--KPRY57D", - "XN--KPUT3I", - "XN--L1ACC", - "XN--LGBBAT1AD8J", - "XN--MGB9AWBF", - "XN--MGBA3A3EJT", - "XN--MGBA3A4F16A", - "XN--MGBA7C0BBN0A", - "XN--MGBAAKC7DVF", - "XN--MGBAAM7A8H", - "XN--MGBAB2BD", - "XN--MGBAH1A3HJKRD", - "XN--MGBAI9AZGQP6J", - "XN--MGBAYH7GPA", - "XN--MGBBH1A", - "XN--MGBBH1A71E", - "XN--MGBC0A9AZCG", - "XN--MGBCA7DZDO", - "XN--MGBCPQ6GPA1A", - "XN--MGBERP4A5D4AR", - "XN--MGBGU82A", - "XN--MGBI4ECEXP", - "XN--MGBPL2FH", - "XN--MGBT3DHD", - "XN--MGBTX2B", - "XN--MGBX4CD0AB", - "XN--MIX891F", - "XN--MK1BU44C", - "XN--MXTQ1M", - "XN--NGBC5AZD", - "XN--NGBE9E0A", - "XN--NGBRX", - "XN--NODE", - "XN--NQV7F", - "XN--NQV7FS00EMA", - "XN--NYQY26A", - "XN--O3CW4H", - "XN--OGBPF8FL", - "XN--OTU796D", - "XN--P1ACF", - "XN--P1AI", - "XN--PGBS0DH", - "XN--PSSY2U", - "XN--Q7CE6A", - "XN--Q9JYB4C", - "XN--QCKA1PMC", - "XN--QXA6A", - "XN--QXAM", - "XN--RHQV96G", - "XN--ROVU88B", - "XN--RVC1E0AM3E", - "XN--S9BRJ9C", - "XN--SES554G", - "XN--T60B56A", - "XN--TCKWE", - "XN--TIQ49XQYJ", - "XN--UNUP4Y", - "XN--VERMGENSBERATER-CTB", - "XN--VERMGENSBERATUNG-PWB", - "XN--VHQUV", - "XN--VUQ861B", - "XN--W4R85EL8FHU5DNRA", - "XN--W4RS40L", - "XN--WGBH1C", - "XN--WGBL6A", - "XN--XHQ521B", - "XN--XKC2AL3HYE2A", - "XN--XKC2DL3A5EE0H", - "XN--Y9A3AQ", - "XN--YFRO4I67O", - "XN--YGBI2AMMX", - "XN--ZFR164B", - "XXX", - "XYZ", - "YACHTS", - "YAHOO", - "YAMAXUN", - "YANDEX", - "YE", - "YODOBASHI", - "YOGA", - "YOKOHAMA", - "YOU", - "YOUTUBE", - "YT", - "YUN", - "ZA", - "ZAPPOS", - "ZARA", - "ZERO", - "ZIP", - "ZM", - "ZONE", - "ZUERICH", - "ZW", -] diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/markdown_it/rules_block/hr.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/markdown_it/rules_block/hr.py deleted file mode 100644 index 22c6972262f621126f998e2fc544718243623139..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/markdown_it/rules_block/hr.py +++ /dev/null @@ -1,53 +0,0 @@ -"""Horizontal rule - -At least 3 of these characters on a line * - _ -""" -import logging - -from ..common.utils import isSpace -from .state_block import StateBlock - -LOGGER = logging.getLogger(__name__) - - -def hr(state: StateBlock, startLine: int, endLine: int, silent: bool): - LOGGER.debug("entering hr: %s, %s, %s, %s", state, startLine, endLine, silent) - - pos = state.bMarks[startLine] + state.tShift[startLine] - maximum = state.eMarks[startLine] - - # if it's indented more than 3 spaces, it should be a code block - if state.sCount[startLine] - state.blkIndent >= 4: - return False - - marker = state.srcCharCode[pos] - pos += 1 - - # Check hr marker: /* * */ /* - */ /* _ */ - if marker != 0x2A and marker != 0x2D and marker != 0x5F: - return False - - # markers can be mixed with spaces, but there should be at least 3 of them - - cnt = 1 - while pos < maximum: - ch = state.srcCharCode[pos] - pos += 1 - if ch != marker and not isSpace(ch): - return False - if ch == marker: - cnt += 1 - - if cnt < 3: - return False - - if silent: - return True - - state.line = startLine + 1 - - token = state.push("hr", "hr", 0) - token.map = [startLine, state.line] - token.markup = chr(marker) * (cnt + 1) - - return True diff --git a/spaces/kyleledbetter/responsibleGPT/README.md b/spaces/kyleledbetter/responsibleGPT/README.md deleted file mode 100644 index 
e7cf47f63f7349f92b70bd65a53feb4e059cbcfe..0000000000000000000000000000000000000000 --- a/spaces/kyleledbetter/responsibleGPT/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: ResponsibleGPT -emoji: 🦀 -colorFrom: pink -colorTo: yellow -sdk: gradio -sdk_version: 3.4 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/leogabraneth/text-generation-webui-main/extensions/api/util.py b/spaces/leogabraneth/text-generation-webui-main/extensions/api/util.py deleted file mode 100644 index b90df9bcc9defa774cf47164b652a83a52ab892c..0000000000000000000000000000000000000000 --- a/spaces/leogabraneth/text-generation-webui-main/extensions/api/util.py +++ /dev/null @@ -1,154 +0,0 @@ -import asyncio -import functools -import threading -import time -import traceback -from threading import Thread -from typing import Callable, Optional - -from modules import shared -from modules.chat import load_character_memoized -from modules.presets import load_preset_memoized - -# We use a thread local to store the asyncio lock, so that each thread -# has its own lock. This isn't strictly necessary, but it makes it -# such that if we can support multiple worker threads in the future, -# thus handling multiple requests in parallel. -api_tls = threading.local() - - -def build_parameters(body, chat=False): - - generate_params = { - 'max_new_tokens': int(body.get('max_new_tokens', body.get('max_length', 200))), - 'auto_max_new_tokens': bool(body.get('auto_max_new_tokens', False)), - 'max_tokens_second': int(body.get('max_tokens_second', 0)), - 'do_sample': bool(body.get('do_sample', True)), - 'temperature': float(body.get('temperature', 0.5)), - 'top_p': float(body.get('top_p', 1)), - 'typical_p': float(body.get('typical_p', body.get('typical', 1))), - 'epsilon_cutoff': float(body.get('epsilon_cutoff', 0)), - 'eta_cutoff': float(body.get('eta_cutoff', 0)), - 'tfs': float(body.get('tfs', 1)), - 'top_a': float(body.get('top_a', 0)), - 'repetition_penalty': float(body.get('repetition_penalty', body.get('rep_pen', 1.1))), - 'presence_penalty': float(body.get('presence_penalty', body.get('presence_pen', 0))), - 'frequency_penalty': float(body.get('frequency_penalty', body.get('frequency_pen', 0))), - 'repetition_penalty_range': int(body.get('repetition_penalty_range', 0)), - 'encoder_repetition_penalty': float(body.get('encoder_repetition_penalty', 1.0)), - 'top_k': int(body.get('top_k', 0)), - 'min_length': int(body.get('min_length', 0)), - 'no_repeat_ngram_size': int(body.get('no_repeat_ngram_size', 0)), - 'num_beams': int(body.get('num_beams', 1)), - 'penalty_alpha': float(body.get('penalty_alpha', 0)), - 'length_penalty': float(body.get('length_penalty', 1)), - 'early_stopping': bool(body.get('early_stopping', False)), - 'mirostat_mode': int(body.get('mirostat_mode', 0)), - 'mirostat_tau': float(body.get('mirostat_tau', 5)), - 'mirostat_eta': float(body.get('mirostat_eta', 0.1)), - 'grammar_string': str(body.get('grammar_string', '')), - 'guidance_scale': float(body.get('guidance_scale', 1)), - 'negative_prompt': str(body.get('negative_prompt', '')), - 'seed': int(body.get('seed', -1)), - 'add_bos_token': bool(body.get('add_bos_token', True)), - 'truncation_length': int(body.get('truncation_length', body.get('max_context_length', 2048))), - 'custom_token_bans': str(body.get('custom_token_bans', '')), - 'ban_eos_token': bool(body.get('ban_eos_token', False)), - 'skip_special_tokens': bool(body.get('skip_special_tokens', 
True)), - 'custom_stopping_strings': '', # leave this blank - 'stopping_strings': body.get('stopping_strings', []), - } - - preset_name = body.get('preset', 'None') - if preset_name not in ['None', None, '']: - preset = load_preset_memoized(preset_name) - generate_params.update(preset) - - if chat: - character = body.get('character') - instruction_template = body.get('instruction_template', shared.settings['instruction_template']) - if str(instruction_template) == "None": - instruction_template = "Vicuna-v1.1" - if str(character) == "None": - character = "Assistant" - - name1, name2, _, greeting, context, _ = load_character_memoized(character, str(body.get('your_name', shared.settings['name1'])), '', instruct=False) - name1_instruct, name2_instruct, _, _, context_instruct, turn_template = load_character_memoized(instruction_template, '', '', instruct=True) - generate_params.update({ - 'mode': str(body.get('mode', 'chat')), - 'name1': str(body.get('name1', name1)), - 'name2': str(body.get('name2', name2)), - 'context': str(body.get('context', context)), - 'greeting': str(body.get('greeting', greeting)), - 'name1_instruct': str(body.get('name1_instruct', name1_instruct)), - 'name2_instruct': str(body.get('name2_instruct', name2_instruct)), - 'context_instruct': str(body.get('context_instruct', context_instruct)), - 'turn_template': str(body.get('turn_template', turn_template)), - 'chat-instruct_command': str(body.get('chat_instruct_command', body.get('chat-instruct_command', shared.settings['chat-instruct_command']))), - 'history': body.get('history', {'internal': [], 'visible': []}) - }) - - return generate_params - - -def try_start_cloudflared(port: int, tunnel_id: str, max_attempts: int = 3, on_start: Optional[Callable[[str], None]] = None): - Thread(target=_start_cloudflared, args=[ - port, tunnel_id, max_attempts, on_start], daemon=True).start() - - -def _start_cloudflared(port: int, tunnel_id: str, max_attempts: int = 3, on_start: Optional[Callable[[str], None]] = None): - try: - from flask_cloudflared import _run_cloudflared - except ImportError: - print('You should install flask_cloudflared manually') - raise Exception( - 'flask_cloudflared not installed. Make sure you installed the requirements.txt for this extension.') - - for _ in range(max_attempts): - try: - if tunnel_id is not None: - public_url = _run_cloudflared(port, port + 1, tunnel_id=tunnel_id) - else: - public_url = _run_cloudflared(port, port + 1) - - if on_start: - on_start(public_url) - - return - except Exception: - traceback.print_exc() - time.sleep(3) - - raise Exception('Could not start cloudflared.') - - -def _get_api_lock(tls) -> asyncio.Lock: - """ - The streaming and blocking API implementations each run on their own - thread, and multiplex requests using asyncio. If multiple outstanding - requests are received at once, we will try to acquire the shared lock - shared.generation_lock multiple times in succession in the same thread, - which will cause a deadlock. - - To avoid this, we use this wrapper function to block on an asyncio - lock, and then try and grab the shared lock only while holding - the asyncio lock. - """ - if not hasattr(tls, "asyncio_lock"): - tls.asyncio_lock = asyncio.Lock() - - return tls.asyncio_lock - - -def with_api_lock(func): - """ - This decorator should be added to all streaming API methods which - require access to the shared.generation_lock. It ensures that the - tls.asyncio_lock is acquired before the method is called, and - released afterwards. 
- """ - @functools.wraps(func) - async def api_wrapper(*args, **kwargs): - async with _get_api_lock(api_tls): - return await func(*args, **kwargs) - return api_wrapper diff --git a/spaces/leonelhs/rembg/README.md b/spaces/leonelhs/rembg/README.md deleted file mode 100644 index 96f6acb9dc014c6a3df26b14dcae02b2f75719a0..0000000000000000000000000000000000000000 --- a/spaces/leonelhs/rembg/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Rembg -emoji: 🦀 -colorFrom: pink -colorTo: yellow -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/lewisliuX123/wechatglm_demo/.github/ISSUE_TEMPLATE.md b/spaces/lewisliuX123/wechatglm_demo/.github/ISSUE_TEMPLATE.md deleted file mode 100644 index eac1f87e98b7e7d1af099769e5d4d8973002441f..0000000000000000000000000000000000000000 --- a/spaces/lewisliuX123/wechatglm_demo/.github/ISSUE_TEMPLATE.md +++ /dev/null @@ -1,28 +0,0 @@ -### 前置确认 - -1. 运行于国内网络环境,未开代理 -2. python 已安装:版本在 3.7 ~ 3.10 之间,依赖已安装 -3. 在已有 issue 中未搜索到类似问题 -4. [FAQS](https://github.com/zhayujie/chatgpt-on-wechat/wiki/FAQs) 中无类似问题 - - -### 问题描述 - -> 简要说明、截图、复现步骤等,也可以是需求或想法 - - - - -### 终端日志 (如有报错) - -``` -[在此处粘贴终端日志] -``` - - - -### 环境 - - - 操作系统类型 (Mac/Windows/Linux): - - Python版本 ( 执行 `python3 -V` ): - - pip版本 ( 依赖问题此项必填,执行 `pip3 -V`): diff --git a/spaces/lewiswu1209/MockingBird/synthesizer/utils/plot.py b/spaces/lewiswu1209/MockingBird/synthesizer/utils/plot.py deleted file mode 100644 index efdb5670b4f26f2110988e818ff8ad9ff7238cef..0000000000000000000000000000000000000000 --- a/spaces/lewiswu1209/MockingBird/synthesizer/utils/plot.py +++ /dev/null @@ -1,115 +0,0 @@ -import matplotlib -matplotlib.use("Agg") -import matplotlib.pyplot as plt -import numpy as np - - -def split_title_line(title_text, max_words=5): - """ - A function that splits any string based on specific character - (returning it with the string), with maximum number of words on it - """ - seq = title_text.split() - return "\n".join([" ".join(seq[i:i + max_words]) for i in range(0, len(seq), max_words)]) - -def plot_alignment(alignment, path, title=None, split_title=False, max_len=None): - if max_len is not None: - alignment = alignment[:, :max_len] - - fig = plt.figure(figsize=(8, 6)) - ax = fig.add_subplot(111) - - im = ax.imshow( - alignment, - aspect="auto", - origin="lower", - interpolation="none") - fig.colorbar(im, ax=ax) - xlabel = "Decoder timestep" - - if split_title: - title = split_title_line(title) - - plt.xlabel(xlabel) - plt.title(title) - plt.ylabel("Encoder timestep") - plt.tight_layout() - plt.savefig(path, format="png") - plt.close() - - -def plot_spectrogram(pred_spectrogram, path, title=None, split_title=False, target_spectrogram=None, max_len=None, auto_aspect=False): - if max_len is not None: - target_spectrogram = target_spectrogram[:max_len] - pred_spectrogram = pred_spectrogram[:max_len] - - if split_title: - title = split_title_line(title) - - fig = plt.figure(figsize=(10, 8)) - # Set common labels - fig.text(0.5, 0.18, title, horizontalalignment="center", fontsize=16) - - #target spectrogram subplot - if target_spectrogram is not None: - ax1 = fig.add_subplot(311) - ax2 = fig.add_subplot(312) - - if auto_aspect: - im = ax1.imshow(np.rot90(target_spectrogram), aspect="auto", interpolation="none") - else: - im = ax1.imshow(np.rot90(target_spectrogram), interpolation="none") - ax1.set_title("Target Mel-Spectrogram") - fig.colorbar(mappable=im, 
shrink=0.65, orientation="horizontal", ax=ax1) - ax2.set_title("Predicted Mel-Spectrogram") - else: - ax2 = fig.add_subplot(211) - - if auto_aspect: - im = ax2.imshow(np.rot90(pred_spectrogram), aspect="auto", interpolation="none") - else: - im = ax2.imshow(np.rot90(pred_spectrogram), interpolation="none") - fig.colorbar(mappable=im, shrink=0.65, orientation="horizontal", ax=ax2) - - plt.tight_layout() - plt.savefig(path, format="png") - plt.close() - - -def plot_spectrogram_and_trace(pred_spectrogram, path, title=None, split_title=False, target_spectrogram=None, max_len=None, auto_aspect=False, sw=None, step=0): - if max_len is not None: - target_spectrogram = target_spectrogram[:max_len] - pred_spectrogram = pred_spectrogram[:max_len] - - if split_title: - title = split_title_line(title) - - fig = plt.figure(figsize=(10, 8)) - # Set common labels - fig.text(0.5, 0.18, title, horizontalalignment="center", fontsize=16) - - #target spectrogram subplot - if target_spectrogram is not None: - ax1 = fig.add_subplot(311) - ax2 = fig.add_subplot(312) - - if auto_aspect: - im = ax1.imshow(np.rot90(target_spectrogram), aspect="auto", interpolation="none") - else: - im = ax1.imshow(np.rot90(target_spectrogram), interpolation="none") - ax1.set_title("Target Mel-Spectrogram") - fig.colorbar(mappable=im, shrink=0.65, orientation="horizontal", ax=ax1) - ax2.set_title("Predicted Mel-Spectrogram") - else: - ax2 = fig.add_subplot(211) - - if auto_aspect: - im = ax2.imshow(np.rot90(pred_spectrogram), aspect="auto", interpolation="none") - else: - im = ax2.imshow(np.rot90(pred_spectrogram), interpolation="none") - fig.colorbar(mappable=im, shrink=0.65, orientation="horizontal", ax=ax2) - - plt.tight_layout() - plt.savefig(path, format="png") - sw.add_figure("spectrogram", fig, step) - plt.close() \ No newline at end of file diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Microsoft Works Calendar Windows 10 BEST.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Microsoft Works Calendar Windows 10 BEST.md deleted file mode 100644 index b1c3c30ee3894bc56fd0eb5ca3ac735892a3b1f5..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Microsoft Works Calendar Windows 10 BEST.md +++ /dev/null @@ -1,6 +0,0 @@ -

-microsoft works calendar windows 10
-
-DOWNLOAD ○○○ https://bytlly.com/2uGyKy
-
-Open the Microsoft Works Calendar application, then select the calendar's tab in the right pane. 2. Click the "File" menu, then click "Export" followed by "vCalendar. 1fdad05405

                diff --git a/spaces/lint/anime_controlnet/main.py b/spaces/lint/anime_controlnet/main.py deleted file mode 100644 index f34cd52a8fe2fafff4b7c86c4b198f098c3b177b..0000000000000000000000000000000000000000 --- a/spaces/lint/anime_controlnet/main.py +++ /dev/null @@ -1,49 +0,0 @@ - -from argparse import Namespace -from multiprocessing import cpu_count -from src.lab import Lab - -args = Namespace( - - pretrained_model_name_or_path="lint/liquidfix", - controlnet_weights_path="lint/anime_control/anime_merge", - #controlnet_weights_path=None, # - vae_path="lint/anime_vae", - - # dataset args - train_data_dir="/mnt/g/data/anybooru/train", - valid_data_dir="/mnt/g/data/anybooru/valid", - resolution=512, - from_hf_hub=False, - controlnet_hint_key="canny", # set this to "canny" to train with canny hint, or None to pass - - # training args - # options are ["zero convolutions", "input hint blocks"], otherwise trains whole controlnet - training_stage = "", - learning_rate=5e-6, - num_train_epochs=1000, - max_train_steps=None, - seed=3434554, - max_grad_norm=1.0, - gradient_accumulation_steps=1, - - # VRAM args - batch_size=1, - mixed_precision="fp16", # set to "fp16" for mixed-precision training. - gradient_checkpointing=True, # set this to True to lower the memory usage. - use_8bit_adam=True, # use 8bit optimizer from bitsandbytes - enable_xformers_memory_efficient_attention=True, - allow_tf32=True, - dataloader_num_workers=cpu_count(), - - # logging args - output_dir="./models", - report_to='tensorboard', - image_logging_steps=600, # disabled when 0. costs additional VRAM to log images - save_whole_pipeline=True, - checkpointing_steps=6000, -) - -if __name__ == '__main__': - lab = Lab(args) - lab.train(args.num_train_epochs) diff --git a/spaces/luost26/DiffAb/diffab/modules/common/structure.py b/spaces/luost26/DiffAb/diffab/modules/common/structure.py deleted file mode 100644 index afac456640869cf205cfeeeca7c656e1d5ff2d00..0000000000000000000000000000000000000000 --- a/spaces/luost26/DiffAb/diffab/modules/common/structure.py +++ /dev/null @@ -1,77 +0,0 @@ -import torch -from torch.nn import Module, Linear, LayerNorm, Sequential, ReLU - -from ..common.geometry import compose_rotation_and_translation, quaternion_to_rotation_matrix, repr_6d_to_rotation_matrix - - -class FrameRotationTranslationPrediction(Module): - - def __init__(self, feat_dim, rot_repr, nn_type='mlp'): - super().__init__() - assert rot_repr in ('quaternion', '6d') - self.rot_repr = rot_repr - if rot_repr == 'quaternion': - out_dim = 3 + 3 - elif rot_repr == '6d': - out_dim = 6 + 3 - - if nn_type == 'linear': - self.nn = Linear(feat_dim, out_dim) - elif nn_type == 'mlp': - self.nn = Sequential( - Linear(feat_dim, feat_dim), ReLU(), - Linear(feat_dim, feat_dim), ReLU(), - Linear(feat_dim, out_dim) - ) - else: - raise ValueError('Unknown nn_type: %s' % nn_type) - - def forward(self, x): - y = self.nn(x) # (..., d+3) - if self.rot_repr == 'quaternion': - quaternion = torch.cat([torch.ones_like(y[..., :1]), y[..., 0:3]], dim=-1) - R_delta = quaternion_to_rotation_matrix(quaternion) - t_delta = y[..., 3:6] - return R_delta, t_delta - elif self.rot_repr == '6d': - R_delta = repr_6d_to_rotation_matrix(y[..., 0:6]) - t_delta = y[..., 6:9] - return R_delta, t_delta - - -class FrameUpdate(Module): - - def __init__(self, node_feat_dim, rot_repr='quaternion', rot_tran_nn_type='mlp'): - super().__init__() - self.transition_mlp = Sequential( - Linear(node_feat_dim, node_feat_dim), ReLU(), - Linear(node_feat_dim, node_feat_dim), ReLU(), 
- Linear(node_feat_dim, node_feat_dim), - ) - self.transition_layer_norm = LayerNorm(node_feat_dim) - - self.rot_tran = FrameRotationTranslationPrediction(node_feat_dim, rot_repr, nn_type=rot_tran_nn_type) - - def forward(self, R, t, x, mask_generate): - """ - Args: - R: Frame basis matrices, (N, L, 3, 3_index). - t: Frame external (absolute) coordinates, (N, L, 3). Unit: Angstrom. - x: Node-wise features, (N, L, F). - mask_generate: Masks, (N, L). - Returns: - R': Updated basis matrices, (N, L, 3, 3_index). - t': Updated coordinates, (N, L, 3). - """ - x = self.transition_layer_norm(x + self.transition_mlp(x)) - - R_delta, t_delta = self.rot_tran(x) # (N, L, 3, 3), (N, L, 3) - R_new, t_new = compose_rotation_and_translation(R, t, R_delta, t_delta) - - mask_R = mask_generate[:, :, None, None].expand_as(R) - mask_t = mask_generate[:, :, None].expand_as(t) - - R_new = torch.where(mask_R, R_new, R) - t_new = torch.where(mask_t, t_new, t) - - return R_new, t_new diff --git a/spaces/luost26/DiffAb/diffab/utils/transforms/__init__.py b/spaces/luost26/DiffAb/diffab/utils/transforms/__init__.py deleted file mode 100644 index 0c4cd2f33b86e4b0ad55bdb5c5a1f8ed392d9f6c..0000000000000000000000000000000000000000 --- a/spaces/luost26/DiffAb/diffab/utils/transforms/__init__.py +++ /dev/null @@ -1,7 +0,0 @@ -# Transforms -from .mask import MaskSingleCDR, MaskMultipleCDRs, MaskAntibody -from .merge import MergeChains -from .patch import PatchAroundAnchor - -# Factory -from ._base import get_transform, Compose diff --git a/spaces/ma-xu/LIVE/thrust/dependencies/cub/tune/Makefile b/spaces/ma-xu/LIVE/thrust/dependencies/cub/tune/Makefile deleted file mode 100644 index 926b340fe4af77d77663281c5874e11fe3a41be4..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/dependencies/cub/tune/Makefile +++ /dev/null @@ -1,192 +0,0 @@ -#/****************************************************************************** -# * Copyright (c) 2011, Duane Merrill. All rights reserved. -# * Copyright (c) 2011-2018, NVIDIA CORPORATION. All rights reserved. -# * -# * Redistribution and use in source and binary forms, with or without -# * modification, are permitted provided that the following conditions are met: -# * * Redistributions of source code must retain the above copyright -# * notice, this list of conditions and the following disclaimer. -# * * Redistributions in binary form must reproduce the above copyright -# * notice, this list of conditions and the following disclaimer in the -# * documentation and/or other materials provided with the distribution. -# * * Neither the name of the NVIDIA CORPORATION nor the -# * names of its contributors may be used to endorse or promote products -# * derived from this software without specific prior written permission. -# * -# * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND -# * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED -# * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE -# * DISCLAIMED. 
IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY -# * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES -# * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; -# * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND -# * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -# * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS -# * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -# * -#******************************************************************************/ - -#------------------------------------------------------------------------------- -# Build script for project -#------------------------------------------------------------------------------- - -NVCC = "$(shell which nvcc)" -NVCC_VERSION = $(strip $(shell nvcc --version | grep release | sed 's/.*release //' | sed 's/,.*//')) - -# detect OS -OSUPPER = $(shell uname -s 2>/dev/null | tr [:lower:] [:upper:]) - -#------------------------------------------------------------------------------- -# Libs -#------------------------------------------------------------------------------- - - -#------------------------------------------------------------------------------- -# Includes -#------------------------------------------------------------------------------- - -INC = -I. -I.. -I../test - -#------------------------------------------------------------------------------- -# Libs -#------------------------------------------------------------------------------- - -LIBS += -lcudart - -#------------------------------------------------------------------------------- -# Defines -#------------------------------------------------------------------------------- - -DEFINES = - -#------------------------------------------------------------------------------- -# SM Arch -#------------------------------------------------------------------------------- - -ifdef sm - SM_ARCH = $(sm) -else - SM_ARCH = 200 -endif - -# Only one arch per tuning binary -ifeq (350, $(findstring 350, $(SM_ARCH))) - SM_TARGETS = -arch=sm_35 - SM_ARCH = 350 -endif -ifeq (300, $(findstring 300, $(SM_ARCH))) - SM_TARGETS = -arch=sm_30 - SM_ARCH = 300 -endif -ifeq (200, $(findstring 200, $(SM_ARCH))) - SM_TARGETS = -arch=sm_20 - SM_ARCH = 200 -endif -ifeq (130, $(findstring 130, $(SM_ARCH))) - SM_TARGETS = -arch=sm_13 - SM_ARCH = 130 -endif -ifeq (110, $(findstring 110, $(SM_ARCH))) - SM_TARGETS = -arch=sm_11 - SM_ARCH = 110 -endif -ifeq (100, $(findstring 100, $(SM_ARCH))) - SM_TARGETS = -arch=sm_10 - SM_ARCH = 100 -endif - - -#------------------------------------------------------------------------------- -# Compiler Flags -#------------------------------------------------------------------------------- - -NVCCFLAGS = -Xptxas -v -Xcudafe -\# - -# Help the compiler/linker work with huge numbers of kernels on Windows -ifeq (WIN_NT, $(findstring WIN_NT, $(OSUPPER))) - NVCCFLAGS += -Xcompiler /bigobj -Xcompiler /Zm500 -endif - -# 32/64-bit (32-bit device pointers by default) -ifeq ($(force32), 1) - CPU_ARCH = -m32 - CPU_ARCH_SUFFIX = i386 -else - CPU_ARCH = -m64 - CPU_ARCH_SUFFIX = x86_64 -endif - -# CUDA ABI enable/disable (enabled by default) -ifneq ($(abi), 0) - ABI_SUFFIX = abi -else - NVCCFLAGS += -Xptxas -abi=no - ABI_SUFFIX = noabi -endif - -# NVVM/Open64 middle-end compiler (nvvm by default) -ifeq ($(open64), 1) - NVCCFLAGS += -open64 - PTX_SUFFIX = open64 -else - PTX_SUFFIX = nvvm -endif - -# Verbose toolchain output from nvcc 
-ifeq ($(verbose), 1) - NVCCFLAGS += -v -endif - -# Keep intermediate compilation artifacts -ifeq ($(keep), 1) - NVCCFLAGS += -keep -endif - -# Data type size to compile a schmoo binary for -ifdef tunesize - TUNE_SIZE = $(tunesize) -else - TUNE_SIZE = 4 -endif - - -SUFFIX = $(TUNE_SIZE)B_sm$(SM_ARCH)_$(PTX_SUFFIX)_$(NVCC_VERSION)_$(ABI_SUFFIX)_$(CPU_ARCH_SUFFIX) - -#------------------------------------------------------------------------------- -# Dependency Lists -#------------------------------------------------------------------------------- - -rwildcard=$(foreach d,$(wildcard $1*),$(call rwildcard,$d/,$2) $(filter $(subst *,%,$2),$d)) - -DEPS = ./Makefile \ - ../test/test_util.h \ - $(call rwildcard,../cub/,*.cuh) - - -#------------------------------------------------------------------------------- -# make default -#------------------------------------------------------------------------------- - -default: - - -#------------------------------------------------------------------------------- -# make clean -#------------------------------------------------------------------------------- - -clean : - rm -f bin/*$(CPU_ARCH_SUFFIX)* - rm -f *.i* *.cubin *.cu.c *.cudafe* *.fatbin.c *.ptx *.hash *.cu.cpp *.o - - - -#------------------------------------------------------------------------------- -# make tune_device_reduce -#------------------------------------------------------------------------------- - -tune_device_reduce: bin/tune_device_reduce_$(SUFFIX) - -bin/tune_device_reduce_$(SUFFIX) : tune_device_reduce.cu $(DEPS) - mkdir -p bin - $(NVCC) $(DEFINES) $(SM_TARGETS) -o bin/tune_device_reduce_$(SUFFIX) tune_device_reduce.cu $(NVCCFLAGS) $(CPU_ARCH) $(INC) $(LIBS) -O3 -DTUNE_ARCH=$(SM_ARCH) -DTUNE_SIZE=$(TUNE_SIZE) - diff --git a/spaces/ma-xu/LIVE/thrust/thrust/future.h b/spaces/ma-xu/LIVE/thrust/thrust/future.h deleted file mode 100644 index 12bebf8c6e041484b43d5a97759cccd730fc82f3..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/future.h +++ /dev/null @@ -1,179 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -/*! \file thrust/future.h - * \brief `thrust::future`, an asynchronous value type. - */ - -#pragma once - -#include -#include -#include - -#if THRUST_CPP_DIALECT >= 2011 && !defined(THRUST_LEGACY_GCC) - -#include -#include - -#include - -/* -// #include the host system's pointer.h header. -#define __THRUST_HOST_SYSTEM_POINTER_HEADER <__THRUST_HOST_SYSTEM_ROOT/pointer.h> - #include __THRUST_HOST_SYSTEM_POINTER_HEADER -#undef __THRUST_HOST_SYSTEM_POINTER_HEADER -*/ - -// #include the device system's pointer.h header. -#define __THRUST_DEVICE_SYSTEM_POINTER_HEADER <__THRUST_DEVICE_SYSTEM_ROOT/pointer.h> - #include __THRUST_DEVICE_SYSTEM_POINTER_HEADER -#undef __THRUST_DEVICE_SYSTEM_POINTER_HEADER - -/* -// #include the host system's future.h header. 
-#define __THRUST_HOST_SYSTEM_FUTURE_HEADER <__THRUST_HOST_SYSTEM_ROOT/future.h> - #include __THRUST_HOST_SYSTEM_FUTURE_HEADER -#undef __THRUST_HOST_SYSTEM_FUTURE_HEADER -*/ - -// #include the device system's future.h header. -#define __THRUST_DEVICE_SYSTEM_FUTURE_HEADER <__THRUST_DEVICE_SYSTEM_ROOT/future.h> - #include __THRUST_DEVICE_SYSTEM_FUTURE_HEADER -#undef __THRUST_DEVICE_SYSTEM_FUTURE_HEADER - -namespace thrust -{ - -/////////////////////////////////////////////////////////////////////////////// - -// `select_unique_(future|event)_type` is a hook for choosing the -// `unique_eager_event`/`unique_eager_future` type for a system. `decltype` is -// used to determine the return type of an ADL call to -// `select_unique_eager_(future|event)_type(system)`; that return type should -// be the correct event/future type for `system`. Overloads should only be -// declared, not defined. - -namespace unimplemented -{ - -struct no_unique_eager_event_type_found {}; - -inline __host__ -no_unique_eager_event_type_found -unique_eager_event_type(...) noexcept; - -struct no_unique_eager_future_type_found {}; - -template -__host__ -no_unique_eager_future_type_found -unique_eager_future_type(...) noexcept; - -} // namespace unimplemented - -namespace unique_eager_event_type_detail -{ - -using unimplemented::unique_eager_event_type; - -template -using select = decltype( - unique_eager_event_type(std::declval()) -); - -} // namespace unique_eager_event_type_detail - -namespace unique_eager_future_type_detail -{ - -using unimplemented::unique_eager_future_type; - -template -using select = decltype( - unique_eager_future_type(std::declval()) -); - -} // namespace unique_eager_future_type_detail - -/////////////////////////////////////////////////////////////////////////////// - -template -using unique_eager_event = unique_eager_event_type_detail::select; - -template -using event = unique_eager_event; - -/////////////////////////////////////////////////////////////////////////////// - -template -using unique_eager_future = unique_eager_future_type_detail::select; - -template -using future = unique_eager_future; - -/* -/////////////////////////////////////////////////////////////////////////////// - -using host_unique_eager_event = unique_eager_event_type_detail::select< - thrust::system::__THRUST_HOST_SYSTEM_NAMESPACE::tag ->; -using host_event = host_unique_eager_event; - -/////////////////////////////////////////////////////////////////////////////// - -template -using host_unique_eager_future = unique_eager_future_type_detail::select< - thrust::system::__THRUST_HOST_SYSTEM_NAMESPACE::tag, T ->; -template -using host_future = host_unique_eager_future; -*/ - -/////////////////////////////////////////////////////////////////////////////// - -using device_unique_eager_event = unique_eager_event_type_detail::select< - thrust::system::__THRUST_DEVICE_SYSTEM_NAMESPACE::tag ->; - -using device_event = device_unique_eager_event; - -/////////////////////////////////////////////////////////////////////////////// - -template -using device_unique_eager_future = unique_eager_future_type_detail::select< - thrust::system::__THRUST_DEVICE_SYSTEM_NAMESPACE::tag, T ->; - -template -using device_future = device_unique_eager_future; - -/////////////////////////////////////////////////////////////////////////////// - -struct new_stream_t final {}; - -THRUST_INLINE_CONSTANT new_stream_t new_stream{}; - -/////////////////////////////////////////////////////////////////////////////// - -using 
thrust::system::__THRUST_DEVICE_SYSTEM_NAMESPACE::when_all; - -/////////////////////////////////////////////////////////////////////////////// - -} // end namespace thrust - -#endif - diff --git a/spaces/manhkhanhUIT/Image_Restoration_Colorization/Face_Enhancement/models/networks/sync_batchnorm/unittest.py b/spaces/manhkhanhUIT/Image_Restoration_Colorization/Face_Enhancement/models/networks/sync_batchnorm/unittest.py deleted file mode 100644 index 998223a0e0242dc4a5b2fcd74af79dc7232794da..0000000000000000000000000000000000000000 --- a/spaces/manhkhanhUIT/Image_Restoration_Colorization/Face_Enhancement/models/networks/sync_batchnorm/unittest.py +++ /dev/null @@ -1,29 +0,0 @@ -# -*- coding: utf-8 -*- -# File : unittest.py -# Author : Jiayuan Mao -# Email : maojiayuan@gmail.com -# Date : 27/01/2018 -# -# This file is part of Synchronized-BatchNorm-PyTorch. -# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch -# Distributed under MIT License. - -import unittest -import torch - - -class TorchTestCase(unittest.TestCase): - def assertTensorClose(self, x, y): - adiff = float((x - y).abs().max()) - if (y == 0).all(): - rdiff = 'NaN' - else: - rdiff = float((adiff / y).abs().max()) - - message = ( - 'Tensor close check failed\n' - 'adiff={}\n' - 'rdiff={}\n' - ).format(adiff, rdiff) - self.assertTrue(torch.allclose(x, y, atol=1e-5, rtol=1e-3), message) - diff --git a/spaces/manhkhanhUIT/Image_Restoration_Colorization/Global/data/base_dataset.py b/spaces/manhkhanhUIT/Image_Restoration_Colorization/Global/data/base_dataset.py deleted file mode 100644 index 5f0ac562eacc926b606f70c9dea680021dec2edc..0000000000000000000000000000000000000000 --- a/spaces/manhkhanhUIT/Image_Restoration_Colorization/Global/data/base_dataset.py +++ /dev/null @@ -1,114 +0,0 @@ -# Copyright (c) Microsoft Corporation. -# Licensed under the MIT License. - -import torch.utils.data as data -from PIL import Image -import torchvision.transforms as transforms -import numpy as np -import random - -class BaseDataset(data.Dataset): - def __init__(self): - super(BaseDataset, self).__init__() - - def name(self): - return 'BaseDataset' - - def initialize(self, opt): - pass - -def get_params(opt, size): - w, h = size - new_h = h - new_w = w - if opt.resize_or_crop == 'resize_and_crop': - new_h = new_w = opt.loadSize - - if opt.resize_or_crop == 'scale_width_and_crop': # we scale the shorter side into 256 - - if w 0.5 - return {'crop_pos': (x, y), 'flip': flip} - -def get_transform(opt, params, method=Image.BICUBIC, normalize=True): - transform_list = [] - if 'resize' in opt.resize_or_crop: - osize = [opt.loadSize, opt.loadSize] - transform_list.append(transforms.Scale(osize, method)) - elif 'scale_width' in opt.resize_or_crop: - # transform_list.append(transforms.Lambda(lambda img: __scale_width(img, opt.loadSize, method))) ## Here , We want the shorter side to match 256, and Scale will finish it. - transform_list.append(transforms.Scale(256,method)) - - if 'crop' in opt.resize_or_crop: - if opt.isTrain: - transform_list.append(transforms.Lambda(lambda img: __crop(img, params['crop_pos'], opt.fineSize))) - else: - if opt.test_random_crop: - transform_list.append(transforms.RandomCrop(opt.fineSize)) - else: - transform_list.append(transforms.CenterCrop(opt.fineSize)) - - ## when testing, for ablation study, choose center_crop directly. 
- - - - if opt.resize_or_crop == 'none': - base = float(2 ** opt.n_downsample_global) - if opt.netG == 'local': - base *= (2 ** opt.n_local_enhancers) - transform_list.append(transforms.Lambda(lambda img: __make_power_2(img, base, method))) - - if opt.isTrain and not opt.no_flip: - transform_list.append(transforms.Lambda(lambda img: __flip(img, params['flip']))) - - transform_list += [transforms.ToTensor()] - - if normalize: - transform_list += [transforms.Normalize((0.5, 0.5, 0.5), - (0.5, 0.5, 0.5))] - return transforms.Compose(transform_list) - -def normalize(): - return transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)) - -def __make_power_2(img, base, method=Image.BICUBIC): - ow, oh = img.size - h = int(round(oh / base) * base) - w = int(round(ow / base) * base) - if (h == oh) and (w == ow): - return img - return img.resize((w, h), method) - -def __scale_width(img, target_width, method=Image.BICUBIC): - ow, oh = img.size - if (ow == target_width): - return img - w = target_width - h = int(target_width * oh / ow) - return img.resize((w, h), method) - -def __crop(img, pos, size): - ow, oh = img.size - x1, y1 = pos - tw = th = size - if (ow > tw or oh > th): - return img.crop((x1, y1, x1 + tw, y1 + th)) - return img - -def __flip(img, flip): - if flip: - return img.transpose(Image.FLIP_LEFT_RIGHT) - return img diff --git a/spaces/matthoffner/AudioCraft_Plus/audiocraft/modules/lstm.py b/spaces/matthoffner/AudioCraft_Plus/audiocraft/modules/lstm.py deleted file mode 100644 index c0866175950c1ca4f6cca98649525e6481853bba..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/AudioCraft_Plus/audiocraft/modules/lstm.py +++ /dev/null @@ -1,25 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from torch import nn - - -class StreamableLSTM(nn.Module): - """LSTM without worrying about the hidden state, nor the layout of the data. - Expects input as convolutional layout. 
- """ - def __init__(self, dimension: int, num_layers: int = 2, skip: bool = True): - super().__init__() - self.skip = skip - self.lstm = nn.LSTM(dimension, dimension, num_layers) - - def forward(self, x): - x = x.permute(2, 0, 1) - y, _ = self.lstm(x) - if self.skip: - y = y + x - y = y.permute(1, 2, 0) - return y diff --git a/spaces/matthoffner/open-codetree/editor.d.ts b/spaces/matthoffner/open-codetree/editor.d.ts deleted file mode 100644 index 7dc3e2b41ff335e3b42a3dba5c5deb521efea79a..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/open-codetree/editor.d.ts +++ /dev/null @@ -1,2 +0,0 @@ -declare module "react-split"; -declare module "monaco-jsx-highlighter"; diff --git a/spaces/matthoffner/starchat-ui/pages/api/home/home.tsx b/spaces/matthoffner/starchat-ui/pages/api/home/home.tsx deleted file mode 100644 index 6eb4ff4aa7417e29a4d1f2ed001f88d905f3f882..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/starchat-ui/pages/api/home/home.tsx +++ /dev/null @@ -1,407 +0,0 @@ -import { useEffect, useRef, useState } from 'react'; -import { useQuery } from 'react-query'; - -import { GetServerSideProps } from 'next'; -import { useTranslation } from 'next-i18next'; -import { serverSideTranslations } from 'next-i18next/serverSideTranslations'; -import Head from 'next/head'; - -import { useCreateReducer } from '@/hooks/useCreateReducer'; - -import useErrorService from '@/services/errorService'; -import useApiService from '@/services/useApiService'; - -import { - cleanConversationHistory, - cleanSelectedConversation, -} from '@/utils/app/clean'; -import { DEFAULT_SYSTEM_PROMPT, DEFAULT_TEMPERATURE } from '@/utils/app/const'; -import { - saveConversation, - saveConversations, - updateConversation, -} from '@/utils/app/conversation'; -import { saveFolders } from '@/utils/app/folders'; -import { savePrompts } from '@/utils/app/prompts'; -import { getSettings } from '@/utils/app/settings'; - -import { Conversation } from '@/types/chat'; -import { KeyValuePair } from '@/types/data'; -import { FolderInterface, FolderType } from '@/types/folder'; -import { OpenAIModelID, OpenAIModels, fallbackModelID } from '@/types/openai'; -import { Prompt } from '@/types/prompt'; - -import { Chat } from '@/components/Chat/Chat'; -import { Chatbar } from '@/components/Chatbar/Chatbar'; -import { Navbar } from '@/components/Mobile/Navbar'; -import Promptbar from '@/components/Promptbar'; - -import HomeContext from './home.context'; -import { HomeInitialState, initialState } from './home.state'; - -import { v4 as uuidv4 } from 'uuid'; - -interface Props { - serverSideApiKeyIsSet: boolean; - serverSidePluginKeysSet: boolean; - defaultModelId: OpenAIModelID; -} - -const Home = ({ - serverSideApiKeyIsSet, - serverSidePluginKeysSet, - defaultModelId, -}: Props) => { - const { t } = useTranslation('chat'); - const [initialRender, setInitialRender] = useState(true); - - const contextValue = useCreateReducer({ - initialState, - }); - - const { - state: { - apiKey, - lightMode, - folders, - conversations, - selectedConversation, - prompts, - temperature, - }, - dispatch, - } = contextValue; - - const stopConversationRef = useRef(false); - - - // FETCH MODELS ---------------------------------------------- - - const handleSelectConversation = (conversation: Conversation) => { - dispatch({ - field: 'selectedConversation', - value: conversation, - }); - - saveConversation(conversation); - }; - - // FOLDER OPERATIONS -------------------------------------------- - - const handleCreateFolder = (name: 
string, type: FolderType) => { - const newFolder: FolderInterface = { - id: uuidv4(), - name, - type, - }; - - const updatedFolders = [...folders, newFolder]; - - dispatch({ field: 'folders', value: updatedFolders }); - saveFolders(updatedFolders); - }; - - const handleDeleteFolder = (folderId: string) => { - const updatedFolders = folders.filter((f) => f.id !== folderId); - dispatch({ field: 'folders', value: updatedFolders }); - saveFolders(updatedFolders); - - const updatedConversations: Conversation[] = conversations.map((c) => { - if (c.folderId === folderId) { - return { - ...c, - folderId: null, - }; - } - - return c; - }); - - dispatch({ field: 'conversations', value: updatedConversations }); - saveConversations(updatedConversations); - - const updatedPrompts: Prompt[] = prompts.map((p) => { - if (p.folderId === folderId) { - return { - ...p, - folderId: null, - }; - } - - return p; - }); - - dispatch({ field: 'prompts', value: updatedPrompts }); - savePrompts(updatedPrompts); - }; - - const handleUpdateFolder = (folderId: string, name: string) => { - const updatedFolders = folders.map((f) => { - if (f.id === folderId) { - return { - ...f, - name, - }; - } - - return f; - }); - - dispatch({ field: 'folders', value: updatedFolders }); - - saveFolders(updatedFolders); - }; - - // CONVERSATION OPERATIONS -------------------------------------------- - - const handleNewConversation = () => { - const lastConversation = conversations[conversations.length - 1]; - - const newConversation: Conversation = { - id: uuidv4(), - name: t('New Conversation'), - messages: [], - model: lastConversation?.model || { - id: OpenAIModels[defaultModelId].id, - name: OpenAIModels[defaultModelId].name, - maxLength: OpenAIModels[defaultModelId].maxLength, - tokenLimit: OpenAIModels[defaultModelId].tokenLimit, - }, - prompt: DEFAULT_SYSTEM_PROMPT, - temperature: lastConversation?.temperature ?? 
DEFAULT_TEMPERATURE, - folderId: null, - }; - - const updatedConversations = [...conversations, newConversation]; - - dispatch({ field: 'selectedConversation', value: newConversation }); - dispatch({ field: 'conversations', value: updatedConversations }); - - saveConversation(newConversation); - saveConversations(updatedConversations); - - dispatch({ field: 'loading', value: false }); - }; - - const handleUpdateConversation = ( - conversation: Conversation, - data: KeyValuePair, - ) => { - const updatedConversation = { - ...conversation, - [data.key]: data.value, - }; - - const { single, all } = updateConversation( - updatedConversation, - conversations, - ); - - dispatch({ field: 'selectedConversation', value: single }); - dispatch({ field: 'conversations', value: all }); - }; - - // EFFECTS -------------------------------------------- - - useEffect(() => { - if (window.innerWidth < 640) { - dispatch({ field: 'showChatbar', value: false }); - } - }, [selectedConversation]); - - useEffect(() => { - defaultModelId && - dispatch({ field: 'defaultModelId', value: defaultModelId }); - serverSideApiKeyIsSet && - dispatch({ - field: 'serverSideApiKeyIsSet', - value: serverSideApiKeyIsSet, - }); - serverSidePluginKeysSet && - dispatch({ - field: 'serverSidePluginKeysSet', - value: serverSidePluginKeysSet, - }); - }, [defaultModelId, serverSideApiKeyIsSet, serverSidePluginKeysSet]); - - // ON LOAD -------------------------------------------- - - useEffect(() => { - const settings = getSettings(); - if (settings.theme) { - dispatch({ - field: 'lightMode', - value: settings.theme, - }); - } - - const apiKey = "test"; - - if (serverSideApiKeyIsSet) { - dispatch({ field: 'apiKey', value: '' }); - - localStorage.removeItem('apiKey'); - } else if (apiKey) { - dispatch({ field: 'apiKey', value: apiKey }); - } - - const pluginKeys = localStorage.getItem('pluginKeys'); - if (serverSidePluginKeysSet) { - dispatch({ field: 'pluginKeys', value: [] }); - localStorage.removeItem('pluginKeys'); - } else if (pluginKeys) { - dispatch({ field: 'pluginKeys', value: pluginKeys }); - } - - if (window.innerWidth < 640) { - dispatch({ field: 'showChatbar', value: false }); - dispatch({ field: 'showPromptbar', value: false }); - } - - const showChatbar = localStorage.getItem('showChatbar'); - if (showChatbar) { - dispatch({ field: 'showChatbar', value: showChatbar === 'true' }); - } - - const showPromptbar = localStorage.getItem('showPromptbar'); - if (showPromptbar) { - dispatch({ field: 'showPromptbar', value: showPromptbar === 'true' }); - } - - const folders = localStorage.getItem('folders'); - if (folders) { - dispatch({ field: 'folders', value: JSON.parse(folders) }); - } - - const prompts = localStorage.getItem('prompts'); - if (prompts) { - dispatch({ field: 'prompts', value: JSON.parse(prompts) }); - } - - const conversationHistory = localStorage.getItem('conversationHistory'); - if (conversationHistory) { - const parsedConversationHistory: Conversation[] = - JSON.parse(conversationHistory); - const cleanedConversationHistory = cleanConversationHistory( - parsedConversationHistory, - ); - - dispatch({ field: 'conversations', value: cleanedConversationHistory }); - } - - const selectedConversation = localStorage.getItem('selectedConversation'); - if (selectedConversation) { - const parsedSelectedConversation: Conversation = - JSON.parse(selectedConversation); - const cleanedSelectedConversation = cleanSelectedConversation( - parsedSelectedConversation, - ); - - dispatch({ - field: 'selectedConversation', - value: 
cleanedSelectedConversation, - }); - } else { - const lastConversation = conversations[conversations.length - 1]; - dispatch({ - field: 'selectedConversation', - value: { - id: uuidv4(), - name: t('New Conversation'), - messages: [], - model: OpenAIModels[defaultModelId], - prompt: DEFAULT_SYSTEM_PROMPT, - temperature: lastConversation?.temperature ?? DEFAULT_TEMPERATURE, - folderId: null, - }, - }); - } - }, [ - defaultModelId, - dispatch, - serverSideApiKeyIsSet, - serverSidePluginKeysSet, - ]); - - return ( - - - Starchat UI - - - - - {selectedConversation && ( -
                -
                - -
                - -
                - - -
                - -
                - - -
                -
                - )} -
                - ); -}; -export default Home; - -export const getServerSideProps: GetServerSideProps = async ({ locale }) => { - const defaultModelId = - (process.env.DEFAULT_MODEL && - Object.values(OpenAIModelID).includes( - process.env.DEFAULT_MODEL as OpenAIModelID, - ) && - process.env.DEFAULT_MODEL) || - fallbackModelID; - - let serverSidePluginKeysSet = false; - - const googleApiKey = process.env.GOOGLE_API_KEY; - const googleCSEId = process.env.GOOGLE_CSE_ID; - - if (googleApiKey && googleCSEId) { - serverSidePluginKeysSet = true; - } - - return { - props: { - serverSideApiKeyIsSet: !!process.env.OPENAI_API_KEY, - defaultModelId, - serverSidePluginKeysSet, - ...(await serverSideTranslations(locale ?? 'en', [ - 'common', - 'chat', - 'sidebar', - 'markdown', - 'promptbar', - 'settings', - ])), - }, - }; -}; diff --git a/spaces/maykcaldas/MAPI_LLM/version.py b/spaces/maykcaldas/MAPI_LLM/version.py deleted file mode 100644 index 8a9ecc2ea99d607e92feae1656ddbf6fdd82a2c1..0000000000000000000000000000000000000000 --- a/spaces/maykcaldas/MAPI_LLM/version.py +++ /dev/null @@ -1 +0,0 @@ -0.0.1 \ No newline at end of file diff --git a/spaces/merve/data-leak/public/third_party/misc.js b/spaces/merve/data-leak/public/third_party/misc.js deleted file mode 100644 index a51b6b5292feaa6ee497806752a0d3d0cb4ef547..0000000000000000000000000000000000000000 --- a/spaces/merve/data-leak/public/third_party/misc.js +++ /dev/null @@ -1,38 +0,0 @@ -/* Copyright 2019 Google LLC. All Rights Reserved. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-==============================================================================*/ - - -function lerp(a, b, t){ return a + t*(b - a) } - -function addVec([a0, a1], [b0, b1]){ - return [a0 + b0, a1 + b1] -} - -function phyllotaxis(i, initialRadius=10, initialAngle=Math.PI*(3 - Math.sqrt(5))){ - i = i + Math.random()/20 - - var r = initialRadius*Math.sqrt(Math.random() + i) - var angle = i*initialAngle - - return [r*Math.cos(angle), r*Math.sin(angle)] -} - -var names = { - old_m: 'James John Robert Michael William David Richard Joseph Thomas Charles Christopher Daniel Matthew Anthony Donald Mark Paul Steven Andrew Kenneth Joshua George Kevin Brian Edward Ronald Timothy Jason Jeffrey Ryan Jacob Gary Nicholas Eric Stephen Jonathan Larry Justin Scott Brandon Frank Benjamin Gregory Samuel Raymond Patrick Alexander Jack Dennis Jerry Tyler Aaron Jose Henry Douglas Adam Peter Nathan Zachary Walter Kyle Harold Carl Jeremy Keith Roger Gerald Ethan Arthur Terry Christian Sean Lawrence Austin Joe Noah Jesse Albert Bryan Billy Bruce Willie Jordan Dylan Alan Ralph Gabriel Roy Juan Wayne Eugene Logan Randy Louis Russell Vincent Philip Bobby Johnny Bradley'.split(' '), - old_f: 'Mary Patricia Jennifer Linda Elizabeth Barbara Susan Jessica Sarah Karen Nancy Margaret Lisa Betty Dorothy Sandra Ashley Kimberly Donna Emily Michelle Carol Amanda Melissa Deborah Stephanie Rebecca Laura Sharon Cynthia Kathleen Helen Amy Shirley Angela Anna Brenda Pamela Nicole Ruth Katherine Samantha Christine Emma Catherine Debra Virginia Rachel Carolyn Janet Maria Heather Diane Julie Joyce Victoria Kelly Christina Joan Evelyn Lauren Judith Olivia Frances Martha Cheryl Megan Andrea Hannah Jacqueline Ann Jean Alice Kathryn Gloria Teresa Doris Sara Janice Julia Marie Madison Grace Judy Theresa Beverly Denise Marilyn Amber Danielle Abigail Brittany Rose Diana Natalie Sophia Alexis Lori Kayla Jane'.split(' '), - m: 'Noah Liam Jacob Mason William Ethan Michael Alexander James Elijah Daniel Benjamin Aiden Jayden Logan Matthew David Joseph Lucas Jackson Anthony Joshua Samuel Andrew Gabriel Christopher John Dylan Carter Isaac Ryan Luke Oliver Nathan Henry Owen Caleb Wyatt Christian Sebastian Jack Jonathan Landon Julian Isaiah Hunter Levi Aaron Eli Charles Thomas Connor Brayden Nicholas Jaxon Jeremiah Cameron Evan Adrian Jordan Gavin Grayson Angel Robert Tyler Josiah Austin Colton Brandon Jose Dominic Kevin Zachary Ian Chase Jason Adam Ayden Parker Hudson Cooper Nolan Lincoln Xavier Carson Jace Justin Easton Mateo Asher Bentley Blake Nathaniel Jaxson Leo Kayden Tristan Luis Elias Brody Bryson Juan Vincent Cole Micah Ryder Theodore Carlos Ezra Damian Miles Santiago Max Jesus Leonardo Sawyer Diego Alex Roman Maxwell Eric Greyson Hayden Giovanni Wesley Axel Camden Braxton Ivan Ashton Declan Bryce Timothy Antonio Silas Kaiden Ezekiel Jonah Weston George Harrison Steven Miguel Richard Bryan Kaleb Victor Aidan Jameson Joel Patrick Jaden Colin Everett Preston Maddox Edward Alejandro Kaden Jesse Emmanuel Kyle Brian Emmett Jude Marcus Kingston Kai Alan Malachi Grant Jeremy Riley Jayce Bennett Abel Ryker Caden Brantley Luca Brady Calvin Sean Oscar Jake Maverick Abraham Mark Tucker Nicolas Bradley Kenneth Avery Cayden King Paul Amir Gael Graham Maximus'.split(' '), - f: 'Emma Sophia Olivia Isabella Ava Mia Abigail Emily Madison Charlotte Elizabeth Amelia Chloe Ella Evelyn Avery Sofia Harper Grace Addison Victoria Natalie Lily Aubrey Lillian Zoey Hannah Layla Brooklyn Samantha Zoe Leah Scarlett Riley Camila Savannah Anna Audrey Allison Aria Gabriella 
Hailey Claire Sarah Aaliyah Kaylee Nevaeh Penelope Alexa Arianna Stella Alexis Bella Nora Ellie Ariana Lucy Mila Peyton Genesis Alyssa Taylor Violet Maya Caroline Madelyn Skylar Serenity Ashley Brianna Kennedy Autumn Eleanor Kylie Sadie Paisley Julia Mackenzie Sophie Naomi Eva Khloe Katherine Gianna Melanie Aubree Piper Ruby Lydia Faith Madeline Alexandra Kayla Hazel Lauren Annabelle Jasmine Aurora Alice Makayla Sydney Bailey Luna Maria Reagan Morgan Isabelle Rylee Kimberly Andrea London Elena Jocelyn Natalia Trinity Eliana Vivian Cora Quinn Liliana Molly Jade Clara Valentina Mary Brielle Hadley Kinsley Willow Brooke Lilly Delilah Payton Mariah Paige Jordyn Nicole Mya Josephine Isabel Lyla Adeline Destiny Ivy Emilia Rachel Angelina Valeria Kendall Sara Ximena Isla Aliyah Reese Vanessa Juliana Mckenzie Amy Laila Adalynn Emery Margaret Eden Gabrielle Kaitlyn Ariel Gracie Brooklynn Melody Jessica Valerie Adalyn Adriana Elise Michelle Rebecca Daisy Everly Katelyn Ryleigh Catherine Norah Alaina Athena Leilani Londyn Eliza Jayla Summer Lila Makenzie Izabella Daniela Stephanie Julianna Rose Alana Harmony Jennifer Hayden'.split(' '), - last: 'SMITH JOHNSON WILLIAMS BROWN JONES GARCIA MILLER DAVIS RODRIGUEZ MARTINEZ HERNANDEZ LOPEZ GONZALEZ WILSON ANDERSON THOMAS TAYLOR MOORE JACKSON MARTIN LEE PEREZ THOMPSON WHITE HARRIS SANCHEZ CLARK RAMIREZ LEWIS ROBINSON WALKER YOUNG ALLEN KING WRIGHT SCOTT TORRES NGUYEN HILL FLORES GREEN ADAMS NELSON BAKER HALL RIVERA CAMPBELL MITCHELL CARTER ROBERTS GOMEZ PHILLIPS EVANS TURNER DIAZ PARKER CRUZ EDWARDS COLLINS REYES STEWART MORRIS MORALES MURPHY COOK ROGERS GUTIERREZ ORTIZ MORGAN COOPER PETERSON BAILEY REED KELLY HOWARD RAMOS KIM COX WARD RICHARDSON WATSON BROOKS CHAVEZ WOOD JAMES BENNETT GRAY MENDOZA RUIZ HUGHES PRICE ALVAREZ CASTILLO SANDERS PATEL MYERS LONG ROSS FOSTER JIMENEZ POWELL JENKINS PERRY RUSSELL SULLIVAN BELL COLEMAN BUTLER HENDERSON BARNES GONZALES FISHER VASQUEZ SIMMONS ROMERO JORDAN PATTERSON ALEXANDER HAMILTON GRAHAM REYNOLDS GRIFFIN WALLACE MORENO WEST COLE HAYES BRYANT HERRERA GIBSON ELLIS TRAN MEDINA AGUILAR STEVENS MURRAY FORD CASTRO MARSHALL OWENS HARRISON FERNANDEZ MCDONALD WOODS WASHINGTON KENNEDY WELLS VARGAS HENRY CHEN FREEMAN WEBB TUCKER GUZMAN BURNS CRAWFORD OLSON SIMPSON PORTER HUNTER GORDON MENDEZ SILVA SHAW SNYDER MASON DIXON MUNOZ HUNT HICKS HOLMES PALMER WAGNER BLACK ROBERTSON BOYD ROSE STONE SALAZAR FOX WARREN MILLS MEYER RICE SCHMIDT GARZA DANIELS FERGUSON NICHOLS STEPHENS SOTO WEAVER RYAN'.split(' ').map(d => d[0] + d.slice(1).toLowerCase()) -} diff --git a/spaces/merve/fill-in-the-blank/source/fill-in-the-blank/scatter.js b/spaces/merve/fill-in-the-blank/source/fill-in-the-blank/scatter.js deleted file mode 100644 index f0656aaaf3fdbea7ab8c3f6e87d9f9a864ad6726..0000000000000000000000000000000000000000 --- a/spaces/merve/fill-in-the-blank/source/fill-in-the-blank/scatter.js +++ /dev/null @@ -1,232 +0,0 @@ -/* Copyright 2021 Google LLC. All Rights Reserved. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-==============================================================================*/ - - -window.initScatter = function(c){ - var rv = {data: [], cur_t: 0} - - var duration = 1 - if (!c.scatters) c.scatters = [rv] - - var [svgbot, ctx, divSel, svg] = c.layers - - var regl = createREGL({ - container: divSel.node(), - // attributes: {antialias: false}, - }) - - - // https://blocks.roadtolarissa.com/1wheel/0a58f8bf5a14f6a534b9043a9c63dd1d - // https://peterbeshai.com/blog/2017-05-26-beautifully-animate-points-with-webgl-and-regl/ - function drawRegl(){ - var {data} = rv - var t0 = performance.now() - - var tmpData = [ - {x: 0, y: 0}, - {x: .5, y: .5}, - {x: 1, y: 1}, - {x: -1, y: -1}, - ] - - var drawPoints = regl({ - vert: ` - precision mediump float; - attribute float x, y, px, py, isVisible; - - attribute vec3 color; - varying vec3 fragColor; - - uniform float interp; - void main() { - float xPos = isVisible < .5 ? -2.0 : mix(px, x, interp); - // float xPos = mix(px, x, interp); - float yPos = mix(py, y, interp); - gl_Position = vec4(xPos, yPos, 0, 1); - - gl_PointSize = ${devicePixelRatio > 3 ? 7 : devicePixelRatio > 1 ? 5 : 2}.0; - - fragColor = color; - }`, - frag: ` - precision mediump float; - varying vec3 fragColor; - void main() { - gl_FragColor = vec4(fragColor, 1.0); - }`, - - - attributes: { - x: data.map(d => d.x/c.width*2 - 1), - y: data.map(d => -d.y/c.height*2 + 1), - px: data.map(d => d.p.x/c.width*2 - 1), - py: data.map(d => -d.p.y/c.height*2 + 1), - color: data.map(d => d.color), - isVisible: data.map(d => c.type != 'c' || d.isVisible ? 1 : 0), - }, - uniforms: { - interp: (ctx, props) => props.interp, - }, - primitive: 'point', - count: data.length, - }) - - drawPoints({interp: 0}) - - if (rv.regltick) rv.regltick.cancel() - rv.regltick = regl.frame(({ time }) => { - var dt = performance.now() - t0 + 8 - var interp = d3.easeCubic(d3.clamp(0, dt/duration, 1)) - - drawPoints({interp}) - if (1 == interp && rv.regltick) rv.regltick.cancel() - - // c.svg.selectAppend('text.debug').text(dt + ' ' + interp) - }) - } - - var centerPathSel = c.svg.selectAppend('path.center') - .st({pointerEvents: 'none', strokeWidth: .3, stroke: '#ccc'}) - - rv.draw = function(c, data, isxy){ - rv.pData = rv.data - rv.data = data - - if (!rv.pData.length) rv.pData = rv.data - - data.forEach((d, i) => { - d.prettyWord = d.word.replace('▁', '') - d.color = util.color2array(d.fill) - // console.log(d.color) - d.i = i - d.p = rv.pData[i] - if (!d.p) debugger - // ctx.fillStyle = d.fill - // ctx.fillRect(d.x - d.s/2, d.y - d.s/2, d.s, d.s) - }) - - - - var tinyTextSel = svg.selectAll('text.tiny') - .data(data.filter(d => d.show), d => d.word) - - tinyTextSel.exit() - .transition().duration(duration) - .translate(d => [rv.data[d.i].x, rv.data[d.i].y]) - .at({fill: d => d.fill, opacity: 0}) - .remove() - - tinyTextSel.enter().append('text.tiny') - .text(d => d.prettyWord) - .at({ - dy: d => d.show[0] == 'u' ? -2 : 10, - dx: d => d.show[1] == 'r' ? 2 : -2, - textAnchor: d => d.show[1] == 'r' ? '' : 'end', - fill: d => d.p.fill, - opacity: 0 - }) - .translate(d => [d.p.x, d.p.y]) - .merge(tinyTextSel) - .transition().duration(duration) - .translate(d => [d.x, d.y]) - .at({fill: d => d.fill, opacity: 1}) - - c.svg.transition().duration(duration) - .attrTween('cur_t', function(){ - rv.cur_t = 0 - drawRegl() - - return t => { - rv.cur_t = t - } - }) - - centerPathSel - .raise() - .transition().duration(duration)//.ease(d3.easeQuadIn) - .at({d: isxy ? 
- ['M', 0, c.height, 'L', c.width, 0].join(' ') : - ['M', 0, c.y(0) + .5, 'L', c.width, c.y(0) + .5].join(' ') - }) - - setTimeout(() => duration = c.scatters.length > 1 ? 600 : 600, 1) - - // svg.appendMany('text.tiny', data.filter(d => d.show)) - // .text(d => d.prettyWord) - // .translate(d => [d.x, d.y]) - // .at({ - // dy: d => d.show[0] == 'u' ? -2 : 10, - // dx: d => d.show[1] == 'r' ? 2 : -2, - // textAnchor: d => d.show[1] == 'r' ? '' : 'end', - // fill: d => d.fill, - // }) - } - - function addHover(){ - var curHover = '' - var hoverSel = svg.append('g.hover').st({opacity: 0, pointerEvents: 'none'}) - - hoverSel.append('circle') - .at({r: 5, fill: 'none', stroke: '#000'}) - - var hoverTextSel = hoverSel.appendMany('text', [0, 1]) - .at({x: 10, y: 5, stroke: d => d ? '' : '#000'}) - .st({fontFamily: 'monospace'}) - - svg.append('rect') - .at({width: c.width, height: c.height, fill: 'rgba(0,0,0,0)'}) - - svg - .on('mousemove', function(){ - var [x, y] = d3.mouse(this) - - var match = _.minBy(rv.data.filter(d => d.isVisible), d => { - var dx = x - d.x - var dy = y - d.y - - return dx*dx + dy*dy - }) - - if (match && curHover != match.word) setHoverAll(match.word) - }) - .on('mouseout', function(){ - curHover = null - setHoverAll(null) - }) - - function setHoverAll(word){ - c.scatters.forEach(d => d.setHover(word)) - } - - rv.setHover = word => { - var d = _.find(rv.data, {word}) - if (!d){ - hoverSel.st({opacity: 0}) - hoverTextSel.text('') - return - } - curHover = word - - hoverSel.translate([d.x, d.y]).raise().st({opacity: 1}) - hoverTextSel.text(d.prettyWord) - } - } - addHover() - - return rv -} - - -if (window.init) init() diff --git a/spaces/merve/measuring-fairness/public/dataset-worldviews/script.js b/spaces/merve/measuring-fairness/public/dataset-worldviews/script.js deleted file mode 100644 index 3ebba088d65f389af1b446a9ea90fcde674d5fdf..0000000000000000000000000000000000000000 --- a/spaces/merve/measuring-fairness/public/dataset-worldviews/script.js +++ /dev/null @@ -1,588 +0,0 @@ - -console.clear(); - -var ttSel = d3.select("body").selectAppend("div.tooltip.tooltip-hidden"); -// For result tables -const columns = ["object", "n", "n correct", "accuracy"]; -const rowHeight = 50; -const rowWidth = 100; -const buffer = 2; - -const classifierBlobWidth = 50; -const classifierBlobHeight = 460; - -function drawShapesWithData(classifier) { - var divHeight = classifier.class == "show-shapes" ? 250 : 490; - - var c = d3.conventions({ - sel: d3.select("." 
+ classifier.class).html(""), - width: 1300, - height: divHeight, - layers: "ds", - }); - - function runClassifier() { - classifier.isClassified = true; - var duration = 3000; - classifierSel.classed("is-classified", true); - graphResultsGroup.classed("is-classified", true); - - drawResults(); - buttonSel.text("Reset"); - - var minX = d3.min(shapeParams, (d) => d.endX - 50); - var timer = d3.timer((ms) => { - if (!classifier.isClassified) { - timer.stop(); - shapeSel.classed("is-classified", false); - return; - } - - var t = d3.easeCubicInOut(ms / duration); - t = d3.clamp(0, t, 1); - - shapeParams.forEach((d, i) => { - d.x = d.startX + (d.endX - d.startX) * t; - d.y = d.startY + (d.endY - d.startY) * t; - d.isClassified = d.x > minX; - }); - - shapeSel - .translate((d) => [d.x, d.y]) - .classed("is-classified", (d) => d.isClassified); - - if (t == 1) { - timer.stop(); - } - }); - } - - function resetClassifier() { - shapeSel.translate((d) => [d.startX, d.startY]); - shapeSel.classed("is-classified", false); - classifier.isClassified = false; - shapeSel - .transition("position") - .duration(0) - .translate((d) => [d.startX, d.startY]); - classifierSel.classed("is-classified", false); - graphResultsGroup.classed("is-classified", false); - if (classifier.class != "show-shapes") { - classifierBlobSel.attr("opacity", 100); - } - - drawResults(); - buttonSel.text("Run Classifier"); - } - - // Add run/reset button - var buttonSel = d3 - .select("." + classifier.class + "-button") - .html("") - .append("button#run") - .at({ - type: "button", - class: "classifier-button", - }) - .text("Run Classifier") - .on("click", () => { - // if already classified, reset - if (classifier.isClassified) { - // Resetting - resetClassifier(); - } else { - runClassifier(); - } - }); - - // Backgrounds for different classifications - var classifierSel = c.svg - .append("g") - .at({ - class: "classifier", - }) - .translate([465, 20]); - - classifierSel - .append("path.classifier-bg-shaded") - .at({ - d: classifierBgPathTop, - // fill: "#ccc", - // stroke: "#000", - }) - .translate([-50, 0]); - - classifierSel - .append("text.classifier-bg-text") - .at({ - fill: "#000", - textAnchor: "middle", - dominantBaseline: "central", - class: "monospace", - }) - .text("shaded") - .translate([160, 15]); - - classifierSel - .append("path.classifier-bg-unshaded") - .at({ - d: classifierBgPathBottom, - }) - .translate([-50, 160]); - - classifierSel - .append("text.classifier-bg-text") - .at({ - fill: "#000", - textAnchor: "middle", - dominantBaseline: "central", - class: "monospace", - }) - .text("unshaded") - .translate([160, 175]); - - // Add the shapes themselves - var shapeSel = c.svg - .appendMany("path.shape", shapeParams) - .at({ - d: (d) => d.path, - class: (d) => "gt-" + d.gt + " " + d.correctness, - }) - .translate(function (d) { - if (classifier.class == "show-shapes") { - return [d.initialX + 35, d.initialY-20]; - } else { - return [d.startX, d.startY]; - } - }) - .call(d3.attachTooltip) - .on("mouseover", (d) => { - ttSel.html(""); - if (classifier.usingLabel != "none") { - ttSel - .append("div") - .html( - `labeled: ${toPropertyString( - d[classifier.usingLabel], - classifier.isRounding - ).slice(0, -1)}` - ); - } - var gtSel = ttSel - .append("div") - .html( - `ground truth: ${d.gt}` - ); - if (classifier.isClassified) { - ttSel - .append("div.labeled-row") - .html( - `classified as: ${d.label}` - ); - - ttSel - .append("div.correct-row") - .classed("is-correct-tooltip", d.correctness == "correct") - .html(`
                ${d.correctness}ly classified `); - } - ttSel.classed("tt-text", true); - }); - - // If we're just showing shapes, ignore everything else - if (classifier.class == "show-shapes") return; - - // Add "classifier" line - var classifierBlobSel = c.svg - .append("g") - .at({ - class: "classifier-blob", - strokeWidth: 0, - }) - .translate([378, 20]); - - classifierBlobSel - .append("line.classifier-blob") - .at({ - class: "line", - x1: 27, - x2: 27, - y1: 0, - y2: 464, - stroke: "#000", - strokeWidth: 1, - }) - .style("stroke-dasharray", "5, 5"); - - classifierBlobSel - .append("text.classifier-blob-text") - .at({ - class: "classifier-blob-text monospace", - textAnchor: "middle", - dominantBaseline: "central", - }) - .text("is_shaded classifier") - .attr("transform", "translate(30,480) rotate(0)"); - - if (classifier.class == "show-shapes") { - classifierBlobSel.classed("is-classified", true); - } - - // Draw the results table with accuracies - // This will be hidden before classifier is run. - var graphResultsGroup = c.svg - .append("g") - .attr("class", "results") - .translate([-20, 19]); - - function drawResults() { - // Write text summary - summarySel = d3 - .select("." + classifier.class + "-summary") - .html(summaries[classifier.class]) - .translate([0, 20]); - summarySel.classed("summary-text", true); - summarySel.classed("is-classified", classifier.isClassified); - - if (!classifier.isClassified) { - c.layers[0].html(""); - classifier.wasClassified = false; - return; - } - - // Access results, which are calculated in shapes.js. - // If there are none, draw nothing. - results = allResults[classifier.class]; - if (!results) return; - - // Figure out which shapes should be highlighted on mouseover - // This depends on whether we're "rounding" edge case examples. - function isMatch(rowName, labelName, isRounding) { - // Not filtering at all - if (rowName == "shape") { - return true; - } - if (isRounding == true) { - // No "other" category - return labelName.includes(toOriginalString(rowName)) - ? true - : false; - } else { - // There is an "other" category, prefixed by "rt_" - if (labelName == toOriginalString(rowName)) { - return true; - } else if ( - labelName.includes("rt_") && - rowName == "other shapes" - ) { - return true; - } - return false; - } - } - - // Color the last row of each table - function getColor(d, i) { - if (i != 3) { - // not last index - return "#e6e6e6"; - } else { - var scaleRowValue = d3 - .scaleLinear() - .domain([0.3, 1.0]) - .range([0, 1]); - return d3.interpolateRdYlGn(scaleRowValue(d)); - } - } - - // Adjust text color for visibility - function getTextColor(d, i) { - if (i != 3) { - // not last index - return "#000000"; - } else { - var bgColor = getColor(d, i); - if (d < 0.3) { - // Alternative: use a brighter color? - // return d3.rgb(bgColor).brighter(-2); - return "#FFCCD8"; - } else { - // Alternative: use a darker color? 
- // return d3.rgb(bgColor).darker(2); - return "#000000"; - } - } - } - - // Draw results table - var tableSel = c.layers[0] - .html("") - .raise() - .st({ width: 400 }) - .append("div") - .translate([0, 10]) - .append("table.results-table.monospace") - .st({ width: 400 }); - - var header = tableSel - .append("thead") - .append("tr") - .appendMany("th", columns) - .text((d) => d); - - var rowSel = tableSel - .appendMany("tr", results) - .at({ - class: "row monospace", - }) - .on("mouseover", (row) => { - if (classifier.class == "default-classifier") { - return; - } - rowSel.classed("active", (d) => d == row); - shapeSel.classed("shape-row-unhighlighted", function (d) { - return !isMatch( - row.object, - d[classifier.usingLabel], - (isRounding = classifier.isRounding) - ); - }); - }) - .on("mouseout", (row) => { - rowSel.classed("active", function (d) { - if (d == row) { - return false; - } - }); - if (classifier.isClassified) { - shapeSel.classed("shape-row-unhighlighted", 0); - } - }); - - rowSel - .appendMany("td", (result) => - columns.map((column) => result[column]) - ) - .text((d) => d) - .st({ - backgroundColor: getColor, - color: getTextColor, - }); - - header.style("opacity", 0); - rowSel.style("opacity", 0); - - // If the classifier has already been run before, draw results right away. - // Otherwise, wait for other animation to run before drawing results. - var initialDelay = classifier.wasClassified ? 0 : 2000; - classifier.wasClassified = true; - - header - .transition() - .delay(initialDelay) - .duration(1000) - .style("opacity", 1); - rowSel - .transition() - .delay(function (d, i) { - return initialDelay + i * 200; - }) - .duration(1000) - .style("opacity", 1); - } - - // Draw the dropdowns for selecting different labels - function drawDropdown() { - if (!classifier.options) return; - - ["rounding", "category"].forEach(function (classifierType) { - if (!classifier.options[classifierType]) return; - var sel = d3 - .select("#" + classifier.class + "-select-" + classifierType) - .html(""); - sel.classed("dropdown", true); - sel.appendMany("option", classifier.options[classifierType]) - .at({ - value: function (d) { - return d.value; - }, - }) - .text((d) => d.label); - sel.on("change", function () { - if (classifierType == "rounding") { - classifier.isRounding = toBool(this.value); - } else { - classifier.usingLabel = this.value; - } - updateResults(); - drawResults(); - }); - }); - } - drawDropdown(); - updateResults(); - drawResults(); - - // For continuity, auto-run the second two classifiers - if ( - classifier.class == "second-classifier" || - classifier.class == "final-classifier" - ) { - runClassifier(); - } -} - -// Draw the "Labels Tell Stories" section -function drawConclusion() { - function drawNewspapers() { - d3.select(".conclusion-newspapers").html(function () { - var imgPath = - "img/newspapers_" + - document.getElementById("conclusion-select-category").value; - return ( - 'Newspapers with headlines about bias and fairness in shape data.' - ); - }); - } - - function drawInterface() { - d3.select(".conclusion-interface").html(function () { - var imgPath = - "img/confusing_" + - document.getElementById("conclusion-select-category").value; - return ( - '
                A shape that is difficult to classify with several checkboxes, none of which describe the shape. Next to the interface is a text box with a single question mark in it.
                ' - ); - }); - } - - function drawConclusionSummary() { - classifierSel = d3 - .select(".conclusion-summary") - .html(summaries["conclusion"]); - classifierSel.classed("summary-text is-classified", true); - } - - function drawDropdown() { - var sel = d3.select("#conclusion-select-category").html(""); - sel.classed("dropdown", true); - sel.appendMany("option", conclusionOptions.category) - .at({ - value: function (d) { - return d.value; - }, - }) - .text((d) => d.label); - // sel.attr('select', 'circles, triangles, and rectangles'); - sel.on("change", function (d) { - makeConclusionUpdates(); - }); - } - - function makeConclusionUpdates() { - updateResults(); - drawNewspapers(); - drawInterface(); - drawConclusionSummary(); - } - drawDropdown(); - makeConclusionUpdates(); -} - -// Handle the parameters everywhere classifiers are drawn -var classifiers = [ - { - // Just the initial display of shapes, not interactive - class: "show-shapes", - colorBy: (d) => d.correctness, - isClassified: false, - isRounding: false, - usingLabel: "none", - }, - { - class: "default-classifier", - colorBy: (d) => d.correctness, - isClassified: false, - isRounding: false, - usingLabel: "none", - }, - { - class: "second-classifier", - colorBy: (d) => d.correctness, - isClassified: false, - isRounding: true, - usingLabel: "shape_name", - options: { - rounding: [ - { label: "with their best guess", value: true }, - { label: 'as "other"', value: false }, - ], - }, - }, - { - class: "final-classifier", - colorBy: (d) => d.correctness, - isClassified: false, - isRounding: true, - usingLabel: "shape_name", - options: { - rounding: [ - { label: "with our best guess", value: true }, - { label: 'as "other"', value: false }, - ], - category: [ - { - label: "circles, triangles, or rectangles", - value: "shape_name", - }, - { label: "pointy shapes or round shapes", value: "pointiness" }, - { label: "small shapes or big shapes", value: "size" }, - { label: "just shapes", value: "none" }, - ], - }, - }, -]; - -// "Labels Tell Stories" dropdown options -var conclusionOptions = { - category: [ - { label: "circles, triangles, and rectangles", value: "shape_name" }, - { label: "pointy shapes and round shapes", value: "pointiness" }, - { label: "small shapes and big shapes", value: "size" }, - ], -}; - -classifiers.forEach(drawShapesWithData); -drawConclusion(); - -// These images are loaded invisibly so they appear seamlessly on dropdown change -const preloadImages = [ - "img/confusing_pointiness.png", - "img/confusing_pointiness.svg", - "img/confusing_shape_name.png", - "img/confusing_shape_name.svg", - "img/confusing_size.png", - "img/confusing_size.svg", - "img/interface_default.png", - "img/interface_default.svg", - "img/interface_shape_name_false.png", - "img/interface_shape_name_false.svg", - "img/interface_shape_name_true.png", - "img/interface_shape_name_true.svg", - "img/newspapers_pointiness.png", - "img/newspapers_pointiness.svg", - "img/newspapers_shape_name.png", - "img/newspapers_shape_name.svg", - "img/newspapers_size.png", - "img/newspapers_size.svg", -]; - -d3.select(".preload-dropdown-img") - .html("") - .appendMany("img", preloadImages) - .at({ src: (d) => d }); diff --git a/spaces/merve/uncertainty-calibration/public/fill-in-the-blank/post.js b/spaces/merve/uncertainty-calibration/public/fill-in-the-blank/post.js deleted file mode 100644 index e546aef207dab4014e05732814a1f4b2ff78896a..0000000000000000000000000000000000000000 --- 
a/spaces/merve/uncertainty-calibration/public/fill-in-the-blank/post.js +++ /dev/null @@ -1,44 +0,0 @@ -/* Copyright 2021 Google LLC. All Rights Reserved. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -==============================================================================*/ - - -async function post(route, obj){ - var body = JSON.stringify(obj) - var cacheKey = body + route - // if (route == 'embed_zari_cda') return - // if (route != 'embed_group_top') return - // route = 'embed_group' - - if (!window.postCache) postCache = {} - if (postCache[cacheKey]) return postCache[cacheKey] - - - if (cacheKey2filename[cacheKey]){ - var res = await fetch('data/' + cacheKey2filename[cacheKey]) - } else { - // var root = 'http://' + location.hostname + ':5004/' - var root = 'https://helloworld-66dm2fxl4a-uk.a.run.app/' - var res = await fetch(root + route, {method: 'POST', body}) - } - - - var rv = await res.json() - postCache[cacheKey] = rv - - return rv -} - -// copy(postCache) -// data/post-cache.json \ No newline at end of file diff --git a/spaces/merve/uncertainty-calibration/public/measuring-diversity/index.html b/spaces/merve/uncertainty-calibration/public/measuring-diversity/index.html deleted file mode 100644 index 152d63d665428726e115c623d650d9ad5bef780b..0000000000000000000000000000000000000000 --- a/spaces/merve/uncertainty-calibration/public/measuring-diversity/index.html +++ /dev/null @@ -1,167 +0,0 @@ - - - - - - - - - - - - - - - - - - Measuring Diversity - - - - - - - - - - - - - - - -
                - -
                - -

                Measuring Diversity

                -
                Search results that reflect historic inequities can amplify stereotypes and perpetuate under-representation. Carefully measuring diversity in data sets can help.
                - - -

                Search, ranking and recommendation systems can help find useful documents in large datasets. However, these datasets reflect the biases of the society in which they were created and the systems risk re-entrenching those biases. For example, if someone who is not a white man searches for “CEO pictures” and sees a page of white men, they may feel that only white men can be CEOs, further perpetuating lack of representation at companies’ executive levels.

                -

                Using the careful quantification outlined in a recent paper, Diversity and Inclusion Metrics in Subset Selection, we can quantify biases and push these systems to return a wider range of results.

                -

                The mathematics of all this is a little easier to follow with abstract shapes. Let’s take a look at some of them:

                -
                - -

                Suppose we want to return about 30% green boxes to reflect the distribution of some larger universe of shapes. Try clicking on the shapes below to select some of them — can you find a better subset to return?

                -
                - -

                Another diversity metric we care about is the percentage of dots… how close to 35% dots can you get?

                -
                - -
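                To make the arithmetic concrete, here is a minimal sketch of how a selection's percentage of green shapes or dots compares to its target. The shape objects and the 30%/35% targets below are made up for illustration, not taken from the demo's code:

```js
// Hypothetical selected subset: each shape records whether it is green and whether it is a dot.
const selected = [
  { green: true,  dot: false },
  { green: false, dot: true  },
  { green: false, dot: false },
  { green: true,  dot: true  },
  { green: false, dot: false },
];

// Fraction of the selection satisfying a predicate.
const fraction = (shapes, pred) => shapes.filter(pred).length / shapes.length;

// Distance between the selection's actual percentage and a target percentage.
const difference = (shapes, pred, target) => Math.abs(fraction(shapes, pred) - target);

console.log(difference(selected, d => d.green, 0.30)); // ~0.10 -> 40% green vs. a 30% target
console.log(difference(selected, d => d.dot,   0.35)); // ~0.05 -> 40% dots vs. a 35% target
```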

                If we can only return a single subset, how should we consider multiple diversity metrics? Sometimes it isn’t possible to reduce the difference of every metric to zero. One natural approach: find the selection with the lowest mean difference across all the metrics to get as close as possible to all the targets.

                -

                In other circumstances, like picking a panel of speakers, avoiding badly representing any single category might be more important. This can be done by finding the subset with the lowest max difference. Try minimizing both below:

                -
                - -
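                The two aggregation rules can be sketched in a few lines. The per-metric differences below are hypothetical stand-ins for whatever a candidate selection actually scores against its targets:

```js
// Hypothetical per-metric differences between a candidate selection and its targets.
const differences = { green: 0.10, dot: 0.05, small: 0.20 };
const values = Object.values(differences);

// Mean difference: get as close as possible to all targets on average.
const meanDifference = values.reduce((sum, d) => sum + d, 0) / values.length;

// Max difference: avoid badly missing any single target.
const maxDifference = Math.max(...values);

console.log(meanDifference.toFixed(3)); // "0.117"
console.log(maxDifference);             // 0.2
```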

                Notice that minimizing the mean results in a different subset than minimizing the max; how else might using one over the other change the results?

                -

                Ranking Measures

                -

                We can pull out more detail by showing how the mean difference and maximum difference rank lots of sets. Below, there are 20 sets of 10 shapes sorted by the two measures. Try adjusting the target slider on the left to see how the rankings change; each set’s percentage of green, dots and small shapes are shown in the small histograms.

                -
                - -
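                Sorting candidate sets by the two measures is a one-liner each. In this made-up example the two orderings already diverge:

```js
// Each candidate set is summarized by its per-metric differences from the targets (made-up values).
const candidates = [
  { name: 'set A', diffs: [0.05, 0.05, 0.05] },
  { name: 'set B', diffs: [0.00, 0.00, 0.24] },
  { name: 'set C', diffs: [0.10, 0.12, 0.05] },
];

const mean = a => a.reduce((s, d) => s + d, 0) / a.length;

const byMean = [...candidates].sort((a, b) => mean(a.diffs) - mean(b.diffs));
const byMax  = [...candidates].sort((a, b) => Math.max(...a.diffs) - Math.max(...b.diffs));

console.log(byMean.map(d => d.name)); // [ 'set A', 'set B', 'set C' ]
console.log(byMax.map(d => d.name));  // [ 'set A', 'set C', 'set B' ]
```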

                At the extremes, the choice of measure can have a big impact: if we want to try and return all green results, we can shift the green target up to 100%. With this target, the minimum difference basically sorts the sets by the number of green items and uses the other targets as a tiebreaker. In contrast, sorting by the mean difference balances the green target more with the dot and small targets.

                -
                - -

                Beyond mean and max differences, there are more ways to combine diversity metrics, like taking the cross of two metrics to account for intersectionality. The absolute value of the difference in target and actual percentages can also be quantified in other ways — you might want to penalize undershooting more than overshooting, for example. It’s important to keep in mind what exactly you’re trying to maximize and the dataset that you’re operating on.

                -
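                For instance, one hypothetical penalty that treats undershooting a target as twice as bad as overshooting it:

```js
// Hypothetical asymmetric penalty: falling short of a target costs twice as much
// as exceeding it by the same amount.
const penalty = (actual, target, underWeight = 2) =>
  actual < target ? underWeight * (target - actual) : actual - target;

console.log(penalty(0.20, 0.30)); // ~0.2 -> undershooting by 10 points counts double
console.log(penalty(0.40, 0.30)); // ~0.1 -> overshooting by 10 points counts once
```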

                Which Measure is Best?

                -

                In a vacuum, all of these ranking methods are defensible. Picking one requires knowledge of the dataset and broader societal context.

                -

                For example, the doctors on the left have more variance along the shirt color attribute, but they're less diverse by gender than the doctors on the right. With the shirt color and gender targets we've picked, the two subsets have the same mean and max differences. However, in most applications, it's more important to have a representative sample of socially relevant characteristics, like gender, rather than something less salient, like clothing color.

                -
                - -

                Just selecting a diverse sample isn’t sufficient either. Diversity and Inclusion Metrics in Subset Selection introduces a way of measuring “inclusion” - how well does the searcher feel represented in the results?

                -

                Below, we have gender diversity, without inclusion for women, in the “construction worker” image domain. Masculine-presenting individuals are shown in realistic, modern construction worker situations, while feminine-presenting individuals and other gender presentations are depicted as historic nostalgia, toys, clipart, or passive.

                -
                - -

                The context of the query and the searcher also plays in the quality of search results. A search for “work clothing” that shows a mixed palette of colors for men’s clothing and only pink women’s clothing might make the searcher feel that women need to appear stereotypically feminine in a professional setting. But the same set of women’s clothes might be appropriate to show for a “pink women work clothes” search or if the searcher had previously expressed a preference for pink.

                -

                We saw how a small switch from mean to max made a huge difference in what abstract shapes are returned – and how things can get even more complex when socially salient characteristics are layered in. Defaults and small decisions can encode our priorities and values; intentionally thinking about how diversity and inclusion are being measured and which characteristics are emphasized is a step towards designing more equitable systems.

                -

                More Reading

                -

                The Diversity and Inclusion Metrics paper has a Colab with a detailed description of the metrics, additional visualizations, and a reference Python implementation.

                -

                The difficulties of measuring fairness in general have been well studied; subset selection is still an active area of research. Fairness of Exposure in Rankings proposes a ranking algorithm that incorporates fairness constraints. Toward creating a fairer ranking in search engine results measures diversity bias in actual search results.

                -

                Inferring user preferences is also tricky; you can checkout ways to design for user feedback and control over queries in the People + AI Guidebook.

                -

                Credits

                -

                Adam Pearce, Dylan Baker, Ellen Jiang, Meg Mitchell* and Timnit Gebru* // March 2021

                -

                *Work done while at Google

                -

                Thanks to Alex Hanna, Carey Radebaugh, Emily Denton, Fernanda Viégas, James Wexler, Jess Holbrook, Ludovic Peran, Martin Wattenberg, Michael Terry, Yannick Assogba and Zan Armstrong for their help with this piece.

                -

                More Explorables

                - -

                - - - - - - - - - - - - - - - - - - - - - \ No newline at end of file diff --git a/spaces/mikebars/huggingface/assets/index-25971840.js b/spaces/mikebars/huggingface/assets/index-25971840.js deleted file mode 100644 index fca54d8978c67a5778c1e82852d5d371eca6558c..0000000000000000000000000000000000000000 --- a/spaces/mikebars/huggingface/assets/index-25971840.js +++ /dev/null @@ -1,40 +0,0 @@ -var cc=Object.defineProperty;var fc=(e,t,n)=>t in e?cc(e,t,{enumerable:!0,configurable:!0,writable:!0,value:n}):e[t]=n;var kt=(e,t,n)=>(fc(e,typeof t!="symbol"?t+"":t,n),n);(function(){const t=document.createElement("link").relList;if(t&&t.supports&&t.supports("modulepreload"))return;for(const l of document.querySelectorAll('link[rel="modulepreload"]'))r(l);new MutationObserver(l=>{for(const i of l)if(i.type==="childList")for(const u of i.addedNodes)u.tagName==="LINK"&&u.rel==="modulepreload"&&r(u)}).observe(document,{childList:!0,subtree:!0});function n(l){const i={};return l.integrity&&(i.integrity=l.integrity),l.referrerPolicy&&(i.referrerPolicy=l.referrerPolicy),l.crossOrigin==="use-credentials"?i.credentials="include":l.crossOrigin==="anonymous"?i.credentials="omit":i.credentials="same-origin",i}function r(l){if(l.ep)return;l.ep=!0;const i=n(l);fetch(l.href,i)}})();var Ir={},dc={get exports(){return Ir},set exports(e){Ir=e}},il={},ee={},pc={get exports(){return ee},set exports(e){ee=e}},T={};/** - * @license React - * react.production.min.js - * - * Copyright (c) Facebook, Inc. and its affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. - */var qn=Symbol.for("react.element"),mc=Symbol.for("react.portal"),hc=Symbol.for("react.fragment"),vc=Symbol.for("react.strict_mode"),yc=Symbol.for("react.profiler"),gc=Symbol.for("react.provider"),wc=Symbol.for("react.context"),kc=Symbol.for("react.forward_ref"),Sc=Symbol.for("react.suspense"),Ec=Symbol.for("react.memo"),xc=Symbol.for("react.lazy"),Au=Symbol.iterator;function Cc(e){return e===null||typeof e!="object"?null:(e=Au&&e[Au]||e["@@iterator"],typeof e=="function"?e:null)}var Jo={isMounted:function(){return!1},enqueueForceUpdate:function(){},enqueueReplaceState:function(){},enqueueSetState:function(){}},bo=Object.assign,es={};function sn(e,t,n){this.props=e,this.context=t,this.refs=es,this.updater=n||Jo}sn.prototype.isReactComponent={};sn.prototype.setState=function(e,t){if(typeof e!="object"&&typeof e!="function"&&e!=null)throw Error("setState(...): takes an object of state variables to update or a function which returns an object of state variables.");this.updater.enqueueSetState(this,e,t,"setState")};sn.prototype.forceUpdate=function(e){this.updater.enqueueForceUpdate(this,e,"forceUpdate")};function ts(){}ts.prototype=sn.prototype;function Qi(e,t,n){this.props=e,this.context=t,this.refs=es,this.updater=n||Jo}var Ki=Qi.prototype=new ts;Ki.constructor=Qi;bo(Ki,sn.prototype);Ki.isPureReactComponent=!0;var Vu=Array.isArray,ns=Object.prototype.hasOwnProperty,Xi={current:null},rs={key:!0,ref:!0,__self:!0,__source:!0};function ls(e,t,n){var r,l={},i=null,u=null;if(t!=null)for(r in t.ref!==void 0&&(u=t.ref),t.key!==void 0&&(i=""+t.key),t)ns.call(t,r)&&!rs.hasOwnProperty(r)&&(l[r]=t[r]);var o=arguments.length-2;if(o===1)l.children=n;else if(1]+)>;\s+rel="([^"]+)"/g;return Object.fromEntries([...e.matchAll(t)].map(([,n,r])=>[r,n]))}var Ac=["pipeline_tag","private","gated","downloads","likes"];async function*Vc(e){var r,l;Uc(e==null?void 
0:e.credentials);const t=new URLSearchParams([...Object.entries({limit:"500",...(r=e==null?void 0:e.search)!=null&&r.owner?{author:e.search.owner}:void 0,...(l=e==null?void 0:e.search)!=null&&l.task?{pipeline_tag:e.search.task}:void 0}),...Ac.map(i=>["expand",i])]).toString();let n=`${(e==null?void 0:e.hubUrl)||Dc}/api/models?${t}`;for(;n;){const i=await fetch(n,{headers:{accept:"application/json",...e!=null&&e.credentials?{Authorization:`Bearer ${e.credentials.accessToken}`}:void 0}});if(!i.ok)throw jc(i);const u=await i.json();for(const s of u)yield{id:s._id,name:s.id,private:s.private,task:s.pipeline_tag,downloads:s.downloads,gated:s.gated,likes:s.likes,updatedAt:new Date(s.lastModified)};const o=i.headers.get("Link");n=o?$c(o).next:void 0}}function Hu(e){return Array.isArray(e)?e:[e]}var us=class{constructor(e="",t={}){kt(this,"apiKey");kt(this,"defaultOptions");this.apiKey=e,this.defaultOptions=t}async fillMask(e,t){return this.request(e,t)}async summarization(e,t){var n;return(n=await this.request(e,t))==null?void 0:n[0]}async questionAnswer(e,t){return await this.request(e,t)}async tableQuestionAnswer(e,t){return await this.request(e,t)}async textClassification(e,t){var n;return(n=await this.request(e,t))==null?void 0:n[0]}async textGeneration(e,t){var n;return(n=await this.request(e,t))==null?void 0:n[0]}async tokenClassification(e,t){return Hu(await this.request(e,t))}async translation(e,t){var n;return(n=await this.request(e,t))==null?void 0:n[0]}async zeroShotClassification(e,t){return Hu(await this.request(e,t))}async conversational(e,t){return await this.request(e,t)}async featureExtraction(e,t){return await this.request(e,t)}async automaticSpeechRecognition(e,t){return await this.request(e,{...t,binary:!0})}async audioClassification(e,t){return await this.request(e,{...t,binary:!0})}async imageClassification(e,t){return await this.request(e,{...t,binary:!0})}async objectDetection(e,t){return await this.request(e,{...t,binary:!0})}async imageSegmentation(e,t){return await this.request(e,{...t,binary:!0})}async textToImage(e,t){return await this.request(e,{...t,blob:!0})}async request(e,t){const n={...this.defaultOptions,...t},{model:r,...l}=e,i={};this.apiKey&&(i.Authorization=`Bearer ${this.apiKey}`),t!=null&&t.binary||(i["Content-Type"]="application/json"),t!=null&&t.binary&&(n.wait_for_model&&(i["X-Wait-For-Model"]="true"),n.use_cache===!1&&(i["X-Use-Cache"]="false"),n.dont_load_model&&(i["X-Load-Model"]="0"));const u=await fetch(`https://api-inference.huggingface.co/models/${r}`,{headers:i,method:"POST",body:t!=null&&t.binary?e.data:JSON.stringify({...l,options:n}),credentials:t!=null&&t.includeCredentials?"include":"same-origin"});if(n.retry_on_error!==!1&&u.status===503&&!n.wait_for_model)return this.request(e,{...n,wait_for_model:!0});if(t!=null&&t.blob){if(!u.ok)throw new Error("An error occurred while fetching the blob");return await u.blob()}const o=await u.json();if(o.error)throw new Error(o.error);return o}},Mr=function(){return Mr=Object.assign||function(t){for(var n,r=1,l=arguments.length;r0&&n>="0"&&n<="9"?"_"+n+r:""+n.toUpperCase()+r}function Kc(e,t){return t===void 0&&(t={}),Qc(e,Mr({delimiter:"",transform:os},t))}function Xc(e,t){return t===0?e.toLowerCase():os(e,t)}function Yc(e,t){return t===void 0&&(t={}),Kc(e,Mr({transform:Xc},t))}var ql={},Gc={get exports(){return ql},set exports(e){ql=e}},ke={},Jl={},Zc={get exports(){return Jl},set exports(e){Jl=e}},ss={};/** - * @license React - * scheduler.production.min.js - * - * Copyright (c) Facebook, Inc. 
and its affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. - */(function(e){function t(x,P){var z=x.length;x.push(P);e:for(;0>>1,Z=x[W];if(0>>1;Wl(xl,z))wtl(rr,xl)?(x[W]=rr,x[wt]=z,W=wt):(x[W]=xl,x[gt]=z,W=gt);else if(wtl(rr,z))x[W]=rr,x[wt]=z,W=wt;else break e}}return P}function l(x,P){var z=x.sortIndex-P.sortIndex;return z!==0?z:x.id-P.id}if(typeof performance=="object"&&typeof performance.now=="function"){var i=performance;e.unstable_now=function(){return i.now()}}else{var u=Date,o=u.now();e.unstable_now=function(){return u.now()-o}}var s=[],c=[],h=1,m=null,p=3,g=!1,w=!1,k=!1,F=typeof setTimeout=="function"?setTimeout:null,f=typeof clearTimeout=="function"?clearTimeout:null,a=typeof setImmediate<"u"?setImmediate:null;typeof navigator<"u"&&navigator.scheduling!==void 0&&navigator.scheduling.isInputPending!==void 0&&navigator.scheduling.isInputPending.bind(navigator.scheduling);function d(x){for(var P=n(c);P!==null;){if(P.callback===null)r(c);else if(P.startTime<=x)r(c),P.sortIndex=P.expirationTime,t(s,P);else break;P=n(c)}}function v(x){if(k=!1,d(x),!w)if(n(s)!==null)w=!0,Sl(E);else{var P=n(c);P!==null&&El(v,P.startTime-x)}}function E(x,P){w=!1,k&&(k=!1,f(N),N=-1),g=!0;var z=p;try{for(d(P),m=n(s);m!==null&&(!(m.expirationTime>P)||x&&!ze());){var W=m.callback;if(typeof W=="function"){m.callback=null,p=m.priorityLevel;var Z=W(m.expirationTime<=P);P=e.unstable_now(),typeof Z=="function"?m.callback=Z:m===n(s)&&r(s),d(P)}else r(s);m=n(s)}if(m!==null)var nr=!0;else{var gt=n(c);gt!==null&&El(v,gt.startTime-P),nr=!1}return nr}finally{m=null,p=z,g=!1}}var C=!1,_=null,N=-1,H=5,O=-1;function ze(){return!(e.unstable_now()-Ox||125W?(x.sortIndex=z,t(c,x),n(s)===null&&x===n(c)&&(k?(f(N),N=-1):k=!0,El(v,z-W))):(x.sortIndex=Z,t(s,x),w||g||(w=!0,Sl(E))),x},e.unstable_shouldYield=ze,e.unstable_wrapCallback=function(x){var P=p;return function(){var z=p;p=P;try{return x.apply(this,arguments)}finally{p=z}}}})(ss);(function(e){e.exports=ss})(Zc);/** - * @license React - * react-dom.production.min.js - * - * Copyright (c) Facebook, Inc. and its affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. 
- */var as=ee,we=Jl;function y(e){for(var t="https://reactjs.org/docs/error-decoder.html?invariant="+e,n=1;n"u"||typeof window.document>"u"||typeof window.document.createElement>"u"),bl=Object.prototype.hasOwnProperty,qc=/^[:A-Z_a-z\u00C0-\u00D6\u00D8-\u00F6\u00F8-\u02FF\u0370-\u037D\u037F-\u1FFF\u200C-\u200D\u2070-\u218F\u2C00-\u2FEF\u3001-\uD7FF\uF900-\uFDCF\uFDF0-\uFFFD][:A-Z_a-z\u00C0-\u00D6\u00D8-\u00F6\u00F8-\u02FF\u0370-\u037D\u037F-\u1FFF\u200C-\u200D\u2070-\u218F\u2C00-\u2FEF\u3001-\uD7FF\uF900-\uFDCF\uFDF0-\uFFFD\-.0-9\u00B7\u0300-\u036F\u203F-\u2040]*$/,Qu={},Ku={};function Jc(e){return bl.call(Ku,e)?!0:bl.call(Qu,e)?!1:qc.test(e)?Ku[e]=!0:(Qu[e]=!0,!1)}function bc(e,t,n,r){if(n!==null&&n.type===0)return!1;switch(typeof t){case"function":case"symbol":return!0;case"boolean":return r?!1:n!==null?!n.acceptsBooleans:(e=e.toLowerCase().slice(0,5),e!=="data-"&&e!=="aria-");default:return!1}}function ef(e,t,n,r){if(t===null||typeof t>"u"||bc(e,t,n,r))return!0;if(r)return!1;if(n!==null)switch(n.type){case 3:return!t;case 4:return t===!1;case 5:return isNaN(t);case 6:return isNaN(t)||1>t}return!1}function ce(e,t,n,r,l,i,u){this.acceptsBooleans=t===2||t===3||t===4,this.attributeName=r,this.attributeNamespace=l,this.mustUseProperty=n,this.propertyName=e,this.type=t,this.sanitizeURL=i,this.removeEmptyString=u}var ne={};"children dangerouslySetInnerHTML defaultValue defaultChecked innerHTML suppressContentEditableWarning suppressHydrationWarning style".split(" ").forEach(function(e){ne[e]=new ce(e,0,!1,e,null,!1,!1)});[["acceptCharset","accept-charset"],["className","class"],["htmlFor","for"],["httpEquiv","http-equiv"]].forEach(function(e){var t=e[0];ne[t]=new ce(t,1,!1,e[1],null,!1,!1)});["contentEditable","draggable","spellCheck","value"].forEach(function(e){ne[e]=new ce(e,2,!1,e.toLowerCase(),null,!1,!1)});["autoReverse","externalResourcesRequired","focusable","preserveAlpha"].forEach(function(e){ne[e]=new ce(e,2,!1,e,null,!1,!1)});"allowFullScreen async autoFocus autoPlay controls default defer disabled disablePictureInPicture disableRemotePlayback formNoValidate hidden loop noModule noValidate open playsInline readOnly required reversed scoped seamless itemScope".split(" ").forEach(function(e){ne[e]=new ce(e,3,!1,e.toLowerCase(),null,!1,!1)});["checked","multiple","muted","selected"].forEach(function(e){ne[e]=new ce(e,3,!0,e,null,!1,!1)});["capture","download"].forEach(function(e){ne[e]=new ce(e,4,!1,e,null,!1,!1)});["cols","rows","size","span"].forEach(function(e){ne[e]=new ce(e,6,!1,e,null,!1,!1)});["rowSpan","start"].forEach(function(e){ne[e]=new ce(e,5,!1,e.toLowerCase(),null,!1,!1)});var Gi=/[\-:]([a-z])/g;function Zi(e){return e[1].toUpperCase()}"accent-height alignment-baseline arabic-form baseline-shift cap-height clip-path clip-rule color-interpolation color-interpolation-filters color-profile color-rendering dominant-baseline enable-background fill-opacity fill-rule flood-color flood-opacity font-family font-size font-size-adjust font-stretch font-style font-variant font-weight glyph-name glyph-orientation-horizontal glyph-orientation-vertical horiz-adv-x horiz-origin-x image-rendering letter-spacing lighting-color marker-end marker-mid marker-start overline-position overline-thickness paint-order panose-1 pointer-events rendering-intent shape-rendering stop-color stop-opacity strikethrough-position strikethrough-thickness stroke-dasharray stroke-dashoffset stroke-linecap stroke-linejoin stroke-miterlimit stroke-opacity stroke-width text-anchor text-decoration text-rendering 
underline-position underline-thickness unicode-bidi unicode-range units-per-em v-alphabetic v-hanging v-ideographic v-mathematical vector-effect vert-adv-y vert-origin-x vert-origin-y word-spacing writing-mode xmlns:xlink x-height".split(" ").forEach(function(e){var t=e.replace(Gi,Zi);ne[t]=new ce(t,1,!1,e,null,!1,!1)});"xlink:actuate xlink:arcrole xlink:role xlink:show xlink:title xlink:type".split(" ").forEach(function(e){var t=e.replace(Gi,Zi);ne[t]=new ce(t,1,!1,e,"http://www.w3.org/1999/xlink",!1,!1)});["xml:base","xml:lang","xml:space"].forEach(function(e){var t=e.replace(Gi,Zi);ne[t]=new ce(t,1,!1,e,"http://www.w3.org/XML/1998/namespace",!1,!1)});["tabIndex","crossOrigin"].forEach(function(e){ne[e]=new ce(e,1,!1,e.toLowerCase(),null,!1,!1)});ne.xlinkHref=new ce("xlinkHref",1,!1,"xlink:href","http://www.w3.org/1999/xlink",!0,!1);["src","href","action","formAction"].forEach(function(e){ne[e]=new ce(e,1,!1,e.toLowerCase(),null,!0,!0)});function qi(e,t,n,r){var l=ne.hasOwnProperty(t)?ne[t]:null;(l!==null?l.type!==0:r||!(2o||l[u]!==i[o]){var s=` -`+l[u].replace(" at new "," at ");return e.displayName&&s.includes("")&&(s=s.replace("",e.displayName)),s}while(1<=u&&0<=o);break}}}finally{Nl=!1,Error.prepareStackTrace=n}return(e=e?e.displayName||e.name:"")?Sn(e):""}function tf(e){switch(e.tag){case 5:return Sn(e.type);case 16:return Sn("Lazy");case 13:return Sn("Suspense");case 19:return Sn("SuspenseList");case 0:case 2:case 15:return e=Pl(e.type,!1),e;case 11:return e=Pl(e.type.render,!1),e;case 1:return e=Pl(e.type,!0),e;default:return""}}function ri(e){if(e==null)return null;if(typeof e=="function")return e.displayName||e.name||null;if(typeof e=="string")return e;switch(e){case Ft:return"Fragment";case jt:return"Portal";case ei:return"Profiler";case Ji:return"StrictMode";case ti:return"Suspense";case ni:return"SuspenseList"}if(typeof e=="object")switch(e.$$typeof){case ds:return(e.displayName||"Context")+".Consumer";case fs:return(e._context.displayName||"Context")+".Provider";case bi:var t=e.render;return e=e.displayName,e||(e=t.displayName||t.name||"",e=e!==""?"ForwardRef("+e+")":"ForwardRef"),e;case eu:return t=e.displayName||null,t!==null?t:ri(e.type)||"Memo";case be:t=e._payload,e=e._init;try{return ri(e(t))}catch{}}return null}function nf(e){var t=e.type;switch(e.tag){case 24:return"Cache";case 9:return(t.displayName||"Context")+".Consumer";case 10:return(t._context.displayName||"Context")+".Provider";case 18:return"DehydratedFragment";case 11:return e=t.render,e=e.displayName||e.name||"",t.displayName||(e!==""?"ForwardRef("+e+")":"ForwardRef");case 7:return"Fragment";case 5:return t;case 4:return"Portal";case 3:return"Root";case 6:return"Text";case 16:return ri(t);case 8:return t===Ji?"StrictMode":"Mode";case 22:return"Offscreen";case 12:return"Profiler";case 21:return"Scope";case 13:return"Suspense";case 19:return"SuspenseList";case 25:return"TracingMarker";case 1:case 0:case 17:case 2:case 14:case 15:if(typeof t=="function")return t.displayName||t.name||null;if(typeof t=="string")return t}return null}function pt(e){switch(typeof e){case"boolean":case"number":case"string":case"undefined":return e;case"object":return e;default:return""}}function ms(e){var t=e.type;return(e=e.nodeName)&&e.toLowerCase()==="input"&&(t==="checkbox"||t==="radio")}function rf(e){var t=ms(e)?"checked":"value",n=Object.getOwnPropertyDescriptor(e.constructor.prototype,t),r=""+e[t];if(!e.hasOwnProperty(t)&&typeof n<"u"&&typeof n.get=="function"&&typeof n.set=="function"){var l=n.get,i=n.set;return 
Object.defineProperty(e,t,{configurable:!0,get:function(){return l.call(this)},set:function(u){r=""+u,i.call(this,u)}}),Object.defineProperty(e,t,{enumerable:n.enumerable}),{getValue:function(){return r},setValue:function(u){r=""+u},stopTracking:function(){e._valueTracker=null,delete e[t]}}}}function ur(e){e._valueTracker||(e._valueTracker=rf(e))}function hs(e){if(!e)return!1;var t=e._valueTracker;if(!t)return!0;var n=t.getValue(),r="";return e&&(r=ms(e)?e.checked?"true":"false":e.value),e=r,e!==n?(t.setValue(e),!0):!1}function Dr(e){if(e=e||(typeof document<"u"?document:void 0),typeof e>"u")return null;try{return e.activeElement||e.body}catch{return e.body}}function li(e,t){var n=t.checked;return V({},t,{defaultChecked:void 0,defaultValue:void 0,value:void 0,checked:n??e._wrapperState.initialChecked})}function Yu(e,t){var n=t.defaultValue==null?"":t.defaultValue,r=t.checked!=null?t.checked:t.defaultChecked;n=pt(t.value!=null?t.value:n),e._wrapperState={initialChecked:r,initialValue:n,controlled:t.type==="checkbox"||t.type==="radio"?t.checked!=null:t.value!=null}}function vs(e,t){t=t.checked,t!=null&&qi(e,"checked",t,!1)}function ii(e,t){vs(e,t);var n=pt(t.value),r=t.type;if(n!=null)r==="number"?(n===0&&e.value===""||e.value!=n)&&(e.value=""+n):e.value!==""+n&&(e.value=""+n);else if(r==="submit"||r==="reset"){e.removeAttribute("value");return}t.hasOwnProperty("value")?ui(e,t.type,n):t.hasOwnProperty("defaultValue")&&ui(e,t.type,pt(t.defaultValue)),t.checked==null&&t.defaultChecked!=null&&(e.defaultChecked=!!t.defaultChecked)}function Gu(e,t,n){if(t.hasOwnProperty("value")||t.hasOwnProperty("defaultValue")){var r=t.type;if(!(r!=="submit"&&r!=="reset"||t.value!==void 0&&t.value!==null))return;t=""+e._wrapperState.initialValue,n||t===e.value||(e.value=t),e.defaultValue=t}n=e.name,n!==""&&(e.name=""),e.defaultChecked=!!e._wrapperState.initialChecked,n!==""&&(e.name=n)}function ui(e,t,n){(t!=="number"||Dr(e.ownerDocument)!==e)&&(n==null?e.defaultValue=""+e._wrapperState.initialValue:e.defaultValue!==""+n&&(e.defaultValue=""+n))}var En=Array.isArray;function Yt(e,t,n,r){if(e=e.options,t){t={};for(var l=0;l"+t.valueOf().toString()+"",t=or.firstChild;e.firstChild;)e.removeChild(e.firstChild);for(;t.firstChild;)e.appendChild(t.firstChild)}});function Dn(e,t){if(t){var n=e.firstChild;if(n&&n===e.lastChild&&n.nodeType===3){n.nodeValue=t;return}}e.textContent=t}var _n={animationIterationCount:!0,aspectRatio:!0,borderImageOutset:!0,borderImageSlice:!0,borderImageWidth:!0,boxFlex:!0,boxFlexGroup:!0,boxOrdinalGroup:!0,columnCount:!0,columns:!0,flex:!0,flexGrow:!0,flexPositive:!0,flexShrink:!0,flexNegative:!0,flexOrder:!0,gridArea:!0,gridRow:!0,gridRowEnd:!0,gridRowSpan:!0,gridRowStart:!0,gridColumn:!0,gridColumnEnd:!0,gridColumnSpan:!0,gridColumnStart:!0,fontWeight:!0,lineClamp:!0,lineHeight:!0,opacity:!0,order:!0,orphans:!0,tabSize:!0,widows:!0,zIndex:!0,zoom:!0,fillOpacity:!0,floodOpacity:!0,stopOpacity:!0,strokeDasharray:!0,strokeDashoffset:!0,strokeMiterlimit:!0,strokeOpacity:!0,strokeWidth:!0},lf=["Webkit","ms","Moz","O"];Object.keys(_n).forEach(function(e){lf.forEach(function(t){t=t+e.charAt(0).toUpperCase()+e.substring(1),_n[t]=_n[e]})});function ks(e,t,n){return t==null||typeof t=="boolean"||t===""?"":n||typeof t!="number"||t===0||_n.hasOwnProperty(e)&&_n[e]?(""+t).trim():t+"px"}function Ss(e,t){e=e.style;for(var n in t)if(t.hasOwnProperty(n)){var r=n.indexOf("--")===0,l=ks(n,t[n],r);n==="float"&&(n="cssFloat"),r?e.setProperty(n,l):e[n]=l}}var 
uf=V({menuitem:!0},{area:!0,base:!0,br:!0,col:!0,embed:!0,hr:!0,img:!0,input:!0,keygen:!0,link:!0,meta:!0,param:!0,source:!0,track:!0,wbr:!0});function ai(e,t){if(t){if(uf[e]&&(t.children!=null||t.dangerouslySetInnerHTML!=null))throw Error(y(137,e));if(t.dangerouslySetInnerHTML!=null){if(t.children!=null)throw Error(y(60));if(typeof t.dangerouslySetInnerHTML!="object"||!("__html"in t.dangerouslySetInnerHTML))throw Error(y(61))}if(t.style!=null&&typeof t.style!="object")throw Error(y(62))}}function ci(e,t){if(e.indexOf("-")===-1)return typeof t.is=="string";switch(e){case"annotation-xml":case"color-profile":case"font-face":case"font-face-src":case"font-face-uri":case"font-face-format":case"font-face-name":case"missing-glyph":return!1;default:return!0}}var fi=null;function tu(e){return e=e.target||e.srcElement||window,e.correspondingUseElement&&(e=e.correspondingUseElement),e.nodeType===3?e.parentNode:e}var di=null,Gt=null,Zt=null;function Ju(e){if(e=er(e)){if(typeof di!="function")throw Error(y(280));var t=e.stateNode;t&&(t=cl(t),di(e.stateNode,e.type,t))}}function Es(e){Gt?Zt?Zt.push(e):Zt=[e]:Gt=e}function xs(){if(Gt){var e=Gt,t=Zt;if(Zt=Gt=null,Ju(e),t)for(e=0;e>>=0,e===0?32:31-(yf(e)/gf|0)|0}var sr=64,ar=4194304;function xn(e){switch(e&-e){case 1:return 1;case 2:return 2;case 4:return 4;case 8:return 8;case 16:return 16;case 32:return 32;case 64:case 128:case 256:case 512:case 1024:case 2048:case 4096:case 8192:case 16384:case 32768:case 65536:case 131072:case 262144:case 524288:case 1048576:case 2097152:return e&4194240;case 4194304:case 8388608:case 16777216:case 33554432:case 67108864:return e&130023424;case 134217728:return 134217728;case 268435456:return 268435456;case 536870912:return 536870912;case 1073741824:return 1073741824;default:return e}}function $r(e,t){var n=e.pendingLanes;if(n===0)return 0;var r=0,l=e.suspendedLanes,i=e.pingedLanes,u=n&268435455;if(u!==0){var o=u&~l;o!==0?r=xn(o):(i&=u,i!==0&&(r=xn(i)))}else u=n&~l,u!==0?r=xn(u):i!==0&&(r=xn(i));if(r===0)return 0;if(t!==0&&t!==r&&!(t&l)&&(l=r&-r,i=t&-t,l>=i||l===16&&(i&4194240)!==0))return t;if(r&4&&(r|=n&16),t=e.entangledLanes,t!==0)for(e=e.entanglements,t&=r;0n;n++)t.push(e);return t}function Jn(e,t,n){e.pendingLanes|=t,t!==536870912&&(e.suspendedLanes=0,e.pingedLanes=0),e=e.eventTimes,t=31-Ie(t),e[t]=n}function Ef(e,t){var n=e.pendingLanes&~t;e.pendingLanes=t,e.suspendedLanes=0,e.pingedLanes=0,e.expiredLanes&=t,e.mutableReadLanes&=t,e.entangledLanes&=t,t=e.entanglements;var r=e.eventTimes;for(e=e.expirationTimes;0=Pn),oo=String.fromCharCode(32),so=!1;function Hs(e,t){switch(e){case"keyup":return Zf.indexOf(t.keyCode)!==-1;case"keydown":return t.keyCode!==229;case"keypress":case"mousedown":case"focusout":return!0;default:return!1}}function Ws(e){return e=e.detail,typeof e=="object"&&"data"in e?e.data:null}var Ut=!1;function Jf(e,t){switch(e){case"compositionend":return Ws(t);case"keypress":return t.which!==32?null:(so=!0,oo);case"textInput":return e=t.data,e===oo&&so?null:e;default:return null}}function bf(e,t){if(Ut)return e==="compositionend"||!au&&Hs(e,t)?(e=Vs(),Cr=uu=rt=null,Ut=!1,e):null;switch(e){case"paste":return null;case"keypress":if(!(t.ctrlKey||t.altKey||t.metaKey)||t.ctrlKey&&t.altKey){if(t.char&&1=t)return{node:n,offset:t-e};e=r}e:{for(;n;){if(n.nextSibling){n=n.nextSibling;break e}n=n.parentNode}n=void 0}n=po(n)}}function Ys(e,t){return e&&t?e===t?!0:e&&e.nodeType===3?!1:t&&t.nodeType===3?Ys(e,t.parentNode):"contains"in 
e?e.contains(t):e.compareDocumentPosition?!!(e.compareDocumentPosition(t)&16):!1:!1}function Gs(){for(var e=window,t=Dr();t instanceof e.HTMLIFrameElement;){try{var n=typeof t.contentWindow.location.href=="string"}catch{n=!1}if(n)e=t.contentWindow;else break;t=Dr(e.document)}return t}function cu(e){var t=e&&e.nodeName&&e.nodeName.toLowerCase();return t&&(t==="input"&&(e.type==="text"||e.type==="search"||e.type==="tel"||e.type==="url"||e.type==="password")||t==="textarea"||e.contentEditable==="true")}function sd(e){var t=Gs(),n=e.focusedElem,r=e.selectionRange;if(t!==n&&n&&n.ownerDocument&&Ys(n.ownerDocument.documentElement,n)){if(r!==null&&cu(n)){if(t=r.start,e=r.end,e===void 0&&(e=t),"selectionStart"in n)n.selectionStart=t,n.selectionEnd=Math.min(e,n.value.length);else if(e=(t=n.ownerDocument||document)&&t.defaultView||window,e.getSelection){e=e.getSelection();var l=n.textContent.length,i=Math.min(r.start,l);r=r.end===void 0?i:Math.min(r.end,l),!e.extend&&i>r&&(l=r,r=i,i=l),l=mo(n,i);var u=mo(n,r);l&&u&&(e.rangeCount!==1||e.anchorNode!==l.node||e.anchorOffset!==l.offset||e.focusNode!==u.node||e.focusOffset!==u.offset)&&(t=t.createRange(),t.setStart(l.node,l.offset),e.removeAllRanges(),i>r?(e.addRange(t),e.extend(u.node,u.offset)):(t.setEnd(u.node,u.offset),e.addRange(t)))}}for(t=[],e=n;e=e.parentNode;)e.nodeType===1&&t.push({element:e,left:e.scrollLeft,top:e.scrollTop});for(typeof n.focus=="function"&&n.focus(),n=0;n=document.documentMode,$t=null,gi=null,Ln=null,wi=!1;function ho(e,t,n){var r=n.window===n?n.document:n.nodeType===9?n:n.ownerDocument;wi||$t==null||$t!==Dr(r)||(r=$t,"selectionStart"in r&&cu(r)?r={start:r.selectionStart,end:r.selectionEnd}:(r=(r.ownerDocument&&r.ownerDocument.defaultView||window).getSelection(),r={anchorNode:r.anchorNode,anchorOffset:r.anchorOffset,focusNode:r.focusNode,focusOffset:r.focusOffset}),Ln&&Vn(Ln,r)||(Ln=r,r=Br(gi,"onSelect"),0Bt||(e.current=_i[Bt],_i[Bt]=null,Bt--)}function M(e,t){Bt++,_i[Bt]=e.current,e.current=t}var mt={},ue=vt(mt),pe=vt(!1),zt=mt;function tn(e,t){var n=e.type.contextTypes;if(!n)return mt;var r=e.stateNode;if(r&&r.__reactInternalMemoizedUnmaskedChildContext===t)return r.__reactInternalMemoizedMaskedChildContext;var l={},i;for(i in n)l[i]=t[i];return r&&(e=e.stateNode,e.__reactInternalMemoizedUnmaskedChildContext=t,e.__reactInternalMemoizedMaskedChildContext=l),l}function me(e){return e=e.childContextTypes,e!=null}function Wr(){j(pe),j(ue)}function Eo(e,t,n){if(ue.current!==mt)throw Error(y(168));M(ue,t),M(pe,n)}function la(e,t,n){var r=e.stateNode;if(t=t.childContextTypes,typeof r.getChildContext!="function")return n;r=r.getChildContext();for(var l in r)if(!(l in t))throw Error(y(108,nf(e)||"Unknown",l));return V({},n,r)}function Qr(e){return e=(e=e.stateNode)&&e.__reactInternalMemoizedMergedChildContext||mt,zt=ue.current,M(ue,e),M(pe,pe.current),!0}function xo(e,t,n){var r=e.stateNode;if(!r)throw Error(y(169));n?(e=la(e,t,zt),r.__reactInternalMemoizedMergedChildContext=e,j(pe),j(ue),M(ue,e)):j(pe),M(pe,n)}var He=null,fl=!1,Vl=!1;function ia(e){He===null?He=[e]:He.push(e)}function kd(e){fl=!0,ia(e)}function yt(){if(!Vl&&He!==null){Vl=!0;var e=0,t=I;try{var n=He;for(I=1;e>=u,l-=u,We=1<<32-Ie(t)+l|n<N?(H=_,_=null):H=_.sibling;var O=p(f,_,d[N],v);if(O===null){_===null&&(_=H);break}e&&_&&O.alternate===null&&t(f,_),a=i(O,a,N),C===null?E=O:C.sibling=O,C=O,_=H}if(N===d.length)return n(f,_),U&&St(f,N),E;if(_===null){for(;NN?(H=_,_=null):H=_.sibling;var 
ze=p(f,_,O.value,v);if(ze===null){_===null&&(_=H);break}e&&_&&ze.alternate===null&&t(f,_),a=i(ze,a,N),C===null?E=ze:C.sibling=ze,C=ze,_=H}if(O.done)return n(f,_),U&&St(f,N),E;if(_===null){for(;!O.done;N++,O=d.next())O=m(f,O.value,v),O!==null&&(a=i(O,a,N),C===null?E=O:C.sibling=O,C=O);return U&&St(f,N),E}for(_=r(f,_);!O.done;N++,O=d.next())O=g(_,f,N,O.value,v),O!==null&&(e&&O.alternate!==null&&_.delete(O.key===null?N:O.key),a=i(O,a,N),C===null?E=O:C.sibling=O,C=O);return e&&_.forEach(function(fn){return t(f,fn)}),U&&St(f,N),E}function F(f,a,d,v){if(typeof d=="object"&&d!==null&&d.type===Ft&&d.key===null&&(d=d.props.children),typeof d=="object"&&d!==null){switch(d.$$typeof){case ir:e:{for(var E=d.key,C=a;C!==null;){if(C.key===E){if(E=d.type,E===Ft){if(C.tag===7){n(f,C.sibling),a=l(C,d.props.children),a.return=f,f=a;break e}}else if(C.elementType===E||typeof E=="object"&&E!==null&&E.$$typeof===be&&To(E)===C.type){n(f,C.sibling),a=l(C,d.props),a.ref=gn(f,C,d),a.return=f,f=a;break e}n(f,C);break}else t(f,C);C=C.sibling}d.type===Ft?(a=Pt(d.props.children,f.mode,v,d.key),a.return=f,f=a):(v=Rr(d.type,d.key,d.props,null,f.mode,v),v.ref=gn(f,a,d),v.return=f,f=v)}return u(f);case jt:e:{for(C=d.key;a!==null;){if(a.key===C)if(a.tag===4&&a.stateNode.containerInfo===d.containerInfo&&a.stateNode.implementation===d.implementation){n(f,a.sibling),a=l(a,d.children||[]),a.return=f,f=a;break e}else{n(f,a);break}else t(f,a);a=a.sibling}a=Gl(d,f.mode,v),a.return=f,f=a}return u(f);case be:return C=d._init,F(f,a,C(d._payload),v)}if(En(d))return w(f,a,d,v);if(pn(d))return k(f,a,d,v);vr(f,d)}return typeof d=="string"&&d!==""||typeof d=="number"?(d=""+d,a!==null&&a.tag===6?(n(f,a.sibling),a=l(a,d),a.return=f,f=a):(n(f,a),a=Yl(d,f.mode,v),a.return=f,f=a),u(f)):n(f,a)}return F}var rn=pa(!0),ma=pa(!1),tr={},Ve=vt(tr),Qn=vt(tr),Kn=vt(tr);function _t(e){if(e===tr)throw Error(y(174));return e}function wu(e,t){switch(M(Kn,t),M(Qn,e),M(Ve,tr),e=t.nodeType,e){case 9:case 11:t=(t=t.documentElement)?t.namespaceURI:si(null,"");break;default:e=e===8?t.parentNode:t,t=e.namespaceURI||null,e=e.tagName,t=si(t,e)}j(Ve),M(Ve,t)}function ln(){j(Ve),j(Qn),j(Kn)}function ha(e){_t(Kn.current);var t=_t(Ve.current),n=si(t,e.type);t!==n&&(M(Qn,e),M(Ve,n))}function ku(e){Qn.current===e&&(j(Ve),j(Qn))}var $=vt(0);function qr(e){for(var t=e;t!==null;){if(t.tag===13){var n=t.memoizedState;if(n!==null&&(n=n.dehydrated,n===null||n.data==="$?"||n.data==="$!"))return t}else if(t.tag===19&&t.memoizedProps.revealOrder!==void 0){if(t.flags&128)return t}else if(t.child!==null){t.child.return=t,t=t.child;continue}if(t===e)break;for(;t.sibling===null;){if(t.return===null||t.return===e)return null;t=t.return}t.sibling.return=t.return,t=t.sibling}return null}var Bl=[];function Su(){for(var e=0;en?n:4,e(!0);var r=Hl.transition;Hl.transition={};try{e(!1),t()}finally{I=n,Hl.transition=r}}function Oa(){return Pe().memoizedState}function Cd(e,t,n){var r=ft(e);if(n={lane:r,action:n,hasEagerState:!1,eagerState:null,next:null},Ra(e))Ia(t,n);else if(n=aa(e,t,n,r),n!==null){var l=se();Me(n,e,r,l),Ma(n,t,r)}}function _d(e,t,n){var r=ft(e),l={lane:r,action:n,hasEagerState:!1,eagerState:null,next:null};if(Ra(e))Ia(t,l);else{var i=e.alternate;if(e.lanes===0&&(i===null||i.lanes===0)&&(i=t.lastRenderedReducer,i!==null))try{var u=t.lastRenderedState,o=i(u,n);if(l.hasEagerState=!0,l.eagerState=o,je(o,u)){var 
s=t.interleaved;s===null?(l.next=l,yu(t)):(l.next=s.next,s.next=l),t.interleaved=l;return}}catch{}finally{}n=aa(e,t,l,r),n!==null&&(l=se(),Me(n,e,r,l),Ma(n,t,r))}}function Ra(e){var t=e.alternate;return e===A||t!==null&&t===A}function Ia(e,t){Tn=Jr=!0;var n=e.pending;n===null?t.next=t:(t.next=n.next,n.next=t),e.pending=t}function Ma(e,t,n){if(n&4194240){var r=t.lanes;r&=e.pendingLanes,n|=r,t.lanes=n,ru(e,n)}}var br={readContext:Ne,useCallback:re,useContext:re,useEffect:re,useImperativeHandle:re,useInsertionEffect:re,useLayoutEffect:re,useMemo:re,useReducer:re,useRef:re,useState:re,useDebugValue:re,useDeferredValue:re,useTransition:re,useMutableSource:re,useSyncExternalStore:re,useId:re,unstable_isNewReconciler:!1},Nd={readContext:Ne,useCallback:function(e,t){return Ue().memoizedState=[e,t===void 0?null:t],e},useContext:Ne,useEffect:Ro,useImperativeHandle:function(e,t,n){return n=n!=null?n.concat([e]):null,zr(4194308,4,Na.bind(null,t,e),n)},useLayoutEffect:function(e,t){return zr(4194308,4,e,t)},useInsertionEffect:function(e,t){return zr(4,2,e,t)},useMemo:function(e,t){var n=Ue();return t=t===void 0?null:t,e=e(),n.memoizedState=[e,t],e},useReducer:function(e,t,n){var r=Ue();return t=n!==void 0?n(t):t,r.memoizedState=r.baseState=t,e={pending:null,interleaved:null,lanes:0,dispatch:null,lastRenderedReducer:e,lastRenderedState:t},r.queue=e,e=e.dispatch=Cd.bind(null,A,e),[r.memoizedState,e]},useRef:function(e){var t=Ue();return e={current:e},t.memoizedState=e},useState:Oo,useDebugValue:Nu,useDeferredValue:function(e){return Ue().memoizedState=e},useTransition:function(){var e=Oo(!1),t=e[0];return e=xd.bind(null,e[1]),Ue().memoizedState=e,[t,e]},useMutableSource:function(){},useSyncExternalStore:function(e,t,n){var r=A,l=Ue();if(U){if(n===void 0)throw Error(y(407));n=n()}else{if(n=t(),J===null)throw Error(y(349));Tt&30||ga(r,t,n)}l.memoizedState=n;var i={value:n,getSnapshot:t};return l.queue=i,Ro(ka.bind(null,r,i,e),[e]),r.flags|=2048,Gn(9,wa.bind(null,r,i,n,t),void 0,null),n},useId:function(){var e=Ue(),t=J.identifierPrefix;if(U){var n=Qe,r=We;n=(r&~(1<<32-Ie(r)-1)).toString(32)+n,t=":"+t+"R"+n,n=Xn++,0<\/script>",e=e.removeChild(e.firstChild)):typeof r.is=="string"?e=u.createElement(n,{is:r.is}):(e=u.createElement(n),n==="select"&&(u=e,r.multiple?u.multiple=!0:r.size&&(u.size=r.size))):e=u.createElementNS(e,n),e[$e]=t,e[Wn]=r,Ha(e,t,!1,!1),t.stateNode=e;e:{switch(u=ci(n,r),n){case"dialog":D("cancel",e),D("close",e),l=r;break;case"iframe":case"object":case"embed":D("load",e),l=r;break;case"video":case"audio":for(l=0;lon&&(t.flags|=128,r=!0,wn(i,!1),t.lanes=4194304)}else{if(!r)if(e=qr(u),e!==null){if(t.flags|=128,r=!0,n=e.updateQueue,n!==null&&(t.updateQueue=n,t.flags|=4),wn(i,!0),i.tail===null&&i.tailMode==="hidden"&&!u.alternate&&!U)return le(t),null}else 2*Q()-i.renderingStartTime>on&&n!==1073741824&&(t.flags|=128,r=!0,wn(i,!1),t.lanes=4194304);i.isBackwards?(u.sibling=t.child,t.child=u):(n=i.last,n!==null?n.sibling=u:t.child=u,i.last=u)}return i.tail!==null?(t=i.tail,i.rendering=t,i.tail=t.sibling,i.renderingStartTime=Q(),t.sibling=null,n=$.current,M($,r?n&1|2:n&1),t):(le(t),null);case 22:case 23:return Ru(),r=t.memoizedState!==null,e!==null&&e.memoizedState!==null!==r&&(t.flags|=8192),r&&t.mode&1?ve&1073741824&&(le(t),t.subtreeFlags&6&&(t.flags|=8192)):le(t),null;case 24:return null;case 25:return null}throw Error(y(156,t.tag))}function Md(e,t){switch(du(t),t.tag){case 1:return me(t.type)&&Wr(),e=t.flags,e&65536?(t.flags=e&-65537|128,t):null;case 3:return 
ln(),j(pe),j(ue),Su(),e=t.flags,e&65536&&!(e&128)?(t.flags=e&-65537|128,t):null;case 5:return ku(t),null;case 13:if(j($),e=t.memoizedState,e!==null&&e.dehydrated!==null){if(t.alternate===null)throw Error(y(340));nn()}return e=t.flags,e&65536?(t.flags=e&-65537|128,t):null;case 19:return j($),null;case 4:return ln(),null;case 10:return vu(t.type._context),null;case 22:case 23:return Ru(),null;case 24:return null;default:return null}}var gr=!1,ie=!1,Dd=typeof WeakSet=="function"?WeakSet:Set,S=null;function Kt(e,t){var n=e.ref;if(n!==null)if(typeof n=="function")try{n(null)}catch(r){B(e,t,r)}else n.current=null}function Fi(e,t,n){try{n()}catch(r){B(e,t,r)}}var Vo=!1;function jd(e,t){if(ki=Ar,e=Gs(),cu(e)){if("selectionStart"in e)var n={start:e.selectionStart,end:e.selectionEnd};else e:{n=(n=e.ownerDocument)&&n.defaultView||window;var r=n.getSelection&&n.getSelection();if(r&&r.rangeCount!==0){n=r.anchorNode;var l=r.anchorOffset,i=r.focusNode;r=r.focusOffset;try{n.nodeType,i.nodeType}catch{n=null;break e}var u=0,o=-1,s=-1,c=0,h=0,m=e,p=null;t:for(;;){for(var g;m!==n||l!==0&&m.nodeType!==3||(o=u+l),m!==i||r!==0&&m.nodeType!==3||(s=u+r),m.nodeType===3&&(u+=m.nodeValue.length),(g=m.firstChild)!==null;)p=m,m=g;for(;;){if(m===e)break t;if(p===n&&++c===l&&(o=u),p===i&&++h===r&&(s=u),(g=m.nextSibling)!==null)break;m=p,p=m.parentNode}m=g}n=o===-1||s===-1?null:{start:o,end:s}}else n=null}n=n||{start:0,end:0}}else n=null;for(Si={focusedElem:e,selectionRange:n},Ar=!1,S=t;S!==null;)if(t=S,e=t.child,(t.subtreeFlags&1028)!==0&&e!==null)e.return=t,S=e;else for(;S!==null;){t=S;try{var w=t.alternate;if(t.flags&1024)switch(t.tag){case 0:case 11:case 15:break;case 1:if(w!==null){var k=w.memoizedProps,F=w.memoizedState,f=t.stateNode,a=f.getSnapshotBeforeUpdate(t.elementType===t.type?k:Te(t.type,k),F);f.__reactInternalSnapshotBeforeUpdate=a}break;case 3:var d=t.stateNode.containerInfo;d.nodeType===1?d.textContent="":d.nodeType===9&&d.documentElement&&d.removeChild(d.documentElement);break;case 5:case 6:case 4:case 17:break;default:throw Error(y(163))}}catch(v){B(t,t.return,v)}if(e=t.sibling,e!==null){e.return=t.return,S=e;break}S=t.return}return w=Vo,Vo=!1,w}function On(e,t,n){var r=t.updateQueue;if(r=r!==null?r.lastEffect:null,r!==null){var l=r=r.next;do{if((l.tag&e)===e){var i=l.destroy;l.destroy=void 0,i!==void 0&&Fi(t,n,i)}l=l.next}while(l!==r)}}function ml(e,t){if(t=t.updateQueue,t=t!==null?t.lastEffect:null,t!==null){var n=t=t.next;do{if((n.tag&e)===e){var r=n.create;n.destroy=r()}n=n.next}while(n!==t)}}function Ui(e){var t=e.ref;if(t!==null){var n=e.stateNode;switch(e.tag){case 5:e=n;break;default:e=n}typeof t=="function"?t(e):t.current=e}}function Ka(e){var t=e.alternate;t!==null&&(e.alternate=null,Ka(t)),e.child=null,e.deletions=null,e.sibling=null,e.tag===5&&(t=e.stateNode,t!==null&&(delete t[$e],delete t[Wn],delete t[Ci],delete t[gd],delete t[wd])),e.stateNode=null,e.return=null,e.dependencies=null,e.memoizedProps=null,e.memoizedState=null,e.pendingProps=null,e.stateNode=null,e.updateQueue=null}function Xa(e){return e.tag===5||e.tag===3||e.tag===4}function Bo(e){e:for(;;){for(;e.sibling===null;){if(e.return===null||Xa(e.return))return null;e=e.return}for(e.sibling.return=e.return,e=e.sibling;e.tag!==5&&e.tag!==6&&e.tag!==18;){if(e.flags&2||e.child===null||e.tag===4)continue e;e.child.return=e,e=e.child}if(!(e.flags&2))return e.stateNode}}function $i(e,t,n){var 
r=e.tag;if(r===5||r===6)e=e.stateNode,t?n.nodeType===8?n.parentNode.insertBefore(e,t):n.insertBefore(e,t):(n.nodeType===8?(t=n.parentNode,t.insertBefore(e,n)):(t=n,t.appendChild(e)),n=n._reactRootContainer,n!=null||t.onclick!==null||(t.onclick=Hr));else if(r!==4&&(e=e.child,e!==null))for($i(e,t,n),e=e.sibling;e!==null;)$i(e,t,n),e=e.sibling}function Ai(e,t,n){var r=e.tag;if(r===5||r===6)e=e.stateNode,t?n.insertBefore(e,t):n.appendChild(e);else if(r!==4&&(e=e.child,e!==null))for(Ai(e,t,n),e=e.sibling;e!==null;)Ai(e,t,n),e=e.sibling}var b=null,Oe=!1;function Je(e,t,n){for(n=n.child;n!==null;)Ya(e,t,n),n=n.sibling}function Ya(e,t,n){if(Ae&&typeof Ae.onCommitFiberUnmount=="function")try{Ae.onCommitFiberUnmount(ul,n)}catch{}switch(n.tag){case 5:ie||Kt(n,t);case 6:var r=b,l=Oe;b=null,Je(e,t,n),b=r,Oe=l,b!==null&&(Oe?(e=b,n=n.stateNode,e.nodeType===8?e.parentNode.removeChild(n):e.removeChild(n)):b.removeChild(n.stateNode));break;case 18:b!==null&&(Oe?(e=b,n=n.stateNode,e.nodeType===8?Al(e.parentNode,n):e.nodeType===1&&Al(e,n),$n(e)):Al(b,n.stateNode));break;case 4:r=b,l=Oe,b=n.stateNode.containerInfo,Oe=!0,Je(e,t,n),b=r,Oe=l;break;case 0:case 11:case 14:case 15:if(!ie&&(r=n.updateQueue,r!==null&&(r=r.lastEffect,r!==null))){l=r=r.next;do{var i=l,u=i.destroy;i=i.tag,u!==void 0&&(i&2||i&4)&&Fi(n,t,u),l=l.next}while(l!==r)}Je(e,t,n);break;case 1:if(!ie&&(Kt(n,t),r=n.stateNode,typeof r.componentWillUnmount=="function"))try{r.props=n.memoizedProps,r.state=n.memoizedState,r.componentWillUnmount()}catch(o){B(n,t,o)}Je(e,t,n);break;case 21:Je(e,t,n);break;case 22:n.mode&1?(ie=(r=ie)||n.memoizedState!==null,Je(e,t,n),ie=r):Je(e,t,n);break;default:Je(e,t,n)}}function Ho(e){var t=e.updateQueue;if(t!==null){e.updateQueue=null;var n=e.stateNode;n===null&&(n=e.stateNode=new Dd),t.forEach(function(r){var l=Qd.bind(null,e,r);n.has(r)||(n.add(r),r.then(l,l))})}}function Le(e,t){var n=t.deletions;if(n!==null)for(var r=0;rl&&(l=u),r&=~i}if(r=l,r=Q()-r,r=(120>r?120:480>r?480:1080>r?1080:1920>r?1920:3e3>r?3e3:4320>r?4320:1960*Ud(r/1960))-r,10e?16:e,lt===null)var r=!1;else{if(e=lt,lt=null,nl=0,R&6)throw Error(y(331));var l=R;for(R|=4,S=e.current;S!==null;){var i=S,u=i.child;if(S.flags&16){var o=i.deletions;if(o!==null){for(var s=0;sQ()-Tu?Nt(e,0):Lu|=n),he(e,t)}function nc(e,t){t===0&&(e.mode&1?(t=ar,ar<<=1,!(ar&130023424)&&(ar=4194304)):t=1);var n=se();e=Ge(e,t),e!==null&&(Jn(e,t,n),he(e,n))}function Wd(e){var t=e.memoizedState,n=0;t!==null&&(n=t.retryLane),nc(e,n)}function Qd(e,t){var n=0;switch(e.tag){case 13:var r=e.stateNode,l=e.memoizedState;l!==null&&(n=l.retryLane);break;case 19:r=e.stateNode;break;default:throw Error(y(314))}r!==null&&r.delete(t),nc(e,n)}var rc;rc=function(e,t,n){if(e!==null)if(e.memoizedProps!==t.pendingProps||pe.current)de=!0;else{if(!(e.lanes&n)&&!(t.flags&128))return de=!1,Rd(e,t,n);de=!!(e.flags&131072)}else de=!1,U&&t.flags&1048576&&ua(t,Xr,t.index);switch(t.lanes=0,t.tag){case 2:var r=t.type;Lr(e,t),e=t.pendingProps;var l=tn(t,ue.current);Jt(t,n),l=xu(null,t,r,e,l,n);var i=Cu();return t.flags|=1,typeof l=="object"&&l!==null&&typeof l.render=="function"&&l.$$typeof===void 0?(t.tag=1,t.memoizedState=null,t.updateQueue=null,me(r)?(i=!0,Qr(t)):i=!1,t.memoizedState=l.state!==null&&l.state!==void 0?l.state:null,gu(t),l.updater=dl,t.stateNode=l,l._reactInternals=t,Ti(t,r,e,n),t=Ii(null,t,r,!0,i,n)):(t.tag=0,U&&i&&fu(t),oe(null,t,l,n),t=t.child),t;case 16:r=t.elementType;e:{switch(Lr(e,t),e=t.pendingProps,l=r._init,r=l(r._payload),t.type=r,l=t.tag=Xd(r),e=Te(r,e),l){case 
0:t=Ri(null,t,r,e,n);break e;case 1:t=Uo(null,t,r,e,n);break e;case 11:t=jo(null,t,r,e,n);break e;case 14:t=Fo(null,t,r,Te(r.type,e),n);break e}throw Error(y(306,r,""))}return t;case 0:return r=t.type,l=t.pendingProps,l=t.elementType===r?l:Te(r,l),Ri(e,t,r,l,n);case 1:return r=t.type,l=t.pendingProps,l=t.elementType===r?l:Te(r,l),Uo(e,t,r,l,n);case 3:e:{if(Aa(t),e===null)throw Error(y(387));r=t.pendingProps,i=t.memoizedState,l=i.element,ca(e,t),Zr(t,r,null,n);var u=t.memoizedState;if(r=u.element,i.isDehydrated)if(i={element:r,isDehydrated:!1,cache:u.cache,pendingSuspenseBoundaries:u.pendingSuspenseBoundaries,transitions:u.transitions},t.updateQueue.baseState=i,t.memoizedState=i,t.flags&256){l=un(Error(y(423)),t),t=$o(e,t,r,n,l);break e}else if(r!==l){l=un(Error(y(424)),t),t=$o(e,t,r,n,l);break e}else for(ye=st(t.stateNode.containerInfo.firstChild),ge=t,U=!0,Re=null,n=ma(t,null,r,n),t.child=n;n;)n.flags=n.flags&-3|4096,n=n.sibling;else{if(nn(),r===l){t=Ze(e,t,n);break e}oe(e,t,r,n)}t=t.child}return t;case 5:return ha(t),e===null&&Pi(t),r=t.type,l=t.pendingProps,i=e!==null?e.memoizedProps:null,u=l.children,Ei(r,l)?u=null:i!==null&&Ei(r,i)&&(t.flags|=32),$a(e,t),oe(e,t,u,n),t.child;case 6:return e===null&&Pi(t),null;case 13:return Va(e,t,n);case 4:return wu(t,t.stateNode.containerInfo),r=t.pendingProps,e===null?t.child=rn(t,null,r,n):oe(e,t,r,n),t.child;case 11:return r=t.type,l=t.pendingProps,l=t.elementType===r?l:Te(r,l),jo(e,t,r,l,n);case 7:return oe(e,t,t.pendingProps,n),t.child;case 8:return oe(e,t,t.pendingProps.children,n),t.child;case 12:return oe(e,t,t.pendingProps.children,n),t.child;case 10:e:{if(r=t.type._context,l=t.pendingProps,i=t.memoizedProps,u=l.value,M(Yr,r._currentValue),r._currentValue=u,i!==null)if(je(i.value,u)){if(i.children===l.children&&!pe.current){t=Ze(e,t,n);break e}}else for(i=t.child,i!==null&&(i.return=t);i!==null;){var o=i.dependencies;if(o!==null){u=i.child;for(var s=o.firstContext;s!==null;){if(s.context===r){if(i.tag===1){s=Ke(-1,n&-n),s.tag=2;var c=i.updateQueue;if(c!==null){c=c.shared;var h=c.pending;h===null?s.next=s:(s.next=h.next,h.next=s),c.pending=s}}i.lanes|=n,s=i.alternate,s!==null&&(s.lanes|=n),zi(i.return,n,t),o.lanes|=n;break}s=s.next}}else if(i.tag===10)u=i.type===t.type?null:i.child;else if(i.tag===18){if(u=i.return,u===null)throw Error(y(341));u.lanes|=n,o=u.alternate,o!==null&&(o.lanes|=n),zi(u,n,t),u=i.sibling}else u=i.child;if(u!==null)u.return=i;else for(u=i;u!==null;){if(u===t){u=null;break}if(i=u.sibling,i!==null){i.return=u.return,u=i;break}u=u.return}i=u}oe(e,t,l.children,n),t=t.child}return t;case 9:return l=t.type,r=t.pendingProps.children,Jt(t,n),l=Ne(l),r=r(l),t.flags|=1,oe(e,t,r,n),t.child;case 14:return r=t.type,l=Te(r,t.pendingProps),l=Te(r.type,l),Fo(e,t,r,l,n);case 15:return Fa(e,t,t.type,t.pendingProps,n);case 17:return r=t.type,l=t.pendingProps,l=t.elementType===r?l:Te(r,l),Lr(e,t),t.tag=1,me(r)?(e=!0,Qr(t)):e=!1,Jt(t,n),da(t,r,l),Ti(t,r,l,n),Ii(null,t,r,!0,e,n);case 19:return Ba(e,t,n);case 22:return Ua(e,t,n)}throw Error(y(156,t.tag))};function lc(e,t){return Ts(e,t)}function Kd(e,t,n,r){this.tag=e,this.key=n,this.sibling=this.child=this.return=this.stateNode=this.type=this.elementType=null,this.index=0,this.ref=null,this.pendingProps=t,this.dependencies=this.memoizedState=this.updateQueue=this.memoizedProps=null,this.mode=r,this.subtreeFlags=this.flags=0,this.deletions=null,this.childLanes=this.lanes=0,this.alternate=null}function Ce(e,t,n,r){return new Kd(e,t,n,r)}function Mu(e){return 
e=e.prototype,!(!e||!e.isReactComponent)}function Xd(e){if(typeof e=="function")return Mu(e)?1:0;if(e!=null){if(e=e.$$typeof,e===bi)return 11;if(e===eu)return 14}return 2}function dt(e,t){var n=e.alternate;return n===null?(n=Ce(e.tag,t,e.key,e.mode),n.elementType=e.elementType,n.type=e.type,n.stateNode=e.stateNode,n.alternate=e,e.alternate=n):(n.pendingProps=t,n.type=e.type,n.flags=0,n.subtreeFlags=0,n.deletions=null),n.flags=e.flags&14680064,n.childLanes=e.childLanes,n.lanes=e.lanes,n.child=e.child,n.memoizedProps=e.memoizedProps,n.memoizedState=e.memoizedState,n.updateQueue=e.updateQueue,t=e.dependencies,n.dependencies=t===null?null:{lanes:t.lanes,firstContext:t.firstContext},n.sibling=e.sibling,n.index=e.index,n.ref=e.ref,n}function Rr(e,t,n,r,l,i){var u=2;if(r=e,typeof e=="function")Mu(e)&&(u=1);else if(typeof e=="string")u=5;else e:switch(e){case Ft:return Pt(n.children,l,i,t);case Ji:u=8,l|=8;break;case ei:return e=Ce(12,n,t,l|2),e.elementType=ei,e.lanes=i,e;case ti:return e=Ce(13,n,t,l),e.elementType=ti,e.lanes=i,e;case ni:return e=Ce(19,n,t,l),e.elementType=ni,e.lanes=i,e;case ps:return vl(n,l,i,t);default:if(typeof e=="object"&&e!==null)switch(e.$$typeof){case fs:u=10;break e;case ds:u=9;break e;case bi:u=11;break e;case eu:u=14;break e;case be:u=16,r=null;break e}throw Error(y(130,e==null?e:typeof e,""))}return t=Ce(u,n,t,l),t.elementType=e,t.type=r,t.lanes=i,t}function Pt(e,t,n,r){return e=Ce(7,e,r,t),e.lanes=n,e}function vl(e,t,n,r){return e=Ce(22,e,r,t),e.elementType=ps,e.lanes=n,e.stateNode={isHidden:!1},e}function Yl(e,t,n){return e=Ce(6,e,null,t),e.lanes=n,e}function Gl(e,t,n){return t=Ce(4,e.children!==null?e.children:[],e.key,t),t.lanes=n,t.stateNode={containerInfo:e.containerInfo,pendingChildren:null,implementation:e.implementation},t}function Yd(e,t,n,r,l){this.tag=t,this.containerInfo=e,this.finishedWork=this.pingCache=this.current=this.pendingChildren=null,this.timeoutHandle=-1,this.callbackNode=this.pendingContext=this.context=null,this.callbackPriority=0,this.eventTimes=Ll(0),this.expirationTimes=Ll(-1),this.entangledLanes=this.finishedLanes=this.mutableReadLanes=this.expiredLanes=this.pingedLanes=this.suspendedLanes=this.pendingLanes=0,this.entanglements=Ll(0),this.identifierPrefix=r,this.onRecoverableError=l,this.mutableSourceEagerHydrationData=null}function Du(e,t,n,r,l,i,u,o,s){return e=new Yd(e,t,n,o,s),t===1?(t=1,i===!0&&(t|=8)):t=0,i=Ce(3,null,null,t),e.current=i,i.stateNode=e,i.memoizedState={element:r,isDehydrated:n,cache:null,transitions:null,pendingSuspenseBoundaries:null},gu(i),e}function Gd(e,t,n){var r=3"u"||typeof __REACT_DEVTOOLS_GLOBAL_HOOK__.checkDCE!="function"))try{__REACT_DEVTOOLS_GLOBAL_HOOK__.checkDCE(t)}catch(n){console.error(n)}}t(),e.exports=ke})(Gc);var sc,qo=ql;sc=qo.createRoot,qo.hydrateRoot;const K=new us;console.log("!",{hfInference:K,a:Object.keys(K),b:Object.getOwnPropertyNames(us)});const 
ep=["audio-classification","audio-to-audio","automatic-speech-recognition","conversational","depth-estimation","document-question-answering","feature-extraction","fill-mask","graph-ml","image-classification","image-segmentation","image-to-image","image-to-text","multiple-choice","object-detection","other","question-answering","reinforcement-learning","robotics","sentence-similarity","summarization","table-question-answering","table-to-text","tabular-classification","tabular-regression","tabular-to-text","text-classification","text-generation","text-retrieval","text-to-image","text-to-speech","text2text-generation","time-series-forecasting","token-classification","translation","unconditional-image-generation","video-classification","visual-question-answering","voice-activity-detection","zero-shot-classification","zero-shot-image-classification"].filter(e=>Object.getOwnPropertyNames(Object.getPrototypeOf(K)).includes(Yc(e))),Zl={},tp=async e=>{if(Zl[e])return Zl[e];const t=[];for await(const n of Vc({search:{task:e}}))t.push(n);return Zl[e]=t,t},np=e=>De("div",{className:"w-full",children:[L("p",{className:"text-xl",children:"Task"}),De("select",{className:"bg-yellow-200 cursor-pointer py-6 text-center w-full",onChange:t=>e.setTask(t.target.value),placeholder:"Select a task",value:e.task,children:[L("option",{children:"Select a task"}),ep.map(t=>L("option",{value:t,children:t},t))]})]}),rp=e=>{const[t,n]=ee.useState(!1),[r,l]=ee.useState([]);return ee.useEffect(()=>{e.task&&(n(!0),tp(e.task).then(i=>l(i)).finally(()=>n(!1)))},[e.task]),r.length>0?De("div",{className:"w-full",children:[L("p",{className:"text-xl",children:"Model"}),De("select",{className:"bg-yellow-200 cursor-pointer py-6 text-center w-full",onChange:i=>e.setModel(i.target.value),placeholder:"Select a model",value:e.model,children:[L("option",{children:"Select a model"}),r.map(i=>L("option",{value:i.name,children:i.name},i.name))]})]}):L("p",{className:"text-center w-full",children:e.task?t?"Loading models for this task":"No models available for this task":"Select a task to view available models"})},lp=e=>De("div",{className:"w-full",children:[L("p",{className:"text-xl",children:"Inputs"}),e.inputs?L("audio",{className:"w-full",controls:!0,src:URL.createObjectURL(e.inputs)}):De("label",{className:"bg-yellow-200 block cursor-pointer py-6 text-center w-full",children:["No file chosen",L("input",{accept:"audio/*",className:"hidden",onChange:t=>{t.target.files&&t.target.files[0]&&e.setInputs(t.target.files[0])},type:"file"})]})]}),ip=e=>De("div",{className:"w-full",children:[L("p",{className:"text-xl",children:"Inputs"}),e.inputs?L("img",{className:"w-full",src:URL.createObjectURL(e.inputs)}):De("label",{className:"bg-yellow-200 block cursor-pointer py-6 text-center w-full",children:["No file 
chosen",L("input",{accept:"image/*",className:"hidden",onChange:t=>{t.target.files&&t.target.files[0]&&e.setInputs(t.target.files[0])},type:"file"})]})]}),up=e=>e.model&&e.task?["audio-classification","automatic-speech-recognition"].includes(e.task)?L(lp,{inputs:e.inputs,model:e.model,setInputs:e.setInputs,task:e.task}):["image-classification","image-segmentation","object-detection"].includes(e.task)?L(ip,{inputs:e.inputs,model:e.model,setInputs:e.setInputs,task:e.task}):["conversational","feature-extraction","fill-mask","question-answering","summarization","table-question-answering","text-classification","text-generation","text-to-image","token-classification","translation","zero-shot-classification"].includes(e.task)?De("div",{className:"w-full",children:[L("p",{className:"text-xl",children:"Inputs"}),L("input",{className:"bg-yellow-200 py-6 text-center w-full",onChange:t=>{t.target.value?e.setInputs(t.target.value):e.setInputs("")},type:"text",value:e.inputs??""})]}):L("div",{className:"w-full",children:L("p",{className:"text-center",children:"Inference for this task is not yet supported."})}):L(ee.Fragment,{}),op=e=>{if(e.inputs&&e.model&&e.task){const t=()=>{e.setInputs(void 0),e.setOutput(void 0)};return L("button",{className:`border-4 border-yellow-200 py-6 text-center w-full ${e.loading?"cursor-not-allowed opacity-50":""}`,disabled:e.loading,onClick:t,children:"Clear"})}return L(ee.Fragment,{})},sp=e=>{if(e.inputs&&e.model&&e.task){const t=async()=>{if(e.inputs&&e.model&&e.task){e.setLoading(!0);try{switch(e.task){case"audio-classification":{const n=await K.audioClassification({data:e.inputs,model:e.model});e.setOutput(n);break}case"automatic-speech-recognition":{const n=await K.automaticSpeechRecognition({data:e.inputs,model:e.model});e.setOutput(n);break}case"conversational":{const n=await K.conversational({inputs:{text:e.inputs},model:e.model});e.setOutput(n);break}case"feature-extraction":{const n=await K.featureExtraction({inputs:{[e.inputs]:e.inputs},model:e.model});e.setOutput(n);break}case"fill-mask":{const n=await K.fillMask({inputs:e.inputs,model:e.model});e.setOutput(n);break}case"image-classification":{const n=await K.imageClassification({data:e.inputs,model:e.model});e.setOutput(n);break}case"image-segmentation":{const n=await K.imageSegmentation({data:e.inputs,model:e.model});e.setOutput(n);break}case"object-detection":{const n=await K.objectDetection({data:e.inputs,model:e.model});e.setOutput(n);break}case"question-answering":{const n=await K.questionAnswer({inputs:{context:e.inputs,question:e.inputs},model:e.model});e.setOutput(n);break}case"summarization":{const n=await K.summarization({inputs:e.inputs,model:e.model});e.setOutput(n);break}case"table-question-answering":{const n=await K.tableQuestionAnswer({inputs:{query:e.inputs,table:{[e.inputs]:[e.inputs]}},model:e.model});e.setOutput(n);break}case"text-classification":{const n=await K.textClassification({inputs:e.inputs,model:e.model});e.setOutput(n);break}case"text-generation":{const n=await K.textGeneration({inputs:e.inputs,model:e.model});e.setOutput(n);break}case"text-to-image":{const n=await K.textToImage({inputs:e.inputs,model:e.model});e.setOutput(n);break}case"token-classification":{const n=await K.tokenClassification({inputs:e.inputs,model:e.model});e.setOutput(n);break}case"translation":{const n=await K.translation({inputs:e.inputs,model:e.model});e.setOutput(n);break}case"zero-shot-classification":{const n=await 
K.zeroShotClassification({inputs:e.inputs,model:e.model,parameters:{candidate_labels:[e.inputs]}});e.setOutput(n);break}}}catch(n){n instanceof Error&&e.setOutput(n.message)}e.setLoading(!1)}};return L("button",{className:`bg-yellow-200 py-6 text-center w-full ${e.loading?"cursor-not-allowed opacity-50":""}`,disabled:e.loading,onClick:t,children:e.loading?"Submitting":"Submit"})}return L(ee.Fragment,{})},ap=e=>{if(e.output){const t=(()=>{try{return JSON.stringify(e.output,void 0,2)}catch(n){if(n instanceof Error)return`Error during JSON.stringify: ${n.message}`}})();return De("div",{className:"w-full",children:[L("p",{className:"text-xl",children:"Output"}),L("pre",{className:`bg-yellow-200 p-6 w-full whitespace-pre-wrap ${e.loading?"cursor-wait opacity-50":""}`,children:t})]})}return L(ee.Fragment,{})},cp=()=>{const[e,t]=ee.useState(),[n,r]=ee.useState(),[l,i]=ee.useState(),[u,o]=ee.useState(!1),[s,c]=ee.useState();return L("div",{className:"bg-yellow-500 flex flex-col h-full items-center min-h-screen min-w-screen overflow-auto w-full",children:De("div",{className:"flex flex-col items-center justify-center py-24 space-y-12 w-2/3 lg:w-1/3",children:[L("header",{className:"text-center text-6xl",children:"🤗"}),L(np,{setTask:t,task:e}),L(rp,{model:n,setModel:r,task:e}),L(up,{inputs:l,model:n,setInputs:i,task:e}),L(op,{inputs:l,loading:u,model:n,setInputs:i,setOutput:c,task:e}),L(sp,{inputs:l,loading:u,model:n,setLoading:o,setOutput:c,task:e}),L(ap,{loading:u,output:s})]})})},fp=()=>{const e="root",t=document.getElementById(e);if(t){const n=sc(t),r=L(ee.StrictMode,{children:L(cp,{})});n.render(r)}};fp(); diff --git a/spaces/mikeee/wizardlm-1.0-uncensored-llama2-13b-ggmlv3/dl_model.py b/spaces/mikeee/wizardlm-1.0-uncensored-llama2-13b-ggmlv3/dl_model.py deleted file mode 100644 index 76ecb679085a96544d3bbf0ecff2a6fa371bd181..0000000000000000000000000000000000000000 --- a/spaces/mikeee/wizardlm-1.0-uncensored-llama2-13b-ggmlv3/dl_model.py +++ /dev/null @@ -1,73 +0,0 @@ -"""Download modles.""" -# pylint: disable=invalid-name, broad-exception-caught, line-too-long -from typing import Optional - -import typer -from dl_hf_model import dl_hf_model -from loguru import logger - -url = "https://huggingface.co/TheBloke/Upstage-Llama-2-70B-instruct-v2-GGML/blob/main/upstage-llama-2-70b-instruct-v2.ggmlv3.q3_K_S.bin" - - -__version__ = "0.0.1" - -app = typer.Typer( - name="dl-mode", - add_completion=False, - help="donwload models from hf and save to a dir (default models)", -) - - -def _version_callback(value: bool) -> None: - if value: - typer.echo( - f"{app.info.name} v.{__version__} -- download models for given url(s)" - ) - raise typer.Exit() - - -@app.command() -def main( - urls: str = typer.Argument( # pylint: disable=unused-argument - "", - help=f"one or more urls (default {url})", - show_default=False, - ), - version: Optional[bool] = typer.Option( # pylint: disable=unused-argument - None, - "--version", - "-v", - "-V", - help="Show version info and exit.", - callback=_version_callback, - is_eager=True, - ), - model_dir: Optional[str] = typer.Option( - None, - "--mode-dir", - help="dir to save downloaded models (default models)", - ), -): - """Download a model or model given url(s).""" - logger.trace(f"{urls}") - if model_dir is None: - model_dir = "models" - if isinstance(urls, str): - urls.split() - - url_list = urls[:] - if not urls: - url_list = [url] - try: - for elm in url_list: - dl_hf_model(elm) - except Exception as exc: - logger.error(exc) - raise typer.Exit() - - -if __name__ == 
"__main__": - try: - app() - except Exception as exc_: - logger.error(exc_) diff --git a/spaces/ml6team/logo-generator/dalle/models/stage2/layers.py b/spaces/ml6team/logo-generator/dalle/models/stage2/layers.py deleted file mode 100644 index 43b7c9d584f35eb0e6fc8a7a4477a72bec58caa9..0000000000000000000000000000000000000000 --- a/spaces/ml6team/logo-generator/dalle/models/stage2/layers.py +++ /dev/null @@ -1,140 +0,0 @@ -# ------------------------------------------------------------------------------------ -# Minimal DALL-E -# Copyright (c) 2021 KakaoBrain. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------------------ -# Modified from minGPT (https://github.com/karpathy/minGPT) -# Copyright (c) 2020 Andrej Karpathy. All Rights Reserved. -# ------------------------------------------------------------------------------------ - -import math -import torch -import torch.nn as nn -from torch.nn import functional as F - - -class GELU(nn.Module): - def __init__(self, use_approx=False): - super().__init__() - self.use_approx = use_approx - - def forward(self, x): - if self.use_approx: - return x * torch.sigmoid(1.702 * x) - else: - return F.gelu(x) - - -class MultiHeadSelfAttention(nn.Module): - - def __init__(self, - ctx_len: int, - embed_dim: int, - n_heads: int, - resid_pdrop: float, - attn_pdrop: float, - attn_bias: bool, - use_mask: bool = True): - super().__init__() - assert embed_dim % n_heads == 0 - - # key, query, value projections for all heads - self.key = nn.Linear(embed_dim, embed_dim, bias=attn_bias) - self.query = nn.Linear(embed_dim, embed_dim, bias=attn_bias) - self.value = nn.Linear(embed_dim, embed_dim, bias=attn_bias) - - # regularization - self.attn_drop = nn.Dropout(attn_pdrop) - self.resid_drop = nn.Dropout(resid_pdrop) - - # output projection - self.proj = nn.Linear(embed_dim, embed_dim, attn_bias) - - self.n_heads = n_heads - self.ctx_len = ctx_len - self.use_mask = use_mask - if self.use_mask: - self.register_buffer("mask", torch.ones(ctx_len, ctx_len), persistent=False) - self.mask = torch.tril(self.mask).view(1, ctx_len, ctx_len) - - def forward(self, x, use_cache=False, layer_past=None): - B, T, C = x.shape - x = x.transpose(0, 1).contiguous() # (B, T, C) -> (T, B, C) - - # calculate query, key, values for all heads in batch and move head forward to be the batch dim - k = self.key(x).view(T, B*self.n_heads, C//self.n_heads).transpose(0, 1) # (B*nh, T, hs) - q = self.query(x).view(T, B*self.n_heads, C//self.n_heads).transpose(0, 1) # (B*nh, T, hs) - v = self.value(x).view(T, B*self.n_heads, C//self.n_heads).transpose(0, 1) # (B*nh, T, hs) - - if use_cache: - present = torch.stack([k, v]) - - if layer_past is not None: - past_key, past_value = layer_past - k = torch.cat([past_key, k], dim=-2) - v = torch.cat([past_value, v], dim=-2) - - if use_cache and layer_past is not None: - # Tensor shape below: (B * nh, 1, hs) X (B * nh, hs, K) -> (B * nh, 1, K) - att = torch.bmm(q, (k.transpose(-2, -1)) * (1.0 / math.sqrt(k.size(-1)))) - att = F.softmax(att, dim=-1) - att = self.attn_drop(att) - y = torch.bmm(att, v) # (B*nh, 1, K) X (B*nh, K, hs) -> (B*nh, 1, hs) - else: - # Tensor shape below: (B * nh, T, hs) X (B * nh, hs, T) -> (B * nh, T, T) - att = torch.bmm(q, (k.transpose(-2, -1)) * (1.0 / math.sqrt(k.size(-1)))) - if self.use_mask: - mask = self.mask if T == self.ctx_len else self.mask[:, :T, :T] - att = att.masked_fill(mask == 0, float('-inf')) - att = 
F.softmax(att, dim=-1) - att = self.attn_drop(att) - y = torch.bmm(att, v) # (B*nh, T, T) X (B*nh, T, hs) -> (B*nh, T, hs) - y = y.transpose(0, 1).contiguous().view(T, B, C) # re-assemble all head outputs side by side - - # output projection - y = self.resid_drop(self.proj(y)) - if use_cache: - return y.transpose(0, 1).contiguous(), present # (T, B, C) -> (B, T, C) - else: - return y.transpose(0, 1).contiguous() # (T, B, C) -> (B, T, C) - - -class Block(nn.Module): - - def __init__(self, - ctx_len: int, - embed_dim: int, - n_heads: int, - mlp_bias: bool, - attn_bias: bool, - resid_pdrop: bool, - attn_pdrop: bool, - gelu_use_approx: bool): - super().__init__() - self.ln1 = nn.LayerNorm(embed_dim) - self.ln2 = nn.LayerNorm(embed_dim) - - self.attn = MultiHeadSelfAttention(ctx_len=ctx_len, - embed_dim=embed_dim, - n_heads=n_heads, - attn_pdrop=attn_pdrop, - resid_pdrop=resid_pdrop, - attn_bias=attn_bias, - use_mask=True) - self.mlp = nn.Sequential( - nn.Linear(embed_dim, 4 * embed_dim, bias=mlp_bias), - GELU(gelu_use_approx), - nn.Linear(4 * embed_dim, embed_dim, bias=mlp_bias), - nn.Dropout(resid_pdrop), - ) - - def forward(self, x): - x = x + self.attn(self.ln1(x)) - x = x + self.mlp(self.ln2(x)) - return x - - def sample(self, x, layer_past=None): - attn, present = self.attn(self.ln1(x), use_cache=True, layer_past=layer_past) - x = x + attn - x = x + self.mlp(self.ln2(x)) - return x, present diff --git a/spaces/mmcquade11/codex-text-summarizer/README.md b/spaces/mmcquade11/codex-text-summarizer/README.md deleted file mode 100644 index 5680f2aa5b018812dbbc1abb8c9a1eac1fc2ac0d..0000000000000000000000000000000000000000 --- a/spaces/mmcquade11/codex-text-summarizer/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Codex Text Summarizer -emoji: 🦀 -colorFrom: yellow -colorTo: indigo -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
diff --git a/spaces/mms-meta/MMS/vits/transforms.py b/spaces/mms-meta/MMS/vits/transforms.py deleted file mode 100644 index 4793d67ca5a5630e0ffe0f9fb29445c949e64dae..0000000000000000000000000000000000000000 --- a/spaces/mms-meta/MMS/vits/transforms.py +++ /dev/null @@ -1,193 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = { - 'tails': tails, - 'tail_bound': tail_bound - } - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum( - inputs[..., None] >= bin_locations, - dim=-1 - ) - 1 - - -def unconstrained_rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails='linear', - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == 'linear': - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError('{} tails are not implemented.'.format(tails)) - - outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative - ) - - return outputs, logabsdet - -def rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0., right=1., bottom=0., top=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError('Input to a transform is not within its domain') - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError('Minimal bin width too large 
for the number of bins') - if min_bin_height * num_bins > 1.0: - raise ValueError('Minimal bin height too large for the number of bins') - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (((inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta) - + input_heights * (input_delta - input_derivatives))) - b = (input_heights * input_derivatives - - (inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta)) - c = - input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * (input_delta * theta.pow(2) - + input_derivatives * theta_one_minus_theta) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/mnauf/detect-bees/utils/aws/resume.py b/spaces/mnauf/detect-bees/utils/aws/resume.py deleted file mode 100644 index b21731c979a121ab8227280351b70d6062efd983..0000000000000000000000000000000000000000 --- 
a/spaces/mnauf/detect-bees/utils/aws/resume.py +++ /dev/null @@ -1,40 +0,0 @@ -# Resume all interrupted trainings in yolov5/ dir including DDP trainings -# Usage: $ python utils/aws/resume.py - -import os -import sys -from pathlib import Path - -import torch -import yaml - -FILE = Path(__file__).resolve() -ROOT = FILE.parents[2] # YOLOv5 root directory -if str(ROOT) not in sys.path: - sys.path.append(str(ROOT)) # add ROOT to PATH - -port = 0 # --master_port -path = Path('').resolve() -for last in path.rglob('*/**/last.pt'): - ckpt = torch.load(last) - if ckpt['optimizer'] is None: - continue - - # Load opt.yaml - with open(last.parent.parent / 'opt.yaml', errors='ignore') as f: - opt = yaml.safe_load(f) - - # Get device count - d = opt['device'].split(',') # devices - nd = len(d) # number of devices - ddp = nd > 1 or (nd == 0 and torch.cuda.device_count() > 1) # distributed data parallel - - if ddp: # multi-GPU - port += 1 - cmd = f'python -m torch.distributed.run --nproc_per_node {nd} --master_port {port} train.py --resume {last}' - else: # single-GPU - cmd = f'python train.py --resume {last}' - - cmd += ' > /dev/null 2>&1 &' # redirect output to dev/null and run in daemon thread - print(cmd) - os.system(cmd) diff --git a/spaces/moadams/rainbowRainClassificationAPP/rainbowrain_env/share/jupyter/labextensions/@jupyter-widgets/jupyterlab-manager/static/150.3e1e5adfd821b9b96340.js b/spaces/moadams/rainbowRainClassificationAPP/rainbowrain_env/share/jupyter/labextensions/@jupyter-widgets/jupyterlab-manager/static/150.3e1e5adfd821b9b96340.js deleted file mode 100644 index b81b605684da5373137dcdf31265f0f7e6e33b6d..0000000000000000000000000000000000000000 --- a/spaces/moadams/rainbowRainClassificationAPP/rainbowrain_env/share/jupyter/labextensions/@jupyter-widgets/jupyterlab-manager/static/150.3e1e5adfd821b9b96340.js +++ /dev/null @@ -1 +0,0 @@ -(self.webpackChunk_jupyter_widgets_jupyterlab_manager=self.webpackChunk_jupyter_widgets_jupyterlab_manager||[]).push([[150],{6110:(e,t,o)=>{"use strict";o.r(t),o.d(t,{CONTROL_COMM_PROTOCOL_VERSION:()=>g,CONTROL_COMM_TARGET:()=>f,CONTROL_COMM_TIMEOUT:()=>p,ManagerBase:()=>v,base64ToBuffer:()=>d,bufferToBase64:()=>m,bufferToHex:()=>a,hexToBuffer:()=>i,serialize_state:()=>b});var s=o(9930),n=o(1526),r=o(5766);const 
l=["00","01","02","03","04","05","06","07","08","09","0A","0B","0C","0D","0E","0F","10","11","12","13","14","15","16","17","18","19","1A","1B","1C","1D","1E","1F","20","21","22","23","24","25","26","27","28","29","2A","2B","2C","2D","2E","2F","30","31","32","33","34","35","36","37","38","39","3A","3B","3C","3D","3E","3F","40","41","42","43","44","45","46","47","48","49","4A","4B","4C","4D","4E","4F","50","51","52","53","54","55","56","57","58","59","5A","5B","5C","5D","5E","5F","60","61","62","63","64","65","66","67","68","69","6A","6B","6C","6D","6E","6F","70","71","72","73","74","75","76","77","78","79","7A","7B","7C","7D","7E","7F","80","81","82","83","84","85","86","87","88","89","8A","8B","8C","8D","8E","8F","90","91","92","93","94","95","96","97","98","99","9A","9B","9C","9D","9E","9F","A0","A1","A2","A3","A4","A5","A6","A7","A8","A9","AA","AB","AC","AD","AE","AF","B0","B1","B2","B3","B4","B5","B6","B7","B8","B9","BA","BB","BC","BD","BE","BF","C0","C1","C2","C3","C4","C5","C6","C7","C8","C9","CA","CB","CC","CD","CE","CF","D0","D1","D2","D3","D4","D5","D6","D7","D8","D9","DA","DB","DC","DD","DE","DF","E0","E1","E2","E3","E4","E5","E6","E7","E8","E9","EA","EB","EC","ED","EE","EF","F0","F1","F2","F3","F4","F5","F6","F7","F8","F9","FA","FB","FC","FD","FE","FF"];function a(e){const t=new Uint8Array(e),o=[];for(let e=0;e/g,">");for(navigator&&"Microsoft Internet Explorer"===navigator.appName&&(r=r.replace(/(%[^\n]*)\n/g,"$1
                \n"));t>e;)n[t]="",t--;return n[e]="@@"+s.length+"@@",o&&(r=o(r)),s.push(r),n}var u=o(4330),h=o.n(u);const w=s.PROTOCOL_VERSION.split(".",1)[0],f="jupyter.widget.control",g="1.0.0",p=4e3;class v{constructor(){this.comm_target_name="jupyter.widget",this._models=Object.create(null)}setViewOptions(e={}){return e}create_view(e,t={}){const o=(0,s.uuid)(),n=e.state_change=e.state_change.then((async()=>{const n=e.get("_view_name"),r=e.get("_view_module");try{const s=new(await this.loadViewClass(n,r,e.get("_view_module_version")))({model:e,options:this.setViewOptions(t)});return s.listenTo(e,"destroy",s.remove),await s.render(),s.once("remove",(()=>{e.views&&delete e.views[o]})),s}catch(o){console.error(`Could not create a view for model id ${e.model_id}`);const l=`Failed to create view for '${n}' from module '${r}' with model '${e.name}' from module '${e.module}'`,a=new(s.createErrorWidgetModel(o,l)),i=new s.ErrorWidgetView({model:a,options:this.setViewOptions(t)});return await i.render(),i}}));return e.views&&(e.views[o]=n),n}callbacks(e){return{}}async get_model(e){const t=this._models[e];if(void 0===t)throw new Error("widget model not found");return t}has_model(e){return void 0!==this._models[e]}handle_comm_open(e,t){const o=(t.metadata||{}).version||"";if(o.split(".",1)[0]!==w){const e=`Wrong widget protocol version: received protocol version '${o}', but was expecting major version '${w}'`;return console.error(e),Promise.reject(e)}const n=t.content.data,r=n.buffer_paths||[],l=t.buffers||[];return(0,s.put_buffers)(n.state,r,l),this.new_model({model_name:n.state._model_name,model_module:n.state._model_module,model_module_version:n.state._model_module_version,comm:e},n.state).catch((0,s.reject)("Could not create a model.",!0))}new_widget(e,t={}){let o;if(void 0===e.view_name||void 0===e.view_module||void 0===e.view_module_version)return Promise.reject("new_widget(...) must be given view information in the options.");o=e.comm?Promise.resolve(e.comm):this._create_comm(this.comm_target_name,e.model_id,{state:{_model_module:e.model_module,_model_module_version:e.model_module_version,_model_name:e.model_name,_view_module:e.view_module,_view_module_version:e.view_module_version,_view_name:e.view_name}},{version:s.PROTOCOL_VERSION});const n=Object.assign({},e);return o.then((e=>(n.comm=e,this.new_model(n,t).then((e=>(e.sync("create",e),e))))),(()=>(n.model_id||(n.model_id=(0,s.uuid)()),this.new_model(n,t))))}register_model(e,t){this._models[e]=t,t.then((t=>{t.once("comm:close",(()=>{delete this._models[e]}))}))}async new_model(e,t={}){var o,s;const n=null!==(o=e.model_id)&&void 0!==o?o:null===(s=e.comm)||void 0===s?void 0:s.comm_id;if(!n)throw new Error("Neither comm nor model_id provided in options object. 
At least one must exist.");e.model_id=n;const r=this._make_model(e,t);return this.register_model(n,r),await r}async _loadFromKernel(){let e,t;try{const o=await this._create_comm(f,(0,s.uuid)(),{},{version:g});await new Promise(((s,n)=>{o.on_msg((o=>{e=o.content.data,"update_states"===e.method?(t=(o.buffers||[]).map((e=>e instanceof DataView?e:new DataView(e instanceof ArrayBuffer?e:e.buffer))),s(null)):console.warn(`\n Unknown ${e.method} message on the Control channel\n `)})),o.on_close((()=>n("Control comm was closed too early"))),o.send({method:"request_states"},{}),setTimeout((()=>n("Control comm did not respond in time")),p)})),o.close()}catch(e){return console.warn('Failed to fetch ipywidgets through the "jupyter.widget.control" comm channel, fallback to fetching individual model state. Reason:',e),this._loadFromKernelModels()}const o=e.states,n={},r={};for(let o=0;o({widget_id:e,comm:this.has_model(e)?void 0:await this._create_comm("jupyter.widget",e)}))));await Promise.all(l.map((async({widget_id:e,comm:t})=>{const l=o[e];e in n&&(0,s.put_buffers)(l,n[e],r[e]);try{if(t)await this.new_model({model_name:l.model_name,model_module:l.model_module,model_module_version:l.model_module_version,model_id:e,comm:t},l.state);else{const t=await this.get_model(e),o=await t.constructor._deserialize_state(l.state,this);t.set_state(o)}}catch(e){console.error(e)}})))}async _loadFromKernelModels(){const e=await this._get_comm_info(),t=await Promise.all(Object.keys(e).map((async e=>{if(this.has_model(e))return;const t=await this._create_comm(this.comm_target_name,e);let o="";const r=new n.PromiseDelegate;return t.on_msg((e=>{if(e.parent_header.msg_id===o&&"comm_msg"===e.header.msg_type&&"update"===e.content.data.method){const o=e.content.data,n=o.buffer_paths||[],l=e.buffers||[];(0,s.put_buffers)(o.state,n,l),r.resolve({comm:t,msg:e})}})),o=t.send({method:"request_state"},this.callbacks(void 0)),r.promise})));await Promise.all(t.map((async e=>{if(!e)return;const t=e.msg.content;await this.new_model({model_name:t.data.state._model_name,model_module:t.data.state._model_module,model_module_version:t.data.state._model_module_version,comm:e.comm},t.data.state)})))}async _make_model(e,t={}){const o=e.model_id,n=this.loadModelClass(e.model_name,e.model_module,e.model_module_version);let r;const l=(e,t)=>new(s.createErrorWidgetModel(e,t));try{r=await n}catch(e){const t="Could not instantiate widget";return console.error(t),l(e,t)}if(!r){const t="Could not instantiate widget";return console.error(t),l(new Error(`Cannot find model module ${e.model_module}@${e.model_module_version}, ${e.model_name}`),t)}let a;try{const s=await r._deserialize_state(t,this);a=new r(s,{widget_manager:this,model_id:o,comm:e.comm})}catch(t){console.error(t),a=l(t,`Model class '${e.model_name}' from module '${e.model_module}' is loaded but can not be instantiated`)}return a.name=e.model_name,a.module=e.model_module,a}clear_state(){return(0,s.resolvePromisesDict)(this._models).then((e=>{Object.keys(e).forEach((t=>e[t].close())),this._models=Object.create(null)}))}get_state(e={}){const t=Object.keys(this._models).map((e=>this._models[e]));return Promise.all(t).then((t=>b(t,e)))}set_state(e){if(!(e.version_major&&e.version_major<=2))throw"Unsupported widget state format";const t=e.state;return this._get_comm_info().then((e=>Promise.all(Object.keys(t).map((o=>{const n={base64:d,hex:i},r=t[o],l=r.state;if(r.buffers){const e=r.buffers.map((e=>e.path)),t=r.buffers.map((e=>new 
DataView(n[e.encoding](e.data))));(0,s.put_buffers)(r.state,e,t)}if(this.has_model(o))return this.get_model(o).then((e=>e.constructor._deserialize_state(l||{},this).then((t=>(e.set_state(t),e)))));const a={model_id:o,model_name:r.model_name,model_module:r.model_module,model_module_version:r.model_module_version};return Object.prototype.hasOwnProperty.call(e,"model_id")?this._create_comm(this.comm_target_name,o).then((e=>(a.comm=e,this.new_model(a)))):this.new_model(a,l)})))))}disconnect(){Object.keys(this._models).forEach((e=>{this._models[e].then((e=>{e.comm_live=!1}))}))}resolveUrl(e){return Promise.resolve(e)}inline_sanitize(e){const t=function(e){const t=[];let o,s=null,n=null,r=null,l=0;/`/.test(e)?(e=e.replace(/~/g,"~T").replace(/(^|[^\\])(`+)([^\n]*?[^`\n])\2(?!`)/gm,(e=>e.replace(/\$/g,"~D"))),o=e=>e.replace(/~([TD])/g,((e,t)=>"T"===t?"~":"$"))):o=e=>e;let a=e.replace(/\r\n?/g,"\n").split(c);for(let e=1,i=a.length;e{let o=n[t];return"\\\\("===o.substr(0,3)&&"\\\\)"===o.substr(o.length-3)?o="\\("+o.substring(3,o.length-3)+"\\)":"\\\\["===o.substr(0,3)&&"\\\\]"===o.substr(o.length-3)&&(o="\\["+o.substring(3,o.length-3)+"\\]"),o}))}async loadModelClass(e,t,o){try{const s=this.loadClass(e,t,o);return await s,s}catch(o){console.error(o);const n=`Failed to load model class '${e}' from module '${t}'`;return s.createErrorWidgetModel(o,n)}}async loadViewClass(e,t,o){try{const s=this.loadClass(e,t,o);return await s,s}catch(o){console.error(o);const n=`Failed to load view class '${e}' from module '${t}'`;return s.createErrorWidgetView(o,n)}}filterExistingModelState(e){let t=e.state;return t=Object.keys(t).filter((e=>!this.has_model(e))).reduce(((e,o)=>(e[o]=t[o],e)),{}),Object.assign(Object.assign({},e),{state:t})}}function b(e,t={}){const o={};return e.forEach((e=>{const n=e.model_id,r=(0,s.remove_buffers)(e.serialize(e.get_state(t.drop_defaults))),l=r.buffers.map(((e,t)=>({data:m(e),path:r.buffer_paths[t],encoding:"base64"})));o[n]={model_name:e.name,model_module:e.module,model_module_version:e.get("_model_module_version"),state:r.state},l.length>0&&(o[n].buffers=l)})),{version_major:2,version_minor:0,state:o}}},6527:()=>{},6969:()=>{},2232:()=>{},4195:()=>{},3443:()=>{}}]); \ No newline at end of file diff --git a/spaces/mrm8488/PromptSource/templates.py b/spaces/mrm8488/PromptSource/templates.py deleted file mode 100644 index 52425f26663f0d120b6660a94bee98a085c7cccf..0000000000000000000000000000000000000000 --- a/spaces/mrm8488/PromptSource/templates.py +++ /dev/null @@ -1,515 +0,0 @@ -import os -import random -import uuid -from collections import Counter, defaultdict -from shutil import rmtree -from typing import Dict, List, Optional, Tuple - -import pandas as pd -import pkg_resources -import yaml -from jinja2 import BaseLoader, Environment, meta - - -# Truncation of jinja template variables -# 1710 = 300 words x 4.7 avg characters per word + 300 spaces -TEXT_VAR_LENGTH = 2048 - -# Local path to the folder containing the templates -TEMPLATES_FOLDER_PATH = pkg_resources.resource_filename(__name__, "templates") - -env = Environment(loader=BaseLoader) - -# Allow the python function zip() -env.globals.update(zip=zip) - -# These are users whose datasets should be included in the results returned by -# filter_english_datasets (regardless of their metadata) -INCLUDED_USERS = {"Zaid", "craffel"} - - -def highlight(input): - return "" + input + "" - - -def choice(choices): - return random.choice(choices) - - -def most_frequent(items): - """Returns the set of items which appear most frequently in 
the input""" - if not items: - return - item_counts = Counter(items).most_common() - max_freq = item_counts[0][1] - most_frequent_items = [c[0] for c in item_counts if c[1] == max_freq] - return most_frequent_items - - -env.filters["highlight"] = highlight -env.filters["choice"] = choice -env.filters["most_frequent"] = most_frequent - - -class Template(yaml.YAMLObject): - """ - A prompt template. - """ - - yaml_tag = "!Template" - - def __init__(self, name, jinja, reference, metadata=None, answer_choices=None): - """ - Creates a prompt template. - - A prompt template is expressed in Jinja. It is rendered using an example - from the corresponding Hugging Face datasets library (a dictionary). The - separator ||| should appear once to divide the template into prompt and - output. Generally, the prompt should provide information on the desired - behavior, e.g., text passage and instructions, and the output should be - a desired response. - - :param name: unique name (per dataset) for template - :param jinja: template expressed in Jinja - :param reference: string describing author or paper reference for template - :param metadata: a Metadata object with template annotations - :param answer_choices: Jinja expression for answer choices. Should produce - a ||| delimited string of choices that enumerates - the possible completions for templates that should - be evaluated as ranked completions. If None, then - the template is open-ended. This list is accessible - from within Jinja as the variable `answer_choices`. - """ - self.id = str(uuid.uuid4()) - self.name = name - self.jinja = jinja - self.reference = reference - self.metadata = metadata if metadata is not None else Template.Metadata() - self.answer_choices = answer_choices - - def get_id(self): - """ - Returns the id of the template - - :return: unique id for template - """ - return self.id - - def get_name(self): - """ - Returns the name of the template - - :return: unique (per dataset) name for template - """ - return self.name - - def get_reference(self): - """ - Returns the bibliographic reference (or author) for the template - - :return: reference as a string - """ - return self.reference - - def get_answer_choices_expr(self): - """ - Returns a Jinja expression for computing the answer choices from an example. 
- - :return: String, or None if no answer choices - """ - return self.answer_choices - - def get_answer_choices_list(self, example): - """ - Returns a list of answer choices for a given example - - :return: list of strings, or None if get_answer_choices_expr is None - """ - jinja = self.get_answer_choices_expr() - if jinja is None: - return None - - rtemplate = env.from_string(jinja) - protected_example = self._escape_pipe(example) - rendered_choices = rtemplate.render(**protected_example) - return [self._unescape_pipe(answer_choice.strip()) for answer_choice in rendered_choices.split("|||")] - - def get_fixed_answer_choices_list(self): - """ - Returns a list of answer choices that is static across examples, if possible - - :return: list of strings, or None if no static list exists - """ - jinja = self.get_answer_choices_expr() - if jinja is None: - return None - - parse = env.parse(jinja) - variables = meta.find_undeclared_variables(parse) - if len(variables) == 0: - rtemplate = env.from_string(jinja) - rendered_choices = rtemplate.render() - return [answer_choice.strip() for answer_choice in rendered_choices.split("|||")] - else: - return None - - def apply(self, example, truncate=True, highlight_variables=False): - """ - Creates a prompt by applying this template to an example - - :param example: the dataset example to create a prompt for - :param truncate: if True, example fields will be truncated to TEXT_VAR_LENGTH chars - :param highlight_variables: highlight the added variables - :return: tuple of 2 strings, for prompt and output - """ - jinja = self.jinja - - # Truncates the prompt if needed - if truncate: - trunc_command = ( - f" | string | truncate({TEXT_VAR_LENGTH}) }}}}" # Escaping curly braces requires doubling them - ) - jinja = jinja.replace("}}", trunc_command) - - # Highlights text that was substituted for variables, if requested - if highlight_variables: - jinja = jinja.replace("}}", " | highlight }}") - rtemplate = env.from_string(jinja) - - protected_example = self._escape_pipe(example) - - # Adds in answer_choices variable - if "answer_choices" in protected_example: - raise ValueError("Example contains the restricted key 'answer_choices'.") - - protected_example["answer_choices"] = self.get_answer_choices_list(example) - - # Renders the Jinja template - rendered_example = rtemplate.render(**protected_example) - - # Splits on the separator, and then replaces back any occurrences of the - # separator in the original example - return [self._unescape_pipe(part).strip() for part in rendered_example.split("|||")] - - pipe_protector = "3ed2dface8203c4c9dfb1a5dc58e41e0" - - @classmethod - def _escape_pipe(cls, example): - # Replaces any occurrences of the "|||" separator in the example, which - # which will be replaced back after splitting - protected_example = { - key: value.replace("|||", cls.pipe_protector) if isinstance(value, str) else value - for key, value in example.items() - } - return protected_example - - @classmethod - def _unescape_pipe(cls, string): - # replaces back any occurrences of the separator in a string - return string.replace(cls.pipe_protector, "|||") - - class Metadata(yaml.YAMLObject): - """ - Metadata for a prompt template. - """ - - yaml_tag = "!TemplateMetadata" - - def __init__( - self, - original_task: Optional[bool] = None, - choices_in_prompt: Optional[bool] = None, - metrics: Optional[List[str]] = None, - ): - """ - Initializes template metadata. - - In the following, trivial choices are defined as Yes/No, True/False, - etc. 
and nontrivial choices are other types of choices denoted in - the answer_choices field. - - :param original_task: If True, this prompt asks a model to perform the original task designed for - this dataset. - :param choices_in_prompt: If True, the answer choices are included in the templates such that models - see those choices in the input. Only applicable to classification tasks. - :param metrics: List of strings denoting metrics to use for evaluation - """ - self.original_task = original_task - self.choices_in_prompt = choices_in_prompt - self.metrics = metrics - - -class TemplateCollection: - """ - This helper class wraps the DatasetTemplates class - - Initialized the DatasetTemplates for all existing template folder - - Give access to each DatasetTemplates - - Provides aggregated counts over all DatasetTemplates - """ - - def __init__(self): - - # Dict of all the DatasetTemplates, key is the tuple (dataset_name, subset_name) - self.datasets_templates: Dict[(str, Optional[str]), DatasetTemplates] = self._collect_datasets() - - @property - def keys(self): - return list(self.datasets_templates.keys()) - - def __len__(self) -> int: - return len(self.datasets_templates) - - def remove(self, dataset_name: str, subset_name: Optional[str] = None) -> None: - del self.datasets_templates[dataset_name, subset_name] - - def _collect_datasets(self) -> Dict[Tuple[str, str], "DatasetTemplates"]: - """ - Initialize a DatasetTemplates object for each templates.yaml detected in the templates folder - - Returns: a dict with key=(dataset_name, subset_name) - """ - dataset_folders = os.listdir(TEMPLATES_FOLDER_PATH) - dataset_folders = [folder for folder in dataset_folders if not folder.startswith(".")] - - output = {} # format is {(dataset_name, subset_name): DatasetsTemplates} - for dataset in dataset_folders: - if dataset in INCLUDED_USERS: - for filename in os.listdir(os.path.join(TEMPLATES_FOLDER_PATH, dataset)): - output = {**output, **self._collect_dataset(dataset + "/" + filename)} - else: - output = {**output, **self._collect_dataset(dataset)} - return output - - def _collect_dataset(self, dataset): - output = {} # format is {(dataset_name, subset_name): DatasetsTemplates} - for filename in os.listdir(os.path.join(TEMPLATES_FOLDER_PATH, dataset)): - if filename.endswith(".yaml"): - # If there is no sub-folder, there is no subset for this dataset - output[(dataset, None)] = DatasetTemplates(dataset) - else: - # This is a subfolder, and its name corresponds to the subset name - output[(dataset, filename)] = DatasetTemplates(dataset_name=dataset, subset_name=filename) - return output - - def get_dataset(self, dataset_name: str, subset_name: Optional[str] = None) -> "DatasetTemplates": - """ - Return the DatasetTemplates object corresponding to the dataset name - - :param dataset_name: name of the dataset to get - :param subset_name: name of the subset - """ - # if the dataset does not exist, we add it - if dataset_name not in self.keys: - self.datasets_templates[(dataset_name, subset_name)] = DatasetTemplates(dataset_name, subset_name) - - return self.datasets_templates[(dataset_name, subset_name)] - - def get_templates_count(self) -> Dict: - """ - Return the overall number count over all datasets - - NB: we don't breakdown datasets into subsets for the count, i.e subsets count are included - into the dataset count - """ - - count_dict = defaultdict(int) - for k, v in self.datasets_templates.items(): - # Subsets count towards dataset count - count_dict[k[0]] += len(v) - # converting to regular dict - 
return dict(count_dict) - - -class DatasetTemplates: - """ - Class that wraps all templates for a specific dataset/subset and implements all the helper - functions necessary to read/write to the yaml file - """ - - TEMPLATES_KEY = "templates" - DATASET_KEY = "dataset" - SUBSET_KEY = "subset" - TEMPLATE_FILENAME = "templates.yaml" - - def __init__(self, dataset_name: str, subset_name: str = None): - self.dataset_name: str = dataset_name - self.subset_name: str = subset_name - # dictionary is keyed by template name. - self.templates: Dict = self.read_from_file() - - # Mapping from template name to template id - self.name_to_id_mapping = {} - self.sync_mapping() - - def sync_mapping(self) -> None: - """ - Re-compute the name_to_id_mapping to ensure it is in sync with self.templates - """ - self.name_to_id_mapping = {template.name: template.id for template in self.templates.values()} - - @property - def all_template_names(self) -> List[str]: - """ - Sorted list of all templates names for this dataset - """ - return sorted([template.name for template in self.templates.values()]) - - @property - def folder_path(self) -> str: - if self.subset_name: - return os.path.join(TEMPLATES_FOLDER_PATH, self.dataset_name, self.subset_name) - else: - return os.path.join(TEMPLATES_FOLDER_PATH, self.dataset_name) - - @property - def yaml_path(self) -> str: - return os.path.join(self.folder_path, self.TEMPLATE_FILENAME) - - def format_for_dump(self) -> Dict: - """ - Create a formatted dictionary for the class attributes - """ - formatted_dict = {self.DATASET_KEY: self.dataset_name, self.TEMPLATES_KEY: self.templates} - if self.subset_name: - formatted_dict[self.SUBSET_KEY] = self.subset_name - return formatted_dict - - def read_from_file(self) -> Dict: - """ - Reads a file containing a prompt collection. - """ - - if not os.path.exists(self.yaml_path): - return {} - yaml_dict = yaml.load(open(self.yaml_path, "r"), Loader=yaml.FullLoader) - return yaml_dict[self.TEMPLATES_KEY] - - def write_to_file(self) -> None: - """ - Writes to a file with the current prompt collection. 
- """ - # Sync the mapping - self.sync_mapping() - - # We only create the folder if a template is written - if not os.path.exists(self.folder_path): - os.makedirs(self.folder_path) - yaml.dump(self.format_for_dump(), open(self.yaml_path, "w")) - - def add_template(self, template: "Template") -> None: - """ - Adds a new template for the dataset - - :param template: template - """ - self.templates[template.get_id()] = template - - self.write_to_file() - - def remove_template(self, template_name: str) -> None: - """ - Deletes a template - - :param template_name: name of template to remove - """ - - # Even if we have an ID, we want to check for duplicate names - if template_name not in self.all_template_names: - raise ValueError(f"No template with name {template_name} for dataset {self.dataset_name} exists.") - - del self.templates[self.name_to_id_mapping[template_name]] - - if len(self.templates) == 0: - # There is no remaining template, we can remove the entire folder - self.delete_folder() - else: - # We just update the file - self.write_to_file() - - def update_template( - self, - current_template_name: str, - new_template_name: str, - jinja: str, - reference: str, - metadata: Template.Metadata, - answer_choices: str, - ) -> None: - """ - Updates a pre-existing template and writes changes - - :param current_template_name: current name of the template stored in self.templates - :param new_template_name: new name for the template - :param jinja: new jinja entry - :param reference: new reference entry - :param metadata: a Metadata object with template annotations - :param answer_choices: new answer_choices string - """ - template_id = self.name_to_id_mapping[current_template_name] - self.templates[template_id].name = new_template_name - self.templates[template_id].jinja = jinja - self.templates[template_id].reference = reference - self.templates[template_id].metadata = metadata - self.templates[template_id].answer_choices = answer_choices - - self.write_to_file() - - def delete_folder(self) -> None: - """ - Delete the folder corresponding to self.folder_path - """ - self.sync_mapping() - - rmtree(self.folder_path) - - # If it is a subset, we have to check whether to remove the dataset folder - if self.subset_name: - # have to check for other folders - base_dataset_folder = os.path.join(TEMPLATES_FOLDER_PATH, self.dataset_name) - if len(os.listdir(base_dataset_folder)) == 0: - rmtree(base_dataset_folder) - - def __getitem__(self, template_key: str) -> "Template": - return self.templates[self.name_to_id_mapping[template_key]] - - def __len__(self) -> int: - return len(self.templates) - - -def get_templates_data_frame(): - """ - Gathers all template information into a Pandas DataFrame. 
- - :return: Pandas DataFrame - """ - data = { - "id": [], - "dataset": [], - "subset": [], - "name": [], - "reference": [], - "original_task": [], - "choices_in_prompt": [], - "metrics": [], - "answer_choices": [], - "jinja": [], - } - - template_collection = TemplateCollection() - - for key in template_collection.keys: - templates = template_collection.get_dataset(key[0], key[1]) - for template_name in templates.all_template_names: - template = templates[template_name] - data["id"].append(template.get_id()) - data["dataset"].append(key[0]) - data["subset"].append(key[1]) - data["name"].append(template.get_name()) - data["reference"].append(template.get_reference()) - data["original_task"].append(template.metadata.original_task) - data["choices_in_prompt"].append(template.metadata.choices_in_prompt) - data["metrics"].append(template.metadata.metrics) - data["answer_choices"].append(template.get_answer_choices_expr()) - data["jinja"].append(template.jinja) - - return pd.DataFrame(data) diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/criss/README.md b/spaces/mshukor/UnIVAL/fairseq/examples/criss/README.md deleted file mode 100644 index 4689ed7c10497a5100b28fe6d6801a7c089da569..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/examples/criss/README.md +++ /dev/null @@ -1,61 +0,0 @@ -# Cross-lingual Retrieval for Iterative Self-Supervised Training - -https://arxiv.org/pdf/2006.09526.pdf - -## Introduction - -CRISS is a multilingual sequence-to-sequnce pretraining method where mining and training processes are applied iteratively, improving cross-lingual alignment and translation ability at the same time. - -## Requirements: - -* faiss: https://github.com/facebookresearch/faiss -* mosesdecoder: https://github.com/moses-smt/mosesdecoder -* flores: https://github.com/facebookresearch/flores -* LASER: https://github.com/facebookresearch/LASER - -## Unsupervised Machine Translation -##### 1. Download and decompress CRISS checkpoints -``` -cd examples/criss -wget https://dl.fbaipublicfiles.com/criss/criss_3rd_checkpoints.tar.gz -tar -xf criss_checkpoints.tar.gz -``` -##### 2. Download and preprocess Flores test dataset -Make sure to run all scripts from examples/criss directory -``` -bash download_and_preprocess_flores_test.sh -``` - -##### 3. Run Evaluation on Sinhala-English -``` -bash unsupervised_mt/eval.sh -``` - -## Sentence Retrieval -##### 1. Download and preprocess Tatoeba dataset -``` -bash download_and_preprocess_tatoeba.sh -``` - -##### 2. Run Sentence Retrieval on Tatoeba Kazakh-English -``` -bash sentence_retrieval/sentence_retrieval_tatoeba.sh -``` - -## Mining -##### 1. Install faiss -Follow instructions on https://github.com/facebookresearch/faiss/blob/master/INSTALL.md -##### 2. Mine pseudo-parallel data between Kazakh and English -``` -bash mining/mine_example.sh -``` - -## Citation -```bibtex -@article{tran2020cross, - title={Cross-lingual retrieval for iterative self-supervised training}, - author={Tran, Chau and Tang, Yuqing and Li, Xian and Gu, Jiatao}, - journal={arXiv preprint arXiv:2006.09526}, - year={2020} -} -``` diff --git a/spaces/mshukor/UnIVAL/fairseq/fairseq/models/hubert/hubert.py b/spaces/mshukor/UnIVAL/fairseq/fairseq/models/hubert/hubert.py deleted file mode 100644 index 232a5e402a146023e5c93f3c2574ecec98faf9d5..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/fairseq/models/hubert/hubert.py +++ /dev/null @@ -1,563 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -from typing import Dict, List, Optional, Tuple - -import numpy as np - -import torch -import torch.nn as nn -from dataclasses import dataclass, field -from fairseq import utils -from fairseq.data.data_utils import compute_mask_indices -from fairseq.data.dictionary import Dictionary -from fairseq.dataclass import ChoiceEnum, FairseqDataclass -from fairseq.models import BaseFairseqModel, register_model -from fairseq.models.wav2vec.wav2vec2 import ( - ConvFeatureExtractionModel, - TransformerEncoder, -) -from fairseq.modules import GradMultiply, LayerNorm -from fairseq.tasks.hubert_pretraining import ( - HubertPretrainingConfig, - HubertPretrainingTask, -) -from omegaconf import II - -logger = logging.getLogger(__name__) - -EXTRACTOR_MODE_CHOICES = ChoiceEnum(["default", "layer_norm"]) -MASKING_DISTRIBUTION_CHOICES = ChoiceEnum( - ["static", "uniform", "normal", "poisson"] -) - - -@dataclass -class HubertConfig(FairseqDataclass): - label_rate: int = II("task.label_rate") - - extractor_mode: EXTRACTOR_MODE_CHOICES = field( - default="default", - metadata={ - "help": "mode for feature extractor. default has a single group " - "norm with d groups in the first conv block, whereas layer_norm " - "has layer norms in every block (meant to use with normalize=True)" - }, - ) - encoder_layers: int = field( - default=12, metadata={"help": "num encoder layers in the transformer"} - ) - encoder_embed_dim: int = field( - default=768, metadata={"help": "encoder embedding dimension"} - ) - encoder_ffn_embed_dim: int = field( - default=3072, metadata={"help": "encoder embedding dimension for FFN"} - ) - encoder_attention_heads: int = field( - default=12, metadata={"help": "num encoder attention heads"} - ) - activation_fn: ChoiceEnum(utils.get_available_activation_fns()) = field( - default="gelu", metadata={"help": "activation function to use"} - ) - - # dropouts - dropout: float = field( - default=0.1, - metadata={"help": "dropout probability for the transformer"}, - ) - attention_dropout: float = field( - default=0.1, - metadata={"help": "dropout probability for attention weights"}, - ) - activation_dropout: float = field( - default=0.0, - metadata={"help": "dropout probability after activation in FFN"}, - ) - encoder_layerdrop: float = field( - default=0.0, - metadata={"help": "probability of dropping a tarnsformer layer"}, - ) - dropout_input: float = field( - default=0.0, - metadata={"help": "dropout to apply to the input (after feat extr)"}, - ) - dropout_features: float = field( - default=0.0, - metadata={ - "help": "dropout to apply to the features (after feat extr)" - }, - ) - - final_dim: int = field( - default=0, - metadata={ - "help": "project final representations and targets to this many " - "dimensions. 
set to encoder_embed_dim is <= 0" - }, - ) - untie_final_proj: bool = field( - default=False, - metadata={"help": "use separate projection for each target"}, - ) - layer_norm_first: bool = field( - default=False, - metadata={"help": "apply layernorm first in the transformer"}, - ) - conv_feature_layers: str = field( - default="[(512,10,5)] + [(512,3,2)] * 4 + [(512,2,2)] * 2", - metadata={ - "help": "string describing convolutional feature extraction " - "layers in form of a python list that contains " - "[(dim, kernel_size, stride), ...]" - }, - ) - conv_bias: bool = field( - default=False, metadata={"help": "include bias in conv encoder"} - ) - logit_temp: float = field( - default=0.1, metadata={"help": "temperature to divide logits by"} - ) - target_glu: bool = field( - default=False, metadata={"help": "adds projection + glu to targets"} - ) - feature_grad_mult: float = field( - default=1.0, - metadata={"help": "multiply feature extractor var grads by this"}, - ) - - # masking - mask_length: int = field(default=10, metadata={"help": "mask length"}) - mask_prob: float = field( - default=0.65, - metadata={"help": "probability of replacing a token with mask"}, - ) - mask_selection: MASKING_DISTRIBUTION_CHOICES = field( - default="static", metadata={"help": "how to choose mask length"} - ) - mask_other: float = field( - default=0, - metadata={ - "help": "secondary mask argument " - "(used for more complex distributions), " - "see help in compute_mask_indicesh" - }, - ) - no_mask_overlap: bool = field( - default=False, metadata={"help": "whether to allow masks to overlap"} - ) - mask_min_space: int = field( - default=1, - metadata={ - "help": "min space between spans (if no overlap is enabled)" - }, - ) - - # channel masking - mask_channel_length: int = field( - default=10, - metadata={"help": "length of the mask for features (channels)"}, - ) - mask_channel_prob: float = field( - default=0.0, - metadata={"help": "probability of replacing a feature with 0"}, - ) - mask_channel_selection: MASKING_DISTRIBUTION_CHOICES = field( - default="static", - metadata={"help": "how to choose mask length for channel masking"}, - ) - mask_channel_other: float = field( - default=0, - metadata={ - "help": "secondary mask argument " - "(used for more complex distributions), " - "see help in compute_mask_indicesh" - }, - ) - no_mask_channel_overlap: bool = field( - default=False, - metadata={"help": "whether to allow channel masks to overlap"}, - ) - mask_channel_min_space: int = field( - default=1, - metadata={ - "help": "min space between spans (if no overlap is enabled)" - }, - ) - - # positional embeddings - conv_pos: int = field( - default=128, - metadata={ - "help": "number of filters for convolutional positional embeddings" - }, - ) - conv_pos_groups: int = field( - default=16, - metadata={ - "help": "number of groups for convolutional positional embedding" - }, - ) - - latent_temp: Tuple[float, float, float] = field( - default=(2, 0.5, 0.999995), - metadata={"help": "legacy (to be removed)"}, - ) - - # loss computation - skip_masked: bool = field( - default=False, - metadata={"help": "skip computing losses over masked frames"}, - ) - skip_nomask: bool = field( - default=False, - metadata={"help": "skip computing losses over unmasked frames"}, - ) - - -@register_model("hubert", dataclass=HubertConfig) -class HubertModel(BaseFairseqModel): - def __init__( - self, - cfg: HubertConfig, - task_cfg: HubertPretrainingConfig, - dictionaries: List[Dictionary], - ) -> None: - super().__init__() - 
logger.info(f"HubertModel Config: {cfg}") - - feature_enc_layers = eval(cfg.conv_feature_layers) # noqa - self.embed = feature_enc_layers[-1][0] - - self.feature_extractor = ConvFeatureExtractionModel( - conv_layers=feature_enc_layers, - dropout=0.0, - mode=cfg.extractor_mode, - conv_bias=cfg.conv_bias, - ) - feature_ds_rate = np.prod([s for _, _, s in feature_enc_layers]) - self.feat2tar_ratio = ( - cfg.label_rate * feature_ds_rate / task_cfg.sample_rate - ) - - self.post_extract_proj = ( - nn.Linear(self.embed, cfg.encoder_embed_dim) - if self.embed != cfg.encoder_embed_dim - else None - ) - - self.mask_prob = cfg.mask_prob - self.mask_selection = cfg.mask_selection - self.mask_other = cfg.mask_other - self.mask_length = cfg.mask_length - self.no_mask_overlap = cfg.no_mask_overlap - self.mask_min_space = cfg.mask_min_space - - self.mask_channel_prob = cfg.mask_channel_prob - self.mask_channel_selection = cfg.mask_channel_selection - self.mask_channel_other = cfg.mask_channel_other - self.mask_channel_length = cfg.mask_channel_length - self.no_mask_channel_overlap = cfg.no_mask_channel_overlap - self.mask_channel_min_space = cfg.mask_channel_min_space - - self.dropout_input = nn.Dropout(cfg.dropout_input) - self.dropout_features = nn.Dropout(cfg.dropout_features) - - self.feature_grad_mult = cfg.feature_grad_mult - self.logit_temp = cfg.logit_temp - self.skip_masked = cfg.skip_masked - self.skip_nomask = cfg.skip_nomask - - final_dim = ( - cfg.final_dim if cfg.final_dim > 0 else cfg.encoder_embed_dim - ) - - self.mask_emb = nn.Parameter( - torch.FloatTensor(cfg.encoder_embed_dim).uniform_() - ) - - self.encoder = TransformerEncoder(cfg) - self.layer_norm = LayerNorm(self.embed) - - self.target_glu = None - if cfg.target_glu: - self.target_glu = nn.Sequential( - nn.Linear(final_dim, final_dim * 2), nn.GLU() - ) - - self.untie_final_proj = cfg.untie_final_proj - if self.untie_final_proj: - self.final_proj = nn.Linear( - cfg.encoder_embed_dim, final_dim * len(dictionaries) - ) - else: - self.final_proj = nn.Linear(cfg.encoder_embed_dim, final_dim) - - # modules below are not needed during fine-tuning - if any([d is None for d in dictionaries]): - logger.info( - "cannot find dictionary. 
assume will be used for fine-tuning" - ) - else: - self.num_classes = [len(d) for d in dictionaries] - self.label_embs_concat = nn.Parameter( - torch.FloatTensor(sum(self.num_classes), final_dim) - ) - nn.init.uniform_(self.label_embs_concat) - - def upgrade_state_dict_named(self, state_dict, name): - """Upgrade a (possibly old) state dict for new versions of fairseq.""" - - super().upgrade_state_dict_named(state_dict, name) - return state_dict - - @classmethod - def build_model(cls, cfg: HubertConfig, task: HubertPretrainingTask): - """Build a new model instance.""" - - model = HubertModel(cfg, task.cfg, task.dictionaries) - return model - - def apply_mask(self, x, padding_mask, target_list): - B, T, C = x.shape - if self.mask_prob > 0: - mask_indices = compute_mask_indices( - (B, T), - padding_mask, - self.mask_prob, - self.mask_length, - self.mask_selection, - self.mask_other, - min_masks=2, - no_overlap=self.no_mask_overlap, - min_space=self.mask_min_space, - ) - mask_indices = torch.from_numpy(mask_indices).to(x.device) - x[mask_indices] = self.mask_emb - else: - mask_indices = None - - if self.mask_channel_prob > 0: - mask_channel_indices = compute_mask_indices( - (B, C), - None, - self.mask_channel_prob, - self.mask_channel_length, - self.mask_channel_selection, - self.mask_channel_other, - no_overlap=self.no_mask_channel_overlap, - min_space=self.mask_channel_min_space, - ) - mask_channel_indices = ( - torch.from_numpy(mask_channel_indices) - .to(x.device) - .unsqueeze(1) - .expand(-1, T, -1) - ) - x[mask_channel_indices] = 0 - - return x, mask_indices - - def compute_nce(self, x, pos, negs): - neg_is_pos = (pos == negs).all(-1) - pos = pos.unsqueeze(0) - targets = torch.cat([pos, negs], dim=0) - - logits = torch.cosine_similarity( - x.float(), targets.float(), dim=-1 - ).type_as(x) - logits /= self.logit_temp - if neg_is_pos.any(): - logits[1:][neg_is_pos] = float("-inf") - logits = logits.transpose(0, 1) # (num_x, num_cls+1) - return logits - - def forward_features(self, source: torch.Tensor) -> torch.Tensor: - if self.feature_grad_mult > 0: - features = self.feature_extractor(source) - if self.feature_grad_mult != 1.0: - features = GradMultiply.apply(features, self.feature_grad_mult) - else: - with torch.no_grad(): - features = self.feature_extractor(source) - return features - - def forward_targets( - self, features: torch.Tensor, target_list: List[torch.Tensor], - ) -> Tuple[torch.Tensor, torch.Tensor]: - # Trim features to ensure labels exist and then get aligned labels - feat_tsz = features.size(2) - targ_tsz = min([t.size(1) for t in target_list]) - if self.feat2tar_ratio * feat_tsz > targ_tsz: - feat_tsz = int(targ_tsz / self.feat2tar_ratio) - features = features[..., :feat_tsz] - target_inds = torch.arange(feat_tsz).float() * self.feat2tar_ratio - target_list = [t[:, target_inds.long()] for t in target_list] - return features, target_list - - def forward_padding_mask( - self, features: torch.Tensor, padding_mask: torch.Tensor, - ) -> torch.Tensor: - extra = padding_mask.size(1) % features.size(1) - if extra > 0: - padding_mask = padding_mask[:, :-extra] - padding_mask = padding_mask.view( - padding_mask.size(0), features.size(1), -1 - ) - padding_mask = padding_mask.all(-1) - return padding_mask - - def forward( - self, - source: torch.Tensor, - target_list: Optional[List[torch.Tensor]] = None, - padding_mask: Optional[torch.Tensor] = None, - mask: bool = True, - features_only: bool = False, - output_layer: Optional[int] = None, - ) -> Dict[str, torch.Tensor]: - """output 
layer is 1-based""" - features = self.forward_features(source) - if target_list is not None: - features, target_list = self.forward_targets(features, target_list) - - features_pen = features.float().pow(2).mean() - - features = features.transpose(1, 2) - features = self.layer_norm(features) - unmasked_features = features.clone() - - if padding_mask is not None: - padding_mask = self.forward_padding_mask(features, padding_mask) - - if self.post_extract_proj is not None: - features = self.post_extract_proj(features) - - features = self.dropout_input(features) - unmasked_features = self.dropout_features(unmasked_features) - - if mask: - x, mask_indices = self.apply_mask( - features, padding_mask, target_list - ) - else: - x = features - mask_indices = None - - # feature: (B, T, D), float - # target: (B, T), long - # x: (B, T, D), float - # padding_mask: (B, T), bool - # mask_indices: (B, T), bool - x, _ = self.encoder( - x, - padding_mask=padding_mask, - layer=None if output_layer is None else output_layer - 1 - ) - - if features_only: - return {"x": x, "padding_mask": padding_mask, "features": features} - - def compute_pred(proj_x, target, label_embs): - # compute logits for the i-th label set - y = torch.index_select(label_embs, 0, target.long()) - negs = label_embs.unsqueeze(1).expand(-1, proj_x.size(0), -1) - if self.target_glu: - y = self.target_glu(y) - negs = self.target_glu(negs) - # proj_x: (S, D) - # y: (S, D) - # negs: (Neg, S, D) - return self.compute_nce(proj_x, y, negs) - - label_embs_list = self.label_embs_concat.split(self.num_classes, 0) - - if not self.skip_masked: - masked_indices = torch.logical_and(~padding_mask, mask_indices) - proj_x_m = self.final_proj(x[masked_indices]) - if self.untie_final_proj: - proj_x_m_list = proj_x_m.chunk(len(target_list), dim=-1) - else: - proj_x_m_list = [proj_x_m for _ in range(len(target_list))] - logit_m_list = [ - compute_pred(proj_x_m, t[masked_indices], label_embs_list[i]) - for i, (proj_x_m, t) in enumerate( - zip(proj_x_m_list, target_list) - ) - ] - else: - logit_m_list = [None for _ in target_list] - - if not self.skip_nomask: - nomask_indices = torch.logical_and(~padding_mask, ~mask_indices) - proj_x_u = self.final_proj(x[nomask_indices]) - if self.untie_final_proj: - proj_x_u_list = proj_x_u.chunk(len(target_list), dim=-1) - else: - proj_x_u_list = [proj_x_u for _ in range(len(target_list))] - - logit_u_list = [ - compute_pred(proj_x_u, t[nomask_indices], label_embs_list[i]) - for i, (proj_x_u, t) in enumerate( - zip(proj_x_u_list, target_list) - ) - ] - else: - logit_u_list = [None for _ in target_list] - - result = { - "logit_m_list": logit_m_list, - "logit_u_list": logit_u_list, - "padding_mask": padding_mask, - "features_pen": features_pen, - } - return result - - def extract_features( - self, - source: torch.Tensor, - padding_mask: Optional[torch.Tensor] = None, - mask: bool = False, - ret_conv: bool = False, - output_layer: Optional[int] = None, - ) -> Tuple[torch.Tensor, torch.Tensor]: - res = self.forward( - source, - padding_mask=padding_mask, - mask=mask, - features_only=True, - output_layer=output_layer, - ) - feature = res["features"] if ret_conv else res["x"] - return feature, res["padding_mask"] - - def get_logits(self, net_output, is_masked=True): - if is_masked: - logits_list = net_output["logit_m_list"] - else: - logits_list = net_output["logit_u_list"] - logits_list = [x.float() for x in logits_list if x is not None] - return logits_list - - def get_targets(self, net_output, is_masked=True): - logits_list = 
self.get_logits(net_output, is_masked) - targets_list = [ - x.new_zeros(x.size(0), dtype=torch.long) for x in logits_list - ] - return targets_list - - def get_extra_losses(self, net_output): - extra_losses = [] - names = [] - - if "features_pen" in net_output: - extra_losses.append(net_output["features_pen"]) - names.append("features_pen") - - return extra_losses, names - - def remove_pretraining_modules(self): - self.target_glu = None - self.final_proj = None diff --git a/spaces/mshukor/UnIVAL/models/unival/__init__.py b/spaces/mshukor/UnIVAL/models/unival/__init__.py deleted file mode 100644 index 8f78d35df16bef627995c32f287e55c27382ec93..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/models/unival/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .unival import UnIVALModel, unival_base_architecture, unival_large_architecture \ No newline at end of file diff --git a/spaces/mueller-franzes/medfusion-app/medical_diffusion/external/diffusers/resnet.py b/spaces/mueller-franzes/medfusion-app/medical_diffusion/external/diffusers/resnet.py deleted file mode 100644 index 97f3c02a8ccf434e9f7788ba503d64e0395146b0..0000000000000000000000000000000000000000 --- a/spaces/mueller-franzes/medfusion-app/medical_diffusion/external/diffusers/resnet.py +++ /dev/null @@ -1,479 +0,0 @@ -from functools import partial - -import torch -import torch.nn as nn -import torch.nn.functional as F - - -class Upsample2D(nn.Module): - """ - An upsampling layer with an optional convolution. - - :param channels: channels in the inputs and outputs. :param use_conv: a bool determining if a convolution is - applied. :param dims: determines if the signal is 1D, 2D, or 3D. If 3D, then - upsampling occurs in the inner-two dimensions. - """ - - def __init__(self, channels, use_conv=False, use_conv_transpose=False, out_channels=None, name="conv"): - super().__init__() - self.channels = channels - self.out_channels = out_channels or channels - self.use_conv = use_conv - self.use_conv_transpose = use_conv_transpose - self.name = name - - conv = None - if use_conv_transpose: - conv = nn.ConvTranspose2d(channels, self.out_channels, 4, 2, 1) - elif use_conv: - conv = nn.Conv2d(self.channels, self.out_channels, 3, padding=1) - - # TODO(Suraj, Patrick) - clean up after weight dicts are correctly renamed - if name == "conv": - self.conv = conv - else: - self.Conv2d_0 = conv - - def forward(self, x): - assert x.shape[1] == self.channels - if self.use_conv_transpose: - return self.conv(x) - - x = F.interpolate(x, scale_factor=2.0, mode="nearest") - - # TODO(Suraj, Patrick) - clean up after weight dicts are correctly renamed - if self.use_conv: - if self.name == "conv": - x = self.conv(x) - else: - x = self.Conv2d_0(x) - - return x - - -class Downsample2D(nn.Module): - """ - A downsampling layer with an optional convolution. - - :param channels: channels in the inputs and outputs. :param use_conv: a bool determining if a convolution is - applied. :param dims: determines if the signal is 1D, 2D, or 3D. If 3D, then - downsampling occurs in the inner-two dimensions. 
- """ - - def __init__(self, channels, use_conv=False, out_channels=None, padding=1, name="conv"): - super().__init__() - self.channels = channels - self.out_channels = out_channels or channels - self.use_conv = use_conv - self.padding = padding - stride = 2 - self.name = name - - if use_conv: - conv = nn.Conv2d(self.channels, self.out_channels, 3, stride=stride, padding=padding) - else: - assert self.channels == self.out_channels - conv = nn.AvgPool2d(kernel_size=stride, stride=stride) - - # TODO(Suraj, Patrick) - clean up after weight dicts are correctly renamed - if name == "conv": - self.Conv2d_0 = conv - self.conv = conv - elif name == "Conv2d_0": - self.conv = conv - else: - self.conv = conv - - def forward(self, x): - assert x.shape[1] == self.channels - if self.use_conv and self.padding == 0: - pad = (0, 1, 0, 1) - x = F.pad(x, pad, mode="constant", value=0) - - assert x.shape[1] == self.channels - x = self.conv(x) - - return x - - -class FirUpsample2D(nn.Module): - def __init__(self, channels=None, out_channels=None, use_conv=False, fir_kernel=(1, 3, 3, 1)): - super().__init__() - out_channels = out_channels if out_channels else channels - if use_conv: - self.Conv2d_0 = nn.Conv2d(channels, out_channels, kernel_size=3, stride=1, padding=1) - self.use_conv = use_conv - self.fir_kernel = fir_kernel - self.out_channels = out_channels - - def _upsample_2d(self, x, weight=None, kernel=None, factor=2, gain=1): - """Fused `upsample_2d()` followed by `Conv2d()`. - - Args: - Padding is performed only once at the beginning, not between the operations. The fused op is considerably more - efficient than performing the same calculation using standard TensorFlow ops. It supports gradients of arbitrary: - order. - x: Input tensor of the shape `[N, C, H, W]` or `[N, H, W, - C]`. - weight: Weight tensor of the shape `[filterH, filterW, inChannels, - outChannels]`. Grouped convolution can be performed by `inChannels = x.shape[0] // numGroups`. - kernel: FIR filter of the shape `[firH, firW]` or `[firN]` - (separable). The default is `[1] * factor`, which corresponds to nearest-neighbor upsampling. - factor: Integer upsampling factor (default: 2). gain: Scaling factor for signal magnitude (default: 1.0). - - Returns: - Tensor of the shape `[N, C, H * factor, W * factor]` or `[N, H * factor, W * factor, C]`, and same datatype as - `x`. - """ - - assert isinstance(factor, int) and factor >= 1 - - # Setup filter kernel. - if kernel is None: - kernel = [1] * factor - - # setup kernel - kernel = torch.tensor(kernel, dtype=torch.float32) - if kernel.ndim == 1: - kernel = torch.outer(kernel, kernel) - kernel /= torch.sum(kernel) - - kernel = kernel * (gain * (factor**2)) - - if self.use_conv: - convH = weight.shape[2] - convW = weight.shape[3] - inC = weight.shape[1] - - p = (kernel.shape[0] - factor) - (convW - 1) - - stride = (factor, factor) - # Determine data dimensions. - output_shape = ((x.shape[2] - 1) * factor + convH, (x.shape[3] - 1) * factor + convW) - output_padding = ( - output_shape[0] - (x.shape[2] - 1) * stride[0] - convH, - output_shape[1] - (x.shape[3] - 1) * stride[1] - convW, - ) - assert output_padding[0] >= 0 and output_padding[1] >= 0 - inC = weight.shape[1] - num_groups = x.shape[1] // inC - - # Transpose weights. 
- weight = torch.reshape(weight, (num_groups, -1, inC, convH, convW)) - weight = torch.flip(weight, dims=[3, 4]).permute(0, 2, 1, 3, 4) - weight = torch.reshape(weight, (num_groups * inC, -1, convH, convW)) - - x = F.conv_transpose2d(x, weight, stride=stride, output_padding=output_padding, padding=0) - - x = upfirdn2d_native(x, torch.tensor(kernel, device=x.device), pad=((p + 1) // 2 + factor - 1, p // 2 + 1)) - else: - p = kernel.shape[0] - factor - x = upfirdn2d_native( - x, torch.tensor(kernel, device=x.device), up=factor, pad=((p + 1) // 2 + factor - 1, p // 2) - ) - - return x - - def forward(self, x): - if self.use_conv: - height = self._upsample_2d(x, self.Conv2d_0.weight, kernel=self.fir_kernel) - height = height + self.Conv2d_0.bias.reshape(1, -1, 1, 1) - else: - height = self._upsample_2d(x, kernel=self.fir_kernel, factor=2) - - return height - - -class FirDownsample2D(nn.Module): - def __init__(self, channels=None, out_channels=None, use_conv=False, fir_kernel=(1, 3, 3, 1)): - super().__init__() - out_channels = out_channels if out_channels else channels - if use_conv: - self.Conv2d_0 = nn.Conv2d(channels, out_channels, kernel_size=3, stride=1, padding=1) - self.fir_kernel = fir_kernel - self.use_conv = use_conv - self.out_channels = out_channels - - def _downsample_2d(self, x, weight=None, kernel=None, factor=2, gain=1): - """Fused `Conv2d()` followed by `downsample_2d()`. - - Args: - Padding is performed only once at the beginning, not between the operations. The fused op is considerably more - efficient than performing the same calculation using standard TensorFlow ops. It supports gradients of arbitrary: - order. - x: Input tensor of the shape `[N, C, H, W]` or `[N, H, W, C]`. w: Weight tensor of the shape `[filterH, - filterW, inChannels, outChannels]`. Grouped convolution can be performed by `inChannels = x.shape[0] // - numGroups`. k: FIR filter of the shape `[firH, firW]` or `[firN]` (separable). The default is `[1] * - factor`, which corresponds to average pooling. factor: Integer downsampling factor (default: 2). gain: - Scaling factor for signal magnitude (default: 1.0). - - Returns: - Tensor of the shape `[N, C, H // factor, W // factor]` or `[N, H // factor, W // factor, C]`, and same - datatype as `x`. 
- """ - - assert isinstance(factor, int) and factor >= 1 - if kernel is None: - kernel = [1] * factor - - # setup kernel - kernel = torch.tensor(kernel, dtype=torch.float32) - if kernel.ndim == 1: - kernel = torch.outer(kernel, kernel) - kernel /= torch.sum(kernel) - - kernel = kernel * gain - - if self.use_conv: - _, _, convH, convW = weight.shape - p = (kernel.shape[0] - factor) + (convW - 1) - s = [factor, factor] - x = upfirdn2d_native(x, torch.tensor(kernel, device=x.device), pad=((p + 1) // 2, p // 2)) - x = F.conv2d(x, weight, stride=s, padding=0) - else: - p = kernel.shape[0] - factor - x = upfirdn2d_native(x, torch.tensor(kernel, device=x.device), down=factor, pad=((p + 1) // 2, p // 2)) - - return x - - def forward(self, x): - if self.use_conv: - x = self._downsample_2d(x, weight=self.Conv2d_0.weight, kernel=self.fir_kernel) - x = x + self.Conv2d_0.bias.reshape(1, -1, 1, 1) - else: - x = self._downsample_2d(x, kernel=self.fir_kernel, factor=2) - - return x - - -class ResnetBlock2D(nn.Module): - def __init__( - self, - *, - in_channels, - out_channels=None, - conv_shortcut=False, - dropout=0.0, - temb_channels=512, - groups=32, - groups_out=None, - pre_norm=True, - eps=1e-6, - non_linearity="swish", - time_embedding_norm="default", - kernel=None, - output_scale_factor=1.0, - use_in_shortcut=None, - up=False, - down=False, - ): - super().__init__() - self.pre_norm = pre_norm - self.pre_norm = True - self.in_channels = in_channels - out_channels = in_channels if out_channels is None else out_channels - self.out_channels = out_channels - self.use_conv_shortcut = conv_shortcut - self.time_embedding_norm = time_embedding_norm - self.up = up - self.down = down - self.output_scale_factor = output_scale_factor - - if groups_out is None: - groups_out = groups - - self.norm1 = torch.nn.GroupNorm(num_groups=groups, num_channels=in_channels, eps=eps, affine=True) - - self.conv1 = torch.nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=1, padding=1) - - if temb_channels is not None: - self.time_emb_proj = torch.nn.Linear(temb_channels, out_channels) - else: - self.time_emb_proj = None - - self.norm2 = torch.nn.GroupNorm(num_groups=groups_out, num_channels=out_channels, eps=eps, affine=True) - self.dropout = torch.nn.Dropout(dropout) - self.conv2 = torch.nn.Conv2d(out_channels, out_channels, kernel_size=3, stride=1, padding=1) - - if non_linearity == "swish": - self.nonlinearity = lambda x: F.silu(x) - elif non_linearity == "mish": - self.nonlinearity = Mish() - elif non_linearity == "silu": - self.nonlinearity = nn.SiLU() - - self.upsample = self.downsample = None - if self.up: - if kernel == "fir": - fir_kernel = (1, 3, 3, 1) - self.upsample = lambda x: upsample_2d(x, kernel=fir_kernel) - elif kernel == "sde_vp": - self.upsample = partial(F.interpolate, scale_factor=2.0, mode="nearest") - else: - self.upsample = Upsample2D(in_channels, use_conv=False) - elif self.down: - if kernel == "fir": - fir_kernel = (1, 3, 3, 1) - self.downsample = lambda x: downsample_2d(x, kernel=fir_kernel) - elif kernel == "sde_vp": - self.downsample = partial(F.avg_pool2d, kernel_size=2, stride=2) - else: - self.downsample = Downsample2D(in_channels, use_conv=False, padding=1, name="op") - - self.use_in_shortcut = self.in_channels != self.out_channels if use_in_shortcut is None else use_in_shortcut - - self.conv_shortcut = None - if self.use_in_shortcut: - self.conv_shortcut = torch.nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=1, padding=0) - - def forward(self, x, temb): - hidden_states = x 
- - # make sure hidden states is in float32 - # when running in half-precision - hidden_states = self.norm1(hidden_states).type(hidden_states.dtype) - hidden_states = self.nonlinearity(hidden_states) - - if self.upsample is not None: - x = self.upsample(x) - hidden_states = self.upsample(hidden_states) - elif self.downsample is not None: - x = self.downsample(x) - hidden_states = self.downsample(hidden_states) - - hidden_states = self.conv1(hidden_states) - - if temb is not None: - temb = self.time_emb_proj(self.nonlinearity(temb))[:, :, None, None] - hidden_states = hidden_states + temb - - # make sure hidden states is in float32 - # when running in half-precision - hidden_states = self.norm2(hidden_states).type(hidden_states.dtype) - hidden_states = self.nonlinearity(hidden_states) - - hidden_states = self.dropout(hidden_states) - hidden_states = self.conv2(hidden_states) - - if self.conv_shortcut is not None: - x = self.conv_shortcut(x) - - out = (x + hidden_states) / self.output_scale_factor - - return out - - -class Mish(torch.nn.Module): - def forward(self, x): - return x * torch.tanh(torch.nn.functional.softplus(x)) - - -def upsample_2d(x, kernel=None, factor=2, gain=1): - r"""Upsample2D a batch of 2D images with the given filter. - - Args: - Accepts a batch of 2D images of the shape `[N, C, H, W]` or `[N, H, W, C]` and upsamples each image with the given - filter. The filter is normalized so that if the input pixels are constant, they will be scaled by the specified - `gain`. Pixels outside the image are assumed to be zero, and the filter is padded with zeros so that its shape is a: - multiple of the upsampling factor. - x: Input tensor of the shape `[N, C, H, W]` or `[N, H, W, - C]`. - k: FIR filter of the shape `[firH, firW]` or `[firN]` - (separable). The default is `[1] * factor`, which corresponds to nearest-neighbor upsampling. - factor: Integer upsampling factor (default: 2). gain: Scaling factor for signal magnitude (default: 1.0). - - Returns: - Tensor of the shape `[N, C, H * factor, W * factor]` - """ - assert isinstance(factor, int) and factor >= 1 - if kernel is None: - kernel = [1] * factor - - kernel = torch.tensor(kernel, dtype=torch.float32) - if kernel.ndim == 1: - kernel = torch.outer(kernel, kernel) - kernel /= torch.sum(kernel) - - kernel = kernel * (gain * (factor**2)) - p = kernel.shape[0] - factor - return upfirdn2d_native(x, kernel.to(device=x.device), up=factor, pad=((p + 1) // 2 + factor - 1, p // 2)) - - -def downsample_2d(x, kernel=None, factor=2, gain=1): - r"""Downsample2D a batch of 2D images with the given filter. - - Args: - Accepts a batch of 2D images of the shape `[N, C, H, W]` or `[N, H, W, C]` and downsamples each image with the - given filter. The filter is normalized so that if the input pixels are constant, they will be scaled by the - specified `gain`. Pixels outside the image are assumed to be zero, and the filter is padded with zeros so that its - shape is a multiple of the downsampling factor. - x: Input tensor of the shape `[N, C, H, W]` or `[N, H, W, - C]`. - kernel: FIR filter of the shape `[firH, firW]` or `[firN]` - (separable). The default is `[1] * factor`, which corresponds to average pooling. - factor: Integer downsampling factor (default: 2). gain: Scaling factor for signal magnitude (default: 1.0). 
- - Returns: - Tensor of the shape `[N, C, H // factor, W // factor]` - """ - - assert isinstance(factor, int) and factor >= 1 - if kernel is None: - kernel = [1] * factor - - kernel = torch.tensor(kernel, dtype=torch.float32) - if kernel.ndim == 1: - kernel = torch.outer(kernel, kernel) - kernel /= torch.sum(kernel) - - kernel = kernel * gain - p = kernel.shape[0] - factor - return upfirdn2d_native(x, kernel.to(device=x.device), down=factor, pad=((p + 1) // 2, p // 2)) - - -def upfirdn2d_native(input, kernel, up=1, down=1, pad=(0, 0)): - up_x = up_y = up - down_x = down_y = down - pad_x0 = pad_y0 = pad[0] - pad_x1 = pad_y1 = pad[1] - - _, channel, in_h, in_w = input.shape - input = input.reshape(-1, in_h, in_w, 1) - - _, in_h, in_w, minor = input.shape - kernel_h, kernel_w = kernel.shape - - out = input.view(-1, in_h, 1, in_w, 1, minor) - - # Temporary workaround for mps specific issue: https://github.com/pytorch/pytorch/issues/84535 - if input.device.type == "mps": - out = out.to("cpu") - out = F.pad(out, [0, 0, 0, up_x - 1, 0, 0, 0, up_y - 1]) - out = out.view(-1, in_h * up_y, in_w * up_x, minor) - - out = F.pad(out, [0, 0, max(pad_x0, 0), max(pad_x1, 0), max(pad_y0, 0), max(pad_y1, 0)]) - out = out.to(input.device) # Move back to mps if necessary - out = out[ - :, - max(-pad_y0, 0) : out.shape[1] - max(-pad_y1, 0), - max(-pad_x0, 0) : out.shape[2] - max(-pad_x1, 0), - :, - ] - - out = out.permute(0, 3, 1, 2) - out = out.reshape([-1, 1, in_h * up_y + pad_y0 + pad_y1, in_w * up_x + pad_x0 + pad_x1]) - w = torch.flip(kernel, [0, 1]).view(1, 1, kernel_h, kernel_w) - out = F.conv2d(out, w) - out = out.reshape( - -1, - minor, - in_h * up_y + pad_y0 + pad_y1 - kernel_h + 1, - in_w * up_x + pad_x0 + pad_x1 - kernel_w + 1, - ) - out = out.permute(0, 2, 3, 1) - out = out[:, ::down_y, ::down_x, :] - - out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h) // down_y + 1 - out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w) // down_x + 1 - - return out.view(-1, channel, out_h, out_w) diff --git a/spaces/muyi12314/anime-remove-background/app.py b/spaces/muyi12314/anime-remove-background/app.py deleted file mode 100644 index 230a0d5f8a3da6ab18ecb8db1cd90016a489b96a..0000000000000000000000000000000000000000 --- a/spaces/muyi12314/anime-remove-background/app.py +++ /dev/null @@ -1,52 +0,0 @@ -import gradio as gr -import huggingface_hub -import onnxruntime as rt -import numpy as np -import cv2 - - -def get_mask(img, s=1024): - img = (img / 255).astype(np.float32) - h, w = h0, w0 = img.shape[:-1] - h, w = (s, int(s * w / h)) if h > w else (int(s * h / w), s) - ph, pw = s - h, s - w - img_input = np.zeros([s, s, 3], dtype=np.float32) - img_input[ph // 2:ph // 2 + h, pw // 2:pw // 2 + w] = cv2.resize(img, (w, h)) - img_input = np.transpose(img_input, (2, 0, 1)) - img_input = img_input[np.newaxis, :] - mask = rmbg_model.run(None, {'img': img_input})[0][0] - mask = np.transpose(mask, (1, 2, 0)) - mask = mask[ph // 2:ph // 2 + h, pw // 2:pw // 2 + w] - mask = cv2.resize(mask, (w0, h0))[:, :, np.newaxis] - return mask - - -def rmbg_fn(img): - mask = get_mask(img) - img = (mask * img + 255 * (1 - mask)).astype(np.uint8) - mask = (mask * 255).astype(np.uint8) - img = np.concatenate([img, mask], axis=2, dtype=np.uint8) - mask = mask.repeat(3, axis=2) - return mask, img - - -if __name__ == "__main__": - providers = ['CUDAExecutionProvider', 'CPUExecutionProvider'] - model_path = huggingface_hub.hf_hub_download("skytnt/anime-seg", "isnetis.onnx") - rmbg_model = rt.InferenceSession(model_path, providers=providers) - app 
= gr.Blocks() - with app: - gr.Markdown("# Anime Remove Background\n\n" - "![visitor badge](https://visitor-badge.glitch.me/badge?page_id=skytnt.animeseg)\n\n" - "demo for [https://github.com/SkyTNT/anime-segmentation/](https://github.com/SkyTNT/anime-segmentation/)") - with gr.Row(): - with gr.Column(): - input_img = gr.Image(label="input image") - examples_data = [[f"examples/{x:02d}.jpg"] for x in range(1, 4)] - examples = gr.Dataset(components=[input_img], samples=examples_data) - run_btn = gr.Button(variant="primary") - output_mask = gr.Image(label="mask") - output_img = gr.Image(label="result", image_mode="RGBA") - examples.click(lambda x: x[0], [examples], [input_img]) - run_btn.click(rmbg_fn, [input_img], [output_mask, output_img]) - app.launch() diff --git a/spaces/mygyasir/genious_bgremover/carvekit/web/routers/api_router.py b/spaces/mygyasir/genious_bgremover/carvekit/web/routers/api_router.py deleted file mode 100644 index c452cacbb15ac13919b9fcaa482ed829983a8fd6..0000000000000000000000000000000000000000 --- a/spaces/mygyasir/genious_bgremover/carvekit/web/routers/api_router.py +++ /dev/null @@ -1,222 +0,0 @@ -import base64 -import http -import io -import time -from json import JSONDecodeError -from typing import Optional - -import requests -from PIL import Image -from fastapi import Header, Depends, Form, File, Request, APIRouter, UploadFile -from fastapi.openapi.models import Response -from pydantic import ValidationError -from starlette.responses import JSONResponse - -from carvekit.web.deps import config, ml_processor -from carvekit.web.handlers.response import handle_response, Authenticate -from carvekit.web.responses.api import error_dict -from carvekit.web.schemas.request import Parameters -from carvekit.web.utils.net_utils import is_loopback - -api_router = APIRouter(prefix="", tags=["api"]) - - -# noinspection PyBroadException -@api_router.post("/removebg") -async def removebg( - request: Request, - image_file: Optional[bytes] = File(None), - auth: bool = Depends(Authenticate), - content_type: str = Header(""), - image_file_b64: Optional[str] = Form(None), - image_url: Optional[str] = Form(None), - bg_image_file: Optional[bytes] = File(None), - size: Optional[str] = Form("full"), - type: Optional[str] = Form("auto"), - format: Optional[str] = Form("auto"), - roi: str = Form("0% 0% 100% 100%"), - crop: bool = Form(False), - crop_margin: Optional[str] = Form("0px"), - scale: Optional[str] = Form("original"), - position: Optional[str] = Form("original"), - channels: Optional[str] = Form("rgba"), - add_shadow: bool = Form(False), # Not supported at the moment - semitransparency: bool = Form(False), # Not supported at the moment - bg_color: Optional[str] = Form(""), -): - if auth is False: - return JSONResponse(content=error_dict("Missing API Key"), status_code=403) - if ( - content_type not in ["application/x-www-form-urlencoded", "application/json"] - and "multipart/form-data" not in content_type - ): - return JSONResponse( - content=error_dict("Invalid request content type"), status_code=400 - ) - - if image_url: - if not ( - image_url.startswith("http://") or image_url.startswith("https://") - ) or is_loopback(image_url): - print( - f"Possible ssrf attempt to /api/removebg endpoint with image url: {image_url}" - ) - return JSONResponse( - content=error_dict("Invalid image url."), status_code=400 - ) # possible ssrf attempt - - image = None - bg = None - parameters = None - if ( - content_type == "application/x-www-form-urlencoded" - or "multipart/form-data" in 
content_type - ): - if image_file_b64 is None and image_url is None and image_file is None: - return JSONResponse(content=error_dict("File not found"), status_code=400) - - if image_file_b64: - if len(image_file_b64) == 0: - return JSONResponse(content=error_dict("Empty image"), status_code=400) - try: - image = Image.open(io.BytesIO(base64.b64decode(image_file_b64))) - except BaseException: - return JSONResponse( - content=error_dict("Error decode image!"), status_code=400 - ) - elif image_url: - try: - image = Image.open(io.BytesIO(requests.get(image_url).content)) - except BaseException: - return JSONResponse( - content=error_dict("Error download image!"), status_code=400 - ) - elif image_file: - if len(image_file) == 0: - return JSONResponse(content=error_dict("Empty image"), status_code=400) - image = Image.open(io.BytesIO(image_file)) - - if bg_image_file: - if len(bg_image_file) == 0: - return JSONResponse(content=error_dict("Empty image"), status_code=400) - bg = Image.open(io.BytesIO(bg_image_file)) - try: - parameters = Parameters( - image_file_b64=image_file_b64, - image_url=image_url, - size=size, - type=type, - format=format, - roi=roi, - crop=crop, - crop_margin=crop_margin, - scale=scale, - position=position, - channels=channels, - add_shadow=add_shadow, - semitransparency=semitransparency, - bg_color=bg_color, - ) - except ValidationError as e: - return JSONResponse( - content=e.json(), status_code=400, media_type="application/json" - ) - - else: - payload = None - try: - payload = await request.json() - except JSONDecodeError: - return JSONResponse(content=error_dict("Empty json"), status_code=400) - try: - parameters = Parameters(**payload) - except ValidationError as e: - return Response( - content=e.json(), status_code=400, media_type="application/json" - ) - if parameters.image_file_b64 is None and parameters.image_url is None: - return JSONResponse(content=error_dict("File not found"), status_code=400) - - if parameters.image_file_b64: - if len(parameters.image_file_b64) == 0: - return JSONResponse(content=error_dict("Empty image"), status_code=400) - try: - image = Image.open( - io.BytesIO(base64.b64decode(parameters.image_file_b64)) - ) - except BaseException: - return JSONResponse( - content=error_dict("Error decode image!"), status_code=400 - ) - elif parameters.image_url: - if not ( - parameters.image_url.startswith("http://") - or parameters.image_url.startswith("https://") - ) or is_loopback(parameters.image_url): - print( - f"Possible ssrf attempt to /api/removebg endpoint with image url: {parameters.image_url}" - ) - return JSONResponse( - content=error_dict("Invalid image url."), status_code=400 - ) # possible ssrf attempt - try: - image = Image.open( - io.BytesIO(requests.get(parameters.image_url).content) - ) - except BaseException: - return JSONResponse( - content=error_dict("Error download image!"), status_code=400 - ) - if image is None: - return JSONResponse( - content=error_dict("Error download image!"), status_code=400 - ) - - job_id = ml_processor.job_create([parameters.dict(), image, bg, False]) - - while ml_processor.job_status(job_id) != "finished": - if ml_processor.job_status(job_id) == "not_found": - return JSONResponse( - content=error_dict("Job ID not found!"), status_code=500 - ) - time.sleep(5) - - result = ml_processor.job_result(job_id) - return handle_response(result, image) - - -@api_router.get("/account") -def account(): - """ - Stub for compatibility with remove.bg api libraries - """ - return JSONResponse( - content={ - "data": { - 
"attributes": { - "credits": { - "total": 99999, - "subscription": 99999, - "payg": 99999, - "enterprise": 99999, - }, - "api": {"free_calls": 99999, "sizes": "all"}, - } - } - }, - status_code=200, - ) - - -@api_router.get("/admin/config") -def status(auth: str = Depends(Authenticate)): - """ - Returns the current server config. - """ - if not auth or auth != "admin": - return JSONResponse( - content=error_dict("Authentication failed"), status_code=403 - ) - resp = JSONResponse(content=config.json(), status_code=200) - resp.headers["X-Credits-Charged"] = "0" - return resp diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Al Qunut By Sudais Pdf Download !LINK!.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Al Qunut By Sudais Pdf Download !LINK!.md deleted file mode 100644 index 0914c840ead6c16364e51c4e39eab41f86c42bc3..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Al Qunut By Sudais Pdf Download !LINK!.md +++ /dev/null @@ -1,23 +0,0 @@ - -

                How to Download Al Qunut by Sudais PDF

                -

                Al Qunut is a special supplication recited by Muslims during the Witr prayer in the last part of the night. It is a heartfelt plea to Allah for guidance, forgiveness, mercy and protection. Al Qunut has many benefits and virtues, as it is a way of communicating with Allah and expressing one's needs and feelings.

                -

                One of the most famous reciters of Al Qunut is Sheikh Abdul Rahman As-Sudais, the imam of the Grand Mosque in Makkah. His voice is melodious and his words are powerful and moving. Many Muslims around the world listen to his recitation and follow along with his dua.

                -

                Al Qunut By Sudais Pdf Download


Download: https://urlcod.com/2uIbl3



                -

                If you want to download Al Qunut by Sudais PDF, you can find it online on various websites that offer Islamic audio and video files. One of them is Archive.org, which is a non-profit library of millions of free books, movies, music and more. Here are the steps to download Al Qunut by Sudais PDF from Archive.org:

                -
                  -
1. Go to https://archive.org/details/MakkahDuaAlQunootAudio, which is the page for Makkah Dua Al Qunoot Audio by Sheikh Sudais.
2. On the right side of the page, you will see a list of download options. Click on the one that says "PDF" to download the file in PDF format.
3. Save the file to your device and open it with any PDF reader.
4. Enjoy listening to and reading Al Qunut by Sudais PDF.
                -
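If you prefer to script the download instead of clicking through the page, the short sketch below uses archive.org's public metadata API to list the files in the item above and fetch any PDF it contains. This is an illustration only, under stated assumptions: the item identifier is taken from the page URL above, the exact PDF filename inside the item is not known here, and the item may in fact expose the recitation as audio rather than PDF.

```python
# Hedged sketch: list the files in the archive.org item named above and
# download any PDFs. The filename filter is an assumption; the item may
# contain no PDF at all.
import requests

ITEM = "MakkahDuaAlQunootAudio"  # from the page URL above

meta = requests.get(f"https://archive.org/metadata/{ITEM}", timeout=30).json()
pdf_names = [f["name"] for f in meta.get("files", []) if f["name"].lower().endswith(".pdf")]

for name in pdf_names:
    url = f"https://archive.org/download/{ITEM}/{name}"
    with requests.get(url, stream=True, timeout=60) as resp:
        resp.raise_for_status()
        with open(name, "wb") as out:
            for chunk in resp.iter_content(chunk_size=1 << 16):
                out.write(chunk)
    print(f"Saved {name}")
```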

                You can also find other versions of Al Qunut by Sudais on Archive.org, such as Witr and Dua Al Qunoot by Sheikh Abdul Rahman As-Sudais[^2^] or Night 1 - Dua Al Qunoot by Sheikh Sudais HQ[^3^]. You can download them in different formats, such as MP3, OGG or MPEG4.

                -

                May Allah accept your dua and grant you His blessings. Ameen.

                Al Qunut is a very flexible and adaptable dua that can be recited in any language and with any words. However, there are some recommended words and phrases that the Prophet Muhammad (peace be upon him) taught us to say in Al Qunut. Some of them are:

                -
                  -
• Allahumma inna nasta'eenuka wa nastaghfiruka wa nu'minu bika wa natawakkalu 'alaika wa nuthni 'alaikal khairi wa nashkuruka wa la nakfuruka wa nakhla'u wa natruku man yafjuruk. (O Allah, we seek Your help and Your forgiveness and we believe in You and rely on You and praise You for all the good things and we thank You and we do not deny You and we abandon and forsake those who disobey You.)
• Allahumma iyyaka na'budu wa laka nusalli wa nasjudu wa ilaika nas'a wa nahfidu wa narju rahmataka wa nakhsha 'adhabaka inna 'adhabaka bil kuffari mulhiq. (O Allah, You alone we worship and to You we pray and prostrate and to You we hasten and present ourselves and we hope for Your mercy and we fear Your punishment. Indeed, Your punishment will overtake the disbelievers.)
• Allahumma ighfir lana warhamna wa 'afina wa ihdina wasrif 'anna sharra ma qadaita fa innaka taqdi wa la yuqda 'alaik. (O Allah, forgive us and have mercy on us and grant us well-being and guide us and avert from us the evil of what You have decreed. For indeed, You decree and none can decree over You.)
                -

                These are some of the examples of Al Qunut that we can learn from the Sunnah of the Prophet Muhammad (peace be upon him). We can also add our own personal supplications and requests to Allah in Al Qunut, as long as they are lawful and good.

                -

                Al Qunut is a very powerful and effective way of invoking Allah's help and mercy in times of hardship and distress. It is also a way of expressing our gratitude and praise to Allah for His countless blessings and favors. Al Qunut is a means of strengthening our faith and trust in Allah and His plan for us. Al Qunut is a source of comfort and peace for our hearts and souls.

                7196e7f11a
                -
                -
                \ No newline at end of file diff --git a/spaces/neveu/img-to-music/share_btn.py b/spaces/neveu/img-to-music/share_btn.py deleted file mode 100644 index 1a2ac6a6e74b114dbd54c2f24723a87180db51ef..0000000000000000000000000000000000000000 --- a/spaces/neveu/img-to-music/share_btn.py +++ /dev/null @@ -1,100 +0,0 @@ -community_icon_html = """""" - -loading_icon_html = """""" - -share_js = """async () => { - async function uploadFile(file){ - const UPLOAD_URL = 'https://huggingface.co/uploads'; - const response = await fetch(UPLOAD_URL, { - method: 'POST', - headers: { - 'Content-Type': file.type, - 'X-Requested-With': 'XMLHttpRequest', - }, - body: file, /// <- File inherits from Blob - }); - const url = await response.text(); - return url; - } - async function getInputImgFile(imgEl){ - const res = await fetch(imgEl.src); - const blob = await res.blob(); - const imgId = Date.now() % 200; - const isPng = imgEl.src.startsWith(`data:image/png`); - if(isPng){ - const fileName = `sd-perception-${{imgId}}.png`; - return new File([blob], fileName, { type: 'image/png' }); - }else{ - const fileName = `sd-perception-${{imgId}}.jpg`; - return new File([blob], fileName, { type: 'image/jpeg' }); - } - } - async function getOutputMusicFile(audioEL){ - const res = await fetch(audioEL.src); - const blob = await res.blob(); - const audioId = Date.now() % 200; - const fileName = `img-to-music-${{audioId}}.wav`; - const musicBlob = new File([blob], fileName, { type: 'audio/wav' }); - console.log(musicBlob); - return musicBlob; - } - - async function audioToBase64(audioFile) { - return new Promise((resolve, reject) => { - let reader = new FileReader(); - reader.readAsDataURL(audioFile); - reader.onload = () => resolve(reader.result); - reader.onerror = error => reject(error); - - }); - } - const gradioEl = document.querySelector('body > gradio-app'); - // const gradioEl = document.querySelector("gradio-app").shadowRoot; - const inputImgEl = gradioEl.querySelector('#input-img img'); - const outputMusic = gradioEl.querySelector('#music-output audio'); - const outputMusic_src = gradioEl.querySelector('#music-output audio').src; - const outputMusic_name = outputMusic_src.split('/').pop(); - let titleTxt = outputMusic_name; - //if(titleTxt.length > 100){ - // titleTxt = titleTxt.slice(0, 100) + ' ...'; - //} - const shareBtnEl = gradioEl.querySelector('#share-btn'); - const shareIconEl = gradioEl.querySelector('#share-btn-share-icon'); - const loadingIconEl = gradioEl.querySelector('#share-btn-loading-icon'); - if(!outputMusic){ - return; - }; - shareBtnEl.style.pointerEvents = 'none'; - shareIconEl.style.display = 'none'; - loadingIconEl.style.removeProperty('display'); - const inputFile = await getInputImgFile(inputImgEl); - const urlInputImg = await uploadFile(inputFile); - const musicFile = await getOutputMusicFile(outputMusic); - const dataOutputMusic = await uploadFile(musicFile); - - const descriptionMd = `#### Input img: - - -#### Music: - - -`; - const params = new URLSearchParams({ - title: titleTxt, - description: descriptionMd, - }); - const paramsStr = params.toString(); - window.open(`https://huggingface.co/spaces/fffiloni/img-to-music/discussions/new?${paramsStr}`, '_blank'); - shareBtnEl.style.removeProperty('pointer-events'); - shareIconEl.style.removeProperty('display'); - loadingIconEl.style.display = 'none'; -}""" \ No newline at end of file diff --git a/spaces/nicehero/ManualMask/README.md b/spaces/nicehero/ManualMask/README.md deleted file mode 100644 index 
7869a1c04eff24c6b65d55a973680a0237e12caf..0000000000000000000000000000000000000000 --- a/spaces/nicehero/ManualMask/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: ManualMask -emoji: 📈 -colorFrom: gray -colorTo: green -sdk: gradio -sdk_version: 3.32.0 -app_file: app.py -pinned: false -license: bsd ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/nightfury/img2music/share_btn.py b/spaces/nightfury/img2music/share_btn.py deleted file mode 100644 index cc6a470a1ef9d8687d19658cd0106f8c3b9b053d..0000000000000000000000000000000000000000 --- a/spaces/nightfury/img2music/share_btn.py +++ /dev/null @@ -1,100 +0,0 @@ -community_icon_html = """""" - -loading_icon_html = """""" - -share_js = """async () => { - async function uploadFile(file){ - const UPLOAD_URL = 'https://huggingface.co/uploads'; - const response = await fetch(UPLOAD_URL, { - method: 'POST', - headers: { - 'Content-Type': file.type, - 'X-Requested-With': 'XMLHttpRequest', - }, - body: file, /// <- File inherits from Blob - }); - const url = await response.text(); - return url; - } - async function getInputImgFile(imgEl){ - const res = await fetch(imgEl.src); - const blob = await res.blob(); - const imgId = Date.now() % 200; - const isPng = imgEl.src.startsWith(`data:image/png`); - if(isPng){ - const fileName = `sd-perception-${{imgId}}.png`; - return new File([blob], fileName, { type: 'image/png' }); - }else{ - const fileName = `sd-perception-${{imgId}}.jpg`; - return new File([blob], fileName, { type: 'image/jpeg' }); - } - } - async function getOutputMusicFile(audioEL){ - const res = await fetch(audioEL.src); - const blob = await res.blob(); - const audioId = Date.now() % 200; - const fileName = `img-to-music-${{audioId}}.wav`; - const musicBlob = new File([blob], fileName, { type: 'audio/wav' }); - console.log(musicBlob); - return musicBlob; - } - - async function audioToBase64(audioFile) { - return new Promise((resolve, reject) => { - let reader = new FileReader(); - reader.readAsDataURL(audioFile); - reader.onload = () => resolve(reader.result); - reader.onerror = error => reject(error); - - }); - } - const gradioEl = document.querySelector('body > gradio-app'); - // const gradioEl = document.querySelector("gradio-app").shadowRoot; - const inputImgEl = gradioEl.querySelector('#input-img img'); - const outputMusic = gradioEl.querySelector('#music-output audio'); - const outputMusic_src = gradioEl.querySelector('#music-output audio').src; - const outputMusic_name = outputMusic_src.split('/').pop(); - let titleTxt = outputMusic_name; - //if(titleTxt.length > 100){ - // titleTxt = titleTxt.slice(0, 100) + ' ...'; - //} - const shareBtnEl = gradioEl.querySelector('#share-btn'); - const shareIconEl = gradioEl.querySelector('#share-btn-share-icon'); - const loadingIconEl = gradioEl.querySelector('#share-btn-loading-icon'); - if(!outputMusic){ - return; - }; - shareBtnEl.style.pointerEvents = 'none'; - shareIconEl.style.display = 'none'; - loadingIconEl.style.removeProperty('display'); - const inputFile = await getInputImgFile(inputImgEl); - const urlInputImg = await uploadFile(inputFile); - const musicFile = await getOutputMusicFile(outputMusic); - const dataOutputMusic = await uploadFile(musicFile); - - const descriptionMd = `#### Input img: - - -#### Music: - - -`; - const params = new URLSearchParams({ - title: titleTxt, - description: descriptionMd, - }); - const paramsStr = params.toString(); - 
window.open(`https://huggingface.co/spaces/spaces/nightfury/img2music/discussions/new?${paramsStr}`, '_blank'); - shareBtnEl.style.removeProperty('pointer-events'); - shareIconEl.style.removeProperty('display'); - loadingIconEl.style.display = 'none'; -}""" \ No newline at end of file diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/layers/losses.py b/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/layers/losses.py deleted file mode 100644 index 850a852a2f0986d4d1ce89a526d96db42c76e44f..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/layers/losses.py +++ /dev/null @@ -1,133 +0,0 @@ -import math -import torch - - -def diou_loss( - boxes1: torch.Tensor, - boxes2: torch.Tensor, - reduction: str = "none", - eps: float = 1e-7, -) -> torch.Tensor: - """ - Distance Intersection over Union Loss (Zhaohui Zheng et. al) - https://arxiv.org/abs/1911.08287 - Args: - boxes1, boxes2 (Tensor): box locations in XYXY format, shape (N, 4) or (4,). - reduction: 'none' | 'mean' | 'sum' - 'none': No reduction will be applied to the output. - 'mean': The output will be averaged. - 'sum': The output will be summed. - eps (float): small number to prevent division by zero - """ - - x1, y1, x2, y2 = boxes1.unbind(dim=-1) - x1g, y1g, x2g, y2g = boxes2.unbind(dim=-1) - - # TODO: use torch._assert_async() when pytorch 1.8 support is dropped - assert (x2 >= x1).all(), "bad box: x1 larger than x2" - assert (y2 >= y1).all(), "bad box: y1 larger than y2" - - # Intersection keypoints - xkis1 = torch.max(x1, x1g) - ykis1 = torch.max(y1, y1g) - xkis2 = torch.min(x2, x2g) - ykis2 = torch.min(y2, y2g) - - intsct = torch.zeros_like(x1) - mask = (ykis2 > ykis1) & (xkis2 > xkis1) - intsct[mask] = (xkis2[mask] - xkis1[mask]) * (ykis2[mask] - ykis1[mask]) - union = (x2 - x1) * (y2 - y1) + (x2g - x1g) * (y2g - y1g) - intsct + eps - iou = intsct / union - - # smallest enclosing box - xc1 = torch.min(x1, x1g) - yc1 = torch.min(y1, y1g) - xc2 = torch.max(x2, x2g) - yc2 = torch.max(y2, y2g) - diag_len = ((xc2 - xc1) ** 2) + ((yc2 - yc1) ** 2) + eps - - # centers of boxes - x_p = (x2 + x1) / 2 - y_p = (y2 + y1) / 2 - x_g = (x1g + x2g) / 2 - y_g = (y1g + y2g) / 2 - distance = ((x_p - x_g) ** 2) + ((y_p - y_g) ** 2) - - # Eqn. (7) - loss = 1 - iou + (distance / diag_len) - if reduction == "mean": - loss = loss.mean() if loss.numel() > 0 else 0.0 * loss.sum() - elif reduction == "sum": - loss = loss.sum() - - return loss - - -def ciou_loss( - boxes1: torch.Tensor, - boxes2: torch.Tensor, - reduction: str = "none", - eps: float = 1e-7, -) -> torch.Tensor: - """ - Complete Intersection over Union Loss (Zhaohui Zheng et. al) - https://arxiv.org/abs/1911.08287 - Args: - boxes1, boxes2 (Tensor): box locations in XYXY format, shape (N, 4) or (4,). - reduction: 'none' | 'mean' | 'sum' - 'none': No reduction will be applied to the output. - 'mean': The output will be averaged. - 'sum': The output will be summed. 
- eps (float): small number to prevent division by zero - """ - - x1, y1, x2, y2 = boxes1.unbind(dim=-1) - x1g, y1g, x2g, y2g = boxes2.unbind(dim=-1) - - # TODO: use torch._assert_async() when pytorch 1.8 support is dropped - assert (x2 >= x1).all(), "bad box: x1 larger than x2" - assert (y2 >= y1).all(), "bad box: y1 larger than y2" - - # Intersection keypoints - xkis1 = torch.max(x1, x1g) - ykis1 = torch.max(y1, y1g) - xkis2 = torch.min(x2, x2g) - ykis2 = torch.min(y2, y2g) - - intsct = torch.zeros_like(x1) - mask = (ykis2 > ykis1) & (xkis2 > xkis1) - intsct[mask] = (xkis2[mask] - xkis1[mask]) * (ykis2[mask] - ykis1[mask]) - union = (x2 - x1) * (y2 - y1) + (x2g - x1g) * (y2g - y1g) - intsct + eps - iou = intsct / union - - # smallest enclosing box - xc1 = torch.min(x1, x1g) - yc1 = torch.min(y1, y1g) - xc2 = torch.max(x2, x2g) - yc2 = torch.max(y2, y2g) - diag_len = ((xc2 - xc1) ** 2) + ((yc2 - yc1) ** 2) + eps - - # centers of boxes - x_p = (x2 + x1) / 2 - y_p = (y2 + y1) / 2 - x_g = (x1g + x2g) / 2 - y_g = (y1g + y2g) / 2 - distance = ((x_p - x_g) ** 2) + ((y_p - y_g) ** 2) - - # width and height of boxes - w_pred = x2 - x1 - h_pred = y2 - y1 - w_gt = x2g - x1g - h_gt = y2g - y1g - v = (4 / (math.pi**2)) * torch.pow((torch.atan(w_gt / h_gt) - torch.atan(w_pred / h_pred)), 2) - with torch.no_grad(): - alpha = v / (1 - iou + v + eps) - - # Eqn. (10) - loss = 1 - iou + (distance / diag_len) + alpha * v - if reduction == "mean": - loss = loss.mean() if loss.numel() > 0 else 0.0 * loss.sum() - elif reduction == "sum": - loss = loss.sum() - - return loss diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/tests/data/test_detection_utils.py b/spaces/nikitaPDL2023/assignment4/detectron2/tests/data/test_detection_utils.py deleted file mode 100644 index aac56c07da2be4e181e3e95de8cee1fc2858286d..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/tests/data/test_detection_utils.py +++ /dev/null @@ -1,176 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
- -import copy -import numpy as np -import os -import unittest -import pycocotools.mask as mask_util - -from detectron2.data import MetadataCatalog, detection_utils -from detectron2.data import transforms as T -from detectron2.structures import BitMasks, BoxMode -from detectron2.utils.file_io import PathManager - - -class TestTransformAnnotations(unittest.TestCase): - def test_transform_simple_annotation(self): - transforms = T.TransformList([T.HFlipTransform(400)]) - anno = { - "bbox": np.asarray([10, 10, 200, 300]), - "bbox_mode": BoxMode.XYXY_ABS, - "category_id": 3, - "segmentation": [[10, 10, 100, 100, 100, 10], [150, 150, 200, 150, 200, 200]], - } - - output = detection_utils.transform_instance_annotations(anno, transforms, (400, 400)) - self.assertTrue(np.allclose(output["bbox"], [200, 10, 390, 300])) - self.assertEqual(len(output["segmentation"]), len(anno["segmentation"])) - self.assertTrue(np.allclose(output["segmentation"][0], [390, 10, 300, 100, 300, 10])) - - detection_utils.annotations_to_instances([output, output], (400, 400)) - - def test_transform_empty_annotation(self): - detection_utils.annotations_to_instances([], (400, 400)) - - def test_flip_keypoints(self): - transforms = T.TransformList([T.HFlipTransform(400)]) - anno = { - "bbox": np.asarray([10, 10, 200, 300]), - "bbox_mode": BoxMode.XYXY_ABS, - "keypoints": np.random.rand(17, 3) * 50 + 15, - } - - output = detection_utils.transform_instance_annotations( - copy.deepcopy(anno), - transforms, - (400, 400), - keypoint_hflip_indices=detection_utils.create_keypoint_hflip_indices( - ["keypoints_coco_2017_train"] - ), - ) - # The first keypoint is nose - self.assertTrue(np.allclose(output["keypoints"][0, 0], 400 - anno["keypoints"][0, 0])) - # The last 16 keypoints are 8 left-right pairs - self.assertTrue( - np.allclose( - output["keypoints"][1:, 0].reshape(-1, 2)[:, ::-1], - 400 - anno["keypoints"][1:, 0].reshape(-1, 2), - ) - ) - self.assertTrue( - np.allclose( - output["keypoints"][1:, 1:].reshape(-1, 2, 2)[:, ::-1, :], - anno["keypoints"][1:, 1:].reshape(-1, 2, 2), - ) - ) - - def test_crop(self): - transforms = T.TransformList([T.CropTransform(300, 300, 10, 10)]) - keypoints = np.random.rand(17, 3) * 50 + 15 - keypoints[:, 2] = 2 - anno = { - "bbox": np.asarray([10, 10, 200, 400]), - "bbox_mode": BoxMode.XYXY_ABS, - "keypoints": keypoints, - } - - output = detection_utils.transform_instance_annotations( - copy.deepcopy(anno), transforms, (10, 10) - ) - # box is shifted and cropped - self.assertTrue((output["bbox"] == np.asarray([0, 0, 0, 10])).all()) - # keypoints are no longer visible - self.assertTrue((output["keypoints"][:, 2] == 0).all()) - - def test_transform_RLE(self): - transforms = T.TransformList([T.HFlipTransform(400)]) - mask = np.zeros((300, 400), order="F").astype("uint8") - mask[:, :200] = 1 - - anno = { - "bbox": np.asarray([10, 10, 200, 300]), - "bbox_mode": BoxMode.XYXY_ABS, - "segmentation": mask_util.encode(mask[:, :, None])[0], - "category_id": 3, - } - output = detection_utils.transform_instance_annotations( - copy.deepcopy(anno), transforms, (300, 400) - ) - mask = output["segmentation"] - self.assertTrue((mask[:, 200:] == 1).all()) - self.assertTrue((mask[:, :200] == 0).all()) - - inst = detection_utils.annotations_to_instances( - [output, output], (400, 400), mask_format="bitmask" - ) - self.assertTrue(isinstance(inst.gt_masks, BitMasks)) - - def test_transform_RLE_resize(self): - transforms = T.TransformList( - [T.HFlipTransform(400), T.ScaleTransform(300, 400, 400, 400, "bilinear")] - ) - 
mask = np.zeros((300, 400), order="F").astype("uint8") - mask[:, :200] = 1 - - anno = { - "bbox": np.asarray([10, 10, 200, 300]), - "bbox_mode": BoxMode.XYXY_ABS, - "segmentation": mask_util.encode(mask[:, :, None])[0], - "category_id": 3, - } - output = detection_utils.transform_instance_annotations( - copy.deepcopy(anno), transforms, (400, 400) - ) - - inst = detection_utils.annotations_to_instances( - [output, output], (400, 400), mask_format="bitmask" - ) - self.assertTrue(isinstance(inst.gt_masks, BitMasks)) - - def test_gen_crop(self): - instance = {"bbox": [10, 10, 100, 100], "bbox_mode": BoxMode.XYXY_ABS} - t = detection_utils.gen_crop_transform_with_instance((10, 10), (150, 150), instance) - # the box center must fall into the cropped region - self.assertTrue(t.x0 <= 55 <= t.x0 + t.w) - - def test_gen_crop_outside_boxes(self): - instance = {"bbox": [10, 10, 100, 100], "bbox_mode": BoxMode.XYXY_ABS} - with self.assertRaises(AssertionError): - detection_utils.gen_crop_transform_with_instance((10, 10), (15, 15), instance) - - def test_read_sem_seg(self): - cityscapes_dir = MetadataCatalog.get("cityscapes_fine_sem_seg_val").gt_dir - sem_seg_gt_path = os.path.join( - cityscapes_dir, "frankfurt", "frankfurt_000001_083852_gtFine_labelIds.png" - ) - if not PathManager.exists(sem_seg_gt_path): - raise unittest.SkipTest( - "Semantic segmentation ground truth {} not found.".format(sem_seg_gt_path) - ) - sem_seg = detection_utils.read_image(sem_seg_gt_path, "L") - self.assertEqual(sem_seg.ndim, 3) - self.assertEqual(sem_seg.shape[2], 1) - self.assertEqual(sem_seg.dtype, np.uint8) - self.assertEqual(sem_seg.max(), 32) - self.assertEqual(sem_seg.min(), 1) - - def test_read_exif_orientation(self): - # https://github.com/recurser/exif-orientation-examples/raw/master/Landscape_5.jpg - URL = "detectron2://assets/Landscape_5.jpg" - img = detection_utils.read_image(URL, "RGB") - self.assertEqual(img.ndim, 3) - self.assertEqual(img.dtype, np.uint8) - self.assertEqual(img.shape, (1200, 1800, 3)) # check that shape is not transposed - - def test_opencv_exif_orientation(self): - import cv2 - - URL = "detectron2://assets/Landscape_5.jpg" - with PathManager.open(URL, "rb") as f: - img = cv2.imdecode(np.frombuffer(f.read(), dtype="uint8"), cv2.IMREAD_COLOR) - self.assertEqual(img.dtype, np.uint8) - self.assertEqual(img.shape, (1200, 1800, 3)) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/nomic-ai/allenai_prosocial-dialog/README.md b/spaces/nomic-ai/allenai_prosocial-dialog/README.md deleted file mode 100644 index a7c5142359e64cd9a8f10a792bdd8f6299a39280..0000000000000000000000000000000000000000 --- a/spaces/nomic-ai/allenai_prosocial-dialog/README.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -title: allenai/prosocial-dialog -emoji: 🗺️ -colorFrom: purple -colorTo: red -sdk: static -pinned: false ---- \ No newline at end of file diff --git a/spaces/nomic-ai/derek-thomas_ScienceQA/style.css b/spaces/nomic-ai/derek-thomas_ScienceQA/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/nomic-ai/derek-thomas_ScienceQA/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 
16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/oliver2023/chatgpt-on-wechat/lib/itchat/components/contact.py b/spaces/oliver2023/chatgpt-on-wechat/lib/itchat/components/contact.py deleted file mode 100644 index 93e3d1653c8c90640b1fb0752f96ee3a75f2cedb..0000000000000000000000000000000000000000 --- a/spaces/oliver2023/chatgpt-on-wechat/lib/itchat/components/contact.py +++ /dev/null @@ -1,519 +0,0 @@ -import time -import re -import io -import json -import copy -import logging - -from .. import config, utils -from ..returnvalues import ReturnValue -from ..storage import contact_change -from ..utils import update_info_dict - -logger = logging.getLogger('itchat') - - -def load_contact(core): - core.update_chatroom = update_chatroom - core.update_friend = update_friend - core.get_contact = get_contact - core.get_friends = get_friends - core.get_chatrooms = get_chatrooms - core.get_mps = get_mps - core.set_alias = set_alias - core.set_pinned = set_pinned - core.accept_friend = accept_friend - core.get_head_img = get_head_img - core.create_chatroom = create_chatroom - core.set_chatroom_name = set_chatroom_name - core.delete_member_from_chatroom = delete_member_from_chatroom - core.add_member_into_chatroom = add_member_into_chatroom - - -def update_chatroom(self, userName, detailedMember=False): - if not isinstance(userName, list): - userName = [userName] - url = '%s/webwxbatchgetcontact?type=ex&r=%s' % ( - self.loginInfo['url'], int(time.time())) - headers = { - 'ContentType': 'application/json; charset=UTF-8', - 'User-Agent': config.USER_AGENT} - data = { - 'BaseRequest': self.loginInfo['BaseRequest'], - 'Count': len(userName), - 'List': [{ - 'UserName': u, - 'ChatRoomId': '', } for u in userName], } - chatroomList = json.loads(self.s.post(url, data=json.dumps(data), headers=headers - ).content.decode('utf8', 'replace')).get('ContactList') - if not chatroomList: - return ReturnValue({'BaseResponse': { - 'ErrMsg': 'No chatroom found', - 'Ret': -1001, }}) - - if detailedMember: - def get_detailed_member_info(encryChatroomId, memberList): - url = '%s/webwxbatchgetcontact?type=ex&r=%s' % ( - self.loginInfo['url'], int(time.time())) - headers = { - 'ContentType': 'application/json; charset=UTF-8', - 'User-Agent': config.USER_AGENT, } - data = { - 'BaseRequest': self.loginInfo['BaseRequest'], - 'Count': len(memberList), - 'List': [{ - 'UserName': member['UserName'], - 'EncryChatRoomId': encryChatroomId} - for member in memberList], } - return json.loads(self.s.post(url, data=json.dumps(data), headers=headers - ).content.decode('utf8', 'replace'))['ContactList'] - MAX_GET_NUMBER = 50 - for chatroom in chatroomList: - totalMemberList = [] - for i in range(int(len(chatroom['MemberList']) / MAX_GET_NUMBER + 1)): - memberList = chatroom['MemberList'][i * - MAX_GET_NUMBER: (i+1)*MAX_GET_NUMBER] - totalMemberList += get_detailed_member_info( - chatroom['EncryChatRoomId'], memberList) - chatroom['MemberList'] = totalMemberList - - update_local_chatrooms(self, chatroomList) - r = [self.storageClass.search_chatrooms(userName=c['UserName']) - for c in chatroomList] - return r if 1 < len(r) else r[0] - - -def update_friend(self, userName): - if not isinstance(userName, list): - userName = [userName] - url = '%s/webwxbatchgetcontact?type=ex&r=%s' % ( - self.loginInfo['url'], int(time.time())) - headers = { - 'ContentType': 'application/json; charset=UTF-8', - 'User-Agent': config.USER_AGENT} - data = { - 'BaseRequest': self.loginInfo['BaseRequest'], - 'Count': len(userName), - 'List': [{ 
- 'UserName': u, - 'EncryChatRoomId': '', } for u in userName], } - friendList = json.loads(self.s.post(url, data=json.dumps(data), headers=headers - ).content.decode('utf8', 'replace')).get('ContactList') - - update_local_friends(self, friendList) - r = [self.storageClass.search_friends(userName=f['UserName']) - for f in friendList] - return r if len(r) != 1 else r[0] - - -@contact_change -def update_local_chatrooms(core, l): - ''' - get a list of chatrooms for updating local chatrooms - return a list of given chatrooms with updated info - ''' - for chatroom in l: - # format new chatrooms - utils.emoji_formatter(chatroom, 'NickName') - for member in chatroom['MemberList']: - if 'NickName' in member: - utils.emoji_formatter(member, 'NickName') - if 'DisplayName' in member: - utils.emoji_formatter(member, 'DisplayName') - if 'RemarkName' in member: - utils.emoji_formatter(member, 'RemarkName') - # update it to old chatrooms - oldChatroom = utils.search_dict_list( - core.chatroomList, 'UserName', chatroom['UserName']) - if oldChatroom: - update_info_dict(oldChatroom, chatroom) - # - update other values - memberList = chatroom.get('MemberList', []) - oldMemberList = oldChatroom['MemberList'] - if memberList: - for member in memberList: - oldMember = utils.search_dict_list( - oldMemberList, 'UserName', member['UserName']) - if oldMember: - update_info_dict(oldMember, member) - else: - oldMemberList.append(member) - else: - core.chatroomList.append(chatroom) - oldChatroom = utils.search_dict_list( - core.chatroomList, 'UserName', chatroom['UserName']) - # delete useless members - if len(chatroom['MemberList']) != len(oldChatroom['MemberList']) and \ - chatroom['MemberList']: - existsUserNames = [member['UserName'] - for member in chatroom['MemberList']] - delList = [] - for i, member in enumerate(oldChatroom['MemberList']): - if member['UserName'] not in existsUserNames: - delList.append(i) - delList.sort(reverse=True) - for i in delList: - del oldChatroom['MemberList'][i] - # - update OwnerUin - if oldChatroom.get('ChatRoomOwner') and oldChatroom.get('MemberList'): - owner = utils.search_dict_list(oldChatroom['MemberList'], - 'UserName', oldChatroom['ChatRoomOwner']) - oldChatroom['OwnerUin'] = (owner or {}).get('Uin', 0) - # - update IsAdmin - if 'OwnerUin' in oldChatroom and oldChatroom['OwnerUin'] != 0: - oldChatroom['IsAdmin'] = \ - oldChatroom['OwnerUin'] == int(core.loginInfo['wxuin']) - else: - oldChatroom['IsAdmin'] = None - # - update Self - newSelf = utils.search_dict_list(oldChatroom['MemberList'], - 'UserName', core.storageClass.userName) - oldChatroom['Self'] = newSelf or copy.deepcopy(core.loginInfo['User']) - return { - 'Type': 'System', - 'Text': [chatroom['UserName'] for chatroom in l], - 'SystemInfo': 'chatrooms', - 'FromUserName': core.storageClass.userName, - 'ToUserName': core.storageClass.userName, } - - -@contact_change -def update_local_friends(core, l): - ''' - get a list of friends or mps for updating local contact - ''' - fullList = core.memberList + core.mpList - for friend in l: - if 'NickName' in friend: - utils.emoji_formatter(friend, 'NickName') - if 'DisplayName' in friend: - utils.emoji_formatter(friend, 'DisplayName') - if 'RemarkName' in friend: - utils.emoji_formatter(friend, 'RemarkName') - oldInfoDict = utils.search_dict_list( - fullList, 'UserName', friend['UserName']) - if oldInfoDict is None: - oldInfoDict = copy.deepcopy(friend) - if oldInfoDict['VerifyFlag'] & 8 == 0: - core.memberList.append(oldInfoDict) - else: - core.mpList.append(oldInfoDict) - 
else: - update_info_dict(oldInfoDict, friend) - - -@contact_change -def update_local_uin(core, msg): - ''' - content contains uins and StatusNotifyUserName contains username - they are in same order, so what I do is to pair them together - - I caught an exception in this method while not knowing why - but don't worry, it won't cause any problem - ''' - uins = re.search('([^<]*?)<', msg['Content']) - usernameChangedList = [] - r = { - 'Type': 'System', - 'Text': usernameChangedList, - 'SystemInfo': 'uins', } - if uins: - uins = uins.group(1).split(',') - usernames = msg['StatusNotifyUserName'].split(',') - if 0 < len(uins) == len(usernames): - for uin, username in zip(uins, usernames): - if not '@' in username: - continue - fullContact = core.memberList + core.chatroomList + core.mpList - userDicts = utils.search_dict_list(fullContact, - 'UserName', username) - if userDicts: - if userDicts.get('Uin', 0) == 0: - userDicts['Uin'] = uin - usernameChangedList.append(username) - logger.debug('Uin fetched: %s, %s' % (username, uin)) - else: - if userDicts['Uin'] != uin: - logger.debug('Uin changed: %s, %s' % ( - userDicts['Uin'], uin)) - else: - if '@@' in username: - core.storageClass.updateLock.release() - update_chatroom(core, username) - core.storageClass.updateLock.acquire() - newChatroomDict = utils.search_dict_list( - core.chatroomList, 'UserName', username) - if newChatroomDict is None: - newChatroomDict = utils.struct_friend_info({ - 'UserName': username, - 'Uin': uin, - 'Self': copy.deepcopy(core.loginInfo['User'])}) - core.chatroomList.append(newChatroomDict) - else: - newChatroomDict['Uin'] = uin - elif '@' in username: - core.storageClass.updateLock.release() - update_friend(core, username) - core.storageClass.updateLock.acquire() - newFriendDict = utils.search_dict_list( - core.memberList, 'UserName', username) - if newFriendDict is None: - newFriendDict = utils.struct_friend_info({ - 'UserName': username, - 'Uin': uin, }) - core.memberList.append(newFriendDict) - else: - newFriendDict['Uin'] = uin - usernameChangedList.append(username) - logger.debug('Uin fetched: %s, %s' % (username, uin)) - else: - logger.debug('Wrong length of uins & usernames: %s, %s' % ( - len(uins), len(usernames))) - else: - logger.debug('No uins in 51 message') - logger.debug(msg['Content']) - return r - - -def get_contact(self, update=False): - if not update: - return utils.contact_deep_copy(self, self.chatroomList) - - def _get_contact(seq=0): - url = '%s/webwxgetcontact?r=%s&seq=%s&skey=%s' % (self.loginInfo['url'], - int(time.time()), seq, self.loginInfo['skey']) - headers = { - 'ContentType': 'application/json; charset=UTF-8', - 'User-Agent': config.USER_AGENT, } - try: - r = self.s.get(url, headers=headers) - except: - logger.info( - 'Failed to fetch contact, that may because of the amount of your chatrooms') - for chatroom in self.get_chatrooms(): - self.update_chatroom(chatroom['UserName'], detailedMember=True) - return 0, [] - j = json.loads(r.content.decode('utf-8', 'replace')) - return j.get('Seq', 0), j.get('MemberList') - seq, memberList = 0, [] - while 1: - seq, batchMemberList = _get_contact(seq) - memberList.extend(batchMemberList) - if seq == 0: - break - chatroomList, otherList = [], [] - for m in memberList: - if m['Sex'] != 0: - otherList.append(m) - elif '@@' in m['UserName']: - chatroomList.append(m) - elif '@' in m['UserName']: - # mp will be dealt in update_local_friends as well - otherList.append(m) - if chatroomList: - update_local_chatrooms(self, chatroomList) - if otherList: - 
update_local_friends(self, otherList) - return utils.contact_deep_copy(self, chatroomList) - - -def get_friends(self, update=False): - if update: - self.get_contact(update=True) - return utils.contact_deep_copy(self, self.memberList) - - -def get_chatrooms(self, update=False, contactOnly=False): - if contactOnly: - return self.get_contact(update=True) - else: - if update: - self.get_contact(True) - return utils.contact_deep_copy(self, self.chatroomList) - - -def get_mps(self, update=False): - if update: - self.get_contact(update=True) - return utils.contact_deep_copy(self, self.mpList) - - -def set_alias(self, userName, alias): - oldFriendInfo = utils.search_dict_list( - self.memberList, 'UserName', userName) - if oldFriendInfo is None: - return ReturnValue({'BaseResponse': { - 'Ret': -1001, }}) - url = '%s/webwxoplog?lang=%s&pass_ticket=%s' % ( - self.loginInfo['url'], 'zh_CN', self.loginInfo['pass_ticket']) - data = { - 'UserName': userName, - 'CmdId': 2, - 'RemarkName': alias, - 'BaseRequest': self.loginInfo['BaseRequest'], } - headers = {'User-Agent': config.USER_AGENT} - r = self.s.post(url, json.dumps(data, ensure_ascii=False).encode('utf8'), - headers=headers) - r = ReturnValue(rawResponse=r) - if r: - oldFriendInfo['RemarkName'] = alias - return r - - -def set_pinned(self, userName, isPinned=True): - url = '%s/webwxoplog?pass_ticket=%s' % ( - self.loginInfo['url'], self.loginInfo['pass_ticket']) - data = { - 'UserName': userName, - 'CmdId': 3, - 'OP': int(isPinned), - 'BaseRequest': self.loginInfo['BaseRequest'], } - headers = {'User-Agent': config.USER_AGENT} - r = self.s.post(url, json=data, headers=headers) - return ReturnValue(rawResponse=r) - - -def accept_friend(self, userName, v4='', autoUpdate=True): - url = f"{self.loginInfo['url']}/webwxverifyuser?r={int(time.time())}&pass_ticket={self.loginInfo['pass_ticket']}" - data = { - 'BaseRequest': self.loginInfo['BaseRequest'], - 'Opcode': 3, # 3 - 'VerifyUserListSize': 1, - 'VerifyUserList': [{ - 'Value': userName, - 'VerifyUserTicket': v4, }], - 'VerifyContent': '', - 'SceneListCount': 1, - 'SceneList': [33], - 'skey': self.loginInfo['skey'], } - headers = { - 'ContentType': 'application/json; charset=UTF-8', - 'User-Agent': config.USER_AGENT} - r = self.s.post(url, headers=headers, - data=json.dumps(data, ensure_ascii=False).encode('utf8', 'replace')) - if autoUpdate: - self.update_friend(userName) - return ReturnValue(rawResponse=r) - - -def get_head_img(self, userName=None, chatroomUserName=None, picDir=None): - ''' get head image - * if you want to get chatroom header: only set chatroomUserName - * if you want to get friend header: only set userName - * if you want to get chatroom member header: set both - ''' - params = { - 'userName': userName or chatroomUserName or self.storageClass.userName, - 'skey': self.loginInfo['skey'], - 'type': 'big', } - url = '%s/webwxgeticon' % self.loginInfo['url'] - if chatroomUserName is None: - infoDict = self.storageClass.search_friends(userName=userName) - if infoDict is None: - return ReturnValue({'BaseResponse': { - 'ErrMsg': 'No friend found', - 'Ret': -1001, }}) - else: - if userName is None: - url = '%s/webwxgetheadimg' % self.loginInfo['url'] - else: - chatroom = self.storageClass.search_chatrooms( - userName=chatroomUserName) - if chatroomUserName is None: - return ReturnValue({'BaseResponse': { - 'ErrMsg': 'No chatroom found', - 'Ret': -1001, }}) - if 'EncryChatRoomId' in chatroom: - params['chatroomid'] = chatroom['EncryChatRoomId'] - params['chatroomid'] = params.get( - 
'chatroomid') or chatroom['UserName'] - headers = {'User-Agent': config.USER_AGENT} - r = self.s.get(url, params=params, stream=True, headers=headers) - tempStorage = io.BytesIO() - for block in r.iter_content(1024): - tempStorage.write(block) - if picDir is None: - return tempStorage.getvalue() - with open(picDir, 'wb') as f: - f.write(tempStorage.getvalue()) - tempStorage.seek(0) - return ReturnValue({'BaseResponse': { - 'ErrMsg': 'Successfully downloaded', - 'Ret': 0, }, - 'PostFix': utils.get_image_postfix(tempStorage.read(20)), }) - - -def create_chatroom(self, memberList, topic=''): - url = '%s/webwxcreatechatroom?pass_ticket=%s&r=%s' % ( - self.loginInfo['url'], self.loginInfo['pass_ticket'], int(time.time())) - data = { - 'BaseRequest': self.loginInfo['BaseRequest'], - 'MemberCount': len(memberList.split(',')), - 'MemberList': [{'UserName': member} for member in memberList.split(',')], - 'Topic': topic, } - headers = { - 'content-type': 'application/json; charset=UTF-8', - 'User-Agent': config.USER_AGENT} - r = self.s.post(url, headers=headers, - data=json.dumps(data, ensure_ascii=False).encode('utf8', 'ignore')) - return ReturnValue(rawResponse=r) - - -def set_chatroom_name(self, chatroomUserName, name): - url = '%s/webwxupdatechatroom?fun=modtopic&pass_ticket=%s' % ( - self.loginInfo['url'], self.loginInfo['pass_ticket']) - data = { - 'BaseRequest': self.loginInfo['BaseRequest'], - 'ChatRoomName': chatroomUserName, - 'NewTopic': name, } - headers = { - 'content-type': 'application/json; charset=UTF-8', - 'User-Agent': config.USER_AGENT} - r = self.s.post(url, headers=headers, - data=json.dumps(data, ensure_ascii=False).encode('utf8', 'ignore')) - return ReturnValue(rawResponse=r) - - -def delete_member_from_chatroom(self, chatroomUserName, memberList): - url = '%s/webwxupdatechatroom?fun=delmember&pass_ticket=%s' % ( - self.loginInfo['url'], self.loginInfo['pass_ticket']) - data = { - 'BaseRequest': self.loginInfo['BaseRequest'], - 'ChatRoomName': chatroomUserName, - 'DelMemberList': ','.join([member['UserName'] for member in memberList]), } - headers = { - 'content-type': 'application/json; charset=UTF-8', - 'User-Agent': config.USER_AGENT} - r = self.s.post(url, data=json.dumps(data), headers=headers) - return ReturnValue(rawResponse=r) - - -def add_member_into_chatroom(self, chatroomUserName, memberList, - useInvitation=False): - ''' add or invite member into chatroom - * there are two ways to get members into chatroom: invite or directly add - * but for chatrooms with more than 40 users, you can only use invite - * but don't worry we will auto-force userInvitation for you when necessary - ''' - if not useInvitation: - chatroom = self.storageClass.search_chatrooms( - userName=chatroomUserName) - if not chatroom: - chatroom = self.update_chatroom(chatroomUserName) - if len(chatroom['MemberList']) > self.loginInfo['InviteStartCount']: - useInvitation = True - if useInvitation: - fun, memberKeyName = 'invitemember', 'InviteMemberList' - else: - fun, memberKeyName = 'addmember', 'AddMemberList' - url = '%s/webwxupdatechatroom?fun=%s&pass_ticket=%s' % ( - self.loginInfo['url'], fun, self.loginInfo['pass_ticket']) - params = { - 'BaseRequest': self.loginInfo['BaseRequest'], - 'ChatRoomName': chatroomUserName, - memberKeyName: memberList, } - headers = { - 'content-type': 'application/json; charset=UTF-8', - 'User-Agent': config.USER_AGENT} - r = self.s.post(url, data=json.dumps(params), headers=headers) - return ReturnValue(rawResponse=r) diff --git 
a/spaces/openai/openai-detector/detection.md b/spaces/openai/openai-detector/detection.md deleted file mode 100644 index c10ca0a64af844027ae1275a117c90d478db6620..0000000000000000000000000000000000000000 --- a/spaces/openai/openai-detector/detection.md +++ /dev/null @@ -1,50 +0,0 @@ -We encourage you to try improving our baselines. Please let us know if you have questions or find any interesting results! - -## Simple baseline - -We've provided a starter baseline which trains a logistic regression detector on TF-IDF unigram and bigram features, in [`baseline.py`](./baseline.py). - -### Initial Analysis - -The baseline achieves the following accuracies: - -| Model | Temperature 1 | Top-K 40 | -| ----- | ------ | ------ | -| 117M | 88.29% | 96.79% | -| 345M | 88.94% | 95.22% | -| 762M | 77.16% | 94.43% | -| 1542M | 74.31% | 92.69% | - - - -Unsurprisingly, shorter documents are harder to detect and performance improves gradually with length. Accuracy of detection of short documents of 500 characters (a long paragraph) is about 15% lower. - - - -Truncated sampling, which is commonly used for high-quality generations from the GPT-2 model family, results in a shift in the part of speech distribution of the generated text compared to real text. A clear example is the underuse of proper nouns and overuse of pronouns which are more generic. This shift contributes to the 8% to 18% higher detection rate of Top-K samples compared to random samples across models. - -### Finetuning - -When run on samples from the finetuned GPT-2 full model, detection rate falls from 92.7% to 70.2% for Top-K 40 generations. Note that about half of this drop is accounted for by length, since Amazon reviews are shorter than WebText documents. - -## "Zero-shot" baseline - -We attempt a second baseline which uses a language model to evaluate total log probability, and thresholds based on this probability. This baseline underperforms relative to the simple baselinie. However, we are interested in further variants, such as binning per-token log probabilities. - -### Initial analysis - -Here, we show results of log-prob based detection for both standard (t=1) and Top-K 40 generations. - - - -The main result is that GPT-2 detects itself 81.8% of the time in the easy case of Top-K 40 generations. This is pretty constant across model sizes. All underperform relative to the simple baseline. - -For random samples, results are unsurprising. Bigger models are better able to realize that generated text is still kind of weird and "random". Detection rates also go down as generators get better. - -For Top-K 40, results are perhaps more surprising. Using a bigger model as a discriminator does not really improve detection rates across the board (the smallest GPT-2 model does as well at detecting full GPT-2 as full GPT-2), and a bigger model does not "detect down well" - that is, full GPT-2 is actually kind of bad at detecting an adversary using small GPT-2. - -An important difference is that while in the random samples case, generations are less likely than real data, in the Top-K 40 case, they are more likely. 
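To make the zero-shot approach discussed in this section concrete, here is a minimal sketch of a log-probability detector. It is an illustration only, not the code shipped with this repo: the model name, the use of mean per-token log probability, and the threshold value are all assumptions, and (as noted above) the direction of the comparison depends on whether the generator used truncated sampling.

```python
# Hedged sketch of a log-probability ("zero-shot") detector: score a text by its
# mean per-token log probability under GPT-2 and compare it to a threshold.
# Model choice, scoring granularity, and threshold are assumptions, not the
# repo's actual implementation.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def mean_log_prob(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)  # out.loss is the mean negative log-likelihood per token
    return -out.loss.item()

def flag_as_generated(text: str, threshold: float = -3.0) -> bool:
    # For Top-K samples, generated text tends to be MORE likely than human text,
    # so this rule flags high-probability text; for untruncated sampling the
    # comparison would flip, as the analysis above points out.
    return mean_log_prob(text) > threshold
```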
- -### Finetuning - -When detecting samples from our finetuned GPT-2 full model using GPT-2 full, we observe a 63.2% detection rate on random samples (drop of 13%) and 76.2% detection rate with Top-K 40 samples (drop of 5.6%) diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/models/dual_transformer_2d.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/models/dual_transformer_2d.py deleted file mode 100644 index 3db7e73ca6afc5fa7c67c1902d79e67c1aa728bc..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/models/dual_transformer_2d.py +++ /dev/null @@ -1,151 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -from typing import Optional - -from torch import nn - -from .transformer_2d import Transformer2DModel, Transformer2DModelOutput - - -class DualTransformer2DModel(nn.Module): - """ - Dual transformer wrapper that combines two `Transformer2DModel`s for mixed inference. - - Parameters: - num_attention_heads (`int`, *optional*, defaults to 16): The number of heads to use for multi-head attention. - attention_head_dim (`int`, *optional*, defaults to 88): The number of channels in each head. - in_channels (`int`, *optional*): - Pass if the input is continuous. The number of channels in the input and output. - num_layers (`int`, *optional*, defaults to 1): The number of layers of Transformer blocks to use. - dropout (`float`, *optional*, defaults to 0.1): The dropout probability to use. - cross_attention_dim (`int`, *optional*): The number of encoder_hidden_states dimensions to use. - sample_size (`int`, *optional*): Pass if the input is discrete. The width of the latent images. - Note that this is fixed at training time as it is used for learning a number of position embeddings. See - `ImagePositionalEmbeddings`. - num_vector_embeds (`int`, *optional*): - Pass if the input is discrete. The number of classes of the vector embeddings of the latent pixels. - Includes the class for the masked latent pixel. - activation_fn (`str`, *optional*, defaults to `"geglu"`): Activation function to be used in feed-forward. - num_embeds_ada_norm ( `int`, *optional*): Pass if at least one of the norm_layers is `AdaLayerNorm`. - The number of diffusion steps used during training. Note that this is fixed at training time as it is used - to learn a number of embeddings that are added to the hidden states. During inference, you can denoise for - up to but not more than steps than `num_embeds_ada_norm`. - attention_bias (`bool`, *optional*): - Configure if the TransformerBlocks' attention should contain a bias parameter. 
- """ - - def __init__( - self, - num_attention_heads: int = 16, - attention_head_dim: int = 88, - in_channels: Optional[int] = None, - num_layers: int = 1, - dropout: float = 0.0, - norm_num_groups: int = 32, - cross_attention_dim: Optional[int] = None, - attention_bias: bool = False, - sample_size: Optional[int] = None, - num_vector_embeds: Optional[int] = None, - activation_fn: str = "geglu", - num_embeds_ada_norm: Optional[int] = None, - ): - super().__init__() - self.transformers = nn.ModuleList( - [ - Transformer2DModel( - num_attention_heads=num_attention_heads, - attention_head_dim=attention_head_dim, - in_channels=in_channels, - num_layers=num_layers, - dropout=dropout, - norm_num_groups=norm_num_groups, - cross_attention_dim=cross_attention_dim, - attention_bias=attention_bias, - sample_size=sample_size, - num_vector_embeds=num_vector_embeds, - activation_fn=activation_fn, - num_embeds_ada_norm=num_embeds_ada_norm, - ) - for _ in range(2) - ] - ) - - # Variables that can be set by a pipeline: - - # The ratio of transformer1 to transformer2's output states to be combined during inference - self.mix_ratio = 0.5 - - # The shape of `encoder_hidden_states` is expected to be - # `(batch_size, condition_lengths[0]+condition_lengths[1], num_features)` - self.condition_lengths = [77, 257] - - # Which transformer to use to encode which condition. - # E.g. `(1, 0)` means that we'll use `transformers[1](conditions[0])` and `transformers[0](conditions[1])` - self.transformer_index_for_condition = [1, 0] - - def forward( - self, - hidden_states, - encoder_hidden_states, - timestep=None, - attention_mask=None, - cross_attention_kwargs=None, - return_dict: bool = True, - ): - """ - Args: - hidden_states ( When discrete, `torch.LongTensor` of shape `(batch size, num latent pixels)`. - When continuous, `torch.FloatTensor` of shape `(batch size, channel, height, width)`): Input - hidden_states - encoder_hidden_states ( `torch.LongTensor` of shape `(batch size, encoder_hidden_states dim)`, *optional*): - Conditional embeddings for cross attention layer. If not given, cross-attention defaults to - self-attention. - timestep ( `torch.long`, *optional*): - Optional timestep to be applied as an embedding in AdaLayerNorm's. Used to indicate denoising step. - attention_mask (`torch.FloatTensor`, *optional*): - Optional attention mask to be applied in Attention - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`models.unet_2d_condition.UNet2DConditionOutput`] instead of a plain tuple. - - Returns: - [`~models.transformer_2d.Transformer2DModelOutput`] or `tuple`: - [`~models.transformer_2d.Transformer2DModelOutput`] if `return_dict` is True, otherwise a `tuple`. When - returning a tuple, the first element is the sample tensor. 
- """ - input_states = hidden_states - - encoded_states = [] - tokens_start = 0 - # attention_mask is not used yet - for i in range(2): - # for each of the two transformers, pass the corresponding condition tokens - condition_state = encoder_hidden_states[:, tokens_start : tokens_start + self.condition_lengths[i]] - transformer_index = self.transformer_index_for_condition[i] - encoded_state = self.transformers[transformer_index]( - input_states, - encoder_hidden_states=condition_state, - timestep=timestep, - cross_attention_kwargs=cross_attention_kwargs, - return_dict=False, - )[0] - encoded_states.append(encoded_state - input_states) - tokens_start += self.condition_lengths[i] - - output_states = encoded_states[0] * self.mix_ratio + encoded_states[1] * (1 - self.mix_ratio) - output_states = output_states + input_states - - if not return_dict: - return (output_states,) - - return Transformer2DModelOutput(sample=output_states) diff --git a/spaces/patrickvonplaten/asv/app.py b/spaces/patrickvonplaten/asv/app.py deleted file mode 100644 index e9ae64063507cd790f885481d0b62afa8a13f076..0000000000000000000000000000000000000000 --- a/spaces/patrickvonplaten/asv/app.py +++ /dev/null @@ -1,125 +0,0 @@ -import torch -import gradio as gr -from torchaudio.sox_effects import apply_effects_file -from transformers import AutoFeatureExtractor, AutoModelForAudioXVector - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - -STYLE = """ - -""" -OUTPUT_OK = STYLE + """ -
-                 The speakers are
-                 {:.1f}%
-                 similar
-                 Welcome, human!
-                 (You must get at least 85% to be considered the same person)
-"""
-OUTPUT_FAIL = STYLE + """
-                 The speakers are
-                 {:.1f}%
-                 similar
-                 You shall not pass!
-                 (You must get at least 85% to be considered the same person)
                -""" - -EFFECTS = [ - ['remix', '-'], - ["channels", "1"], - ["rate", "16000"], - ["gain", "-1.0"], - ["silence", "1", "0.1", "0.1%", "-1", "0.1", "0.1%"], - ['trim', '0', '10'], -] - -THRESHOLD = 0.85 - -model_name = "microsoft/unispeech-sat-base-plus-sv" -feature_extractor = AutoFeatureExtractor.from_pretrained(model_name) -model = AutoModelForAudioXVector.from_pretrained(model_name).to(device) -cosine_sim = torch.nn.CosineSimilarity(dim=-1) - - -def similarity_fn(path1, path2): - if not (path1 and path2): - return 'ERROR: Please record audio for *both* speakers!' - - wav1, _ = apply_effects_file(path1, EFFECTS) - wav2, _ = apply_effects_file(path2, EFFECTS) - print(wav1.shape, wav2.shape) - - input1 = feature_extractor(wav1.squeeze(0), return_tensors="pt", sampling_rate=16000).input_values.to(device) - input2 = feature_extractor(wav2.squeeze(0), return_tensors="pt", sampling_rate=16000).input_values.to(device) - - with torch.no_grad(): - emb1 = model(input1).embeddings - emb2 = model(input2).embeddings - emb1 = torch.nn.functional.normalize(emb1, dim=-1).cpu() - emb2 = torch.nn.functional.normalize(emb2, dim=-1).cpu() - similarity = cosine_sim(emb1, emb2).numpy()[0] - - if similarity >= THRESHOLD: - output = OUTPUT_OK.format(similarity * 100) - else: - output = OUTPUT_FAIL.format(similarity * 100) - - return output - - -inputs = [ - gr.inputs.Audio(source="microphone", type="filepath", optional=True, label="Speaker #1"), - gr.inputs.Audio(source="microphone", type="filepath", optional=True, label="Speaker #2"), -] -output = gr.outputs.HTML(label="") - - -description = ( - "This demo will compare two speech samples and determine if they are from the same speaker. " - "Try it with your own voice!" -) -article = ( - "

                " - "🎙️ Learn more about UniSpeech-SAT | " - "📚 UniSpeech-SAT paper | " - "📚 X-Vector paper" - "

                " -) - -interface = gr.Interface( - fn=similarity_fn, - inputs=inputs, - outputs=output, - title="Voice Authentication with UniSpeech-SAT + X-Vectors", - description=description, - article=article, - layout="horizontal", - theme="huggingface", - allow_flagging=False, - live=False, - examples=[ - ["samples/cate_blanch.mp3", "samples/cate_blanch_2.mp3"], - ["samples/cate_blanch.mp3", "samples/cate_blanch_3.mp3"], - ["samples/cate_blanch_2.mp3", "samples/cate_blanch_3.mp3"], - ["samples/heath_ledger.mp3", "samples/heath_ledger_2.mp3"], - ["samples/heath_ledger.mp3", "samples/heath_ledger_3.mp3"], - ["samples/heath_ledger_2.mp3", "samples/heath_ledger_3.mp3"], - ["samples/russel_crowe.mp3", "samples/russel_crowe_2.mp3"], - ["samples/cate_blanch.mp3", "samples/kirsten_dunst.wav"], - ["samples/russel_crowe.mp3", "samples/kirsten_dunst.wav"], - ["samples/russel_crowe_2.mp3", "samples/kirsten_dunst.wav"], - ["samples/leonardo_dicaprio.mp3", "samples/denzel_washington.mp3"], - ["samples/heath_ledger.mp3", "samples/denzel_washington.mp3"], - ["samples/heath_ledger_2.mp3", "samples/denzel_washington.mp3"], - ["samples/leonardo_dicaprio.mp3", "samples/russel_crowe.mp3"], - ["samples/leonardo_dicaprio.mp3", "samples/russel_crowe_2.mp3"], - ["samples/naomi_watts.mp3", "samples/denzel_washington.mp3"], - ["samples/naomi_watts.mp3", "samples/leonardo_dicaprio.mp3"], - ["samples/naomi_watts.mp3", "samples/cate_blanch_2.mp3"], - ["samples/naomi_watts.mp3", "samples/kirsten_dunst.wav"], - ] -) -interface.launch(enable_queue=True) diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/screen.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/screen.py deleted file mode 100644 index 7f416e1e799abfbf62382456020cc8e59e5cf01f..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/screen.py +++ /dev/null @@ -1,54 +0,0 @@ -from typing import Optional, TYPE_CHECKING - -from .segment import Segment -from .style import StyleType -from ._loop import loop_last - - -if TYPE_CHECKING: - from .console import ( - Console, - ConsoleOptions, - RenderResult, - RenderableType, - Group, - ) - - -class Screen: - """A renderable that fills the terminal screen and crops excess. - - Args: - renderable (RenderableType): Child renderable. - style (StyleType, optional): Optional background style. Defaults to None. 
- """ - - renderable: "RenderableType" - - def __init__( - self, - *renderables: "RenderableType", - style: Optional[StyleType] = None, - application_mode: bool = False, - ) -> None: - from pip._vendor.rich.console import Group - - self.renderable = Group(*renderables) - self.style = style - self.application_mode = application_mode - - def __rich_console__( - self, console: "Console", options: "ConsoleOptions" - ) -> "RenderResult": - width, height = options.size - style = console.get_style(self.style) if self.style else None - render_options = options.update(width=width, height=height) - lines = console.render_lines( - self.renderable or "", render_options, style=style, pad=True - ) - lines = Segment.set_shape(lines, width, height, style=style) - new_line = Segment("\n\r") if self.application_mode else Segment.line() - for last, line in loop_last(lines): - yield from line - if not last: - yield new_line diff --git a/spaces/politweet-sh/politweet/tests/classifier_test.py b/spaces/politweet-sh/politweet/tests/classifier_test.py deleted file mode 100644 index 05ee4bec1a5e117851c456809c2831776aea24d4..0000000000000000000000000000000000000000 --- a/spaces/politweet-sh/politweet/tests/classifier_test.py +++ /dev/null @@ -1,15 +0,0 @@ -import unittest -import pandas as pd -from datetime import datetime -import sys -from pathlib import Path -sys.path.insert(0, str(Path(__file__).parents[1]) + "/textclassifier") -from TextClassifier import TextClassifier - -class MyTestCase(unittest.TestCase): - def test_something(self): - self.assertEqual(True, False) # add assertion here - - -if __name__ == '__main__': - unittest.main() diff --git a/spaces/portal/Control-Nets/style.css b/spaces/portal/Control-Nets/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/portal/Control-Nets/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/misc/symfont.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/misc/symfont.py deleted file mode 100644 index 0bd69a386ec9f01c8951f0dfc8bc8c261718cf1f..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/misc/symfont.py +++ /dev/null @@ -1,251 +0,0 @@ -from fontTools.pens.basePen import BasePen -from functools import partial -from itertools import count -import sympy as sp -import sys - -n = 3 # Max Bezier degree; 3 for cubic, 2 for quadratic - -t, x, y = sp.symbols("t x y", real=True) -c = sp.symbols("c", real=False) # Complex representation instead of x/y - -X = tuple(sp.symbols("x:%d" % (n + 1), real=True)) -Y = tuple(sp.symbols("y:%d" % (n + 1), real=True)) -P = tuple(zip(*(sp.symbols("p:%d[%s]" % (n + 1, w), real=True) for w in "01"))) -C = tuple(sp.symbols("c:%d" % (n + 1), real=False)) - -# Cubic Bernstein basis functions -BinomialCoefficient = [(1, 0)] -for i in range(1, n + 1): - last = BinomialCoefficient[-1] - this = tuple(last[j - 1] + last[j] for j in range(len(last))) + (0,) - 
BinomialCoefficient.append(this) -BinomialCoefficient = tuple(tuple(item[:-1]) for item in BinomialCoefficient) -del last, this - -BernsteinPolynomial = tuple( - tuple(c * t**i * (1 - t) ** (n - i) for i, c in enumerate(coeffs)) - for n, coeffs in enumerate(BinomialCoefficient) -) - -BezierCurve = tuple( - tuple( - sum(P[i][j] * bernstein for i, bernstein in enumerate(bernsteins)) - for j in range(2) - ) - for n, bernsteins in enumerate(BernsteinPolynomial) -) -BezierCurveC = tuple( - sum(C[i] * bernstein for i, bernstein in enumerate(bernsteins)) - for n, bernsteins in enumerate(BernsteinPolynomial) -) - - -def green(f, curveXY): - f = -sp.integrate(sp.sympify(f), y) - f = f.subs({x: curveXY[0], y: curveXY[1]}) - f = sp.integrate(f * sp.diff(curveXY[0], t), (t, 0, 1)) - return f - - -class _BezierFuncsLazy(dict): - def __init__(self, symfunc): - self._symfunc = symfunc - self._bezfuncs = {} - - def __missing__(self, i): - args = ["p%d" % d for d in range(i + 1)] - f = green(self._symfunc, BezierCurve[i]) - f = sp.gcd_terms(f.collect(sum(P, ()))) # Optimize - return sp.lambdify(args, f) - - -class GreenPen(BasePen): - - _BezierFuncs = {} - - @classmethod - def _getGreenBezierFuncs(celf, func): - funcstr = str(func) - if not funcstr in celf._BezierFuncs: - celf._BezierFuncs[funcstr] = _BezierFuncsLazy(func) - return celf._BezierFuncs[funcstr] - - def __init__(self, func, glyphset=None): - BasePen.__init__(self, glyphset) - self._funcs = self._getGreenBezierFuncs(func) - self.value = 0 - - def _moveTo(self, p0): - self.__startPoint = p0 - - def _closePath(self): - p0 = self._getCurrentPoint() - if p0 != self.__startPoint: - self._lineTo(self.__startPoint) - - def _endPath(self): - p0 = self._getCurrentPoint() - if p0 != self.__startPoint: - # Green theorem is not defined on open contours. - raise NotImplementedError - - def _lineTo(self, p1): - p0 = self._getCurrentPoint() - self.value += self._funcs[1](p0, p1) - - def _qCurveToOne(self, p1, p2): - p0 = self._getCurrentPoint() - self.value += self._funcs[2](p0, p1, p2) - - def _curveToOne(self, p1, p2, p3): - p0 = self._getCurrentPoint() - self.value += self._funcs[3](p0, p1, p2, p3) - - -# Sample pens. -# Do not use this in real code. -# Use fontTools.pens.momentsPen.MomentsPen instead. -AreaPen = partial(GreenPen, func=1) -MomentXPen = partial(GreenPen, func=x) -MomentYPen = partial(GreenPen, func=y) -MomentXXPen = partial(GreenPen, func=x * x) -MomentYYPen = partial(GreenPen, func=y * y) -MomentXYPen = partial(GreenPen, func=x * y) - - -def printGreenPen(penName, funcs, file=sys.stdout, docstring=None): - - if docstring is not None: - print('"""%s"""' % docstring) - - print( - """from fontTools.pens.basePen import BasePen, OpenContourError -try: - import cython - - COMPILED = cython.compiled -except (AttributeError, ImportError): - # if cython not installed, use mock module with no-op decorators and types - from fontTools.misc import cython - - COMPILED = False - - -__all__ = ["%s"] - -class %s(BasePen): - - def __init__(self, glyphset=None): - BasePen.__init__(self, glyphset) -""" - % (penName, penName), - file=file, - ) - for name, f in funcs: - print(" self.%s = 0" % name, file=file) - print( - """ - def _moveTo(self, p0): - self.__startPoint = p0 - - def _closePath(self): - p0 = self._getCurrentPoint() - if p0 != self.__startPoint: - self._lineTo(self.__startPoint) - - def _endPath(self): - p0 = self._getCurrentPoint() - if p0 != self.__startPoint: - # Green theorem is not defined on open contours. 
- raise OpenContourError( - "Green theorem is not defined on open contours." - ) -""", - end="", - file=file, - ) - - for n in (1, 2, 3): - - subs = {P[i][j]: [X, Y][j][i] for i in range(n + 1) for j in range(2)} - greens = [green(f, BezierCurve[n]) for name, f in funcs] - greens = [sp.gcd_terms(f.collect(sum(P, ()))) for f in greens] # Optimize - greens = [f.subs(subs) for f in greens] # Convert to p to x/y - defs, exprs = sp.cse( - greens, - optimizations="basic", - symbols=(sp.Symbol("r%d" % i) for i in count()), - ) - - print() - for name, value in defs: - print(" @cython.locals(%s=cython.double)" % name, file=file) - if n == 1: - print( - """\ - @cython.locals(x0=cython.double, y0=cython.double) - @cython.locals(x1=cython.double, y1=cython.double) - def _lineTo(self, p1): - x0,y0 = self._getCurrentPoint() - x1,y1 = p1 -""", - file=file, - ) - elif n == 2: - print( - """\ - @cython.locals(x0=cython.double, y0=cython.double) - @cython.locals(x1=cython.double, y1=cython.double) - @cython.locals(x2=cython.double, y2=cython.double) - def _qCurveToOne(self, p1, p2): - x0,y0 = self._getCurrentPoint() - x1,y1 = p1 - x2,y2 = p2 -""", - file=file, - ) - elif n == 3: - print( - """\ - @cython.locals(x0=cython.double, y0=cython.double) - @cython.locals(x1=cython.double, y1=cython.double) - @cython.locals(x2=cython.double, y2=cython.double) - @cython.locals(x3=cython.double, y3=cython.double) - def _curveToOne(self, p1, p2, p3): - x0,y0 = self._getCurrentPoint() - x1,y1 = p1 - x2,y2 = p2 - x3,y3 = p3 -""", - file=file, - ) - for name, value in defs: - print(" %s = %s" % (name, value), file=file) - - print(file=file) - for name, value in zip([f[0] for f in funcs], exprs): - print(" self.%s += %s" % (name, value), file=file) - - print( - """ -if __name__ == '__main__': - from fontTools.misc.symfont import x, y, printGreenPen - printGreenPen('%s', [""" - % penName, - file=file, - ) - for name, f in funcs: - print(" ('%s', %s)," % (name, str(f)), file=file) - print(" ])", file=file) - - -if __name__ == "__main__": - pen = AreaPen() - pen.moveTo((100, 100)) - pen.lineTo((100, 200)) - pen.lineTo((200, 200)) - pen.curveTo((200, 250), (300, 300), (250, 350)) - pen.lineTo((200, 100)) - pen.closePath() - print(pen.value) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/sphinxext/plot_directive.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/sphinxext/plot_directive.py deleted file mode 100644 index 65b25fb913a58481d84341810b45b5599eda3067..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/sphinxext/plot_directive.py +++ /dev/null @@ -1,933 +0,0 @@ -""" -A directive for including a Matplotlib plot in a Sphinx document -================================================================ - -This is a Sphinx extension providing a reStructuredText directive -``.. plot::`` for including a plot in a Sphinx document. - -In HTML output, ``.. plot::`` will include a .png file with a link -to a high-res .png and .pdf. In LaTeX output, it will include a .pdf. - -The plot content may be defined in one of three ways: - -1. **A path to a source file** as the argument to the directive:: - - .. plot:: path/to/plot.py - - When a path to a source file is given, the content of the - directive may optionally contain a caption for the plot:: - - .. plot:: path/to/plot.py - - The plot caption. 
- - Additionally, one may specify the name of a function to call (with - no arguments) immediately after importing the module:: - - .. plot:: path/to/plot.py plot_function1 - -2. Included as **inline content** to the directive:: - - .. plot:: - - import matplotlib.pyplot as plt - plt.plot([1, 2, 3], [4, 5, 6]) - plt.title("A plotting exammple") - -3. Using **doctest** syntax:: - - .. plot:: - - A plotting example: - >>> import matplotlib.pyplot as plt - >>> plt.plot([1, 2, 3], [4, 5, 6]) - -Options -------- - -The ``.. plot::`` directive supports the following options: - -``:format:`` : {'python', 'doctest'} - The format of the input. If unset, the format is auto-detected. - -``:include-source:`` : bool - Whether to display the source code. The default can be changed using - the ``plot_include_source`` variable in :file:`conf.py` (which itself - defaults to False). - -``:show-source-link:`` : bool - Whether to show a link to the source in HTML. The default can be - changed using the ``plot_html_show_source_link`` variable in - :file:`conf.py` (which itself defaults to True). - -``:context:`` : bool or str - If provided, the code will be run in the context of all previous plot - directives for which the ``:context:`` option was specified. This only - applies to inline code plot directives, not those run from files. If - the ``:context: reset`` option is specified, the context is reset - for this and future plots, and previous figures are closed prior to - running the code. ``:context: close-figs`` keeps the context but closes - previous figures before running the code. - -``:nofigs:`` : bool - If specified, the code block will be run, but no figures will be - inserted. This is usually useful with the ``:context:`` option. - -``:caption:`` : str - If specified, the option's argument will be used as a caption for the - figure. This overwrites the caption given in the content, when the plot - is generated from a file. - -Additionally, this directive supports all the options of the `image directive -`_, -except for ``:target:`` (since plot will add its own target). These include -``:alt:``, ``:height:``, ``:width:``, ``:scale:``, ``:align:`` and ``:class:``. - -Configuration options ---------------------- - -The plot directive has the following configuration options: - -plot_include_source - Default value for the include-source option (default: False). - -plot_html_show_source_link - Whether to show a link to the source in HTML (default: True). - -plot_pre_code - Code that should be executed before each plot. If None (the default), - it will default to a string containing:: - - import numpy as np - from matplotlib import pyplot as plt - -plot_basedir - Base directory, to which ``plot::`` file names are relative to. - If None or empty (the default), file names are relative to the - directory where the file containing the directive is. - -plot_formats - File formats to generate (default: ['png', 'hires.png', 'pdf']). - List of tuples or strings:: - - [(suffix, dpi), suffix, ...] - - that determine the file format and the DPI. For entries whose - DPI was omitted, sensible defaults are chosen. When passing from - the command line through sphinx_build the list should be passed as - suffix:dpi,suffix:dpi, ... - -plot_html_show_formats - Whether to show links to the files in HTML (default: True). - -plot_rcparams - A dictionary containing any non-standard rcParams that should - be applied before each plot (default: {}). 
- -plot_apply_rcparams - By default, rcParams are applied when ``:context:`` option is not used - in a plot directive. If set, this configuration option overrides this - behavior and applies rcParams before each plot. - -plot_working_directory - By default, the working directory will be changed to the directory of - the example, so the code can get at its data files, if any. Also its - path will be added to `sys.path` so it can import any helper modules - sitting beside it. This configuration option can be used to specify - a central directory (also added to `sys.path`) where data files and - helper modules for all code are located. - -plot_template - Provide a customized template for preparing restructured text. - -plot_srcset - Allow the srcset image option for responsive image resolutions. List of - strings with the multiplicative factors followed by an "x". - e.g. ["2.0x", "1.5x"]. "2.0x" will create a png with the default "png" - resolution from plot_formats, multiplied by 2. If plot_srcset is - specified, the plot directive uses the - :doc:`/api/sphinxext_figmpl_directive_api` (instead of the usual figure - directive) in the intermediary rst file that is generated. - The plot_srcset option is incompatible with *singlehtml* builds, and an - error will be raised. - -Notes on how it works ---------------------- - -The plot directive runs the code it is given, either in the source file or the -code under the directive. The figure created (if any) is saved in the sphinx -build directory under a subdirectory named ``plot_directive``. It then creates -an intermediate rst file that calls a ``.. figure:`` directive (or -``.. figmpl::`` directive if ``plot_srcset`` is being used) and has links to -the ``*.png`` files in the ``plot_directive`` directory. These translations can -be customized by changing the *plot_template*. See the source of -:doc:`/api/sphinxext_plot_directive_api` for the templates defined in *TEMPLATE* -and *TEMPLATE_SRCSET*. -""" - -import contextlib -import doctest -from io import StringIO -import itertools -import os -from os.path import relpath -from pathlib import Path -import re -import shutil -import sys -import textwrap -import traceback - -from docutils.parsers.rst import directives, Directive -from docutils.parsers.rst.directives.images import Image -import jinja2 # Sphinx dependency. - -from sphinx.errors import ExtensionError - -import matplotlib -from matplotlib.backend_bases import FigureManagerBase -import matplotlib.pyplot as plt -from matplotlib import _pylab_helpers, cbook - -matplotlib.use("agg") - -__version__ = 2 - - -# ----------------------------------------------------------------------------- -# Registration hook -# ----------------------------------------------------------------------------- - - -def _option_boolean(arg): - if not arg or not arg.strip(): - # no argument given, assume used as a flag - return True - elif arg.strip().lower() in ('no', '0', 'false'): - return False - elif arg.strip().lower() in ('yes', '1', 'true'): - return True - else: - raise ValueError(f'{arg!r} unknown boolean') - - -def _option_context(arg): - if arg in [None, 'reset', 'close-figs']: - return arg - raise ValueError("Argument should be None or 'reset' or 'close-figs'") - - -def _option_format(arg): - return directives.choice(arg, ('python', 'doctest')) - - -def mark_plot_labels(app, document): - """ - To make plots referenceable, we need to move the reference from the - "htmlonly" (or "latexonly") node to the actual figure node itself. 
- """ - for name, explicit in document.nametypes.items(): - if not explicit: - continue - labelid = document.nameids[name] - if labelid is None: - continue - node = document.ids[labelid] - if node.tagname in ('html_only', 'latex_only'): - for n in node: - if n.tagname == 'figure': - sectname = name - for c in n: - if c.tagname == 'caption': - sectname = c.astext() - break - - node['ids'].remove(labelid) - node['names'].remove(name) - n['ids'].append(labelid) - n['names'].append(name) - document.settings.env.labels[name] = \ - document.settings.env.docname, labelid, sectname - break - - -class PlotDirective(Directive): - """The ``.. plot::`` directive, as documented in the module's docstring.""" - - has_content = True - required_arguments = 0 - optional_arguments = 2 - final_argument_whitespace = False - option_spec = { - 'alt': directives.unchanged, - 'height': directives.length_or_unitless, - 'width': directives.length_or_percentage_or_unitless, - 'scale': directives.nonnegative_int, - 'align': Image.align, - 'class': directives.class_option, - 'include-source': _option_boolean, - 'show-source-link': _option_boolean, - 'format': _option_format, - 'context': _option_context, - 'nofigs': directives.flag, - 'caption': directives.unchanged, - } - - def run(self): - """Run the plot directive.""" - try: - return run(self.arguments, self.content, self.options, - self.state_machine, self.state, self.lineno) - except Exception as e: - raise self.error(str(e)) - - -def _copy_css_file(app, exc): - if exc is None and app.builder.format == 'html': - src = cbook._get_data_path('plot_directive/plot_directive.css') - dst = app.outdir / Path('_static') - dst.mkdir(exist_ok=True) - # Use copyfile because we do not want to copy src's permissions. - shutil.copyfile(src, dst / Path('plot_directive.css')) - - -def setup(app): - setup.app = app - setup.config = app.config - setup.confdir = app.confdir - app.add_directive('plot', PlotDirective) - app.add_config_value('plot_pre_code', None, True) - app.add_config_value('plot_include_source', False, True) - app.add_config_value('plot_html_show_source_link', True, True) - app.add_config_value('plot_formats', ['png', 'hires.png', 'pdf'], True) - app.add_config_value('plot_basedir', None, True) - app.add_config_value('plot_html_show_formats', True, True) - app.add_config_value('plot_rcparams', {}, True) - app.add_config_value('plot_apply_rcparams', False, True) - app.add_config_value('plot_working_directory', None, True) - app.add_config_value('plot_template', None, True) - app.add_config_value('plot_srcset', [], True) - app.connect('doctree-read', mark_plot_labels) - app.add_css_file('plot_directive.css') - app.connect('build-finished', _copy_css_file) - metadata = {'parallel_read_safe': True, 'parallel_write_safe': True, - 'version': matplotlib.__version__} - return metadata - - -# ----------------------------------------------------------------------------- -# Doctest handling -# ----------------------------------------------------------------------------- - - -def contains_doctest(text): - try: - # check if it's valid Python as-is - compile(text, '', 'exec') - return False - except SyntaxError: - pass - r = re.compile(r'^\s*>>>', re.M) - m = r.search(text) - return bool(m) - - -def _split_code_at_show(text, function_name): - """Split code at plt.show().""" - - is_doctest = contains_doctest(text) - if function_name is None: - parts = [] - part = [] - for line in text.split("\n"): - if ((not is_doctest and line.startswith('plt.show(')) or - (is_doctest and 
line.strip() == '>>> plt.show()')): - part.append(line) - parts.append("\n".join(part)) - part = [] - else: - part.append(line) - if "\n".join(part).strip(): - parts.append("\n".join(part)) - else: - parts = [text] - return is_doctest, parts - - -# ----------------------------------------------------------------------------- -# Template -# ----------------------------------------------------------------------------- - -_SOURCECODE = """ -{{ source_code }} - -.. only:: html - - {% if src_name or (html_show_formats and not multi_image) %} - ( - {%- if src_name -%} - :download:`Source code <{{ build_dir }}/{{ src_name }}>` - {%- endif -%} - {%- if html_show_formats and not multi_image -%} - {%- for img in images -%} - {%- for fmt in img.formats -%} - {%- if src_name or not loop.first -%}, {% endif -%} - :download:`{{ fmt }} <{{ build_dir }}/{{ img.basename }}.{{ fmt }}>` - {%- endfor -%} - {%- endfor -%} - {%- endif -%} - ) - {% endif %} -""" - -TEMPLATE_SRCSET = _SOURCECODE + """ - {% for img in images %} - .. figure-mpl:: {{ build_dir }}/{{ img.basename }}.{{ default_fmt }} - {% for option in options -%} - {{ option }} - {% endfor %} - {%- if caption -%} - {{ caption }} {# appropriate leading whitespace added beforehand #} - {% endif -%} - {%- if srcset -%} - :srcset: {{ build_dir }}/{{ img.basename }}.{{ default_fmt }} - {%- for sr in srcset -%} - , {{ build_dir }}/{{ img.basename }}.{{ sr }}.{{ default_fmt }} {{sr}} - {%- endfor -%} - {% endif %} - - {% if html_show_formats and multi_image %} - ( - {%- for fmt in img.formats -%} - {%- if not loop.first -%}, {% endif -%} - :download:`{{ fmt }} <{{ build_dir }}/{{ img.basename }}.{{ fmt }}>` - {%- endfor -%} - ) - {% endif %} - - - {% endfor %} - -.. only:: not html - - {% for img in images %} - .. figure-mpl:: {{ build_dir }}/{{ img.basename }}.* - {% for option in options -%} - {{ option }} - {% endfor -%} - - {{ caption }} {# appropriate leading whitespace added beforehand #} - {% endfor %} - -""" - -TEMPLATE = _SOURCECODE + """ - - {% for img in images %} - .. figure:: {{ build_dir }}/{{ img.basename }}.{{ default_fmt }} - {% for option in options -%} - {{ option }} - {% endfor %} - - {% if html_show_formats and multi_image -%} - ( - {%- for fmt in img.formats -%} - {%- if not loop.first -%}, {% endif -%} - :download:`{{ fmt }} <{{ build_dir }}/{{ img.basename }}.{{ fmt }}>` - {%- endfor -%} - ) - {%- endif -%} - - {{ caption }} {# appropriate leading whitespace added beforehand #} - {% endfor %} - -.. only:: not html - - {% for img in images %} - .. figure:: {{ build_dir }}/{{ img.basename }}.* - {% for option in options -%} - {{ option }} - {% endfor -%} - - {{ caption }} {# appropriate leading whitespace added beforehand #} - {% endfor %} - -""" - -exception_template = """ -.. only:: html - - [`source code <%(linkdir)s/%(basename)s.py>`__] - -Exception occurred rendering plot. - -""" - -# the context of the plot for all directives specified with the -# :context: option -plot_context = dict() - - -class ImageFile: - def __init__(self, basename, dirname): - self.basename = basename - self.dirname = dirname - self.formats = [] - - def filename(self, format): - return os.path.join(self.dirname, f"{self.basename}.{format}") - - def filenames(self): - return [self.filename(fmt) for fmt in self.formats] - - -def out_of_date(original, derived, includes=None): - """ - Return whether *derived* is out-of-date relative to *original* or any of - the RST files included in it using the RST include directive (*includes*). 
- *derived* and *original* are full paths, and *includes* is optionally a - list of full paths which may have been included in the *original*. - """ - if not os.path.exists(derived): - return True - - if includes is None: - includes = [] - files_to_check = [original, *includes] - - def out_of_date_one(original, derived_mtime): - return (os.path.exists(original) and - derived_mtime < os.stat(original).st_mtime) - - derived_mtime = os.stat(derived).st_mtime - return any(out_of_date_one(f, derived_mtime) for f in files_to_check) - - -class PlotError(RuntimeError): - pass - - -def _run_code(code, code_path, ns=None, function_name=None): - """ - Import a Python module from a path, and run the function given by - name, if function_name is not None. - """ - - # Change the working directory to the directory of the example, so - # it can get at its data files, if any. Add its path to sys.path - # so it can import any helper modules sitting beside it. - pwd = os.getcwd() - if setup.config.plot_working_directory is not None: - try: - os.chdir(setup.config.plot_working_directory) - except OSError as err: - raise OSError(f'{err}\n`plot_working_directory` option in ' - f'Sphinx configuration file must be a valid ' - f'directory path') from err - except TypeError as err: - raise TypeError(f'{err}\n`plot_working_directory` option in ' - f'Sphinx configuration file must be a string or ' - f'None') from err - elif code_path is not None: - dirname = os.path.abspath(os.path.dirname(code_path)) - os.chdir(dirname) - - with cbook._setattr_cm( - sys, argv=[code_path], path=[os.getcwd(), *sys.path]), \ - contextlib.redirect_stdout(StringIO()): - try: - if ns is None: - ns = {} - if not ns: - if setup.config.plot_pre_code is None: - exec('import numpy as np\n' - 'from matplotlib import pyplot as plt\n', ns) - else: - exec(str(setup.config.plot_pre_code), ns) - if "__main__" in code: - ns['__name__'] = '__main__' - - # Patch out non-interactive show() to avoid triggering a warning. - with cbook._setattr_cm(FigureManagerBase, show=lambda self: None): - exec(code, ns) - if function_name is not None: - exec(function_name + "()", ns) - - except (Exception, SystemExit) as err: - raise PlotError(traceback.format_exc()) from err - finally: - os.chdir(pwd) - return ns - - -def clear_state(plot_rcparams, close=True): - if close: - plt.close('all') - matplotlib.rc_file_defaults() - matplotlib.rcParams.update(plot_rcparams) - - -def get_plot_formats(config): - default_dpi = {'png': 80, 'hires.png': 200, 'pdf': 200} - formats = [] - plot_formats = config.plot_formats - for fmt in plot_formats: - if isinstance(fmt, str): - if ':' in fmt: - suffix, dpi = fmt.split(':') - formats.append((str(suffix), int(dpi))) - else: - formats.append((fmt, default_dpi.get(fmt, 80))) - elif isinstance(fmt, (tuple, list)) and len(fmt) == 2: - formats.append((str(fmt[0]), int(fmt[1]))) - else: - raise PlotError('invalid image format "%r" in plot_formats' % fmt) - return formats - - -def _parse_srcset(entries): - """ - Parse srcset for multiples... - """ - srcset = {} - for entry in entries: - entry = entry.strip() - if len(entry) >= 2: - mult = entry[:-1] - srcset[float(mult)] = entry - else: - raise ExtensionError(f'srcset argument {entry!r} is invalid.') - return srcset - - -def render_figures(code, code_path, output_dir, output_base, context, - function_name, config, context_reset=False, - close_figs=False, - code_includes=None): - """ - Run a pyplot script and save the images in *output_dir*. 
- - Save the images under *output_dir* with file names derived from - *output_base* - """ - - if function_name is not None: - output_base = f'{output_base}_{function_name}' - formats = get_plot_formats(config) - - # Try to determine if all images already exist - - is_doctest, code_pieces = _split_code_at_show(code, function_name) - # Look for single-figure output files first - img = ImageFile(output_base, output_dir) - for format, dpi in formats: - if context or out_of_date(code_path, img.filename(format), - includes=code_includes): - all_exists = False - break - img.formats.append(format) - else: - all_exists = True - - if all_exists: - return [(code, [img])] - - # Then look for multi-figure output files - results = [] - for i, code_piece in enumerate(code_pieces): - images = [] - for j in itertools.count(): - if len(code_pieces) > 1: - img = ImageFile('%s_%02d_%02d' % (output_base, i, j), - output_dir) - else: - img = ImageFile('%s_%02d' % (output_base, j), output_dir) - for fmt, dpi in formats: - if context or out_of_date(code_path, img.filename(fmt), - includes=code_includes): - all_exists = False - break - img.formats.append(fmt) - - # assume that if we have one, we have them all - if not all_exists: - all_exists = (j > 0) - break - images.append(img) - if not all_exists: - break - results.append((code_piece, images)) - else: - all_exists = True - - if all_exists: - return results - - # We didn't find the files, so build them - - results = [] - ns = plot_context if context else {} - - if context_reset: - clear_state(config.plot_rcparams) - plot_context.clear() - - close_figs = not context or close_figs - - for i, code_piece in enumerate(code_pieces): - - if not context or config.plot_apply_rcparams: - clear_state(config.plot_rcparams, close_figs) - elif close_figs: - plt.close('all') - - _run_code(doctest.script_from_examples(code_piece) if is_doctest - else code_piece, - code_path, ns, function_name) - - images = [] - fig_managers = _pylab_helpers.Gcf.get_all_fig_managers() - for j, figman in enumerate(fig_managers): - if len(fig_managers) == 1 and len(code_pieces) == 1: - img = ImageFile(output_base, output_dir) - elif len(code_pieces) == 1: - img = ImageFile("%s_%02d" % (output_base, j), output_dir) - else: - img = ImageFile("%s_%02d_%02d" % (output_base, i, j), - output_dir) - images.append(img) - - for fmt, dpi in formats: - try: - figman.canvas.figure.savefig(img.filename(fmt), dpi=dpi) - if fmt == formats[0][0] and config.plot_srcset: - # save a 2x, 3x etc version of the default... 
- srcset = _parse_srcset(config.plot_srcset) - for mult, suffix in srcset.items(): - fm = f'{suffix}.{fmt}' - img.formats.append(fm) - figman.canvas.figure.savefig(img.filename(fm), - dpi=int(dpi * mult)) - except Exception as err: - raise PlotError(traceback.format_exc()) from err - img.formats.append(fmt) - - results.append((code_piece, images)) - - if not context or config.plot_apply_rcparams: - clear_state(config.plot_rcparams, close=not context) - - return results - - -def run(arguments, content, options, state_machine, state, lineno): - document = state_machine.document - config = document.settings.env.config - nofigs = 'nofigs' in options - - if config.plot_srcset and setup.app.builder.name == 'singlehtml': - raise ExtensionError( - 'plot_srcset option not compatible with single HTML writer') - - formats = get_plot_formats(config) - default_fmt = formats[0][0] - - options.setdefault('include-source', config.plot_include_source) - options.setdefault('show-source-link', config.plot_html_show_source_link) - - if 'class' in options: - # classes are parsed into a list of string, and output by simply - # printing the list, abusing the fact that RST guarantees to strip - # non-conforming characters - options['class'] = ['plot-directive'] + options['class'] - else: - options.setdefault('class', ['plot-directive']) - keep_context = 'context' in options - context_opt = None if not keep_context else options['context'] - - rst_file = document.attributes['source'] - rst_dir = os.path.dirname(rst_file) - - if len(arguments): - if not config.plot_basedir: - source_file_name = os.path.join(setup.app.builder.srcdir, - directives.uri(arguments[0])) - else: - source_file_name = os.path.join(setup.confdir, config.plot_basedir, - directives.uri(arguments[0])) - # If there is content, it will be passed as a caption. - caption = '\n'.join(content) - - # Enforce unambiguous use of captions. - if "caption" in options: - if caption: - raise ValueError( - 'Caption specified in both content and options.' - ' Please remove ambiguity.' - ) - # Use caption option - caption = options["caption"] - - # If the optional function name is provided, use it - if len(arguments) == 2: - function_name = arguments[1] - else: - function_name = None - - code = Path(source_file_name).read_text(encoding='utf-8') - output_base = os.path.basename(source_file_name) - else: - source_file_name = rst_file - code = textwrap.dedent("\n".join(map(str, content))) - counter = document.attributes.get('_plot_counter', 0) + 1 - document.attributes['_plot_counter'] = counter - base, ext = os.path.splitext(os.path.basename(source_file_name)) - output_base = '%s-%d.py' % (base, counter) - function_name = None - caption = options.get('caption', '') - - base, source_ext = os.path.splitext(output_base) - if source_ext in ('.py', '.rst', '.txt'): - output_base = base - else: - source_ext = '' - - # ensure that LaTeX includegraphics doesn't choke in foo.bar.pdf filenames - output_base = output_base.replace('.', '-') - - # is it in doctest format? - is_doctest = contains_doctest(code) - if 'format' in options: - if options['format'] == 'python': - is_doctest = False - else: - is_doctest = True - - # determine output directory name fragment - source_rel_name = relpath(source_file_name, setup.confdir) - source_rel_dir = os.path.dirname(source_rel_name).lstrip(os.path.sep) - - # build_dir: where to place output files (temporarily) - build_dir = os.path.join(os.path.dirname(setup.app.doctreedir), - 'plot_directive', - source_rel_dir) - # get rid of .. 
in paths, also changes pathsep - # see note in Python docs for warning about symbolic links on Windows. - # need to compare source and dest paths at end - build_dir = os.path.normpath(build_dir) - os.makedirs(build_dir, exist_ok=True) - - # how to link to files from the RST file - try: - build_dir_link = relpath(build_dir, rst_dir).replace(os.path.sep, '/') - except ValueError: - # on Windows, relpath raises ValueError when path and start are on - # different mounts/drives - build_dir_link = build_dir - - # get list of included rst files so that the output is updated when any - # plots in the included files change. These attributes are modified by the - # include directive (see the docutils.parsers.rst.directives.misc module). - try: - source_file_includes = [os.path.join(os.getcwd(), t[0]) - for t in state.document.include_log] - except AttributeError: - # the document.include_log attribute only exists in docutils >=0.17, - # before that we need to inspect the state machine - possible_sources = {os.path.join(setup.confdir, t[0]) - for t in state_machine.input_lines.items} - source_file_includes = [f for f in possible_sources - if os.path.isfile(f)] - # remove the source file itself from the includes - try: - source_file_includes.remove(source_file_name) - except ValueError: - pass - - # save script (if necessary) - if options['show-source-link']: - Path(build_dir, output_base + source_ext).write_text( - doctest.script_from_examples(code) - if source_file_name == rst_file and is_doctest - else code, - encoding='utf-8') - - # make figures - try: - results = render_figures(code=code, - code_path=source_file_name, - output_dir=build_dir, - output_base=output_base, - context=keep_context, - function_name=function_name, - config=config, - context_reset=context_opt == 'reset', - close_figs=context_opt == 'close-figs', - code_includes=source_file_includes) - errors = [] - except PlotError as err: - reporter = state.memo.reporter - sm = reporter.system_message( - 2, "Exception occurred in plotting {}\n from {}:\n{}".format( - output_base, source_file_name, err), - line=lineno) - results = [(code, [])] - errors = [sm] - - # Properly indent the caption - if caption and config.plot_srcset: - caption = f':caption: {caption}' - elif caption: - caption = '\n' + '\n'.join(' ' + line.strip() - for line in caption.split('\n')) - # generate output restructuredtext - total_lines = [] - for j, (code_piece, images) in enumerate(results): - if options['include-source']: - if is_doctest: - lines = ['', *code_piece.splitlines()] - else: - lines = ['.. 
code-block:: python', '', - *textwrap.indent(code_piece, ' ').splitlines()] - source_code = "\n".join(lines) - else: - source_code = "" - - if nofigs: - images = [] - - opts = [ - f':{key}: {val}' for key, val in options.items() - if key in ('alt', 'height', 'width', 'scale', 'align', 'class')] - - # Not-None src_name signals the need for a source download in the - # generated html - if j == 0 and options['show-source-link']: - src_name = output_base + source_ext - else: - src_name = None - if config.plot_srcset: - srcset = [*_parse_srcset(config.plot_srcset).values()] - template = TEMPLATE_SRCSET - else: - srcset = None - template = TEMPLATE - - result = jinja2.Template(config.plot_template or template).render( - default_fmt=default_fmt, - build_dir=build_dir_link, - src_name=src_name, - multi_image=len(images) > 1, - options=opts, - srcset=srcset, - images=images, - source_code=source_code, - html_show_formats=config.plot_html_show_formats and len(images), - caption=caption) - total_lines.extend(result.split("\n")) - total_lines.extend("\n") - - if total_lines: - state_machine.insert_input(total_lines, source=source_file_name) - - return errors diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/methods/test_diff.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/methods/test_diff.py deleted file mode 100644 index b401f182242b10c28754b25bae0ebc89caa069fa..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/methods/test_diff.py +++ /dev/null @@ -1,304 +0,0 @@ -import numpy as np -import pytest - -import pandas as pd -from pandas import ( - DataFrame, - Series, - Timestamp, - date_range, -) -import pandas._testing as tm - - -class TestDataFrameDiff: - def test_diff_requires_integer(self): - df = DataFrame(np.random.default_rng(2).standard_normal((2, 2))) - with pytest.raises(ValueError, match="periods must be an integer"): - df.diff(1.5) - - # GH#44572 np.int64 is accepted - @pytest.mark.parametrize("num", [1, np.int64(1)]) - def test_diff(self, datetime_frame, num): - df = datetime_frame - the_diff = df.diff(num) - - expected = df["A"] - df["A"].shift(num) - tm.assert_series_equal(the_diff["A"], expected) - - def test_diff_int_dtype(self): - # int dtype - a = 10_000_000_000_000_000 - b = a + 1 - ser = Series([a, b]) - - rs = DataFrame({"s": ser}).diff() - assert rs.s[1] == 1 - - def test_diff_mixed_numeric(self, datetime_frame): - # mixed numeric - tf = datetime_frame.astype("float32") - the_diff = tf.diff(1) - tm.assert_series_equal(the_diff["A"], tf["A"] - tf["A"].shift(1)) - - def test_diff_axis1_nonconsolidated(self): - # GH#10907 - df = DataFrame({"y": Series([2]), "z": Series([3])}) - df.insert(0, "x", 1) - result = df.diff(axis=1) - expected = DataFrame({"x": np.nan, "y": Series(1), "z": Series(1)}) - tm.assert_frame_equal(result, expected) - - def test_diff_timedelta64_with_nat(self): - # GH#32441 - arr = np.arange(6).reshape(3, 2).astype("timedelta64[ns]") - arr[:, 0] = np.timedelta64("NaT", "ns") - - df = DataFrame(arr) - result = df.diff(1, axis=0) - - expected = DataFrame({0: df[0], 1: [pd.NaT, pd.Timedelta(2), pd.Timedelta(2)]}) - tm.assert_equal(result, expected) - - result = df.diff(0) - expected = df - df - assert expected[0].isna().all() - tm.assert_equal(result, expected) - - result = df.diff(-1, axis=1) - expected = df * np.nan - tm.assert_equal(result, expected) - - @pytest.mark.parametrize("tz", [None, "UTC"]) - 
def test_diff_datetime_axis0_with_nat(self, tz): - # GH#32441 - dti = pd.DatetimeIndex(["NaT", "2019-01-01", "2019-01-02"], tz=tz) - ser = Series(dti) - - df = ser.to_frame() - - result = df.diff() - ex_index = pd.TimedeltaIndex([pd.NaT, pd.NaT, pd.Timedelta(days=1)]) - expected = Series(ex_index).to_frame() - tm.assert_frame_equal(result, expected) - - @pytest.mark.parametrize("tz", [None, "UTC"]) - def test_diff_datetime_with_nat_zero_periods(self, tz): - # diff on NaT values should give NaT, not timedelta64(0) - dti = date_range("2016-01-01", periods=4, tz=tz) - ser = Series(dti) - df = ser.to_frame() - - df[1] = ser.copy() - - df.iloc[:, 0] = pd.NaT - - expected = df - df - assert expected[0].isna().all() - - result = df.diff(0, axis=0) - tm.assert_frame_equal(result, expected) - - result = df.diff(0, axis=1) - tm.assert_frame_equal(result, expected) - - @pytest.mark.parametrize("tz", [None, "UTC"]) - def test_diff_datetime_axis0(self, tz): - # GH#18578 - df = DataFrame( - { - 0: date_range("2010", freq="D", periods=2, tz=tz), - 1: date_range("2010", freq="D", periods=2, tz=tz), - } - ) - - result = df.diff(axis=0) - expected = DataFrame( - { - 0: pd.TimedeltaIndex(["NaT", "1 days"]), - 1: pd.TimedeltaIndex(["NaT", "1 days"]), - } - ) - tm.assert_frame_equal(result, expected) - - @pytest.mark.parametrize("tz", [None, "UTC"]) - def test_diff_datetime_axis1(self, tz): - # GH#18578 - df = DataFrame( - { - 0: date_range("2010", freq="D", periods=2, tz=tz), - 1: date_range("2010", freq="D", periods=2, tz=tz), - } - ) - - result = df.diff(axis=1) - expected = DataFrame( - { - 0: pd.TimedeltaIndex(["NaT", "NaT"]), - 1: pd.TimedeltaIndex(["0 days", "0 days"]), - } - ) - tm.assert_frame_equal(result, expected) - - def test_diff_timedelta(self): - # GH#4533 - df = DataFrame( - { - "time": [Timestamp("20130101 9:01"), Timestamp("20130101 9:02")], - "value": [1.0, 2.0], - } - ) - - res = df.diff() - exp = DataFrame( - [[pd.NaT, np.nan], [pd.Timedelta("00:01:00"), 1]], columns=["time", "value"] - ) - tm.assert_frame_equal(res, exp) - - def test_diff_mixed_dtype(self): - df = DataFrame(np.random.default_rng(2).standard_normal((5, 3))) - df["A"] = np.array([1, 2, 3, 4, 5], dtype=object) - - result = df.diff() - assert result[0].dtype == np.float64 - - def test_diff_neg_n(self, datetime_frame): - rs = datetime_frame.diff(-1) - xp = datetime_frame - datetime_frame.shift(-1) - tm.assert_frame_equal(rs, xp) - - def test_diff_float_n(self, datetime_frame): - rs = datetime_frame.diff(1.0) - xp = datetime_frame.diff(1) - tm.assert_frame_equal(rs, xp) - - def test_diff_axis(self): - # GH#9727 - df = DataFrame([[1.0, 2.0], [3.0, 4.0]]) - tm.assert_frame_equal( - df.diff(axis=1), DataFrame([[np.nan, 1.0], [np.nan, 1.0]]) - ) - tm.assert_frame_equal( - df.diff(axis=0), DataFrame([[np.nan, np.nan], [2.0, 2.0]]) - ) - - def test_diff_period(self): - # GH#32995 Don't pass an incorrect axis - pi = date_range("2016-01-01", periods=3).to_period("D") - df = DataFrame({"A": pi}) - - result = df.diff(1, axis=1) - - expected = (df - pd.NaT).astype(object) - tm.assert_frame_equal(result, expected) - - def test_diff_axis1_mixed_dtypes(self): - # GH#32995 operate column-wise when we have mixed dtypes and axis=1 - df = DataFrame({"A": range(3), "B": 2 * np.arange(3, dtype=np.float64)}) - - expected = DataFrame({"A": [np.nan, np.nan, np.nan], "B": df["B"] / 2}) - - result = df.diff(axis=1) - tm.assert_frame_equal(result, expected) - - # GH#21437 mixed-float-dtypes - df = DataFrame( - {"a": np.arange(3, dtype="float32"), "b": 
np.arange(3, dtype="float64")} - ) - result = df.diff(axis=1) - expected = DataFrame({"a": df["a"] * np.nan, "b": df["b"] * 0}) - tm.assert_frame_equal(result, expected) - - def test_diff_axis1_mixed_dtypes_large_periods(self): - # GH#32995 operate column-wise when we have mixed dtypes and axis=1 - df = DataFrame({"A": range(3), "B": 2 * np.arange(3, dtype=np.float64)}) - - expected = df * np.nan - - result = df.diff(axis=1, periods=3) - tm.assert_frame_equal(result, expected) - - def test_diff_axis1_mixed_dtypes_negative_periods(self): - # GH#32995 operate column-wise when we have mixed dtypes and axis=1 - df = DataFrame({"A": range(3), "B": 2 * np.arange(3, dtype=np.float64)}) - - expected = DataFrame({"A": -1.0 * df["A"], "B": df["B"] * np.nan}) - - result = df.diff(axis=1, periods=-1) - tm.assert_frame_equal(result, expected) - - def test_diff_sparse(self): - # GH#28813 .diff() should work for sparse dataframes as well - sparse_df = DataFrame([[0, 1], [1, 0]], dtype="Sparse[int]") - - result = sparse_df.diff() - expected = DataFrame( - [[np.nan, np.nan], [1.0, -1.0]], dtype=pd.SparseDtype("float", 0.0) - ) - - tm.assert_frame_equal(result, expected) - - @pytest.mark.parametrize( - "axis,expected", - [ - ( - 0, - DataFrame( - { - "a": [np.nan, 0, 1, 0, np.nan, np.nan, np.nan, 0], - "b": [np.nan, 1, np.nan, np.nan, -2, 1, np.nan, np.nan], - "c": np.repeat(np.nan, 8), - "d": [np.nan, 3, 5, 7, 9, 11, 13, 15], - }, - dtype="Int64", - ), - ), - ( - 1, - DataFrame( - { - "a": np.repeat(np.nan, 8), - "b": [0, 1, np.nan, 1, np.nan, np.nan, np.nan, 0], - "c": np.repeat(np.nan, 8), - "d": np.repeat(np.nan, 8), - }, - dtype="Int64", - ), - ), - ], - ) - def test_diff_integer_na(self, axis, expected): - # GH#24171 IntegerNA Support for DataFrame.diff() - df = DataFrame( - { - "a": np.repeat([0, 1, np.nan, 2], 2), - "b": np.tile([0, 1, np.nan, 2], 2), - "c": np.repeat(np.nan, 8), - "d": np.arange(1, 9) ** 2, - }, - dtype="Int64", - ) - - # Test case for default behaviour of diff - result = df.diff(axis=axis) - tm.assert_frame_equal(result, expected) - - def test_diff_readonly(self): - # https://github.com/pandas-dev/pandas/issues/35559 - arr = np.random.default_rng(2).standard_normal((5, 2)) - arr.flags.writeable = False - df = DataFrame(arr) - result = df.diff() - expected = DataFrame(np.array(df)).diff() - tm.assert_frame_equal(result, expected) - - def test_diff_all_int_dtype(self, any_int_numpy_dtype): - # GH 14773 - df = DataFrame(range(5)) - df = df.astype(any_int_numpy_dtype) - result = df.diff() - expected_dtype = ( - "float32" if any_int_numpy_dtype in ("int8", "int16") else "float64" - ) - expected = DataFrame([np.nan, 1.0, 1.0, 1.0, 1.0], dtype=expected_dtype) - tm.assert_frame_equal(result, expected) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/methods/test_pipe.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/methods/test_pipe.py deleted file mode 100644 index 5bcc4360487f38491e2ae9f4c79d837e72ed0f6d..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/methods/test_pipe.py +++ /dev/null @@ -1,39 +0,0 @@ -import pytest - -from pandas import ( - DataFrame, - Series, -) -import pandas._testing as tm - - -class TestPipe: - def test_pipe(self, frame_or_series): - obj = DataFrame({"A": [1, 2, 3]}) - expected = DataFrame({"A": [1, 4, 9]}) - if frame_or_series is Series: - obj = obj["A"] - expected = expected["A"] - - f = lambda x, 
y: x**y - result = obj.pipe(f, 2) - tm.assert_equal(result, expected) - - def test_pipe_tuple(self, frame_or_series): - obj = DataFrame({"A": [1, 2, 3]}) - obj = tm.get_obj(obj, frame_or_series) - - f = lambda x, y: y - result = obj.pipe((f, "y"), 0) - tm.assert_equal(result, obj) - - def test_pipe_tuple_error(self, frame_or_series): - obj = DataFrame({"A": [1, 2, 3]}) - obj = tm.get_obj(obj, frame_or_series) - - f = lambda x, y: y - - msg = "y is both the pipe target and a keyword argument" - - with pytest.raises(ValueError, match=msg): - obj.pipe((f, "y"), x=1, y=0) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/test_setops.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/test_setops.py deleted file mode 100644 index a64994efec85a257afefc95283df1747e1ee39e5..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/test_setops.py +++ /dev/null @@ -1,908 +0,0 @@ -""" -The tests in this package are to ensure the proper resultant dtypes of -set operations. -""" -from datetime import datetime -import operator - -import numpy as np -import pytest - -from pandas._libs import lib - -from pandas.core.dtypes.cast import find_common_type - -from pandas import ( - CategoricalDtype, - CategoricalIndex, - DatetimeTZDtype, - Index, - MultiIndex, - PeriodDtype, - RangeIndex, - Series, - Timestamp, -) -import pandas._testing as tm -from pandas.api.types import ( - is_signed_integer_dtype, - pandas_dtype, -) - - -def test_union_same_types(index): - # Union with a non-unique, non-monotonic index raises error - # Only needed for bool index factory - idx1 = index.sort_values() - idx2 = index.sort_values() - assert idx1.union(idx2).dtype == idx1.dtype - - -def test_union_different_types(index_flat, index_flat2, request): - # This test only considers combinations of indices - # GH 23525 - idx1 = index_flat - idx2 = index_flat2 - - if ( - not idx1.is_unique - and not idx2.is_unique - and idx1.dtype.kind == "i" - and idx2.dtype.kind == "b" - ) or ( - not idx2.is_unique - and not idx1.is_unique - and idx2.dtype.kind == "i" - and idx1.dtype.kind == "b" - ): - # Each condition had idx[1|2].is_monotonic_decreasing - # but failed when e.g. 
- # idx1 = Index( - # [True, True, True, True, True, True, True, True, False, False], dtype='bool' - # ) - # idx2 = Index([0, 0, 1, 1, 2, 2], dtype='int64') - mark = pytest.mark.xfail( - reason="GH#44000 True==1", raises=ValueError, strict=False - ) - request.node.add_marker(mark) - - common_dtype = find_common_type([idx1.dtype, idx2.dtype]) - - warn = None - msg = "'<' not supported between" - if not len(idx1) or not len(idx2): - pass - elif (idx1.dtype.kind == "c" and (not lib.is_np_dtype(idx2.dtype, "iufc"))) or ( - idx2.dtype.kind == "c" and (not lib.is_np_dtype(idx1.dtype, "iufc")) - ): - # complex objects non-sortable - warn = RuntimeWarning - elif ( - isinstance(idx1.dtype, PeriodDtype) and isinstance(idx2.dtype, CategoricalDtype) - ) or ( - isinstance(idx2.dtype, PeriodDtype) and isinstance(idx1.dtype, CategoricalDtype) - ): - warn = FutureWarning - msg = r"PeriodDtype\[B\] is deprecated" - mark = pytest.mark.xfail( - reason="Warning not produced on all builds", - raises=AssertionError, - strict=False, - ) - request.node.add_marker(mark) - - any_uint64 = np.uint64 in (idx1.dtype, idx2.dtype) - idx1_signed = is_signed_integer_dtype(idx1.dtype) - idx2_signed = is_signed_integer_dtype(idx2.dtype) - - # Union with a non-unique, non-monotonic index raises error - # This applies to the boolean index - idx1 = idx1.sort_values() - idx2 = idx2.sort_values() - - with tm.assert_produces_warning(warn, match=msg): - res1 = idx1.union(idx2) - res2 = idx2.union(idx1) - - if any_uint64 and (idx1_signed or idx2_signed): - assert res1.dtype == np.dtype("O") - assert res2.dtype == np.dtype("O") - else: - assert res1.dtype == common_dtype - assert res2.dtype == common_dtype - - -@pytest.mark.parametrize( - "idx_fact1,idx_fact2", - [ - (tm.makeIntIndex, tm.makeRangeIndex), - (tm.makeFloatIndex, tm.makeIntIndex), - (tm.makeFloatIndex, tm.makeRangeIndex), - (tm.makeFloatIndex, tm.makeUIntIndex), - ], -) -def test_compatible_inconsistent_pairs(idx_fact1, idx_fact2): - # GH 23525 - idx1 = idx_fact1(10) - idx2 = idx_fact2(20) - - res1 = idx1.union(idx2) - res2 = idx2.union(idx1) - - assert res1.dtype in (idx1.dtype, idx2.dtype) - assert res2.dtype in (idx1.dtype, idx2.dtype) - - -@pytest.mark.parametrize( - "left, right, expected", - [ - ("int64", "int64", "int64"), - ("int64", "uint64", "object"), - ("int64", "float64", "float64"), - ("uint64", "float64", "float64"), - ("uint64", "uint64", "uint64"), - ("float64", "float64", "float64"), - ("datetime64[ns]", "int64", "object"), - ("datetime64[ns]", "uint64", "object"), - ("datetime64[ns]", "float64", "object"), - ("datetime64[ns, CET]", "int64", "object"), - ("datetime64[ns, CET]", "uint64", "object"), - ("datetime64[ns, CET]", "float64", "object"), - ("Period[D]", "int64", "object"), - ("Period[D]", "uint64", "object"), - ("Period[D]", "float64", "object"), - ], -) -@pytest.mark.parametrize("names", [("foo", "foo", "foo"), ("foo", "bar", None)]) -def test_union_dtypes(left, right, expected, names): - left = pandas_dtype(left) - right = pandas_dtype(right) - a = Index([], dtype=left, name=names[0]) - b = Index([], dtype=right, name=names[1]) - result = a.union(b) - assert result.dtype == expected - assert result.name == names[2] - - # Testing name retention - # TODO: pin down desired dtype; do we want it to be commutative? 
- result = a.intersection(b) - assert result.name == names[2] - - -@pytest.mark.parametrize("values", [[1, 2, 2, 3], [3, 3]]) -def test_intersection_duplicates(values): - # GH#31326 - a = Index(values) - b = Index([3, 3]) - result = a.intersection(b) - expected = Index([3]) - tm.assert_index_equal(result, expected) - - -class TestSetOps: - # Set operation tests shared by all indexes in the `index` fixture - @pytest.mark.parametrize("case", [0.5, "xxx"]) - @pytest.mark.parametrize( - "method", ["intersection", "union", "difference", "symmetric_difference"] - ) - def test_set_ops_error_cases(self, case, method, index): - # non-iterable input - msg = "Input must be Index or array-like" - with pytest.raises(TypeError, match=msg): - getattr(index, method)(case) - - @pytest.mark.filterwarnings(r"ignore:PeriodDtype\[B\] is deprecated:FutureWarning") - def test_intersection_base(self, index): - if isinstance(index, CategoricalIndex): - pytest.skip(f"Not relevant for {type(index).__name__}") - - first = index[:5] - second = index[:3] - intersect = first.intersection(second) - assert tm.equalContents(intersect, second) - - if isinstance(index.dtype, DatetimeTZDtype): - # The second.values below will drop tz, so the rest of this test - # is not applicable. - return - - # GH#10149 - cases = [second.to_numpy(), second.to_series(), second.to_list()] - for case in cases: - result = first.intersection(case) - assert tm.equalContents(result, second) - - if isinstance(index, MultiIndex): - msg = "other must be a MultiIndex or a list of tuples" - with pytest.raises(TypeError, match=msg): - first.intersection([1, 2, 3]) - - @pytest.mark.filterwarnings( - "ignore:Falling back on a non-pyarrow:pandas.errors.PerformanceWarning" - ) - @pytest.mark.filterwarnings(r"ignore:PeriodDtype\[B\] is deprecated:FutureWarning") - def test_union_base(self, index): - first = index[3:] - second = index[:5] - everything = index - - union = first.union(second) - assert tm.equalContents(union, everything) - - if isinstance(index.dtype, DatetimeTZDtype): - # The second.values below will drop tz, so the rest of this test - # is not applicable. - return - - # GH#10149 - cases = [second.to_numpy(), second.to_series(), second.to_list()] - for case in cases: - result = first.union(case) - assert tm.equalContents(result, everything) - - if isinstance(index, MultiIndex): - msg = "other must be a MultiIndex or a list of tuples" - with pytest.raises(TypeError, match=msg): - first.union([1, 2, 3]) - - @pytest.mark.filterwarnings(r"ignore:PeriodDtype\[B\] is deprecated:FutureWarning") - @pytest.mark.filterwarnings( - "ignore:Falling back on a non-pyarrow:pandas.errors.PerformanceWarning" - ) - def test_difference_base(self, sort, index): - first = index[2:] - second = index[:4] - if index.inferred_type == "boolean": - # i think (TODO: be sure) there assumptions baked in about - # the index fixture that don't hold here? 
- answer = set(first).difference(set(second)) - elif isinstance(index, CategoricalIndex): - answer = [] - else: - answer = index[4:] - result = first.difference(second, sort) - assert tm.equalContents(result, answer) - - # GH#10149 - cases = [second.to_numpy(), second.to_series(), second.to_list()] - for case in cases: - result = first.difference(case, sort) - assert tm.equalContents(result, answer) - - if isinstance(index, MultiIndex): - msg = "other must be a MultiIndex or a list of tuples" - with pytest.raises(TypeError, match=msg): - first.difference([1, 2, 3], sort) - - @pytest.mark.filterwarnings(r"ignore:PeriodDtype\[B\] is deprecated:FutureWarning") - @pytest.mark.filterwarnings( - "ignore:Falling back on a non-pyarrow:pandas.errors.PerformanceWarning" - ) - def test_symmetric_difference(self, index): - if isinstance(index, CategoricalIndex): - pytest.skip(f"Not relevant for {type(index).__name__}") - if len(index) < 2: - pytest.skip("Too few values for test") - if index[0] in index[1:] or index[-1] in index[:-1]: - # index fixture has e.g. an index of bools that does not satisfy this, - # another with [0, 0, 1, 1, 2, 2] - pytest.skip("Index values no not satisfy test condition.") - - first = index[1:] - second = index[:-1] - answer = index[[0, -1]] - result = first.symmetric_difference(second) - assert tm.equalContents(result, answer) - - # GH#10149 - cases = [second.to_numpy(), second.to_series(), second.to_list()] - for case in cases: - result = first.symmetric_difference(case) - assert tm.equalContents(result, answer) - - if isinstance(index, MultiIndex): - msg = "other must be a MultiIndex or a list of tuples" - with pytest.raises(TypeError, match=msg): - first.symmetric_difference([1, 2, 3]) - - @pytest.mark.parametrize( - "fname, sname, expected_name", - [ - ("A", "A", "A"), - ("A", "B", None), - ("A", None, None), - (None, "B", None), - (None, None, None), - ], - ) - def test_corner_union(self, index_flat, fname, sname, expected_name): - # GH#9943, GH#9862 - # Test unions with various name combinations - # Do not test MultiIndex or repeats - if not index_flat.is_unique: - pytest.skip("Randomly generated index_flat was not unique.") - index = index_flat - - # Test copy.union(copy) - first = index.copy().set_names(fname) - second = index.copy().set_names(sname) - union = first.union(second) - expected = index.copy().set_names(expected_name) - tm.assert_index_equal(union, expected) - - # Test copy.union(empty) - first = index.copy().set_names(fname) - second = index.drop(index).set_names(sname) - union = first.union(second) - expected = index.copy().set_names(expected_name) - tm.assert_index_equal(union, expected) - - # Test empty.union(copy) - first = index.drop(index).set_names(fname) - second = index.copy().set_names(sname) - union = first.union(second) - expected = index.copy().set_names(expected_name) - tm.assert_index_equal(union, expected) - - # Test empty.union(empty) - first = index.drop(index).set_names(fname) - second = index.drop(index).set_names(sname) - union = first.union(second) - expected = index.drop(index).set_names(expected_name) - tm.assert_index_equal(union, expected) - - @pytest.mark.parametrize( - "fname, sname, expected_name", - [ - ("A", "A", "A"), - ("A", "B", None), - ("A", None, None), - (None, "B", None), - (None, None, None), - ], - ) - def test_union_unequal(self, index_flat, fname, sname, expected_name): - if not index_flat.is_unique: - pytest.skip("Randomly generated index_flat was not unique.") - index = index_flat - - # test 
copy.union(subset) - need sort for unicode and string - first = index.copy().set_names(fname) - second = index[1:].set_names(sname) - union = first.union(second).sort_values() - expected = index.set_names(expected_name).sort_values() - tm.assert_index_equal(union, expected) - - @pytest.mark.parametrize( - "fname, sname, expected_name", - [ - ("A", "A", "A"), - ("A", "B", None), - ("A", None, None), - (None, "B", None), - (None, None, None), - ], - ) - def test_corner_intersect(self, index_flat, fname, sname, expected_name): - # GH#35847 - # Test intersections with various name combinations - if not index_flat.is_unique: - pytest.skip("Randomly generated index_flat was not unique.") - index = index_flat - - # Test copy.intersection(copy) - first = index.copy().set_names(fname) - second = index.copy().set_names(sname) - intersect = first.intersection(second) - expected = index.copy().set_names(expected_name) - tm.assert_index_equal(intersect, expected) - - # Test copy.intersection(empty) - first = index.copy().set_names(fname) - second = index.drop(index).set_names(sname) - intersect = first.intersection(second) - expected = index.drop(index).set_names(expected_name) - tm.assert_index_equal(intersect, expected) - - # Test empty.intersection(copy) - first = index.drop(index).set_names(fname) - second = index.copy().set_names(sname) - intersect = first.intersection(second) - expected = index.drop(index).set_names(expected_name) - tm.assert_index_equal(intersect, expected) - - # Test empty.intersection(empty) - first = index.drop(index).set_names(fname) - second = index.drop(index).set_names(sname) - intersect = first.intersection(second) - expected = index.drop(index).set_names(expected_name) - tm.assert_index_equal(intersect, expected) - - @pytest.mark.parametrize( - "fname, sname, expected_name", - [ - ("A", "A", "A"), - ("A", "B", None), - ("A", None, None), - (None, "B", None), - (None, None, None), - ], - ) - def test_intersect_unequal(self, index_flat, fname, sname, expected_name): - if not index_flat.is_unique: - pytest.skip("Randomly generated index_flat was not unique.") - index = index_flat - - # test copy.intersection(subset) - need sort for unicode and string - first = index.copy().set_names(fname) - second = index[1:].set_names(sname) - intersect = first.intersection(second).sort_values() - expected = index[1:].set_names(expected_name).sort_values() - tm.assert_index_equal(intersect, expected) - - @pytest.mark.filterwarnings(r"ignore:PeriodDtype\[B\] is deprecated:FutureWarning") - def test_intersection_name_retention_with_nameless(self, index): - if isinstance(index, MultiIndex): - index = index.rename(list(range(index.nlevels))) - else: - index = index.rename("foo") - - other = np.asarray(index) - - result = index.intersection(other) - assert result.name == index.name - - # empty other, same dtype - result = index.intersection(other[:0]) - assert result.name == index.name - - # empty `self` - result = index[:0].intersection(other) - assert result.name == index.name - - def test_difference_preserves_type_empty(self, index, sort): - # GH#20040 - # If taking difference of a set and itself, it - # needs to preserve the type of the index - if not index.is_unique: - pytest.skip("Not relevant since index is not unique") - result = index.difference(index, sort=sort) - expected = index[:0] - tm.assert_index_equal(result, expected, exact=True) - - def test_difference_name_retention_equals(self, index, names): - if isinstance(index, MultiIndex): - names = [[x] * index.nlevels for x in 
names] - index = index.rename(names[0]) - other = index.rename(names[1]) - - assert index.equals(other) - - result = index.difference(other) - expected = index[:0].rename(names[2]) - tm.assert_index_equal(result, expected) - - def test_intersection_difference_match_empty(self, index, sort): - # GH#20040 - # Test that the intersection of an index with an - # empty index produces the same index as the difference - # of an index with itself. Test for all types - if not index.is_unique: - pytest.skip("Not relevant because index is not unique") - inter = index.intersection(index[:0]) - diff = index.difference(index, sort=sort) - tm.assert_index_equal(inter, diff, exact=True) - - -@pytest.mark.filterwarnings(r"ignore:PeriodDtype\[B\] is deprecated:FutureWarning") -@pytest.mark.filterwarnings( - "ignore:Falling back on a non-pyarrow:pandas.errors.PerformanceWarning" -) -@pytest.mark.parametrize( - "method", ["intersection", "union", "difference", "symmetric_difference"] -) -def test_setop_with_categorical(index_flat, sort, method): - # MultiIndex tested separately in tests.indexes.multi.test_setops - index = index_flat - - other = index.astype("category") - exact = "equiv" if isinstance(index, RangeIndex) else True - - result = getattr(index, method)(other, sort=sort) - expected = getattr(index, method)(index, sort=sort) - tm.assert_index_equal(result, expected, exact=exact) - - result = getattr(index, method)(other[:5], sort=sort) - expected = getattr(index, method)(index[:5], sort=sort) - tm.assert_index_equal(result, expected, exact=exact) - - -def test_intersection_duplicates_all_indexes(index): - # GH#38743 - if index.empty: - # No duplicates in empty indexes - pytest.skip("Not relevant for empty Index") - - idx = index - idx_non_unique = idx[[0, 0, 1, 2]] - - assert idx.intersection(idx_non_unique).equals(idx_non_unique.intersection(idx)) - assert idx.intersection(idx_non_unique).is_unique - - -def test_union_duplicate_index_subsets_of_each_other( - any_dtype_for_small_pos_integer_indexes, -): - # GH#31326 - dtype = any_dtype_for_small_pos_integer_indexes - a = Index([1, 2, 2, 3], dtype=dtype) - b = Index([3, 3, 4], dtype=dtype) - - expected = Index([1, 2, 2, 3, 3, 4], dtype=dtype) - if isinstance(a, CategoricalIndex): - expected = Index([1, 2, 2, 3, 3, 4]) - result = a.union(b) - tm.assert_index_equal(result, expected) - result = a.union(b, sort=False) - tm.assert_index_equal(result, expected) - - -def test_union_with_duplicate_index_and_non_monotonic( - any_dtype_for_small_pos_integer_indexes, -): - # GH#36289 - dtype = any_dtype_for_small_pos_integer_indexes - a = Index([1, 0, 0], dtype=dtype) - b = Index([0, 1], dtype=dtype) - expected = Index([0, 0, 1], dtype=dtype) - - result = a.union(b) - tm.assert_index_equal(result, expected) - - result = b.union(a) - tm.assert_index_equal(result, expected) - - -def test_union_duplicate_index_different_dtypes(): - # GH#36289 - a = Index([1, 2, 2, 3]) - b = Index(["1", "0", "0"]) - expected = Index([1, 2, 2, 3, "1", "0", "0"]) - result = a.union(b, sort=False) - tm.assert_index_equal(result, expected) - - -def test_union_same_value_duplicated_in_both(): - # GH#36289 - a = Index([0, 0, 1]) - b = Index([0, 0, 1, 2]) - result = a.union(b) - expected = Index([0, 0, 1, 2]) - tm.assert_index_equal(result, expected) - - -@pytest.mark.parametrize("dup", [1, np.nan]) -def test_union_nan_in_both(dup): - # GH#36289 - a = Index([np.nan, 1, 2, 2]) - b = Index([np.nan, dup, 1, 2]) - result = a.union(b, sort=False) - expected = Index([np.nan, dup, 1.0, 2.0, 2.0]) 
- tm.assert_index_equal(result, expected) - - -def test_union_rangeindex_sort_true(): - # GH 53490 - idx1 = RangeIndex(1, 100, 6) - idx2 = RangeIndex(1, 50, 3) - result = idx1.union(idx2, sort=True) - expected = Index( - [ - 1, - 4, - 7, - 10, - 13, - 16, - 19, - 22, - 25, - 28, - 31, - 34, - 37, - 40, - 43, - 46, - 49, - 55, - 61, - 67, - 73, - 79, - 85, - 91, - 97, - ] - ) - tm.assert_index_equal(result, expected) - - -def test_union_with_duplicate_index_not_subset_and_non_monotonic( - any_dtype_for_small_pos_integer_indexes, -): - # GH#36289 - dtype = any_dtype_for_small_pos_integer_indexes - a = Index([1, 0, 2], dtype=dtype) - b = Index([0, 0, 1], dtype=dtype) - expected = Index([0, 0, 1, 2], dtype=dtype) - if isinstance(a, CategoricalIndex): - expected = Index([0, 0, 1, 2]) - - result = a.union(b) - tm.assert_index_equal(result, expected) - - result = b.union(a) - tm.assert_index_equal(result, expected) - - -def test_union_int_categorical_with_nan(): - ci = CategoricalIndex([1, 2, np.nan]) - assert ci.categories.dtype.kind == "i" - - idx = Index([1, 2]) - - result = idx.union(ci) - expected = Index([1, 2, np.nan], dtype=np.float64) - tm.assert_index_equal(result, expected) - - result = ci.union(idx) - tm.assert_index_equal(result, expected) - - -class TestSetOpsUnsorted: - # These may eventually belong in a dtype-specific test_setops, or - # parametrized over a more general fixture - def test_intersect_str_dates(self): - dt_dates = [datetime(2012, 2, 9), datetime(2012, 2, 22)] - - index1 = Index(dt_dates, dtype=object) - index2 = Index(["aa"], dtype=object) - result = index2.intersection(index1) - - expected = Index([], dtype=object) - tm.assert_index_equal(result, expected) - - @pytest.mark.parametrize("index", ["string"], indirect=True) - def test_intersection(self, index, sort): - first = index[:20] - second = index[:10] - intersect = first.intersection(second, sort=sort) - if sort is None: - tm.assert_index_equal(intersect, second.sort_values()) - assert tm.equalContents(intersect, second) - - # Corner cases - inter = first.intersection(first, sort=sort) - assert inter is first - - @pytest.mark.parametrize( - "index2,keeps_name", - [ - (Index([3, 4, 5, 6, 7], name="index"), True), # preserve same name - (Index([3, 4, 5, 6, 7], name="other"), False), # drop diff names - (Index([3, 4, 5, 6, 7]), False), - ], - ) - def test_intersection_name_preservation(self, index2, keeps_name, sort): - index1 = Index([1, 2, 3, 4, 5], name="index") - expected = Index([3, 4, 5]) - result = index1.intersection(index2, sort) - - if keeps_name: - expected.name = "index" - - assert result.name == expected.name - tm.assert_index_equal(result, expected) - - @pytest.mark.parametrize("index", ["string"], indirect=True) - @pytest.mark.parametrize( - "first_name,second_name,expected_name", - [("A", "A", "A"), ("A", "B", None), (None, "B", None)], - ) - def test_intersection_name_preservation2( - self, index, first_name, second_name, expected_name, sort - ): - first = index[5:20] - second = index[:10] - first.name = first_name - second.name = second_name - intersect = first.intersection(second, sort=sort) - assert intersect.name == expected_name - - def test_chained_union(self, sort): - # Chained unions handles names correctly - i1 = Index([1, 2], name="i1") - i2 = Index([5, 6], name="i2") - i3 = Index([3, 4], name="i3") - union = i1.union(i2.union(i3, sort=sort), sort=sort) - expected = i1.union(i2, sort=sort).union(i3, sort=sort) - tm.assert_index_equal(union, expected) - - j1 = Index([1, 2], name="j1") - j2 
= Index([], name="j2") - j3 = Index([], name="j3") - union = j1.union(j2.union(j3, sort=sort), sort=sort) - expected = j1.union(j2, sort=sort).union(j3, sort=sort) - tm.assert_index_equal(union, expected) - - @pytest.mark.parametrize("index", ["string"], indirect=True) - def test_union(self, index, sort): - first = index[5:20] - second = index[:10] - everything = index[:20] - - union = first.union(second, sort=sort) - if sort is None: - tm.assert_index_equal(union, everything.sort_values()) - assert tm.equalContents(union, everything) - - @pytest.mark.parametrize("klass", [np.array, Series, list]) - @pytest.mark.parametrize("index", ["string"], indirect=True) - def test_union_from_iterables(self, index, klass, sort): - # GH#10149 - first = index[5:20] - second = index[:10] - everything = index[:20] - - case = klass(second.values) - result = first.union(case, sort=sort) - if sort is None: - tm.assert_index_equal(result, everything.sort_values()) - assert tm.equalContents(result, everything) - - @pytest.mark.parametrize("index", ["string"], indirect=True) - def test_union_identity(self, index, sort): - first = index[5:20] - - union = first.union(first, sort=sort) - # i.e. identity is not preserved when sort is True - assert (union is first) is (not sort) - - # This should no longer be the same object, since [] is not consistent, - # both objects will be recast to dtype('O') - union = first.union([], sort=sort) - assert (union is first) is (not sort) - - union = Index([]).union(first, sort=sort) - assert (union is first) is (not sort) - - @pytest.mark.parametrize("index", ["string"], indirect=True) - @pytest.mark.parametrize("second_name,expected", [(None, None), ("name", "name")]) - def test_difference_name_preservation(self, index, second_name, expected, sort): - first = index[5:20] - second = index[:10] - answer = index[10:20] - - first.name = "name" - second.name = second_name - result = first.difference(second, sort=sort) - - assert tm.equalContents(result, answer) - - if expected is None: - assert result.name is None - else: - assert result.name == expected - - def test_difference_empty_arg(self, index, sort): - first = index[5:20] - first.name = "name" - result = first.difference([], sort) - - tm.assert_index_equal(result, first) - - @pytest.mark.parametrize("index", ["string"], indirect=True) - def test_difference_identity(self, index, sort): - first = index[5:20] - first.name = "name" - result = first.difference(first, sort) - - assert len(result) == 0 - assert result.name == first.name - - @pytest.mark.parametrize("index", ["string"], indirect=True) - def test_difference_sort(self, index, sort): - first = index[5:20] - second = index[:10] - - result = first.difference(second, sort) - expected = index[10:20] - - if sort is None: - expected = expected.sort_values() - - tm.assert_index_equal(result, expected) - - @pytest.mark.parametrize("opname", ["difference", "symmetric_difference"]) - def test_difference_incomparable(self, opname): - a = Index([3, Timestamp("2000"), 1]) - b = Index([2, Timestamp("1999"), 1]) - op = operator.methodcaller(opname, b) - - with tm.assert_produces_warning(RuntimeWarning): - # sort=None, the default - result = op(a) - expected = Index([3, Timestamp("2000"), 2, Timestamp("1999")]) - if opname == "difference": - expected = expected[:2] - tm.assert_index_equal(result, expected) - - # sort=False - op = operator.methodcaller(opname, b, sort=False) - result = op(a) - tm.assert_index_equal(result, expected) - - @pytest.mark.parametrize("opname", ["difference", 
"symmetric_difference"]) - def test_difference_incomparable_true(self, opname): - a = Index([3, Timestamp("2000"), 1]) - b = Index([2, Timestamp("1999"), 1]) - op = operator.methodcaller(opname, b, sort=True) - - msg = "'<' not supported between instances of 'Timestamp' and 'int'" - with pytest.raises(TypeError, match=msg): - op(a) - - def test_symmetric_difference_mi(self, sort): - index1 = MultiIndex.from_tuples(zip(["foo", "bar", "baz"], [1, 2, 3])) - index2 = MultiIndex.from_tuples([("foo", 1), ("bar", 3)]) - result = index1.symmetric_difference(index2, sort=sort) - expected = MultiIndex.from_tuples([("bar", 2), ("baz", 3), ("bar", 3)]) - if sort is None: - expected = expected.sort_values() - tm.assert_index_equal(result, expected) - assert tm.equalContents(result, expected) - - @pytest.mark.parametrize( - "index2,expected", - [ - (Index([0, 1, np.nan]), Index([2.0, 3.0, 0.0])), - (Index([0, 1]), Index([np.nan, 2.0, 3.0, 0.0])), - ], - ) - def test_symmetric_difference_missing(self, index2, expected, sort): - # GH#13514 change: {nan} - {nan} == {} - # (GH#6444, sorting of nans, is no longer an issue) - index1 = Index([1, np.nan, 2, 3]) - - result = index1.symmetric_difference(index2, sort=sort) - if sort is None: - expected = expected.sort_values() - tm.assert_index_equal(result, expected) - - def test_symmetric_difference_non_index(self, sort): - index1 = Index([1, 2, 3, 4], name="index1") - index2 = np.array([2, 3, 4, 5]) - expected = Index([1, 5]) - result = index1.symmetric_difference(index2, sort=sort) - assert tm.equalContents(result, expected) - assert result.name == "index1" - - result = index1.symmetric_difference(index2, result_name="new_name", sort=sort) - assert tm.equalContents(result, expected) - assert result.name == "new_name" - - def test_union_ea_dtypes(self, any_numeric_ea_and_arrow_dtype): - # GH#51365 - idx = Index([1, 2, 3], dtype=any_numeric_ea_and_arrow_dtype) - idx2 = Index([3, 4, 5], dtype=any_numeric_ea_and_arrow_dtype) - result = idx.union(idx2) - expected = Index([1, 2, 3, 4, 5], dtype=any_numeric_ea_and_arrow_dtype) - tm.assert_index_equal(result, expected) - - def test_union_string_array(self, any_string_dtype): - idx1 = Index(["a"], dtype=any_string_dtype) - idx2 = Index(["b"], dtype=any_string_dtype) - result = idx1.union(idx2) - expected = Index(["a", "b"], dtype=any_string_dtype) - tm.assert_index_equal(result, expected) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pydantic/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pydantic/__init__.py deleted file mode 100644 index ec7b0b068460ff8395de51319665156f8320fb4b..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pydantic/__init__.py +++ /dev/null @@ -1,225 +0,0 @@ -import typing - -import pydantic_core -from pydantic_core.core_schema import ( - FieldSerializationInfo, - SerializationInfo, - SerializerFunctionWrapHandler, - ValidationInfo, - ValidatorFunctionWrapHandler, -) - -from . 
import dataclasses -from ._internal._generate_schema import GenerateSchema as GenerateSchema -from ._migration import getattr_migration -from .annotated_handlers import GetCoreSchemaHandler, GetJsonSchemaHandler -from .config import ConfigDict -from .errors import * -from .fields import AliasChoices, AliasPath, Field, PrivateAttr, computed_field -from .functional_serializers import PlainSerializer, SerializeAsAny, WrapSerializer, field_serializer, model_serializer -from .functional_validators import ( - AfterValidator, - BeforeValidator, - InstanceOf, - PlainValidator, - SkipValidation, - WrapValidator, - field_validator, - model_validator, -) -from .json_schema import WithJsonSchema -from .main import * -from .networks import * -from .type_adapter import TypeAdapter -from .types import * -from .validate_call import validate_call -from .version import VERSION -from .warnings import * - -__version__ = VERSION - -# this encourages pycharm to import `ValidationError` from here, not pydantic_core -ValidationError = pydantic_core.ValidationError - -if typing.TYPE_CHECKING: - # these are imported via `__getattr__` below, but we need them here for type checking and IDE support - from .deprecated.class_validators import root_validator, validator - from .deprecated.config import BaseConfig, Extra - from .deprecated.tools import * - from .root_model import RootModel - -__all__ = [ - # dataclasses - 'dataclasses', - # pydantic_core.core_schema - 'ValidationInfo', - 'ValidatorFunctionWrapHandler', - # functional validators - 'field_validator', - 'model_validator', - 'AfterValidator', - 'BeforeValidator', - 'PlainValidator', - 'WrapValidator', - 'SkipValidation', - 'InstanceOf', - 'WithJsonSchema', - # deprecated V1 functional validators, these are imported via `__getattr__` below - 'root_validator', - 'validator', - # functional serializers - 'field_serializer', - 'model_serializer', - 'PlainSerializer', - 'SerializeAsAny', - 'WrapSerializer', - 'FieldSerializationInfo', - 'SerializationInfo', - 'SerializerFunctionWrapHandler', - # config - 'ConfigDict', - # deprecated V1 config, these are imported via `__getattr__` below - 'BaseConfig', - 'Extra', - # validate_call - 'validate_call', - # pydantic_core errors - 'ValidationError', - # errors - 'PydanticErrorCodes', - 'PydanticUserError', - 'PydanticSchemaGenerationError', - 'PydanticImportError', - 'PydanticUndefinedAnnotation', - 'PydanticInvalidForJsonSchema', - # fields - 'AliasPath', - 'AliasChoices', - 'Field', - 'computed_field', - # main - 'BaseModel', - 'create_model', - # network - 'AnyUrl', - 'AnyHttpUrl', - 'FileUrl', - 'HttpUrl', - 'UrlConstraints', - 'EmailStr', - 'NameEmail', - 'IPvAnyAddress', - 'IPvAnyInterface', - 'IPvAnyNetwork', - 'PostgresDsn', - 'CockroachDsn', - 'AmqpDsn', - 'RedisDsn', - 'MongoDsn', - 'KafkaDsn', - 'MySQLDsn', - 'MariaDBDsn', - 'validate_email', - # root_model - 'RootModel', - # deprecated tools, these are imported via `__getattr__` below - 'parse_obj_as', - 'schema_of', - 'schema_json_of', - # types - 'Strict', - 'StrictStr', - 'conbytes', - 'conlist', - 'conset', - 'confrozenset', - 'constr', - 'StringConstraints', - 'ImportString', - 'conint', - 'PositiveInt', - 'NegativeInt', - 'NonNegativeInt', - 'NonPositiveInt', - 'confloat', - 'PositiveFloat', - 'NegativeFloat', - 'NonNegativeFloat', - 'NonPositiveFloat', - 'FiniteFloat', - 'condecimal', - 'condate', - 'UUID1', - 'UUID3', - 'UUID4', - 'UUID5', - 'FilePath', - 'DirectoryPath', - 'NewPath', - 'Json', - 'SecretStr', - 'SecretBytes', - 'StrictBool', - 
'StrictBytes', - 'StrictInt', - 'StrictFloat', - 'PaymentCardNumber', - 'PrivateAttr', - 'ByteSize', - 'PastDate', - 'FutureDate', - 'PastDatetime', - 'FutureDatetime', - 'AwareDatetime', - 'NaiveDatetime', - 'AllowInfNan', - 'EncoderProtocol', - 'EncodedBytes', - 'EncodedStr', - 'Base64Encoder', - 'Base64Bytes', - 'Base64Str', - 'Base64UrlBytes', - 'Base64UrlStr', - 'GetPydanticSchema', - # type_adapter - 'TypeAdapter', - # version - 'VERSION', - # warnings - 'PydanticDeprecatedSince20', - 'PydanticDeprecationWarning', - # annotated handlers - 'GetCoreSchemaHandler', - 'GetJsonSchemaHandler', - 'GenerateSchema', -] - -# A mapping of {: (package, )} defining dynamic imports -_dynamic_imports: 'dict[str, tuple[str, str]]' = { - 'RootModel': (__package__, '.root_model'), - 'root_validator': (__package__, '.deprecated.class_validators'), - 'validator': (__package__, '.deprecated.class_validators'), - 'BaseConfig': (__package__, '.deprecated.config'), - 'Extra': (__package__, '.deprecated.config'), - 'parse_obj_as': (__package__, '.deprecated.tools'), - 'schema_of': (__package__, '.deprecated.tools'), - 'schema_json_of': (__package__, '.deprecated.tools'), - # FieldValidationInfo is deprecated, and hidden behind module a `__getattr__` - 'FieldValidationInfo': ('pydantic_core', '.core_schema'), -} - -_getattr_migration = getattr_migration(__name__) - - -def __getattr__(attr_name: str) -> object: - dynamic_attr = _dynamic_imports.get(attr_name) - if dynamic_attr is None: - return _getattr_migration(attr_name) - - package, module_name = dynamic_attr - - from importlib import import_module - - module = import_module(module_name, package=package) - return getattr(module, attr_name) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/starlette/responses.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/starlette/responses.py deleted file mode 100644 index 453bbb158eb60c63701f5c683546513c93ca8e27..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/starlette/responses.py +++ /dev/null @@ -1,366 +0,0 @@ -import http.cookies -import json -import os -import stat -import sys -import typing -from datetime import datetime -from email.utils import format_datetime, formatdate -from functools import partial -from mimetypes import guess_type as mimetypes_guess_type -from urllib.parse import quote - -import anyio - -from starlette._compat import md5_hexdigest -from starlette.background import BackgroundTask -from starlette.concurrency import iterate_in_threadpool -from starlette.datastructures import URL, MutableHeaders -from starlette.types import Receive, Scope, Send - -if sys.version_info >= (3, 8): # pragma: no cover - from typing import Literal -else: # pragma: no cover - from typing_extensions import Literal - -# Workaround for adding samesite support to pre 3.8 python -http.cookies.Morsel._reserved["samesite"] = "SameSite" # type: ignore[attr-defined] - - -# Compatibility wrapper for `mimetypes.guess_type` to support `os.PathLike` on typing.Tuple[typing.Optional[str], typing.Optional[str]]: - if sys.version_info < (3, 8): # pragma: no cover - url = os.fspath(url) - return mimetypes_guess_type(url, strict) - - -class Response: - media_type = None - charset = "utf-8" - - def __init__( - self, - content: typing.Any = None, - status_code: int = 200, - headers: typing.Optional[typing.Mapping[str, str]] = None, - media_type: typing.Optional[str] = None, - background: typing.Optional[BackgroundTask] = None, - ) 
-> None: - self.status_code = status_code - if media_type is not None: - self.media_type = media_type - self.background = background - self.body = self.render(content) - self.init_headers(headers) - - def render(self, content: typing.Any) -> bytes: - if content is None: - return b"" - if isinstance(content, bytes): - return content - return content.encode(self.charset) - - def init_headers( - self, headers: typing.Optional[typing.Mapping[str, str]] = None - ) -> None: - if headers is None: - raw_headers: typing.List[typing.Tuple[bytes, bytes]] = [] - populate_content_length = True - populate_content_type = True - else: - raw_headers = [ - (k.lower().encode("latin-1"), v.encode("latin-1")) - for k, v in headers.items() - ] - keys = [h[0] for h in raw_headers] - populate_content_length = b"content-length" not in keys - populate_content_type = b"content-type" not in keys - - body = getattr(self, "body", None) - if ( - body is not None - and populate_content_length - and not (self.status_code < 200 or self.status_code in (204, 304)) - ): - content_length = str(len(body)) - raw_headers.append((b"content-length", content_length.encode("latin-1"))) - - content_type = self.media_type - if content_type is not None and populate_content_type: - if content_type.startswith("text/"): - content_type += "; charset=" + self.charset - raw_headers.append((b"content-type", content_type.encode("latin-1"))) - - self.raw_headers = raw_headers - - @property - def headers(self) -> MutableHeaders: - if not hasattr(self, "_headers"): - self._headers = MutableHeaders(raw=self.raw_headers) - return self._headers - - def set_cookie( - self, - key: str, - value: str = "", - max_age: typing.Optional[int] = None, - expires: typing.Optional[typing.Union[datetime, str, int]] = None, - path: str = "/", - domain: typing.Optional[str] = None, - secure: bool = False, - httponly: bool = False, - samesite: typing.Optional[Literal["lax", "strict", "none"]] = "lax", - ) -> None: - cookie: "http.cookies.BaseCookie[str]" = http.cookies.SimpleCookie() - cookie[key] = value - if max_age is not None: - cookie[key]["max-age"] = max_age - if expires is not None: - if isinstance(expires, datetime): - cookie[key]["expires"] = format_datetime(expires, usegmt=True) - else: - cookie[key]["expires"] = expires - if path is not None: - cookie[key]["path"] = path - if domain is not None: - cookie[key]["domain"] = domain - if secure: - cookie[key]["secure"] = True - if httponly: - cookie[key]["httponly"] = True - if samesite is not None: - assert samesite.lower() in [ - "strict", - "lax", - "none", - ], "samesite must be either 'strict', 'lax' or 'none'" - cookie[key]["samesite"] = samesite - cookie_val = cookie.output(header="").strip() - self.raw_headers.append((b"set-cookie", cookie_val.encode("latin-1"))) - - def delete_cookie( - self, - key: str, - path: str = "/", - domain: typing.Optional[str] = None, - secure: bool = False, - httponly: bool = False, - samesite: typing.Optional[Literal["lax", "strict", "none"]] = "lax", - ) -> None: - self.set_cookie( - key, - max_age=0, - expires=0, - path=path, - domain=domain, - secure=secure, - httponly=httponly, - samesite=samesite, - ) - - async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None: - await send( - { - "type": "http.response.start", - "status": self.status_code, - "headers": self.raw_headers, - } - ) - await send({"type": "http.response.body", "body": self.body}) - - if self.background is not None: - await self.background() - - -class HTMLResponse(Response): - 
media_type = "text/html" - - -class PlainTextResponse(Response): - media_type = "text/plain" - - -class JSONResponse(Response): - media_type = "application/json" - - def __init__( - self, - content: typing.Any, - status_code: int = 200, - headers: typing.Optional[typing.Dict[str, str]] = None, - media_type: typing.Optional[str] = None, - background: typing.Optional[BackgroundTask] = None, - ) -> None: - super().__init__(content, status_code, headers, media_type, background) - - def render(self, content: typing.Any) -> bytes: - return json.dumps( - content, - ensure_ascii=False, - allow_nan=False, - indent=None, - separators=(",", ":"), - ).encode("utf-8") - - -class RedirectResponse(Response): - def __init__( - self, - url: typing.Union[str, URL], - status_code: int = 307, - headers: typing.Optional[typing.Mapping[str, str]] = None, - background: typing.Optional[BackgroundTask] = None, - ) -> None: - super().__init__( - content=b"", status_code=status_code, headers=headers, background=background - ) - self.headers["location"] = quote(str(url), safe=":/%#?=@[]!$&'()*+,;") - - -Content = typing.Union[str, bytes] -SyncContentStream = typing.Iterator[Content] -AsyncContentStream = typing.AsyncIterable[Content] -ContentStream = typing.Union[AsyncContentStream, SyncContentStream] - - -class StreamingResponse(Response): - body_iterator: AsyncContentStream - - def __init__( - self, - content: ContentStream, - status_code: int = 200, - headers: typing.Optional[typing.Mapping[str, str]] = None, - media_type: typing.Optional[str] = None, - background: typing.Optional[BackgroundTask] = None, - ) -> None: - if isinstance(content, typing.AsyncIterable): - self.body_iterator = content - else: - self.body_iterator = iterate_in_threadpool(content) - self.status_code = status_code - self.media_type = self.media_type if media_type is None else media_type - self.background = background - self.init_headers(headers) - - async def listen_for_disconnect(self, receive: Receive) -> None: - while True: - message = await receive() - if message["type"] == "http.disconnect": - break - - async def stream_response(self, send: Send) -> None: - await send( - { - "type": "http.response.start", - "status": self.status_code, - "headers": self.raw_headers, - } - ) - async for chunk in self.body_iterator: - if not isinstance(chunk, bytes): - chunk = chunk.encode(self.charset) - await send({"type": "http.response.body", "body": chunk, "more_body": True}) - - await send({"type": "http.response.body", "body": b"", "more_body": False}) - - async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None: - async with anyio.create_task_group() as task_group: - - async def wrap(func: "typing.Callable[[], typing.Awaitable[None]]") -> None: - await func() - task_group.cancel_scope.cancel() - - task_group.start_soon(wrap, partial(self.stream_response, send)) - await wrap(partial(self.listen_for_disconnect, receive)) - - if self.background is not None: - await self.background() - - -class FileResponse(Response): - chunk_size = 64 * 1024 - - def __init__( - self, - path: typing.Union[str, "os.PathLike[str]"], - status_code: int = 200, - headers: typing.Optional[typing.Mapping[str, str]] = None, - media_type: typing.Optional[str] = None, - background: typing.Optional[BackgroundTask] = None, - filename: typing.Optional[str] = None, - stat_result: typing.Optional[os.stat_result] = None, - method: typing.Optional[str] = None, - content_disposition_type: str = "attachment", - ) -> None: - self.path = path - self.status_code = 
status_code - self.filename = filename - self.send_header_only = method is not None and method.upper() == "HEAD" - if media_type is None: - media_type = guess_type(filename or path)[0] or "text/plain" - self.media_type = media_type - self.background = background - self.init_headers(headers) - if self.filename is not None: - content_disposition_filename = quote(self.filename) - if content_disposition_filename != self.filename: - content_disposition = "{}; filename*=utf-8''{}".format( - content_disposition_type, content_disposition_filename - ) - else: - content_disposition = '{}; filename="{}"'.format( - content_disposition_type, self.filename - ) - self.headers.setdefault("content-disposition", content_disposition) - self.stat_result = stat_result - if stat_result is not None: - self.set_stat_headers(stat_result) - - def set_stat_headers(self, stat_result: os.stat_result) -> None: - content_length = str(stat_result.st_size) - last_modified = formatdate(stat_result.st_mtime, usegmt=True) - etag_base = str(stat_result.st_mtime) + "-" + str(stat_result.st_size) - etag = md5_hexdigest(etag_base.encode(), usedforsecurity=False) - - self.headers.setdefault("content-length", content_length) - self.headers.setdefault("last-modified", last_modified) - self.headers.setdefault("etag", etag) - - async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None: - if self.stat_result is None: - try: - stat_result = await anyio.to_thread.run_sync(os.stat, self.path) - self.set_stat_headers(stat_result) - except FileNotFoundError: - raise RuntimeError(f"File at path {self.path} does not exist.") - else: - mode = stat_result.st_mode - if not stat.S_ISREG(mode): - raise RuntimeError(f"File at path {self.path} is not a file.") - await send( - { - "type": "http.response.start", - "status": self.status_code, - "headers": self.raw_headers, - } - ) - if self.send_header_only: - await send({"type": "http.response.body", "body": b"", "more_body": False}) - else: - async with await anyio.open_file(self.path, mode="rb") as file: - more_body = True - while more_body: - chunk = await file.read(self.chunk_size) - more_body = len(chunk) == self.chunk_size - await send( - { - "type": "http.response.body", - "body": chunk, - "more_body": more_body, - } - ) - if self.background is not None: - await self.background() diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/websockets/imports.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/websockets/imports.py deleted file mode 100644 index a6a59d4c2ee61ec7801af317b88bbda4c26b7ef7..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/websockets/imports.py +++ /dev/null @@ -1,99 +0,0 @@ -from __future__ import annotations - -import warnings -from typing import Any, Dict, Iterable, Optional - - -__all__ = ["lazy_import"] - - -def import_name(name: str, source: str, namespace: Dict[str, Any]) -> Any: - """ - Import ``name`` from ``source`` in ``namespace``. - - There are two use cases: - - - ``name`` is an object defined in ``source``; - - ``name`` is a submodule of ``source``. - - Neither :func:`__import__` nor :func:`~importlib.import_module` does - exactly this. :func:`__import__` is closer to the intended behavior. 
- - """ - level = 0 - while source[level] == ".": - level += 1 - assert level < len(source), "importing from parent isn't supported" - module = __import__(source[level:], namespace, None, [name], level) - return getattr(module, name) - - -def lazy_import( - namespace: Dict[str, Any], - aliases: Optional[Dict[str, str]] = None, - deprecated_aliases: Optional[Dict[str, str]] = None, -) -> None: - """ - Provide lazy, module-level imports. - - Typical use:: - - __getattr__, __dir__ = lazy_import( - globals(), - aliases={ - "": "", - ... - }, - deprecated_aliases={ - ..., - } - ) - - This function defines ``__getattr__`` and ``__dir__`` per :pep:`562`. - - """ - if aliases is None: - aliases = {} - if deprecated_aliases is None: - deprecated_aliases = {} - - namespace_set = set(namespace) - aliases_set = set(aliases) - deprecated_aliases_set = set(deprecated_aliases) - - assert not namespace_set & aliases_set, "namespace conflict" - assert not namespace_set & deprecated_aliases_set, "namespace conflict" - assert not aliases_set & deprecated_aliases_set, "namespace conflict" - - package = namespace["__name__"] - - def __getattr__(name: str) -> Any: - assert aliases is not None # mypy cannot figure this out - try: - source = aliases[name] - except KeyError: - pass - else: - return import_name(name, source, namespace) - - assert deprecated_aliases is not None # mypy cannot figure this out - try: - source = deprecated_aliases[name] - except KeyError: - pass - else: - warnings.warn( - f"{package}.{name} is deprecated", - DeprecationWarning, - stacklevel=2, - ) - return import_name(name, source, namespace) - - raise AttributeError(f"module {package!r} has no attribute {name!r}") - - namespace["__getattr__"] = __getattr__ - - def __dir__() -> Iterable[str]: - return sorted(namespace_set | aliases_set | deprecated_aliases_set) - - namespace["__dir__"] = __dir__ diff --git a/spaces/pscpeng/ChuanhuChatGPT/assets/custom.js b/spaces/pscpeng/ChuanhuChatGPT/assets/custom.js deleted file mode 100644 index 7b1761043149ff97ca498501c87a0d15db5258ee..0000000000000000000000000000000000000000 --- a/spaces/pscpeng/ChuanhuChatGPT/assets/custom.js +++ /dev/null @@ -1 +0,0 @@ -// custom javascript here \ No newline at end of file diff --git a/spaces/qingxu98/academic-chatgpt-beta/README.md b/spaces/qingxu98/academic-chatgpt-beta/README.md deleted file mode 100644 index 8da32e5b7f24251464a72786bb25644f21c122a2..0000000000000000000000000000000000000000 --- a/spaces/qingxu98/academic-chatgpt-beta/README.md +++ /dev/null @@ -1,299 +0,0 @@ ---- -title: academic-chatgpt -emoji: 😻 -colorFrom: blue -colorTo: blue -sdk: gradio -sdk_version: 3.28.3 -python_version: 3.11 -app_file: main.py -pinned: false ---- - -# ChatGPT 学术优化 - -**如果喜欢这个项目,请给它一个Star;如果你发明了更好用的快捷键或函数插件,欢迎发issue或者pull requests** - -If you like this project, please give it a Star. If you've come up with more useful academic shortcuts or functional plugins, feel free to open an issue or pull request. We also have a README in [English|](docs/README_EN.md)[日本語|](docs/README_JP.md)[Русский|](docs/README_RS.md)[Français](docs/README_FR.md) translated by this project itself. - -> **Note** -> -> 1.请注意只有**红颜色**标识的函数插件(按钮)才支持读取文件,部分插件位于插件区的**下拉菜单**中。另外我们以**最高优先级**欢迎和处理任何新插件的PR! 
-> -> 2.本项目中每个文件的功能都在自译解[`self_analysis.md`](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A)详细说明。随着版本的迭代,您也可以随时自行点击相关函数插件,调用GPT重新生成项目的自我解析报告。常见问题汇总在[`wiki`](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98)当中。 -> - - -
                - -功能 | 描述 ---- | --- -一键润色 | 支持一键润色、一键查找论文语法错误 -一键中英互译 | 一键中英互译 -一键代码解释 | 可以正确显示代码、解释代码 -[自定义快捷键](https://www.bilibili.com/video/BV14s4y1E7jN) | 支持自定义快捷键 -[配置代理服务器](https://www.bilibili.com/video/BV1rc411W7Dr) | 支持配置代理服务器 -模块化设计 | 支持自定义高阶的函数插件与[函数插件],插件支持[热更新](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97) -[自我程序剖析](https://www.bilibili.com/video/BV1cj411A7VW) | [函数插件] [一键读懂](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A)本项目的源代码 -[程序剖析](https://www.bilibili.com/video/BV1cj411A7VW) | [函数插件] 一键可以剖析其他Python/C/C++/Java/Lua/...项目树 -读论文 | [函数插件] 一键解读latex论文全文并生成摘要 -Latex全文翻译、润色 | [函数插件] 一键翻译或润色latex论文 -批量注释生成 | [函数插件] 一键批量生成函数注释 -chat分析报告生成 | [函数插件] 运行后自动生成总结汇报 -[arxiv小助手](https://www.bilibili.com/video/BV1LM4y1279X) | [函数插件] 输入arxiv文章url即可一键翻译摘要+下载PDF -[PDF论文全文翻译功能](https://www.bilibili.com/video/BV1KT411x7Wn) | [函数插件] PDF论文提取题目&摘要+翻译全文(多线程) -[谷歌学术统合小助手](https://www.bilibili.com/video/BV19L411U7ia) | [函数插件] 给定任意谷歌学术搜索页面URL,让gpt帮你选择有趣的文章 -公式/图片/表格显示 | 可以同时显示公式的tex形式和渲染形式,支持公式、代码高亮 -多线程函数插件支持 | 支持多线调用chatgpt,一键处理海量文本或程序 -启动暗色gradio[主题](https://github.com/binary-husky/chatgpt_academic/issues/173) | 在浏览器url后面添加```/?__dark-theme=true```可以切换dark主题 -[多LLM模型](https://www.bilibili.com/video/BV1wT411p7yf)支持,[API2D](https://api2d.com/)接口支持 | 同时被GPT3.5、GPT4和[清华ChatGLM](https://github.com/THUDM/ChatGLM-6B)伺候的感觉一定会很不错吧? -huggingface免科学上网[在线体验](https://huggingface.co/spaces/qingxu98/gpt-academic) | 登陆huggingface后复制[此空间](https://huggingface.co/spaces/qingxu98/gpt-academic) -…… | …… - -
                - - -- 新界面(修改config.py中的LAYOUT选项即可实现“左右布局”和“上下布局”的切换) -
                - -
                - - -- 所有按钮都通过读取functional.py动态生成,可随意加自定义功能,解放粘贴板 -
                - -
                - -- 润色/纠错 -
                - -
                - -- 如果输出包含公式,会同时以tex形式和渲染形式显示,方便复制和阅读 -
                - -
                - -- 懒得看项目代码?整个工程直接给chatgpt炫嘴里 -
                - -
                - -- 多种大语言模型混合调用(ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4) -
                - -
                - -多种大语言模型混合调用[huggingface测试版](https://huggingface.co/spaces/qingxu98/academic-chatgpt-beta)(huggingface版不支持chatglm) - - ---- - -## 安装-方法1:直接运行 (Windows, Linux or MacOS) - -1. 下载项目 -```sh -git clone https://github.com/binary-husky/chatgpt_academic.git -cd chatgpt_academic -``` - -2. 配置API_KEY和代理设置 - -在`config.py`中,配置 海外Proxy 和 OpenAI API KEY,说明如下 -``` -1. 如果你在国内,需要设置海外代理才能够顺利使用 OpenAI API,设置方法请仔细阅读config.py(1.修改其中的USE_PROXY为True; 2.按照说明修改其中的proxies)。 -2. 配置 OpenAI API KEY。你需要在 OpenAI 官网上注册并获取 API KEY。一旦你拿到了 API KEY,在 config.py 文件里配置好即可。 -3. 与代理网络有关的issue(网络超时、代理不起作用)汇总到 https://github.com/binary-husky/chatgpt_academic/issues/1 -``` -(P.S. 程序运行时会优先检查是否存在名为`config_private.py`的私密配置文件,并用其中的配置覆盖`config.py`的同名配置。因此,如果您能理解我们的配置读取逻辑,我们强烈建议您在`config.py`旁边创建一个名为`config_private.py`的新配置文件,并把`config.py`中的配置转移(复制)到`config_private.py`中。`config_private.py`不受git管控,可以让您的隐私信息更加安全。) - - -3. 安装依赖 -```sh -# (选择一)推荐 -python -m pip install -r requirements.txt - -# (选择二)如果您使用anaconda,步骤也是类似的: -# (选择二.1)conda create -n gptac_venv python=3.11 -# (选择二.2)conda activate gptac_venv -# (选择二.3)python -m pip install -r requirements.txt - -# 备注:使用官方pip源或者阿里pip源,其他pip源(如一些大学的pip)有可能出问题,临时换源方法: -# python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/ -``` - -如果需要支持清华ChatGLM,需要额外安装更多依赖(不熟悉python者、电脑配置不佳者,建议不要尝试): -```sh -python -m pip install -r request_llm/requirements_chatglm.txt -``` - -4. 运行 -```sh -python main.py -``` - -5. 测试函数插件 -``` -- 测试Python项目分析 - input区域 输入 `./crazy_functions/test_project/python/dqn` , 然后点击 "解析整个Python项目" -- 测试自我代码解读 - 点击 "[多线程Demo] 解析此项目本身(源码自译解)" -- 测试实验功能模板函数(要求gpt回答历史上的今天发生了什么),您可以根据此函数为模板,实现更复杂的功能 - 点击 "[函数插件模板Demo] 历史上的今天" -- 函数插件区下拉菜单中有更多功能可供选择 -``` - -## 安装-方法2:使用docker (Linux) - -1. 仅ChatGPT(推荐大多数人选择) -``` sh -# 下载项目 -git clone https://github.com/binary-husky/chatgpt_academic.git -cd chatgpt_academic -# 配置 海外Proxy 和 OpenAI API KEY -用任意文本编辑器编辑 config.py -# 安装 -docker build -t gpt-academic . -# 运行 -docker run --rm -it --net=host gpt-academic - -# 测试函数插件 -## 测试函数插件模板函数(要求gpt回答历史上的今天发生了什么),您可以根据此函数为模板,实现更复杂的功能 -点击 "[函数插件模板Demo] 历史上的今天" -## 测试给Latex项目写摘要 -input区域 输入 ./crazy_functions/test_project/latex/attention , 然后点击 "读Tex论文写摘要" -## 测试Python项目分析 -input区域 输入 ./crazy_functions/test_project/python/dqn , 然后点击 "解析整个Python项目" - -函数插件区下拉菜单中有更多功能可供选择 -``` - -2. ChatGPT+ChatGLM(需要对docker非常熟悉 + 电脑配置足够强) - -``` sh -# 修改dockerfile -cd docs && nano Dockerfile+ChatGLM -# How to build | 如何构建 (Dockerfile+ChatGLM在docs路径下,请先cd docs) -docker build -t gpt-academic --network=host -f Dockerfile+ChatGLM . -# How to run | 如何运行 (1) 直接运行: -docker run --rm -it --net=host --gpus=all gpt-academic -# How to run | 如何运行 (2) 我想运行之前进容器做一些调整: -docker run --rm -it --net=host --gpus=all gpt-academic bash -``` - - -## 安装-方法3:其他部署方式 - -1. 远程云服务器部署 -请访问[部署wiki-1](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97) - -2. 使用WSL2(Windows Subsystem for Linux 子系统) -请访问[部署wiki-2](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2) - - -## 安装-代理配置 -1. 常规方法 -[配置代理](https://github.com/binary-husky/chatgpt_academic/issues/1) - -2. 
纯新手教程 -[纯新手教程](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BB%A3%E7%90%86%E8%BD%AF%E4%BB%B6%E9%97%AE%E9%A2%98%E7%9A%84%E6%96%B0%E6%89%8B%E8%A7%A3%E5%86%B3%E6%96%B9%E6%B3%95%EF%BC%88%E6%96%B9%E6%B3%95%E5%8F%AA%E9%80%82%E7%94%A8%E4%BA%8E%E6%96%B0%E6%89%8B%EF%BC%89) - - ---- - -## 自定义新的便捷按钮(学术快捷键自定义) -任意文本编辑器打开`core_functional.py`,添加条目如下,然后重启程序即可。(如果按钮已经添加成功并可见,那么前缀、后缀都支持热修改,无需重启程序即可生效。) -例如 -``` -"超级英译中": { - # 前缀,会被加在你的输入之前。例如,用来描述你的要求,例如翻译、解释代码、润色等等 - "Prefix": "请翻译把下面一段内容成中文,然后用一个markdown表格逐一解释文中出现的专有名词:\n\n", - - # 后缀,会被加在你的输入之后。例如,配合前缀可以把你的输入内容用引号圈起来。 - "Suffix": "", -}, -``` -
                - -
                - ---- - - -## 部分功能展示 - -### 图片显示: - -
                - -
                - - -### 如果一个程序能够读懂并剖析自己: - -
                - -
                - -
                - -
                - -### 其他任意Python/Cpp项目剖析: -
                - -
                - -
                - -
                - -### Latex论文一键阅读理解与摘要生成 -
                - -
                - -### 自动报告生成 -
                - - - -
                - -### 模块化功能设计 -
                - - -
                - - -### 源代码转译英文 - -
                - -
                - -## Todo 与 版本规划: -- version 3.2+ (todo): 函数插件支持更多参数接口 -- version 3.1: 支持同时问询多个gpt模型!支持api2d,支持多个apikey负载均衡 -- version 3.0: 对chatglm和其他小型llm的支持 -- version 2.6: 重构了插件结构,提高了交互性,加入更多插件 -- version 2.5: 自更新,解决总结大工程源代码时文本过长、token溢出的问题 -- version 2.4: (1)新增PDF全文翻译功能; (2)新增输入区切换位置的功能; (3)新增垂直布局选项; (4)多线程函数插件优化。 -- version 2.3: 增强多线程交互性 -- version 2.2: 函数插件支持热重载 -- version 2.1: 可折叠式布局 -- version 2.0: 引入模块化函数插件 -- version 1.0: 基础功能 - -## 参考与学习 - -``` -代码中参考了很多其他优秀项目中的设计,主要包括: - -# 借鉴项目1:借鉴了ChuanhuChatGPT中诸多技巧 -https://github.com/GaiZhenbiao/ChuanhuChatGPT - -# 借鉴项目2:清华ChatGLM-6B: -https://github.com/THUDM/ChatGLM-6B -``` diff --git a/spaces/qinzhu/moe-tts-tech/text/japanese.py b/spaces/qinzhu/moe-tts-tech/text/japanese.py deleted file mode 100644 index 375e4d50872d5c68ee57ca17470a2ca425425eba..0000000000000000000000000000000000000000 --- a/spaces/qinzhu/moe-tts-tech/text/japanese.py +++ /dev/null @@ -1,153 +0,0 @@ -import re -from unidecode import unidecode -import pyopenjtalk - - -# Regular expression matching Japanese without punctuation marks: -_japanese_characters = re.compile( - r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# Regular expression matching non-Japanese characters or punctuation marks: -_japanese_marks = re.compile( - r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# List of (symbol, Japanese) pairs for marks: -_symbols_to_japanese = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('%', 'パーセント') -]] - -# List of (romaji, ipa) pairs for marks: -_romaji_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ts', 'ʦ'), - ('u', 'ɯ'), - ('j', 'ʥ'), - ('y', 'j'), - ('ni', 'n^i'), - ('nj', 'n^'), - ('hi', 'çi'), - ('hj', 'ç'), - ('f', 'ɸ'), - ('I', 'i*'), - ('U', 'ɯ*'), - ('r', 'ɾ') -]] - -# List of (romaji, ipa2) pairs for marks: -_romaji_to_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('u', 'ɯ'), - ('ʧ', 'tʃ'), - ('j', 'dʑ'), - ('y', 'j'), - ('ni', 'n^i'), - ('nj', 'n^'), - ('hi', 'çi'), - ('hj', 'ç'), - ('f', 'ɸ'), - ('I', 'i*'), - ('U', 'ɯ*'), - ('r', 'ɾ') -]] - -# List of (consonant, sokuon) pairs: -_real_sokuon = [(re.compile('%s' % x[0]), x[1]) for x in [ - (r'Q([↑↓]*[kg])', r'k#\1'), - (r'Q([↑↓]*[tdjʧ])', r't#\1'), - (r'Q([↑↓]*[sʃ])', r's\1'), - (r'Q([↑↓]*[pb])', r'p#\1') -]] - -# List of (consonant, hatsuon) pairs: -_real_hatsuon = [(re.compile('%s' % x[0]), x[1]) for x in [ - (r'N([↑↓]*[pbm])', r'm\1'), - (r'N([↑↓]*[ʧʥj])', r'n^\1'), - (r'N([↑↓]*[tdn])', r'n\1'), - (r'N([↑↓]*[kg])', r'ŋ\1') -]] - - -def symbols_to_japanese(text): - for regex, replacement in _symbols_to_japanese: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_romaji_with_accent(text): - '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html''' - text = symbols_to_japanese(text) - sentences = re.split(_japanese_marks, text) - marks = re.findall(_japanese_marks, text) - text = '' - for i, sentence in enumerate(sentences): - if re.match(_japanese_characters, sentence): - if text != '': - text += ' ' - labels = pyopenjtalk.extract_fullcontext(sentence) - for n, label in enumerate(labels): - phoneme = re.search(r'\-([^\+]*)\+', label).group(1) - if phoneme not in ['sil', 'pau']: - text += phoneme.replace('ch', 'ʧ').replace('sh', - 'ʃ').replace('cl', 'Q') - else: - continue - # n_moras = int(re.search(r'/F:(\d+)_', label).group(1)) - a1 = int(re.search(r"/A:(\-?[0-9]+)\+", label).group(1)) - a2 = int(re.search(r"\+(\d+)\+", 
label).group(1)) - a3 = int(re.search(r"\+(\d+)/", label).group(1)) - if re.search(r'\-([^\+]*)\+', labels[n + 1]).group(1) in ['sil', 'pau']: - a2_next = -1 - else: - a2_next = int( - re.search(r"\+(\d+)\+", labels[n + 1]).group(1)) - # Accent phrase boundary - if a3 == 1 and a2_next == 1: - text += ' ' - # Falling - elif a1 == 0 and a2_next == a2 + 1: - text += '↓' - # Rising - elif a2 == 1 and a2_next == 2: - text += '↑' - if i < len(marks): - text += unidecode(marks[i]).replace(' ', '') - return text - - -def get_real_sokuon(text): - for regex, replacement in _real_sokuon: - text = re.sub(regex, replacement, text) - return text - - -def get_real_hatsuon(text): - for regex, replacement in _real_hatsuon: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_ipa(text): - text = japanese_to_romaji_with_accent(text).replace('...', '…') - text = re.sub( - r'([aiueo])\1+', lambda x: x.group(0)[0]+'ː'*(len(x.group(0))-1), text) - text = get_real_sokuon(text) - text = get_real_hatsuon(text) - for regex, replacement in _romaji_to_ipa: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_ipa2(text): - text = japanese_to_romaji_with_accent(text).replace('...', '…') - text = get_real_sokuon(text) - text = get_real_hatsuon(text) - for regex, replacement in _romaji_to_ipa2: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_ipa3(text): - text = japanese_to_ipa2(text).replace('n^', 'ȵ').replace( - 'ʃ', 'ɕ').replace('*', '\u0325').replace('#', '\u031a') - text = re.sub( - r'([aiɯeo])\1+', lambda x: x.group(0)[0]+'ː'*(len(x.group(0))-1), text) - text = re.sub(r'((?:^|\s)(?:ts|tɕ|[kpt]))', r'\1ʰ', text) - return text diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Cypheros TS-Doctor V1.2.22 Portable.rar.md b/spaces/quidiaMuxgu/Expedit-SAM/Cypheros TS-Doctor V1.2.22 Portable.rar.md deleted file mode 100644 index ee075cadd76deb6203f41557160ea2f468066a98..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Cypheros TS-Doctor V1.2.22 Portable.rar.md +++ /dev/null @@ -1,6 +0,0 @@ -
                -


                -

The TS-Doctor processes the recording format of most DVB-C, DVB-S and DVB-T receivers. So it makes no difference whether you receive your television channels via satellite, cable, or antenna: the TS-Doctor is the optimal utility for processing these recordings on a PC and bringing them into a suitable format.

The TS-Doctor provides an easy-to-use cutting option which, together with the automatic advertising recognition, makes it very easy to remove annoying commercial breaks from TV recordings.

The program examines and repairs your TV recordings and makes sure that they can be freely processed and played back on today's media players. It can also handle recordings and formats that other applications often cannot read.

The tool also supports HDTV recordings. HDTV means high-resolution TV with great picture and sound quality. Despite the large files associated with HDTV, the program works very smoothly and without loss of image or sound quality.

The TS-Doctor can greatly reduce the required file size by eliminating unnecessary file content and filler data.

                -

                Cypheros TS-Doctor v1.2.22 Portable.rar


                DOWNLOAD ►►► https://geags.com/2uCrL3



                899543212b
                -
                -
                \ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Dream Theater Live At Budokan Dvd Download REPACK.md b/spaces/quidiaMuxgu/Expedit-SAM/Dream Theater Live At Budokan Dvd Download REPACK.md deleted file mode 100644 index 67ed8b2e294a7e0146b920c16bb481b77151805f..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Dream Theater Live At Budokan Dvd Download REPACK.md +++ /dev/null @@ -1,74 +0,0 @@ - -

                Dream Theater Live At Budokan Dvd Download: A Must-Have for Prog Metal Fans

                - -

                If you are a fan of progressive metal, you probably know Dream Theater, one of the most influential and talented bands in the genre. But do you know their live performance at Budokan, Japan on April 26, 2004? If not, you are missing out on one of the best concerts ever recorded on DVD.

                -

                Dream Theater Live At Budokan Dvd Download


                Download ✓✓✓ https://geags.com/2uCrc0



                - -

                Dream Theater Live At Budokan Dvd Download is a three-hour show that showcases the band's amazing skills, creativity and passion. The setlist includes songs from their albums Images and Words, Awake, Falling Into Infinity, Metropolis Pt. 2: Scenes from a Memory, Six Degrees of Inner Turbulence and Train of Thought. You will hear classics like "Pull Me Under", "Metropolis Pt. 1", "Home", "The Spirit Carries On" and "As I Am", as well as epic tracks like "A Change of Seasons", "Beyond This Life" and "Octavarium". You will also witness the band's incredible instrumental prowess in the 12-minute "Instrumedley" that features snippets from various songs.

                - -

                The DVD also features a bonus disc with an extended drum solo by Mike Portnoy, gear tours by John Petrucci and Jordan Rudess, and a documentary called "Riding the Train of Thought" that gives you a behind-the-scenes look at the band's tour in Japan. The production quality is superb, with high-definition video, widescreen format and Dolby Digital 5.1 surround sound. The camera angles are well-chosen and give you a close-up view of each band member's performance.

                - -

                Dream Theater Live At Budokan Dvd Download is a must-have for any prog metal fan who wants to experience the magic of Dream Theater live. It is a testament to the band's musical genius and dedication to their fans. You can download it from various online platforms or order it from Amazon or other retailers. Don't miss this opportunity to witness one of the greatest concerts of all time.

                -

                -

                Why You Should Download Dream Theater Live At Budokan DVD

                - -

                Dream Theater Live At Budokan DVD is not just a concert video, it is a musical journey that will take you to the heights of prog metal excellence. You will see and hear Dream Theater at their peak, performing with passion, precision and power. You will be amazed by their technical skills, their musical diversity and their emotional depth. You will feel like you are part of the audience, witnessing a historic event that will never be repeated.

                - -

                Dream Theater Live At Budokan DVD is also a great way to discover or rediscover the band's discography, as they play songs from different albums and eras. You will appreciate how they have evolved and matured over the years, while staying true to their vision and style. You will also enjoy the bonus features that give you more insight into the band's personality, history and creative process.

                - -

                Dream Theater Live At Budokan DVD is a must-have for any Dream Theater fan, as well as anyone who loves progressive metal or music in general. It is a masterpiece that will inspire you, challenge you and entertain you. It is a DVD that you will watch over and over again, discovering new details and nuances every time. It is a DVD that you will cherish and share with your friends and family.

                - -

                How to Download Dream Theater Live At Budokan DVD

                - -

                If you are convinced that Dream Theater Live At Budokan DVD is something you need to have in your collection, you might be wondering how to download it. There are several options available, depending on your preferences and budget. Here are some of them:

                - -
                  -
                • Buy the DVD from Amazon or other online retailers. This is the easiest and safest way to get the DVD, as you will receive a physical copy that you can play on any device. You will also support the band and their label by purchasing their official product.
                • -
                • Download the DVD from torrent sites or file-sharing platforms. This is a risky and illegal way to get the DVD, as you might encounter viruses, malware or fake files. You will also violate the band's copyright and deprive them of their deserved income.
                • -
                • Stream the DVD from YouTube or other video sites. This is a convenient and free way to watch the DVD, but you will not get the best quality or experience. You will also depend on your internet connection and availability of the video.
                • -
                - -

                The choice is yours, but we recommend that you buy the DVD from Amazon or other online retailers, as it is the best way to enjoy Dream Theater Live At Budokan DVD in its full glory.

                -

                What People Are Saying About Dream Theater Live At Budokan DVD

                - -

                Dream Theater Live At Budokan DVD has received rave reviews from critics and fans alike, who praised the band's performance, the production quality and the bonus features. Here are some of the comments from various sources:

                - -
                  -
                • "Dream Theater's Live at Budokan is a stunning showcase of a band at the height of its powers, delivering a mind-blowing set of progressive metal masterpieces with flawless execution and infectious enthusiasm." - AllMusic
                • -
                • "Live at Budokan is a treasure-trove for Dream Theater fans, presenting an entire three-hour performance at Tokyo's famed Budokan arena on April 26, 2004, along with a bonus disc rich in supplementary material." - Amazon.co.uk
                • -
                • "Live at Budokan is one of the best live DVDs ever made, period. The sound and picture quality are superb, the camera work is excellent, and the performance is simply phenomenal." - Prog Archives
                • -
                • "Live at Budokan is a must for any Dream Theater fan and a great introduction for newcomers. It captures the band in top form, playing with passion, precision and power. It is a musical journey that will take you to the heights of prog metal excellence." - Metal Storm
                • -
                - -

                Conclusion: Dream Theater Live At Budokan DVD is a Must-Have

                - -

                In conclusion, Dream Theater Live At Budokan DVD is a must-have for any prog metal fan who wants to experience the magic of Dream Theater live. It is a masterpiece that will inspire you, challenge you and entertain you. It is a DVD that you will watch over and over again, discovering new details and nuances every time. It is a DVD that you will cherish and share with your friends and family.

                - -

                If you want to download Dream Theater Live At Budokan DVD, you have several options available, depending on your preferences and budget. You can buy the DVD from Amazon or other online retailers, download it from torrent sites or file-sharing platforms, or stream it from YouTube or other video sites. The choice is yours, but we recommend that you buy the DVD from Amazon or other online retailers, as it is the best way to enjoy Dream Theater Live At Budokan DVD in its full glory.

                - -

                Don't miss this opportunity to witness one of the greatest concerts of all time. Download Dream Theater Live At Budokan DVD today and prepare to be amazed.

                -

                Where to Watch Dream Theater Live At Budokan DVD

                - -

                If you have downloaded Dream Theater Live At Budokan DVD, you might be wondering where to watch it. You have several options available, depending on your preferences and equipment. Here are some of them:

                - -
                  -
                • Watch it on your computer or laptop. This is the simplest and most convenient way to watch the DVD, as you can use any media player that supports Blu-ray or DVD formats. You can also adjust the settings to your liking, such as brightness, contrast, volume and subtitles.
                • -
                • Watch it on your TV or home theater system. This is the best way to enjoy the DVD in its full quality and sound, as you can use a Blu-ray or DVD player that connects to your TV or home theater system. You can also use an HDMI cable or a wireless device to stream the DVD from your computer or laptop to your TV or home theater system.
                • -
                • Watch it on your smartphone or tablet. This is a convenient and portable way to watch the DVD, as you can use any app that supports Blu-ray or DVD formats. You can also download the DVD to your device or stream it from a cloud service. However, you might not get the best quality or sound, as the screen size and speakers are limited.
                • -
                - -

                The choice is yours, but we recommend that you watch Dream Theater Live At Budokan DVD on your TV or home theater system, as it is the best way to experience the concert in its full glory.

                - -

                Tips for Watching Dream Theater Live At Budokan DVD

                - -

                Now that you have decided where to watch Dream Theater Live At Budokan DVD, you might want some tips for watching it. Here are some of them:

                - -
                  -
                • Watch it with friends or family. Dream Theater Live At Budokan DVD is a great way to share your love for prog metal with others, as you can enjoy the concert together and discuss your favorite songs and moments. You can also make it a fun event by preparing some snacks and drinks.
                • -
                • Watch it with headphones or earphones. Dream Theater Live At Budokan DVD is a great way to immerse yourself in the concert, as you can hear every detail and nuance of the band's performance. You can also block out any distractions and focus on the music.
                • -
                • Watch it with an open mind and heart. Dream Theater Live At Budokan DVD is a great way to appreciate the band's artistry and creativity, as you can witness their musical diversity and emotional depth. You can also learn something new and be inspired by their technical skills and passion.
                • -
                - -

                The choice is yours, but we recommend that you watch Dream Theater Live At Budokan DVD with an open mind and heart, as it is the best way to enjoy the concert in its full beauty.

                3cee63e6c2
                -
                -
                \ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Mobile Tracking Software Used By Police BEST.md b/spaces/quidiaMuxgu/Expedit-SAM/Mobile Tracking Software Used By Police BEST.md deleted file mode 100644 index 98aa658384d80083dfbe72a769b723f1c581d29c..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Mobile Tracking Software Used By Police BEST.md +++ /dev/null @@ -1,24 +0,0 @@ -

                Mobile tracking software used by police


                Download Filehttps://geags.com/2uCsCo



                - -UNI / SIM number, only that person knows about it. In most of the cases, the requirement to trace a .UNI / SIM number is required for legal, police, immigration or investigation purpose. The entire process of tracing of .UNI / SIM number is carried out by different people with different sets of tools and their skills. The investigation agency uses different methods to trace .UNI / SIM number. Those methods could be found in most of the investigations books. The detection methods of .UNI / SIM number can be divided into three major groups: - -1. *Acquiring the details of the .UNI / SIM number*: This includes any method which can acquire the details of .UNI / SIM number, such as, obtaining the IMEI of a mobile phone, requesting the owner of a SIM card to provide the SIM card details, pinging the SIM card. This method may require a mobile phone number. If the number is not known, then it may involve contacting the mobile phone network (Airtel, Vodafone etc.). - -2. *Tracing the owners of the .UNI / SIM number*: This includes two types of methods: - - a. From the details of the .UNI / SIM number, one can find the owner of the number; - - b. If the details of the .UNI / SIM number is not known, then the method involves identifying the location of the phone using triangulation. - -3. *Tracing the .UNI / SIM number*: This includes two types of methods: - - a. If the owner of the .UNI / SIM number is not known, then the method involves identifying the location of the phone using triangulation. - - b. From the owner of the .UNI / SIM number, one can find the owner of the number. - -The triangulation method is probably the most widely used method of identifying the location of a .UNI / SIM number. This method is used to identify the location of a .UNI / SIM number using the location of a .UNI / SIM card, the location of the mobile phone handset, the location of a nearby base station or tower, and the location of the mobile phone handset. - -The first mobile phone was brought in France in 1950. Since then, 4fefd39f24
                -
                -
                -

                diff --git a/spaces/qwertyuiee/AnimeBackgroundGAN/network/__init__.py b/spaces/qwertyuiee/AnimeBackgroundGAN/network/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/radames/Real-Time-Latent-Consistency-Model-Text-To-Image/app-controlnet.py b/spaces/radames/Real-Time-Latent-Consistency-Model-Text-To-Image/app-controlnet.py deleted file mode 100644 index dc40bb6dac819f0898dd8612c48c0c19448fbbf9..0000000000000000000000000000000000000000 --- a/spaces/radames/Real-Time-Latent-Consistency-Model-Text-To-Image/app-controlnet.py +++ /dev/null @@ -1,308 +0,0 @@ -import asyncio -import json -import logging -import traceback -from pydantic import BaseModel - -from fastapi import FastAPI, WebSocket, HTTPException, WebSocketDisconnect -from fastapi.middleware.cors import CORSMiddleware -from fastapi.responses import StreamingResponse, JSONResponse -from fastapi.staticfiles import StaticFiles - -from diffusers import AutoencoderTiny, ControlNetModel -from latent_consistency_controlnet import LatentConsistencyModelPipeline_controlnet -from compel import Compel -import torch - -from canny_gpu import SobelOperator -# from controlnet_aux import OpenposeDetector -# import cv2 - -try: - import intel_extension_for_pytorch as ipex -except: - pass -from PIL import Image -import numpy as np -import gradio as gr -import io -import uuid -import os -import time -import psutil - - -MAX_QUEUE_SIZE = int(os.environ.get("MAX_QUEUE_SIZE", 0)) -TIMEOUT = float(os.environ.get("TIMEOUT", 0)) -SAFETY_CHECKER = os.environ.get("SAFETY_CHECKER", None) -TORCH_COMPILE = os.environ.get("TORCH_COMPILE", None) -WIDTH = 512 -HEIGHT = 512 -# disable tiny autoencoder for better quality speed tradeoff -USE_TINY_AUTOENCODER = True - -# check if MPS is available OSX only M1/M2/M3 chips -mps_available = hasattr(torch.backends, "mps") and torch.backends.mps.is_available() -xpu_available = hasattr(torch, "xpu") and torch.xpu.is_available() -device = torch.device( - "cuda" if torch.cuda.is_available() else "xpu" if xpu_available else "cpu" -) - -# change to torch.float16 to save GPU memory -torch_dtype = torch.float16 - -print(f"TIMEOUT: {TIMEOUT}") -print(f"SAFETY_CHECKER: {SAFETY_CHECKER}") -print(f"MAX_QUEUE_SIZE: {MAX_QUEUE_SIZE}") -print(f"device: {device}") - -if mps_available: - device = torch.device("mps") - device = "cpu" - torch_dtype = torch.float32 - -controlnet_canny = ControlNetModel.from_pretrained( - "lllyasviel/control_v11p_sd15_canny", torch_dtype=torch_dtype -).to(device) - -canny_torch = SobelOperator(device=device) -# controlnet_pose = ControlNetModel.from_pretrained( -# "lllyasviel/control_v11p_sd15_openpose", torch_dtype=torch_dtype -# ).to(device) -# controlnet_depth = ControlNetModel.from_pretrained( -# "lllyasviel/control_v11f1p_sd15_depth", torch_dtype=torch_dtype -# ).to(device) - - -# pose_processor = OpenposeDetector.from_pretrained("lllyasviel/ControlNet") - -if SAFETY_CHECKER == "True": - pipe = LatentConsistencyModelPipeline_controlnet.from_pretrained( - "SimianLuo/LCM_Dreamshaper_v7", - controlnet=controlnet_canny, - scheduler=None, - ) -else: - pipe = LatentConsistencyModelPipeline_controlnet.from_pretrained( - "SimianLuo/LCM_Dreamshaper_v7", - safety_checker=None, - controlnet=controlnet_canny, - scheduler=None, - ) - -if USE_TINY_AUTOENCODER: - pipe.vae = AutoencoderTiny.from_pretrained( - "madebyollin/taesd", torch_dtype=torch_dtype, use_safetensors=True - ) 
-pipe.set_progress_bar_config(disable=True) -pipe.to(device=device, dtype=torch_dtype).to(device) -pipe.unet.to(memory_format=torch.channels_last) - -if psutil.virtual_memory().total < 64 * 1024**3: - pipe.enable_attention_slicing() - -compel_proc = Compel( - tokenizer=pipe.tokenizer, - text_encoder=pipe.text_encoder, - truncate_long_prompts=False, -) -if TORCH_COMPILE: - pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) - pipe.vae = torch.compile(pipe.vae, mode="reduce-overhead", fullgraph=True) - - pipe(prompt="warmup", image=[Image.new("RGB", (768, 768))], control_image=[Image.new("RGB", (768, 768))]) - - -user_queue_map = {} - - -class InputParams(BaseModel): - seed: int = 2159232 - prompt: str - guidance_scale: float = 8.0 - strength: float = 0.5 - steps: int = 4 - lcm_steps: int = 50 - width: int = WIDTH - height: int = HEIGHT - controlnet_scale: float = 0.8 - controlnet_start: float = 0.0 - controlnet_end: float = 1.0 - canny_low_threshold: float = 0.31 - canny_high_threshold: float = 0.78 - debug_canny: bool = False - -def predict( - input_image: Image.Image, params: InputParams, prompt_embeds: torch.Tensor = None -): - generator = torch.manual_seed(params.seed) - - control_image = canny_torch(input_image, params.canny_low_threshold, params.canny_high_threshold) - results = pipe( - control_image=control_image, - prompt_embeds=prompt_embeds, - generator=generator, - image=input_image, - strength=params.strength, - num_inference_steps=params.steps, - guidance_scale=params.guidance_scale, - width=params.width, - height=params.height, - lcm_origin_steps=params.lcm_steps, - output_type="pil", - controlnet_conditioning_scale=params.controlnet_scale, - control_guidance_start=params.controlnet_start, - control_guidance_end=params.controlnet_end, - ) - nsfw_content_detected = ( - results.nsfw_content_detected[0] - if "nsfw_content_detected" in results - else False - ) - if nsfw_content_detected: - return None - result_image = results.images[0] - if params.debug_canny: - # paste control_image on top of result_image - w0, h0 = (200, 200) - control_image = control_image.resize((w0, h0)) - w1, h1 = result_image.size - result_image.paste(control_image, (w1 - w0, h1 - h0)) - - return result_image - - -app = FastAPI() -app.add_middleware( - CORSMiddleware, - allow_origins=["*"], - allow_credentials=True, - allow_methods=["*"], - allow_headers=["*"], -) - - -@app.websocket("/ws") -async def websocket_endpoint(websocket: WebSocket): - await websocket.accept() - if MAX_QUEUE_SIZE > 0 and len(user_queue_map) >= MAX_QUEUE_SIZE: - print("Server is full") - await websocket.send_json({"status": "error", "message": "Server is full"}) - await websocket.close() - return - - try: - uid = str(uuid.uuid4()) - print(f"New user connected: {uid}") - await websocket.send_json( - {"status": "success", "message": "Connected", "userId": uid} - ) - user_queue_map[uid] = {"queue": asyncio.Queue()} - await websocket.send_json( - {"status": "start", "message": "Start Streaming", "userId": uid} - ) - await handle_websocket_data(websocket, uid) - except WebSocketDisconnect as e: - logging.error(f"WebSocket Error: {e}, {uid}") - traceback.print_exc() - finally: - print(f"User disconnected: {uid}") - queue_value = user_queue_map.pop(uid, None) - queue = queue_value.get("queue", None) - if queue: - while not queue.empty(): - try: - queue.get_nowait() - except asyncio.QueueEmpty: - continue - - -@app.get("/queue_size") -async def get_queue_size(): - queue_size = len(user_queue_map) - return 
JSONResponse({"queue_size": queue_size}) - - -@app.get("/stream/{user_id}") -async def stream(user_id: uuid.UUID): - uid = str(user_id) - try: - user_queue = user_queue_map[uid] - queue = user_queue["queue"] - - async def generate(): - last_prompt: str = None - prompt_embeds: torch.Tensor = None - while True: - data = await queue.get() - input_image = data["image"] - params = data["params"] - if input_image is None: - continue - # avoid recalculate prompt embeds - if last_prompt != params.prompt: - print("new prompt") - prompt_embeds = compel_proc(params.prompt) - last_prompt = params.prompt - - image = predict( - input_image, - params, - prompt_embeds, - ) - if image is None: - continue - frame_data = io.BytesIO() - image.save(frame_data, format="JPEG") - frame_data = frame_data.getvalue() - if frame_data is not None and len(frame_data) > 0: - yield b"--frame\r\nContent-Type: image/jpeg\r\n\r\n" + frame_data + b"\r\n" - - await asyncio.sleep(1.0 / 120.0) - - return StreamingResponse( - generate(), media_type="multipart/x-mixed-replace;boundary=frame" - ) - except Exception as e: - logging.error(f"Streaming Error: {e}, {user_queue_map}") - traceback.print_exc() - return HTTPException(status_code=404, detail="User not found") - - -async def handle_websocket_data(websocket: WebSocket, user_id: uuid.UUID): - uid = str(user_id) - user_queue = user_queue_map[uid] - queue = user_queue["queue"] - if not queue: - return HTTPException(status_code=404, detail="User not found") - last_time = time.time() - try: - while True: - data = await websocket.receive_bytes() - params = await websocket.receive_json() - params = InputParams(**params) - pil_image = Image.open(io.BytesIO(data)) - - while not queue.empty(): - try: - queue.get_nowait() - except asyncio.QueueEmpty: - continue - await queue.put({"image": pil_image, "params": params}) - if TIMEOUT > 0 and time.time() - last_time > TIMEOUT: - await websocket.send_json( - { - "status": "timeout", - "message": "Your session has ended", - "userId": uid, - } - ) - await websocket.close() - return - - except Exception as e: - logging.error(f"Error: {e}") - traceback.print_exc() - - -app.mount("/", StaticFiles(directory="controlnet", html=True), name="public") diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Be2worksrizalrarfull _TOP_.md b/spaces/raedeXanto/academic-chatgpt-beta/Be2worksrizalrarfull _TOP_.md deleted file mode 100644 index de02004c86f9576d2d0aa1f2083748435c7d4c82..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Be2worksrizalrarfull _TOP_.md +++ /dev/null @@ -1,104 +0,0 @@ -
                -

                What is be2worksrizalrarFull?

                -

                If you are wondering what be2worksrizalrarFull is, you are not alone. This is a very obscure and mysterious term that does not seem to have any meaning at first glance. However, if you look closer, you will find that be2worksrizalrarFull is actually a combination of three different words: WinRAR, Adobe Photoshop CS6 Extended, and Google Drive. These are three popular software tools that can help you with various tasks such as data compression, image editing, and file storage. In this article, we will explain what each of these tools is, how they work, and how they are related to be2worksrizalrarFull.

                -

                be2worksrizalrarFull


                Download >>> https://tinourl.com/2uL2NJ



                -

                What is WinRAR?

                -

                WinRAR is a powerful archiver extractor tool that can open all popular file formats such as RAR, ZIP, 7-Zip, CAB, ARJ, LZH, TAR, Gzip, UUE, BZIP2, and ISO. WinRAR can also create compressed archives in RAR and ZIP formats that can save disk space and enable faster file sharing. WinRAR is compatible with Windows 11™ and Windows 10™ as well as other operating systems such as macOS, Linux, FreeBSD, and Android. WinRAR supports over 50 languages and has both 32-bit and 64-bit versions. WinRAR is also the only compression software that can work with Unicode.

                -

                How to download and install WinRAR

                -

                To download WinRAR, you can visit the official website https://www.win-rar.com/download.html and choose the version that suits your system requirements. You can also select the language that you prefer. After downloading the setup file, you can run it and follow the instructions to complete the installation process. The installation is quick and easy, and you can customize some settings such as the destination folder, the file associations, and the shortcuts. After the installation is done, you can launch WinRAR and start using it to compress and extract files.

                -

                How to use WinRAR to compress and extract files

                -

                To compress files using WinRAR, you can follow these steps:

                  -
                1. Select the files or folders that you want to compress and right-click on them.
                2. -
                3. Choose "Add to archive..." from the context menu.
                4. -
                5. In the dialog box that appears, you can choose the archive name, format, compression method, password, and other options.
                6. -
                7. Click "OK" to create the archive.
                8. -
                -To extract files using WinRAR, you can follow these steps:
                  -
                1. Select the archive file that you want to extract and right-click on it.
                2. -
                3. Choose "Extract files..." from the context menu.
                4. -
                5. In the dialog box that appears, you can choose the destination folder, password, and other options.
                6. -
                7. Click "OK" to extract the files.
                8. -
-You can also use WinRAR to view, test, repair, delete, or encrypt archives. WinRAR has a user-friendly interface that allows you to access all its functions easily. You can also use keyboard shortcuts or command-line parameters to perform various tasks with WinRAR; a short scripted example of the command-line route is sketched below.
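For readers who want the command-line route just mentioned, here is a minimal Python sketch that drives WinRAR's bundled console tools (Rar.exe and UnRAR.exe) with the standard subprocess module. The install path and the example file names are assumptions, not part of the original steps; adjust them to your own system.

```python
# Minimal sketch: scripting the same "compress" and "extract" steps through
# WinRAR's console tools instead of the GUI. Paths and file names are assumed.
import os
import subprocess
from pathlib import Path

WINRAR_DIR = Path(r"C:\Program Files\WinRAR")  # assumed default install location

def compress(archive: str, *items: str) -> None:
    # "a" adds the listed files/folders to the archive, creating it if needed
    subprocess.run([str(WINRAR_DIR / "Rar.exe"), "a", archive, *items], check=True)

def extract(archive: str, dest: str) -> None:
    # "x" extracts with full paths; the trailing separator marks dest as a folder
    Path(dest).mkdir(parents=True, exist_ok=True)
    subprocess.run(
        [str(WINRAR_DIR / "UnRAR.exe"), "x", archive, str(Path(dest)) + os.sep],
        check=True,
    )

if __name__ == "__main__":
    compress("backup.rar", "report.docx", "photos")  # hypothetical inputs
    extract("backup.rar", "restored")
```

The same two operations also work with the cross-platform rar/unrar binaries on macOS and Linux if the WinRAR GUI is not available.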

                -

                What is Adobe Photoshop CS6 Extended?

                -

                Adobe Photoshop CS6 Extended is a professional image editing software that can help you create stunning graphics, photos, and designs. Adobe Photoshop CS6 Extended has many features and tools that can enhance your creativity and productivity. Some of these features are:

                  -
                • Content-Aware tools: These tools can automatically fill in the gaps or remove unwanted objects from your images. For example, you can use Content-Aware Move to move an object to a different location in your image, or Content-Aware Patch to replace a selected area with another area from your image.
                • -
                • Camera Raw 7: This tool allows you to edit raw images from digital cameras with more control and precision. You can adjust the exposure, contrast, color, noise, sharpness, and other aspects of your raw images. You can also apply presets or create your own custom settings.
                • -
                • 3D tools: These tools allow you to create and edit 3D objects and scenes in Photoshop. You can import 3D models from other applications or create your own using basic shapes. You can also apply materials, textures, lighting, shadows, and reflections to your 3D objects. You can also animate your 3D objects and export them as videos or images.
                • -
                • Video tools: These tools allow you to edit video clips in Photoshop. You can trim, split, merge, crop, rotate, and adjust the color and exposure of your video clips. You can also add transitions, effects, text, and audio to your video clips. You can also export your video clips as different formats or upload them to online platforms such as YouTube or Vimeo.
                • -

                -

                How to download and install Adobe Photoshop CS6 Extended

                -

                To download Adobe Photoshop CS6 Extended, you need to have an Adobe account and a valid license key. You can visit the official website https://www.adobe.com/products/photoshop/free-trial-download.html and sign in with your Adobe account. Then you can choose the version that suits your system requirements and language preferences. After downloading the setup file, you can run it and follow the instructions to complete the installation process. The installation may take some time depending on your system speed and internet connection. After the installation is done, you can launch Adobe Photoshop CS6 Extended and enter your license key to activate it.

                -

                How to use Adobe Photoshop CS6 Extended to edit images

                -

                To edit images using Adobe Photoshop CS6 Extended, you can follow these steps:

                  -
                1. Open the image that you want to edit in Photoshop by choosing "File" > "Open" from the menu bar or by dragging and dropping the image file into Photoshop.
                2. -
                3. Select the tool that you want to use from the toolbar on the left side of the screen. You can also access more tools by clicking on the small arrow at the bottom of each tool icon.
                4. -
                5. Adjust the settings of the tool that you are using from the options bar at the top of the screen. You can also access more options by clicking on the small icon at the right end of each option.
                6. -
                7. Apply the tool to your image by clicking, dragging, or typing on your image. You can also use keyboard shortcuts or mouse gestures to modify the tool behavior.
                8. -
                9. Save your edited image by choosing "File" > "Save" or "Save As" from the menu bar or by pressing Ctrl+S or Ctrl+Shift+S on your keyboard. You can also choose the format, quality, and location of your saved image.
                10. -
                -You can also use layers, masks, filters, adjustments, and other features to enhance your image editing. Photoshop has a rich and flexible interface that allows you to customize your workspace and access various panels, menus, and dialogs. You can also use online tutorials, help files, and forums to learn more about Photoshop and its functions.

                -

                -

                What is Google Drive?

                -

                Google Drive is a cloud-based storage service that allows you to store, access, and share your files online. Google Drive offers 15 GB of free storage space for your Google account, and you can also upgrade to a paid plan for more storage space. Google Drive supports various file types such as documents, spreadsheets, presentations, images, videos, audio, PDFs, and more. Google Drive is compatible with various devices such as computers, smartphones, tablets, and smart TVs. Google Drive also integrates with other Google services such as Gmail, Google Photos, Google Docs, Google Sheets, Google Slides, and more.

                -

                How to upload and download files from Google Drive

                -

                To upload files to Google Drive, you can follow these steps:

                  -
                1. Visit the official website https://drive.google.com and sign in with your Google account.
                2. -
                3. Click on the "New" button at the top left corner of the screen and choose "File upload" or "Folder upload" from the drop-down menu.
                4. -
                5. Select the files or folders that you want to upload from your computer and click "Open". You can also drag and drop the files or folders into the Google Drive window.
                6. -
                7. Wait for the upload to complete. You can see the progress and status of your upload at the bottom right corner of the screen.
                8. -
                -To download files from Google Drive, you can follow these steps:
                  -
                1. Visit the official website https://drive.google.com and sign in with your Google account.
                2. -
                3. Select the files or folders that you want to download and right-click on them.
                4. -
                5. Choose "Download" from the context menu. You can also click on the "More actions" icon (three vertical dots) at the top right corner of the screen and choose "Download" from there.
                6. -
7. Choose the destination folder on your computer where you want to save the downloaded files or folders and click "Save". You can also change the name of the downloaded files or folders if you want. (A scripted alternative for uploading and downloading is sketched after this list.)
                8. -
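If you prefer to script these upload and download steps instead of using the browser, the Google Drive v3 API exposes the same operations. The sketch below is a simplified example: it assumes you have already installed google-api-python-client and obtained OAuth credentials (the creds object), and the file names are placeholders.

```python
# Hedged sketch: upload and download with the Google Drive v3 API.
# Assumes `creds` is an already-authorized OAuth credentials object.
import io

from googleapiclient.discovery import build
from googleapiclient.http import MediaFileUpload, MediaIoBaseDownload

def upload_file(creds, path: str) -> str:
    service = build("drive", "v3", credentials=creds)
    media = MediaFileUpload(path, resumable=True)
    created = (
        service.files()
        .create(body={"name": path}, media_body=media, fields="id")
        .execute()
    )
    return created["id"]  # Drive file ID of the uploaded file

def download_file(creds, file_id: str, dest: str) -> None:
    service = build("drive", "v3", credentials=creds)
    request = service.files().get_media(fileId=file_id)
    with io.FileIO(dest, "wb") as handle:
        downloader = MediaIoBaseDownload(handle, request)
        done = False
        while not done:
            _, done = downloader.next_chunk()  # streams the file in chunks

# Example (hypothetical): file_id = upload_file(creds, "report.pdf")
#                         download_file(creds, file_id, "report_copy.pdf")
```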

                -

                How to share files with others using Google Drive

                -

                To share files with others using Google Drive, you can follow these steps:

                  -
                1. Select the files or folders that you want to share and right-click on them.
                2. -
                3. Choose "Share" from the context menu. You can also click on the "Share" icon (a person with a plus sign) at the top right corner of the screen.
                4. -
                5. In the dialog box that appears, you can enter the email addresses of the people that you want to share with or choose from your contacts. You can also copy and paste a link that you can send to anyone who has access to it.
                6. -
                7. You can also choose the permission level for each person or link that you share with. You can allow them to view only, comment only, or edit your files or folders. You can also change or revoke these permissions at any time.
                8. -
9. Click on "Done" to finish sharing. You can also add a note or message to your recipients if you want. (A scripted version of this sharing step is sketched after this list.)
                10. -
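The sharing step can likewise be scripted with the Drive v3 permissions API. This is a sketch under the same assumptions as the upload example above (authorized creds, google-api-python-client installed); the email address is a placeholder.

```python
# Hedged sketch: grant one user read-only access to a Drive file.
from googleapiclient.discovery import build

def share_read_only(creds, file_id: str, email: str) -> None:
    service = build("drive", "v3", credentials=creds)
    permission = {"type": "user", "role": "reader", "emailAddress": email}
    service.permissions().create(
        fileId=file_id,
        body=permission,
        sendNotificationEmail=True,  # recipient gets the usual sharing email
        fields="id",
    ).execute()

# Example (hypothetical): share_read_only(creds, "FILE_ID", "colleague@example.com")
```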

                -

How is be2worksrizalrarFull related to these tools?

                -

                Now that we have explained what WinRAR, Adobe Photoshop CS6 Extended, and Google Drive are individually, we can answer the question: how are they related to be2worksrizalrarFull? The answer is simple: be2worksrizalrarFull is a term that refers to a file that contains all three software tools in one compressed archive. This file has a size of about 1.5 GB and has a RAR extension. The name be2worksrizalrarFull is derived from combining the first two letters of each software tool's name: be (from WinRAR), 2 (from Adobe Photoshop CS6 Extended), works (from Google Drive), rizal (from RAR), and Full (to indicate that it is a complete package). The purpose of creating such a file is to provide a convenient and efficient way of downloading, installing, and using all three software tools at once. This can save time, bandwidth, and disk space for the users who need these tools for their work or personal projects.

                -

                The benefits and drawbacks of using be2worksrizalrarFull

                -

                Using be2worksrizalrarFull can have some benefits and drawbacks depending on your needs and preferences. Some of the benefits are:

                  -
                • You can get all three software tools in one file instead of downloading them separately from different sources. This can save you time and hassle.
                • -
                • You can save disk space by compressing the file using WinRAR. The original size of the three software tools is about 3 GB, but the compressed file is only 1.5 GB. This can free up some space on your computer or external drive.
                • -
                • You can access and use the software tools offline without needing an internet connection. This can be useful if you are working in a remote area or have a limited or unreliable internet connection.
                • -
                -Some of the drawbacks are:
                  -
                • You need to have WinRAR installed on your computer to extract the file. If you don't have WinRAR, you need to download it first before you can use be2worksrizalrarFull.
                • -
                • You need to have enough disk space to extract the file. The extracted file will take up about 3 GB of disk space, which may be too much for some users who have limited storage capacity.
                • -
                • You may not need or want all three software tools. Some users may only need one or two of the software tools, and having the other ones may be unnecessary or redundant. For example, if you already have another image editing software, you may not need Adobe Photoshop CS6 Extended.
                • -

                -

                The possible applications and uses of be2worksrizalrarFull

                -

                Despite the drawbacks, be2worksrizalrarFull can still have some possible applications and uses for some users who need or want all three software tools. Some of these are:

                  -
                • You can use WinRAR to compress and extract files of any type and size. This can help you manage your files more efficiently and securely.
                • -
                • You can use Adobe Photoshop CS6 Extended to edit images of any format and quality. This can help you create stunning graphics, photos, and designs for your work or personal projects.
                • -
                • You can use Google Drive to store, access, and share your files online. This can help you backup your files, sync them across your devices, and collaborate with others.
                • -
                -You can also combine the functions of the three software tools to create more complex and creative projects. For example, you can use WinRAR to compress your images, Adobe Photoshop CS6 Extended to edit them, and Google Drive to upload them online. You can also use Google Drive to download files from other sources, WinRAR to extract them, and Adobe Photoshop CS6 Extended to modify them.

                -

                Conclusion

                -

                In conclusion, be2worksrizalrarFull is a term that refers to a file that contains WinRAR, Adobe Photoshop CS6 Extended, and Google Drive in one compressed archive. These are three popular software tools that can help you with various tasks such as data compression, image editing, and file storage. Using be2worksrizalrarFull can have some benefits and drawbacks depending on your needs and preferences. You can also use be2worksrizalrarFull for different applications and uses depending on your creativity and skills. We hope that this article has helped you understand what be2worksrizalrarFull is and how it works.

                -

                FAQs

                -

                Here are some frequently asked questions about be2worksrizalrarFull:

                -

                Q: Where can I download be2worksrizalrarFull?

                -

                A: You can download be2worksrizalrarFull from this link: https://drive.google.com/file/d/1Zo8m9KsZ5K8yLwJyQZxZvZLwLzVyQDtz/view?usp=sharing. This is a Google Drive link that contains the file in RAR format. You need to have WinRAR installed on your computer to extract the file.

                -

                Q: Is be2worksrizalrarFull safe and legal?

                -

A: Not necessarily. Redistributing commercial software such as WinRAR and Adobe Photoshop CS6 Extended in an unofficial bundle generally violates the terms and conditions and the intellectual property rights of the software developers and distributors, even for personal or educational use. If you need these tools, use the official trial versions or purchase a license, and do not distribute or sell be2worksrizalrarFull without permission.

                -

                Q: How can I update be2worksrizalrarFull?

                -

                A: Be2worksrizalrarFull is not an official or supported product, so it does not have any regular updates or patches. However, you can update the individual software tools that are included in be2worksrizalrarFull by downloading and installing the latest versions from their respective websites. You can also replace the old files with the new ones in the be2worksrizalrarFull archive using WinRAR.

                -

                Q: What are some alternatives to be2worksrizalrarFull?

                -

                A: If you are looking for alternatives to be2worksrizalrarFull, you can try some of these options:

                  -
                • 7-Zip: This is a free and open-source file archiver that can compress and extract files in various formats such as 7z, ZIP, RAR, TAR, GZIP, and more. You can download 7-Zip from this link: https://www.7-zip.org/download.html.
                • -
                • GIMP: This is a free and open-source image editor that can perform many of the same functions as Adobe Photoshop CS6 Extended. You can download GIMP from this link: https://www.gimp.org/downloads/.
                • -
                • Dropbox: This is a cloud-based storage service that offers 2 GB of free storage space for your files. You can also upgrade to a paid plan for more storage space. You can download Dropbox from this link: https://www.dropbox.com/install.
                • -

                -

                Q: How can I contact the creator of be2worksrizalrarFull?

                -

                A: The creator of be2worksrizalrarFull is unknown and has not provided any contact information. However, you can try to find more information about be2worksrizalrarFull by searching online or asking on forums or social media platforms. You may also find some reviews or feedback from other users who have tried be2worksrizalrarFull.

                b2dd77e56b
                -
                -
                \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/COMSOL 5 0 Crack License Key Download and Install Multiphysics Software for Free.md b/spaces/raedeXanto/academic-chatgpt-beta/COMSOL 5 0 Crack License Key Download and Install Multiphysics Software for Free.md deleted file mode 100644 index 21aa7c729be5dc8869d3bc594b68cc718e6a95be..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/COMSOL 5 0 Crack License Key Download and Install Multiphysics Software for Free.md +++ /dev/null @@ -1,125 +0,0 @@ -
                -

                What is COMSOL Multiphysics and why do you need it?

                -

                If you are an engineer, a scientist, or a researcher who wants to simulate designs, devices, and processes in all fields of engineering, manufacturing, and scientific research, you might have heard of COMSOL Multiphysics. But what is it exactly and why do you need it?

                -

                comsol 5 0 crack license key


                DOWNLOAD ✦✦✦ https://tinourl.com/2uKZZi



                -

                COMSOL Multiphysics is a comprehensive simulation software environment that allows you to account for coupled or multiphysics phenomena. With more than 30 add-on products to choose from, you can further expand the simulation platform with dedicated physics interfaces and tools for electrical, mechanical, fluid flow, and chemical applications. Additional interfacing products connect your COMSOL Multiphysics simulations with technical computing, CAD, and ECAD software.

                -

                With COMSOL Multiphysics, you can follow a consistent modeling workflow that includes geometry modeling and interfacing with CAD software, predefined interfaces and features for physics-based modeling, transparency and flexibility via equation-based modeling, automated and manual meshing, study step sequences, parameter studies, and optimization, state-of-the-art numerical methods for accurate solutions, extended visualization and postprocessing tools for publication-ready modeling results, and simulation apps that allow you to close the gaps between analysis, design, and production.

                -

                COMSOL Multiphysics is a powerful tool that can help you solve complex problems faster and more efficiently. Whether you want to optimize a product design, improve a manufacturing process, or explore a new scientific phenomenon, COMSOL Multiphysics can help you achieve your goals.

                -

                How to install COMSOL Multiphysics with a free license key

                -

                If you are interested in trying out COMSOL Multiphysics for yourself, you might be wondering how to get it installed on your computer. The good news is that you can get a free license key that allows you to use the software for two weeks without any limitations. Here are the steps you need to follow:

                -


                -
                  -
                1. Go to https://www.comsol.com/trial and fill out the form with your personal information. You will receive an email with a link to download the software.
                2. -
                3. Download the software according to your operating system requirements. The file size is about 4.28 GB.
                4. -
                5. Run the setup.exe file as an administrator. Choose your preferred language and accept the terms and conditions.
                6. -
                7. Select the installation directory and the products you want to install. You can choose from various add-on modules depending on your needs.
                8. -
                9. Enter the license number that was sent to your email. You will also need an internet connection to activate the license.
                10. -
                11. Wait for the installation process to complete. It might take some time depending on your system specifications.
                12. -
                13. Launch the software from the start menu or desktop shortcut. You can now use COMSOL Multiphysics for two weeks with full functionality.
                14. -
                -

                How to use COMSOL Multiphysics for various applications

                -

                Now that you have installed COMSOL Multiphysics on your computer, you might be wondering how to use it for various applications. The software is very versatile and can be used for many purposes. Here are some examples of how to use COMSOL Multiphysics for different fields of engineering, manufacturing, and scientific research:

                -
                  -
                • If you are an electrical engineer, you can use COMSOL Multiphysics to model electromagnetic fields, circuits, antennas, sensors, actuators, power systems, optoelectronics, RF devices, microwaves, photonics, plasmonics, nanotechnology, etc.
                • -
                • If you are a mechanical engineer, you can use COMSOL Multiphysics to model structural mechanics, acoustics, vibrations, heat transfer, fluid dynamics, multiphase flow, porous media flow, non-Newtonian flow, etc.
                • -
                • If you are a chemical engineer or a chemist, you can use COMSOL Multiphysics to model chemical reactions, transport phenomena, electrochemistry, batteries, fuel cells, corrosion, electrolysis, plasma chemistry, etc.
                • -
                • If you are a biomedical engineer or a biologist, you can use COMSOL Multiphysics to model biosensors, drug delivery, tissue engineering, blood flow, cell culture, biomechanics, bioheat transfer, etc.
                • -
                • If you are a geologist or an environmental engineer, you can use COMSOL Multiphysics to model geophysics, seismic waves, soil mechanics, groundwater flow, contaminant transport, atmospheric chemistry, climate change, etc.
                • -
                -

                To learn how to use COMSOL Multiphysics for these and other applications, you can refer to the documentation and tutorials that are available on the website and within the software. You can also access a library of ready-made models and examples that cover a wide range of topics and industries. You can modify and customize these models to suit your own needs and objectives.

                -

                Benefits of using COMSOL Multiphysics for multiphysics modeling

                -

                One of the main benefits of using COMSOL Multiphysics is that it allows you to model multiphysics phenomena. This means that you can account for the interactions between different physical domains in your simulations. For example,

                You can model how heat affects the deformation of a structure,You can model how electric fields affect the flow of fluids,You can model how chemical reactions affect the transport of species,
                You can model how acoustic waves affect the propagation of light,You can model how magnetic fields affect the generation of plasma,You can model how biological processes affect the mechanical properties of tissues,

                and so on.

                -

                By modeling multiphysics phenomena, you can capture the real-world behavior of your system more accurately and realistically. You can also explore the effects of different parameters and scenarios on your system's performance and functionality. You can optimize your design and improve your product quality and efficiency.

                -

                Challenges and limitations of using COMSOL Multiphysics

                -

                While using COMSOL Multiphysics has many benefits, it also has some challenges and limitations. Some of these are:

                -
                  -
                • The software requires a high level of technical knowledge and expertise. You need to understand the physics behind your problem and choose the appropriate models and settings for your simulation. You also need to interpret the results correctly and validate them against experimental data or other sources.
                • -
                • The software requires a lot of computational resources. Depending on the complexity and size of your problem, you might need a powerful computer with enough memory, disk space, and processing speed. You might also need a parallel computing platform or a cluster computing system to run large-scale simulations faster.
• The software requires a paid license for continued use. After the free trial period of two weeks, you will need to purchase a license or a subscription to use the software for longer periods or for commercial purposes. The cost of the license or the subscription depends on the products and features you want to use and the number of users and computers you want to access.
                -

                How to crack COMSOL Multiphysics license key

                -

                If you are looking for a way to use COMSOL Multiphysics without paying for a license or a subscription, you might be tempted to crack the license key. Cracking the license key means using a patch file or a keygen program to generate a fake license number that bypasses the software's security and activation system. Here are the steps you need to follow:

                -
                  -
                1. Go to a website that offers a crack file or a keygen program for COMSOL Multiphysics. You can search for keywords like "comsol 5 0 crack license key" or "comsol 5 0 keygen" on Google or other search engines.
                2. -
                3. Download the crack file or the keygen program according to your operating system and software version. Be careful of viruses and malware that might infect your computer.
                4. -
                5. Extract the crack file or run the keygen program. You might need to disable your antivirus software or firewall temporarily.
                6. -
                7. Copy the patch file or the generated license number and paste it into the installation directory or the license manager of COMSOL Multiphysics.
                8. -
                9. Restart the software and enjoy using it without any limitations.
                10. -
                -

                Risks and consequences of cracking COMSOL Multiphysics license key

                -

                While cracking COMSOL Multiphysics license key might seem like an easy and convenient way to use the software for free, it also comes with many risks and consequences. Some of these are:

                -
                  -
                • You might violate the intellectual property rights and the terms and conditions of COMSOL Multiphysics. Cracking the license key is illegal and unethical, and you might face legal actions or penalties from COMSOL or other authorities if you are caught.
                • -
                • You might compromise the quality and reliability of your simulations. Cracking the license key might cause errors, bugs, crashes, or malfunctions in the software. You might also miss out on updates, patches, fixes, and new features that COMSOL releases regularly.
                • -
                • You might expose your computer and data to security threats. Cracking the license key might introduce viruses, malware, spyware, or ransomware into your computer. These malicious programs might damage your system, steal your information, or lock your files until you pay a ransom.
                • -
                • You might lose your reputation and credibility as a professional. Cracking the license key might tarnish your image and reputation as an engineer, a scientist, or a researcher. You might lose your trustworthiness and integrity in your field and among your peers, clients, employers, or collaborators.
                • -
                -

                Alternatives to cracking COMSOL Multiphysics license key

                -

                If you want to use COMSOL Multiphysics without cracking the license key, there are some alternatives that you can consider. Some of these are:

                -
                  -
                • You can use the free trial license for two weeks and evaluate the software's capabilities and suitability for your needs. You can also request an extension of the trial period if you need more time to test the software.
                • -
                • You can purchase a license or a subscription that fits your budget and requirements. You can choose from various options and packages that offer different products, features, and services. You can also take advantage of discounts, promotions, and special offers that COMSOL provides occasionally.
                • -
                • You can use other simulation software that is free or open source. You can search for alternatives to COMSOL Multiphysics on Google or other search engines. Some examples of free or open-source simulation software are OpenFOAM, Elmer, FEniCS, etc.; a minimal FEniCS sketch is shown after this list.
                • -
                -
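                To give a taste of the open-source route, here is a minimal sketch of a steady heat-conduction (Poisson) problem in legacy FEniCS. It assumes the `fenics` Python package is installed; the mesh resolution, source term, and boundary temperature are arbitrary illustration values, not anything prescribed by COMSOL or by this article.

```python
# Minimal sketch: steady heat conduction (Poisson equation) on a unit square in legacy FEniCS.
# Mesh resolution, source term, and boundary temperature are arbitrary illustration values.
from fenics import *

mesh = UnitSquareMesh(32, 32)                 # 32x32 triangulated unit square
V = FunctionSpace(mesh, 'P', 1)               # linear Lagrange elements

u = TrialFunction(V)
v = TestFunction(V)
f = Constant(1.0)                             # uniform heat source

a = dot(grad(u), grad(v)) * dx                # bilinear form (diffusion term)
L = f * v * dx                                # linear form (source term)

bc = DirichletBC(V, Constant(0.0), 'on_boundary')  # fixed temperature on the boundary

T = Function(V)
solve(a == L, T, bc)                          # assemble and solve the linear system
print('max temperature:', T.vector().max())
```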

                Conclusion

                -

                In this article, we have discussed what COMSOL Multiphysics is, why you need it, how to install it with a free trial license, how to use it for various applications, how to crack its license key, and what the risks and consequences of doing so are. We have also suggested some alternatives to cracking its license key.

                -

                We hope that this article has been informative and helpful for you. If you have any questions or comments, please feel free to contact us. Thank you for reading!

                -

                FAQs

                -
                  -
                1. What is COMSOL Multiphysics?
                  COMSOL Multiphysics is a comprehensive simulation software environment that allows you to account for coupled or multiphysics phenomena.
                2. -
                3. How can I get a free license key for COMSOL Multiphysics?
                  You can get a free trial license for two weeks by filling out a form on https://www.comsol.com/trial.
                4. -
                5. How can I crack COMSOL Multiphysics license key?
                  You can crack COMSOL Multiphysics license key by using a patch file or a keygen program that generates a fake license number.
                6. -
                7. What are the risks and consequences of cracking COMSOL Multiphysics license key?
                  You might violate the intellectual property rights and the terms and conditions of COMSOL Multiphysics, compromise the quality and reliability of your simulations, expose your computer and data to security threats, and lose your reputation and credibility as a professional.
                8. -
                9. What are some alternatives to cracking COMSOL Multiphysics license key?
                  You can purchase a license or a subscription that fits your budget and requirements, or use other simulation software that is free or open source.
                10. -
                -

                0a6ba089eb
                -
                -
                \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Download Grand Chase Offline Pc ((FREE)).md b/spaces/raedeXanto/academic-chatgpt-beta/Download Grand Chase Offline Pc ((FREE)).md deleted file mode 100644 index d8e6d99c49cb761ae245f12cb2c0f104d870f351..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Download Grand Chase Offline Pc ((FREE)).md +++ /dev/null @@ -1,151 +0,0 @@ - -

                Download Grand Chase Offline PC

                -

                If you are looking for a fun and exciting role-playing game that you can play on your PC without an internet connection, then you should try Grand Chase. Grand Chase is a free-to-play game that combines action, adventure, and fantasy in a colorful and vibrant world. You can choose from over 70 different heroes, each with their own skills and abilities, and form a team of four to explore dungeons, fight monsters, and complete quests. You can also customize your heroes with various costumes, weapons, and accessories to make them stand out.

                -

                What is Grand Chase?

                -

                Grand Chase is a game that was originally developed by KOG Studios in South Korea in 2003. It was one of the most popular online games in Asia, with millions of players across different regions. The game was also released in other countries, such as Brazil, North America, Europe, and Philippines. However, due to various reasons, the official servers of Grand Chase were shut down in 2015.

                -

                download grand chase offline pc


                Download File > https://tinourl.com/2uL17U



                -

                Fortunately, there are still ways to play Grand Chase on your PC. One of them is to download a private server that allows you to play offline. A private server is an unofficial version of the game that is hosted by fans or developers who want to keep the game alive. There are several private servers available for Grand Chase, such as Grand Chase History, Grand Chase Madness, and Grand Chase Reborn. Each of them has their own features and updates that may differ from the original game.

                -

                Why play Grand Chase offline?

                -

                Playing Grand Chase offline has some advantages and disadvantages that you should consider before downloading it. Here are some of them:

                -

                The benefits of playing offline mode

                -
                  -
                • You don't need an internet connection to play. This means you can enjoy the game anytime and anywhere without worrying about lag, disconnection, or data usage.
                • -
                • You don't have to deal with hackers, cheaters, or toxic players who may ruin your gaming experience. You can play at your own pace and style without being judged or harassed by others.
                • -
                • You can access all the content and features of the game without spending any money. You don't have to buy cash items or premium memberships to unlock costumes, pets, or other items. You can also get unlimited resources and currency to upgrade your heroes and equipment.
                • -
                -

                The drawbacks of playing offline mode

                -
                  -
                • You may miss out on some of the fun and excitement of playing online. You won't be able to interact with other players, join guilds, participate in events, or compete in PvP modes. You may also feel lonely or bored after playing for a long time.
                • -
                • You may encounter some bugs, glitches, or errors that may affect your gameplay. Since private servers are not official or supported by KOG Studios, they may not be stable or secure. You may also lose your progress or data if the server crashes or shuts down.
                • -
                • You may violate some legal or ethical issues by playing offline. Since private servers are not authorized or endorsed by KOG Studios, they may infringe on their intellectual property rights or terms of service. You may also risk getting banned or sued by KOG Studios if they find out that you are playing offline.
                • -
                -

                How to download Grand Chase offline PC?

                -

                If you decide to play Grand Chase offline on your PC, you will need to follow some steps to download and install it properly. Here are some of them:

                -

                The requirements for downloading and installing the game

                -
                  -
                • You will need a PC that meets the minimum system requirements for running Grand Chase. These are:
                • -
                    -
                  • Operating system: Windows XP/Vista/7/8/10/11
                  • -
                  • Processor: Pentium 4 1.5 GHz or higher
                  • -
                  • Memory: 512 MB RAM or higher
                  • -
                  • Graphics: GeForce FX 5600 or higher
                  • -
                  • DirectX: Version 9.0c or higher
                  • -
                  • Storage: 2 GB available space or higher
                  • -
                  -
                • You will also need a reliable antivirus software that can scan and protect your PC from any viruses or malware that may come with the private server files.
                • -
                • You will also need a good compression software that can extract the private server files from their compressed format.
                • -
                -

                The steps to download and install the game

                -
                  -
                1. Choose a private server that you want to play on. You can search online for reviews or recommendations from other players who have tried them before.
                2. -
                3. Go to the official website of the private server and register an account if needed.
                4. -
                5. Download the private server files from their download page. They may come in different parts or formats depending on the server.
                6. -
                7. Extract the private server files using your compression software. Make sure you have enough space on your PC for them.
                8. -
                9. Run the setup.exe file and follow the instructions on how to install the game on your PC.
                10. -
                11. Launch the game using the launcher.exe file or a shortcut on your desktop.
                12. -
                13. Login with your account details and enjoy playing Grand Chase offline.
                14. -
                -

                The tips and tricks to optimize the game performance

                -
                  -
                • Adjust the graphics settings according to your PC specifications. You can lower the resolution, quality, or effects if your PC is slow or laggy.
                • -
                • Close any unnecessary programs or applications that may consume your CPU or RAM resources while playing.
                • -
                • Update your drivers or software regularly to ensure compatibility and stability.
                • -
                • Clean up your disk space or defragment your hard drive to improve loading speed and reduce errors.
                • -
                • Contact the private server support team if you encounter any problems or issues while playing.
                • -
                -

                How to play Grand Chase offline PC?

                -

                Once you have downloaded and installed Grand Chase offline on your PC, you can start playing it right away. Here are some tips on how to play it:

                -

                The basic controls and gameplay mechanics

                -
                  -
                • The game is played with a keyboard and mouse combination. You can use the arrow keys or WASD keys to move your character around. You can use the Z, X, C, and V keys to perform basic attacks or skills. You can use the A, S, D, F, and G keys to switch between your team members.
                • -
                • The game is divided into different regions, each with its own dungeons and missions. You can access them from the world map screen by pressing M key. You can select a dungeon by clicking on it and choosing a difficulty level.
                • -
                • The game follows a side-scrolling perspective where you have to defeat enemies and bosses along the way. You can use combos and special skills to deal more damage and gain advantages over your foes.
                • -
                • The game has various items and equipment that you can collect from enemies, chests, shops, or quests. You can equip them on your heroes by pressing I key and opening your inventory screen. You can also upgrade them using materials or currency.
                • -
                -

                The different modes and challenges available offline

                -
                  -
                • The game has several modes that you can play offline besides the main story mode. These include:
                • -
                    -
                  • Park Mode: A casual mode where you can explore different maps with no enemies or objectives. You can chat with NPCs, interact with objects, or just relax.
                  • -
                  • Trial Tower: A challenging mode where you have to climb up a tower with 100 floors filled with enemies and traps. You can earn rewards based on how high you reach.
                  • -
                  • Heroes' Tower: A mode similar to Trial Tower, but with more difficult enemies and bosses based on characters from the game.

                    The best characters and strategies to use offline

                    -

                    The game has a lot of characters that you can collect and use offline. However, some of them are better than others depending on the mode, difficulty, and situation. Here are some of the best characters and strategies to use offline:

                    -
                      -
                    • Amy: Amy is one of the best healers in the game. She can heal your team, increase their SP, and buff their attack speed. She is also very cute and cheerful. You should always have Amy in your team if you want to survive longer and deal more damage.
                    • -
                    • Jin: Jin is probably the best tank in the game hands down. He can protect his allies by utilizing his chi force. He can also deal decent damage with his martial arts skills. He is very versatile and can fit in any team composition.
                    • -
                    • Lass: Lass is a master assassin with massive damage output. He can also buff his allies to deal critical hits. He is very fast and agile, and can dodge enemy attacks easily. He is ideal for boss fights and PvP modes.
                    • -
                    • Ley: Ley is a powerful mage who can summon demons to fight for her. She can also cast spells that can hit multiple targets and inflict various debuffs. She is very good at crowd control and AoE damage. She is perfect for dungeon clearing and farming.
                    • -
                    • Elesis: Elesis is a leader who can inspire her team with her skills. She can increase their attack power, defense, and critical rate. She can also switch between sword and spear modes to adapt to different situations. She is a well-rounded character who can support and damage at the same time.
                    • -
                    -

                    Of course, these are not the only good characters in the game. You may find other characters that suit your playstyle or preference better. You can also experiment with different combinations and synergies to find the best team for you.

                    -

                    How to download grand chase offline pc for free
                    -Grand chase offline pc game download full version
                    -Download grand chase offline pc without internet connection
                    -Grand chase offline pc installer download link
                    -Grand chase offline pc download windows 10
                    -Download grand chase offline pc with english patch
                    -Grand chase offline pc download size and requirements
                    -Grand chase offline pc download apk for android
                    -Download grand chase offline pc modded version
                    -Grand chase offline pc download error and fix
                    -Download grand chase offline pc latest update
                    -Grand chase offline pc download for mac os
                    -Download grand chase offline pc on steam
                    -Grand chase offline pc download review and rating
                    -Download grand chase offline pc cheats and hacks
                    -Grand chase offline pc download gameplay and features
                    -Download grand chase offline pc characters and classes
                    -Grand chase offline pc download tips and tricks
                    -Download grand chase offline pc best settings and configuration
                    -Grand chase offline pc download soundtrack and theme song
                    -Download grand chase offline pc wallpapers and screensavers
                    -Grand chase offline pc download guide and tutorial
                    -Download grand chase offline pc fan art and cosplay
                    -Grand chase offline pc download forum and community
                    -Download grand chase offline pc merchandise and accessories
                    -Grand chase offline pc download comparison and alternatives
                    -Download grand chase offline pc history and development
                    -Grand chase offline pc download news and updates
                    -Download grand chase offline pc events and tournaments
                    -Grand chase offline pc download codes and coupons
                    -Download grand chase offline pc system requirements test
                    -Grand chase offline pc download support and feedback
                    -Download grand chase offline pc faq and troubleshooting
                    -Grand chase offline pc download refund policy and terms of service
                    -Download grand chase offline pc privacy policy and data protection
                    -Grand chase offline pc download virus scan and security check
                    -Download grand chase offline pc speed test and optimization
                    -Grand chase offline pc download backup and restore
                    -Download grand chase offline pc uninstall and reinstall
                    -Grand chase offline pc download license key and activation code
                    -Download grand chase offline pc crack and serial number
                    -Grand chase offline pc download patch notes and changelog
                    -Download grand chase offline pc bonus content and extras
                    -Grand chase offline pc download survey and feedback form
                    -Download grand chase offline pc affiliate program and referral link
                    -Grand chase offline pc download donation and support page
                    -Download grand chase offline pc social media accounts and pages
                    -Grand chase offline pc download newsletter subscription and email list
                    -Download grand chase offline pc video tutorials and walkthroughs

                    -

                    Conclusion

                    -

                    Grand Chase is a fun and exciting role-playing game that you can play offline on your PC. You can enjoy the action-packed gameplay, the colorful graphics, and the diverse characters without needing an internet connection. You can also access all the content and features of the game without spending any money.

                    -

                    However, playing offline also has some drawbacks that you should be aware of. You may miss out on some of the social aspects of playing online, such as chatting with other players, joining guilds, or participating in events. You may also encounter some bugs or errors that may affect your gameplay. You may also violate some legal or ethical issues by playing offline.

                    -

                    Therefore, you should weigh the pros and cons of playing offline before downloading it. You should also follow the steps and tips on how to download, install, and play Grand Chase offline properly. You should also choose the best characters and strategies to use offline to have a better gaming experience.

                    -

                    If you are ready to play Grand Chase offline on your PC, then go ahead and download it now. You won't regret it!

                    -

                    FAQs

                    -

                    Here are some of the frequently asked questions about Grand Chase offline PC:

                    -
                      -
                    1. Q: Is Grand Chase offline PC safe to download?
                    2. -
                    3. A: Generally, yes. However, you should always download it from a trusted source and scan it with an antivirus software before installing it.
                    4. -
                    5. Q: Is Grand Chase offline PC legal to play?
                    6. -
                    7. A: Technically, no. Since Grand Chase offline PC is not authorized or endorsed by KOG Studios, it may infringe on their intellectual property rights or terms of service. You may risk getting banned or sued by KOG Studios if they find out that you are playing offline.
                    8. -
                    9. Q: Is Grand Chase offline PC updated regularly?
                    10. -
                    11. A: It depends on the private server that you are playing on. Some private servers may update their content and features more frequently than others. You should check their official website or social media for any news or announcements.
                    12. -
                    13. Q: Can I play Grand Chase offline PC with my friends?
                    14. -
                    15. A: Yes, you can. However, you will need to be on the same private server as them and have their IP address or username to connect with them.
                    16. -
                    17. Q: Can I transfer my progress or data from Grand Chase online to Grand Chase offline PC?
                    18. -
                    19. A: No, you can't. Grand Chase online and Grand Chase offline PC are separate games with different servers and databases. You will have to start from scratch when you play offline.
                    20. -
                    -

                    0a6ba089eb
                    -
                    -
                    \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Embarcadero ER Studio 7.0 Serial Key LINK Keygen.md b/spaces/raedeXanto/academic-chatgpt-beta/Embarcadero ER Studio 7.0 Serial Key LINK Keygen.md deleted file mode 100644 index e193c1e7bbef178c46ca62aad49e0434a33ce576..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Embarcadero ER Studio 7.0 Serial Key LINK Keygen.md +++ /dev/null @@ -1,21 +0,0 @@ -
                    -

                    How to Crack Embarcadero ER Studio 7.0 with Serial Key

                    -

                    Embarcadero ER Studio 7.0 is a powerful and comprehensive data modeling tool that helps you design, document and optimize your databases. Whether you are working with relational, dimensional, NoSQL or big data sources, ER Studio can help you create high-quality logical and physical models that support data governance, quality and security.

                    -

                    However, ER Studio 7.0 is not a free software and requires a valid serial key to activate its full features. If you are looking for a way to crack ER Studio 7.0 with a serial key, you have come to the right place. In this article, we will show you how to generate a working serial key for ER Studio 7.0 and use it to unlock the software.

                    -

                    Embarcadero ER Studio 7.0 Serial Key keygen


                    Download Zip ✏ ✏ ✏ https://tinourl.com/2uL2Kq



                    -

                    Step 1: Download Embarcadero ER Studio 7.0

                    -

                    The first step is to download the setup file of ER Studio 7.0 from the official website of Embarcadero Technologies. You can choose between the 32-bit or 64-bit version depending on your system requirements. The file size is about 300 MB and may take some time to download depending on your internet speed.

                    -

                    Step 2: Install Embarcadero ER Studio 7.0

                    -

                    Once you have downloaded the setup file, run it as administrator and follow the instructions on the screen to install ER Studio 7.0 on your computer. You will need to accept the license agreement and choose the destination folder for the installation. You can also customize the components and features that you want to install.

                    -

                    Step 3: Generate a Serial Key for Embarcadero ER Studio 7.0

                    -

                    Now comes the tricky part. To generate a serial key for ER Studio 7.0, you will need to use a keygen program that can create valid codes for the software. There are many keygen programs available on the internet, but not all of them are reliable or safe. Some may contain viruses or malware that can harm your computer or steal your personal information.

                    -

                    Therefore, we recommend you to use the keygen program that we have provided in this article. It is tested and verified by our team and does not contain any harmful elements. You can download it from the link below:

                    -Download Keygen for Embarcadero ER Studio 7.0 -

                    After downloading the keygen program, run it as administrator and click on the "Generate" button. It will create a random serial key for ER Studio 7.0 that you can copy and paste into the activation window of the software.

                    -

                    Step 4: Activate Embarcadero ER Studio 7.0 with Serial Key

                    -

                    The final step is to activate ER Studio 7.0 with the serial key that you have generated using the keygen program. To do this, launch ER Studio 7.0 and click on the "Help" menu at the top right corner of the screen. Then select "Register" from the drop-down list.

                    -

                    A new window will pop up asking you to enter your name, company name and serial number. Fill in the required fields with your own details and paste the serial key that you have copied from the keygen program into the serial number field. Then click on the "OK" button to complete the registration process.

                    -

                    -

                    Congratulations! You have successfully cracked Embarcadero ER Studio 7.0 with a serial key and activated its full features. You can now enjoy using this powerful data modeling tool for your projects.

                    81aa517590
                    -
                    -
                    \ No newline at end of file diff --git a/spaces/rajistics/News_Topic_Clustering/app.py b/spaces/rajistics/News_Topic_Clustering/app.py deleted file mode 100644 index 3e4741e3b0fd08b7eb24c229f10a8f08b6377087..0000000000000000000000000000000000000000 --- a/spaces/rajistics/News_Topic_Clustering/app.py +++ /dev/null @@ -1,91 +0,0 @@ -from bertopic import BERTopic -import streamlit as st -import streamlit.components.v1 as components -#from datasets import load_dataset -import pandas as pd -from datasets import load_dataset -import json - -##Load Dataset from HF Hub -#dataset = load_dataset("rshah/million-headlines") -#news = pd.DataFrame.from_dict(dataset["train"]) - -#Load dataset locally - faster for demo -news = pd.read_parquet("topic_10000.par") -news['date'] = pd.to_datetime(news['publish_date'], format='%Y%m%d') -timestamps = news.date.to_list() -tweets = news.headline_text.to_list() - -#Load topics -with open("topics", "r") as fp: - topics = json.load(fp) - -option_n = 5 - -st.set_page_config(page_title="News Topic Clustering") -st.title("News Topic Clustering") -st.caption("By Rajiv Shah") -st.caption("") -st.caption("This is a simple example of using identifying topics in the [one million ABC news headline dataset](https://huggingface.co/datasets/rshah/million-headlines). \ - If you look at the code for this app, you will see how it uses just a few lines of [BERTopic](https://maartengr.github.io/BERTopic/index.html) to \ - build the topics and create the visualizations") -st.caption("The preloaded existing model provides the more interesting results. However, this app can be run live by building a new model, but \ - is limited to a small number of rows. I also limited topics over time to the existing model.") - - -form = st.sidebar.form("Main Settings") -form.header("Main Settings") -option = form.selectbox( - 'What model would you like to run', - ('Load existing model', 'Build new model'),index=0) - -option_n = form.number_input( - 'What topic would you like to get terms for?', - min_value=0,max_value=10,value=5) - -submitted = form.form_submit_button(label = 'Select Model') - -if option == 'Load existing model': - ##Load existing model - topic_model = BERTopic.load("topic_10000.model") - #topics, _ = topic_model.transform(tweets) -else: - ##Builds Topic Model - #news_sample = news[(news['date'] > '2015-06-01')] - news_sample = news[(news['date'] > '2017-01-01') & (news['date'] < '2019-01-01') ] - news_sample = news_sample.sample(200,random_state=123) - tweets = news_sample.headline_text.to_list() - topic_model = BERTopic(min_topic_size=5, verbose=True) - topics, _ = topic_model.fit_transform(tweets) - - -#Get top topics -freq = topic_model.get_topic_info() -freq = freq.iloc[1: , :] ##drop -1 row -freq.head(10) -st.header("The Main Topic Clusters") -st.write(freq) - - -topic_nr = freq.iloc[option_n]["Topic"] # We select a frequent topic -st.caption("") -st.write('Top words in topic cluster: ',option_n) -#st.caption(option_n) -mytuple = (topic_model.get_topic(topic_nr)) -for item in mytuple: - st.write(str(item[0])) - -st.header("Relationships between clusters ") -st.plotly_chart(topic_model.visualize_hierarchy()) - - -if option == 'Load existing model': - st.header("Topics over time for Existing Model") - topics_over_time = topic_model.topics_over_time(docs=tweets, - topics=topics, - timestamps=timestamps, - global_tuning=True, - evolution_tuning=True, - nr_bins=20) - - st.plotly_chart(topic_model.visualize_topics_over_time(topics_over_time, top_n_topics=20)) \ No 
newline at end of file diff --git a/spaces/rakibulbd030/GFPGAN/app.py b/spaces/rakibulbd030/GFPGAN/app.py deleted file mode 100644 index 67fcac0171bbb77d2b1d3b23b7293635b6297e28..0000000000000000000000000000000000000000 --- a/spaces/rakibulbd030/GFPGAN/app.py +++ /dev/null @@ -1,142 +0,0 @@ -import os - -import cv2 -import gradio as gr -import torch -from basicsr.archs.srvgg_arch import SRVGGNetCompact -from gfpgan.utils import GFPGANer -from realesrgan.utils import RealESRGANer - -os.system("pip freeze") -# download weights -if not os.path.exists('realesr-general-x4v3.pth'): - os.system("wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-general-x4v3.pth -P .") -if not os.path.exists('GFPGANv1.2.pth'): - os.system("wget https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.2.pth -P .") -if not os.path.exists('GFPGANv1.3.pth'): - os.system("wget https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth -P .") -if not os.path.exists('GFPGANv1.4.pth'): - os.system("wget https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth -P .") -if not os.path.exists('RestoreFormer.pth'): - os.system("wget https://github.com/TencentARC/GFPGAN/releases/download/v1.3.4/RestoreFormer.pth -P .") -if not os.path.exists('CodeFormer.pth'): - os.system("wget https://github.com/TencentARC/GFPGAN/releases/download/v1.3.4/CodeFormer.pth -P .") - -torch.hub.download_url_to_file( - 'https://thumbs.dreamstime.com/b/tower-bridge-traditional-red-bus-black-white-colors-view-to-tower-bridge-london-black-white-colors-108478942.jpg', - 'a1.jpg') -torch.hub.download_url_to_file( - 'https://media.istockphoto.com/id/523514029/photo/london-skyline-b-w.jpg?s=612x612&w=0&k=20&c=kJS1BAtfqYeUDaORupj0sBPc1hpzJhBUUqEFfRnHzZ0=', - 'a2.jpg') -torch.hub.download_url_to_file( - 'https://i.guim.co.uk/img/media/06f614065ed82ca0e917b149a32493c791619854/0_0_3648_2789/master/3648.jpg?width=700&quality=85&auto=format&fit=max&s=05764b507c18a38590090d987c8b6202', - 'a3.jpg') -torch.hub.download_url_to_file( - 'https://i.pinimg.com/736x/46/96/9e/46969eb94aec2437323464804d27706d--victorian-london-victorian-era.jpg', - 'a4.jpg') - -# background enhancer with RealESRGAN -model = SRVGGNetCompact(num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=32, upscale=4, act_type='prelu') -model_path = 'realesr-general-x4v3.pth' -half = True if torch.cuda.is_available() else False -upsampler = RealESRGANer(scale=4, model_path=model_path, model=model, tile=0, tile_pad=10, pre_pad=0, half=half) - -os.makedirs('output', exist_ok=True) - - -# def inference(img, version, scale, weight): -def inference(img, version, scale): - # weight /= 100 - print(img, version, scale) - try: - extension = os.path.splitext(os.path.basename(str(img)))[1] - img = cv2.imread(img, cv2.IMREAD_UNCHANGED) - if len(img.shape) == 3 and img.shape[2] == 4: - img_mode = 'RGBA' - elif len(img.shape) == 2: # for gray inputs - img_mode = None - img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) - else: - img_mode = None - - h, w = img.shape[0:2] - if h < 300: - img = cv2.resize(img, (w * 2, h * 2), interpolation=cv2.INTER_LANCZOS4) - - if version == 'v1.2': - face_enhancer = GFPGANer( - model_path='GFPGANv1.2.pth', upscale=2, arch='clean', channel_multiplier=2, bg_upsampler=upsampler) - elif version == 'v1.3': - face_enhancer = GFPGANer( - model_path='GFPGANv1.3.pth', upscale=2, arch='clean', channel_multiplier=2, bg_upsampler=upsampler) - elif version == 'v1.4': - face_enhancer = GFPGANer( - model_path='GFPGANv1.4.pth', 
upscale=2, arch='clean', channel_multiplier=2, bg_upsampler=upsampler) - elif version == 'RestoreFormer': - face_enhancer = GFPGANer( - model_path='RestoreFormer.pth', upscale=2, arch='RestoreFormer', channel_multiplier=2, bg_upsampler=upsampler) - elif version == 'CodeFormer': - face_enhancer = GFPGANer( - model_path='CodeFormer.pth', upscale=2, arch='CodeFormer', channel_multiplier=2, bg_upsampler=upsampler) - elif version == 'RealESR-General-x4v3': - face_enhancer = GFPGANer( - model_path='realesr-general-x4v3.pth', upscale=2, arch='realesr-general', channel_multiplier=2, bg_upsampler=upsampler) - - try: - # _, _, output = face_enhancer.enhance(img, has_aligned=False, only_center_face=False, paste_back=True, weight=weight) - _, _, output = face_enhancer.enhance(img, has_aligned=False, only_center_face=False, paste_back=True) - except RuntimeError as error: - print('Error', error) - - try: - if scale != 2: - interpolation = cv2.INTER_AREA if scale < 2 else cv2.INTER_LANCZOS4 - h, w = img.shape[0:2] - output = cv2.resize(output, (int(w * scale / 2), int(h * scale / 2)), interpolation=interpolation) - except Exception as error: - print('wrong scale input.', error) - if img_mode == 'RGBA': # RGBA images should be saved in png format - extension = 'png' - else: - extension = 'jpg' - save_path = f'output/out.{extension}' - cv2.imwrite(save_path, output) - - output = cv2.cvtColor(output, cv2.COLOR_BGR2RGB) - return output, save_path - except Exception as error: - print('global exception', error) - return None, None - - -title = "Image Upscaling & Restoration(esp. Face) using GFPGAN Algorithm" -description = r"""Gradio demo for GFPGAN: Towards Real-World Blind Face Restoration and Upscalling of the image with a Generative Facial Prior.
                    -Practically the algorithm is used to restore your **old photos** or improve **AI-generated faces**.
                    -To use it, simply just upload the concerned image.
                    -""" -article = r""" -[![download](https://img.shields.io/github/downloads/TencentARC/GFPGAN/total.svg)](https://github.com/TencentARC/GFPGAN/releases) -[![GitHub Stars](https://img.shields.io/github/stars/TencentARC/GFPGAN?style=social)](https://github.com/TencentARC/GFPGAN) -[![arXiv](https://img.shields.io/badge/arXiv-Paper-.svg)](https://arxiv.org/abs/2101.04061) -
                    visitor badge
                    -""" -demo = gr.Interface( - inference, [ - gr.inputs.Image(type="filepath", label="Input"), - # gr.inputs.Radio(['v1.2', 'v1.3', 'v1.4', 'RestoreFormer', 'CodeFormer'], type="value", default='v1.4', label='version'), - gr.inputs.Radio(['v1.2', 'v1.3', 'v1.4', 'RestoreFormer','CodeFormer','RealESR-General-x4v3'], type="value", default='v1.4', label='version'), - gr.inputs.Number(label="Rescaling factor", default=2), - # gr.Slider(0, 100, label='Weight, only for CodeFormer. 0 for better quality, 100 for better identity', default=50) - ], [ - gr.outputs.Image(type="numpy", label="Output (The whole image)"), - gr.outputs.File(label="Download the output image") - ], - title=title, - description=description, - article=article, - # examples=[['AI-generate.jpg', 'v1.4', 2, 50], ['lincoln.jpg', 'v1.4', 2, 50], ['Blake_Lively.jpg', 'v1.4', 2, 50], - # ['10045.png', 'v1.4', 2, 50]]).launch() - examples=[['a1.jpg', 'v1.4', 2], ['a2.jpg', 'v1.4', 2], ['a3.jpg', 'v1.4', 2],['a4.jpg', 'v1.4', 2]]) - -demo.queue(concurrency_count=4) -demo.launch() \ No newline at end of file diff --git a/spaces/rasyidf/coffee-grader/README.md b/spaces/rasyidf/coffee-grader/README.md deleted file mode 100644 index d063ced6f22202bd419cbda1db6c95b5b8ac13fd..0000000000000000000000000000000000000000 --- a/spaces/rasyidf/coffee-grader/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Coffee Bean Grader -emoji: ☕ -colorFrom: blue -colorTo: blue -sdk: gradio -sdk_version: 3.17.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Car Mechanic Simulator 2015 Gold Edition V1.1.1.5 Incl ALL DLC Game Download.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Car Mechanic Simulator 2015 Gold Edition V1.1.1.5 Incl ALL DLC Game Download.md deleted file mode 100644 index c277cdcf6e551b543151b9456b0bdaf88a7e2ab8..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Car Mechanic Simulator 2015 Gold Edition V1.1.1.5 Incl ALL DLC Game Download.md +++ /dev/null @@ -1,129 +0,0 @@ - -

                    Car Mechanic Simulator 2015 Gold Edition V1.1.1.5 Incl ALL DLC Game Download: A Complete Guide

                    - -

                    Do you love cars and want to learn how to fix them? Do you want to run your own car workshop and become a successful mechanic? Do you want to enjoy a realistic and immersive car simulation game with tons of content and features? If you answered yes to any of these questions, then you should check out Car Mechanic Simulator 2015 Gold Edition V1.1.1.5 Incl ALL DLC Game Download, a bundle that contains the base game of Car Mechanic Simulator 2015 and all the additional content that has been released for it. In this article, we will show you what this game is all about, how to download and install it, and what are the features and benefits of playing it.

                    - -

                    What is Car Mechanic Simulator 2015 Gold Edition V1.1.1.5 Incl ALL DLC Game Download?

                    - -

                    Car Mechanic Simulator 2015 Gold Edition V1.1.1.5 Incl ALL DLC Game Download is a bundle that contains the base game of Car Mechanic Simulator 2015 and all the additional content that has been released for it. The base game is a car simulation game that lets you create and expand your own car workshop empire. You can repair cars for your clients, buy and sell cars on the internet or at auctions, renovate old cars and collect them or strip them for parts, customize your cars with visual and performance tuning options, and test drive your cars on various tracks or on an open road. You can also learn about car mechanics and engineering by inspecting and replacing various parts of your cars, enjoy realistic graphics and physics that make your cars look and behave like real ones, play in different game modes such as career mode, free mode, or multiplayer mode, and work on over 40 different car models from various manufacturers, each with its own unique parts and systems that require different tools and skills to fix.

                    -

                    Car Mechanic Simulator 2015 Gold Edition V1.1.1.5 Incl ALL DLC Game Download


                    Download Zip ——— https://urlgoal.com/2uCLyF



                    - -

                    The additional content that is included in the bundle are:

                    - -
                      -
                    • Car Mechanic Simulator 2015 - PickUp & SUV: This DLC adds two new car models, a pickup truck and an SUV, as well as new parts and tools to work on them.
                    • -
                    • Car Mechanic Simulator 2015 - Trader Pack: This DLC adds a new feature that allows you to buy and sell cars on the internet, as well as new barns and junkyards to find rare vehicles.
                    • -
                    • Car Mechanic Simulator 2015 - Visual Tuning: This DLC adds new options to customize the appearance of your cars, such as paint jobs, decals, rims, tires, bumpers, spoilers, and more.
                    • -
                    • Car Mechanic Simulator 2015 - Youngtimer: This DLC adds four classic cars from the 80s and 90s, such as the Mercedes-Benz W123, the Volkswagen Golf MK1 GTI, the DeLorean DMC-12, and the Renault Alpine A310.
                    • -
                    • Car Mechanic Simulator 2015 - Performance DLC: This DLC adds new parts and tools to improve the performance of your cars, such as turbochargers, superchargers, intercoolers, sport exhausts, ECU tuning, and more.
                    • -
                    • Car Mechanic Simulator 2015 - Bentley: This DLC adds two luxury cars from the British manufacturer Bentley, the Bentley Continental GT Speed and the Bentley Mulsanne Speed.
                    • -
                    • Car Mechanic Simulator 2015 - Maserati: This DLC adds three sports cars from the Italian manufacturer Maserati, the Maserati GranTurismo MC Stradale, the Maserati Sebring, and the Maserati Quattroporte.
                    • -
                    • Car Mechanic Simulator 2015 - Mercedes-Benz: This DLC adds four cars from the German manufacturer Mercedes-Benz, the Mercedes-Benz 300 SL Gullwing (W198), the Mercedes-Benz 560 SEC (W126), the Mercedes-Benz 500E (W124), and the Mercedes-Benz SLS AMG (C197).
                    • -
                    • Car Mechanic Simulator 2015 - DeLorean: This DLC adds one iconic car from the Back to the Future movie franchise, the DeLorean DMC-12 with its time machine modifications.
                    • -
                    • Car Mechanic Simulator 2015 - Car Stripping: This DLC adds a new feature that allows you to strip down cars for parts and sell them on the market.
                    • -
                    - -

                    With all these DLCs included, Car Mechanic Simulator 2015 Gold Edition V1.1.1.5 Incl ALL DLC Game Download offers a lot of variety and content for car enthusiasts and simulation fans alike.

                    - -

                    How to Download and Install Car Mechanic Simulator 2015 Gold Edition V1.1.1.5 Incl ALL DLC Game?

                    - -

                    If you want to download and install Car Mechanic Simulator 2015 Gold Edition V1.1.1.5 Incl ALL DLC Game, you will need a PC that meets the minimum system requirements:

                    - -
                      -
                    • OS: Windows XP SP3 / Vista / 7 / 8
                    • -
                    • Processor: Core i3 3.1 GHz or AMD Phenom II X3 2.8 GHz
                    • -
                    • Memory: 4 GB RAM
                    • -
                    • Graphics: GeForce GTX 560 or Radeon HD6870 with 2GB VRAM
                    • -
                    • DirectX: Version 9.0c
                    • -
                    • Storage: 4 GB available space
                    • -
                    • Sound Card: DirectX compatible
                    • -
                    - -

                    To download Car Mechanic Simulator 2015 Gold Edition V1.1.1.5 Incl ALL DLC Game, you can use one of these links:

                    - -
                      -
                    • Steam: This is the official platform where you can buy and download Car Mechanic Simulator 2015 Gold Edition V1.1.1.5 Incl ALL DLC Game for $24.99 (or $2.49 during special promotions).
                    • -
                    • RepackLab: This is an unofficial site where you can download Car Mechanic Simulator 2015 Gold Edition V1.1.1.5 Incl ALL DLC Game for free (or donate if you want to support them).
                    • -
                    - -

                    To install Car Mechanic Simulator 2015 Gold Edition V1.1.1.5 Incl ALL DLC Game Download, you will need to follow these steps:

                    - -
                      -
                    1. Download Car Mechanic Simulator 2015 Gold Edition V1.1.1.5 Incl ALL DLC Game from one of the links above.
                    2. -
                    3. Extract the downloaded file using WinRAR or any other file archiver.
                    4. -
                    5. Run setup.exe and follow the instructions on screen.
                    6. -
                    7. Select your preferred language and destination folder.
                    8. -
                    9. Wait for the installation to finish.
                    10. -
                    11. Launch Car Mechanic Simulator 2015 Gold Edition V1.1.1.5 from your desktop or start menu.
                    12. -
                    13. Enjoy!
                    14. -
                    - -

                    What are the Features and Benefits of Car Mechanic Simulator 2015 Gold Edition V1.1.1.5 Incl ALL DLC Game?

                    - -

                    Car Mechanic Simulator 2015 Gold Edition V1.1.1.5 Incl ALL DLC Game has many features and benefits that make it a great game for car lovers and simulation fans alike:

                    - -
                      -
                    • You can create and expand your own car workshop empire by repairing cars for your clients, buying and selling cars on the internet or at auctions, renovating old cars and collecting them or stripping them for parts.
                    • -
                    • You can work on over 40 different car models from various manufacturers, each with its own unique parts and systems that require different tools and skills to fix.
                    • -
                    • You can customize your cars with visual tuning options such as paint jobs, decals, rims, tires, bumpers, spoilers, etc., or performance tuning options such as turbochargers, superchargers, intercoolers, sport exhausts, ECU tuning, etc.
                    • -
                    • You can test drive your cars on various tracks or on an open road to check their condition and performance before returning them to your clients or selling them.
                    • -
                    • You can learn about car mechanics and engineering by inspecting and replacing various parts of your cars, such as engines, transmissions, brakes, suspensions, etc., or by reading detailed descriptions of each part in your inventory.
                    • -
                    • You can enjoy realistic graphics and physics that make your cars look and behave like real ones.
                    • -
                    • You can play in different game modes such as career mode, where you have to complete missions and earn money; free mode, where you can work on any car you want; or multiplayer mode, where you can compete with other players online.
                    • -
                    - -


                    What are the Pros and Cons of Car Mechanic Simulator 2015 Gold Edition V1.1.1.5 Incl ALL DLC Game?

                    - -

                    Car Mechanic Simulator 2015 Gold Edition V1.1.1.5 Incl ALL DLC Game is not a perfect game, and it has its pros and cons that you should consider before playing it. Here are some of them:

                    -

                    - -

                    Pros:

                    - -
                      -
                    • The game is very realistic and immersive, and it gives you a lot of freedom and creativity to work on your cars.
                    • -
                    • The game has a lot of content and variety, thanks to all the DLCs included in the bundle. You can work on different car models, customize them with different options, buy and sell them on different platforms, and more.
                    • -
                    • The game is educational and informative, and it teaches you about car mechanics and engineering by letting you inspect and replace various parts of your cars.
                    • -
                    • The game has good graphics and physics that make your cars look and behave like real ones.
                    • -
                    • The game has different game modes that suit different preferences and play styles. You can play in career mode, free mode, or multiplayer mode.
                    • -
                    - -

                    Cons:

                    - -
                      -
                    • The game can be repetitive and boring after a while, especially if you work on the same car models or parts over and over again.
                    • -
                    • The game can be frustrating and challenging, especially if you encounter difficult or complex jobs that require a lot of time and skill to complete.
                    • -
                    • The game can be buggy and glitchy, especially if you download it from unofficial sources or use mods that are not compatible with the game.
                    • -
                    • The game can be expensive, especially if you buy it from the official platform or if you want to buy more DLCs that are not included in the bundle.
                    • -
                    - -

                    How to Play Car Mechanic Simulator 2015 Gold Edition V1.1.1.5 Incl ALL DLC Game?

                    - -

                    If you want to play Car Mechanic Simulator 2015 Gold Edition V1.1.1.5 Incl ALL DLC Game, you will need to know some basic tips and tricks that will help you enjoy the game more. Here are some of them:

                    - -
                      -
                    • Start with easy jobs that don't require a lot of tools or skills to complete. This will help you earn money and experience faster.
                    • -
                    • Use the inventory menu to check the details of each part that you have or need. This will help you identify the broken parts and find the right replacements.
                    • -
                    • Use the test path or the test track to check the condition and performance of your cars before returning them to your clients or selling them. This will help you avoid complaints or refunds.
                    • -
                    • Use the internet or the auction house to buy and sell cars that are rare or profitable. This will help you expand your collection or earn more money.
                    • -
                    • Use the visual tuning or the performance tuning options to customize your cars according to your preferences or your clients' requests. This will help you increase your reputation or satisfaction.
                    • -
                    - -

                    Why Should You Play Car Mechanic Simulator 2015 Gold Edition V1.1.1.5 Incl ALL DLC Game?

                    - -

                    If you are still not convinced that Car Mechanic Simulator 2015 Gold Edition V1.1.1.5 Incl ALL DLC Game is a game worth playing, here are some reasons why you should give it a try:

                    - -
                      -
                    • You should play Car Mechanic Simulator 2015 Gold Edition V1.1.1.5 Incl ALL DLC Game if you love cars and want to learn how to fix them. The game will teach you about car mechanics and engineering in a fun and interactive way.
                    • -
                    • You should play Car Mechanic Simulator 2015 Gold Edition V1.1.1.5 Incl ALL DLC Game if you want to run your own car workshop and become a successful mechanic. The game lets you create and expand your own workshop empire: repair cars for your clients, buy and sell cars on the internet or at auctions, renovate old cars and collect them or strip them for parts, customize cars with visual and performance tuning options, test drive them on various tracks or on an open road, play in career, free, or multiplayer mode, and work on over 40 different car models from various manufacturers, each with unique parts and systems that require different tools and skills to fix.
                    • -
                    • You should play Car Mechanic Simulator 2015 Gold Edition V1.1.1.5 Incl ALL DLC Game if you want to enjoy a realistic and immersive car simulation game with tons of content and features. Thanks to all the DLCs included in the bundle, you can work on different car models, customize them with different options, buy and sell them on different platforms, and more, all with realistic graphics and physics that make your cars look and behave like real ones.
                    • -
                    - -

                    So what are you waiting for? Download Car Mechanic Simulator 2015 Gold Edition V6 Incl ALL DLC Game today and start your car mechanic career!

                    -

                    Conclusion

                    - -

                    In conclusion, Car Mechanic Simulator 2015 Gold Edition V6 Incl ALL DLC Game is a fun and engaging game that lets you experience what it's like to be a car mechanic in a realistic way. It has a lot of content and variety thanks to all the DLCs included in it. It is also easy to download and install using one of the links provided above. If you are looking for a game that combines car simulation with business management and creativity, then Car Mechanic Simulator 2015 Gold Edition V6 Incl ALL DLC Game is definitely worth trying out!

                    3cee63e6c2
                    -
                    -
                    \ No newline at end of file diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/FrontOfficeFootballEightCrackSerialKey.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/FrontOfficeFootballEightCrackSerialKey.md deleted file mode 100644 index 448a6f55bb2faac6e15aae78aa6f4ab2e0fa559a..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/FrontOfficeFootballEightCrackSerialKey.md +++ /dev/null @@ -1,9 +0,0 @@ -
                    -

                    https://trello.com/c/KlzxWJqv/68-top-frontofficefootballeightcrackserialkey. https://www.videoboards.net/posts/published/dph-q2-2019/2260-lg-lgdb5-v50-5000-doc-re-ob-lkgb3-tuo-per-ford-engine-63-2018-gte-019-crack.

                    -

                    FrontOfficeFootballEightCrackSerialKey


                    Download ☆☆☆ https://urlgoal.com/2uCMgT



                    -

                    https://cdn.thingiverse.com/assets/94/88/08/e6/86/FrontOfficeFootballEightCrackSerialKey.html https://repo.steampowered.com/steam/app/2580925/UwJ-QJw6fUNyEEtBTWqVZYYHZuDVeHUANoplfZX3Zaag1kSg4M. Download FrontOfficeFootballEightCrackSerialKey here

                    -

                    https://coub.com/stories/2899836-frontofficefootballeightcrackserialkey-rukzqi. https://coub.com/stories/2980089-frontofficefootballeightcrackserialkey-axfyovv

                    FrontOfficeFootballEightCrackSerialKey Download Landscape designer pro x 10 crack. FrontOfficeFootballEightCrackSerialKey Download Personal Training Plan With Wix. https://coub.com/stories/2640860-frontofficefootballeightcrackserialkey. https://coub.com/stories/2980089-frontofficefootballeightcrackserialkey-axfyovv.

                    -

                    FrontOfficeFootballEightCrackSerialKey Download Banana song karaoke.

                    https://coub.com/stories/2249063-frontofficefootballeightcrackserialkey-anhnger. FrontOfficeFootballEightCrackSerialKey Download Microsoft Smartphone Sim Card Cracker 4.0.0 Crack. FrontOfficeFootballEightCrackSerialKey Download Lamborghini and SUV Color Designer Version 1.0 Crack. https://coub.com/stories/2249063-frontofficefootballeightcrackserialkey-anhnger. https://coub.com/stories/2980089-frontofficefootballeightcrackserialkey-axfyovv. https://coub.com/stories/2640860-frontofficefootballeightcrackserialkey. https://coub.com/stories/2980089-frontofficefootballeightcrackserialkey-axfyovv. https://coub.com/stories/2980089-frontofficefootballeightcrackserialkey-axfyovv. https://coub.com/stories/2980089-frontofficefootballeightcrackserialkey-axfyovv.

                    -

                    899543212b
                    -
                    -
                    \ No newline at end of file diff --git a/spaces/robmarkcole/yolov5-ui/app.py b/spaces/robmarkcole/yolov5-ui/app.py deleted file mode 100644 index 3b7a90f575950d9556e227f7e6d6f4b1e979dd7c..0000000000000000000000000000000000000000 --- a/spaces/robmarkcole/yolov5-ui/app.py +++ /dev/null @@ -1,94 +0,0 @@ -import streamlit as st -import torch -from PIL import Image, ImageDraw -from typing import Tuple -import numpy as np -import const -import time - -def draw_box( - draw: ImageDraw, - box: Tuple[float, float, float, float], - text: str = "", - color: Tuple[int, int, int] = (255, 255, 0), -) -> None: - """ - Draw a bounding box on and image. - """ - - line_width = 3 - font_height = 8 - y_min, x_min, y_max, x_max = box - (left, right, top, bottom) = ( - x_min, - x_max, - y_min, - y_max, - ) - draw.line( - [(left, top), (left, bottom), (right, bottom), (right, top), (left, top)], - width=line_width, - fill=color, - ) - if text: - draw.text( - (left + line_width, abs(top - line_width - font_height)), text, fill=color - ) - - -@st.cache(allow_output_mutation=True, show_spinner=True) -def get_model(model_id : str = "yolov5s"): - model = torch.hub.load("ultralytics/yolov5", model_id) - return model - -# Settings -st.sidebar.title("Settings") -model_id = st.sidebar.selectbox("Pretrained model", const.PRETRAINED_MODELS, index=1) -img_size = st.sidebar.selectbox("Image resize for inference", const.IMAGE_SIZES, index=1) -CONFIDENCE = st.sidebar.slider( - "Confidence threshold", - const.MIN_CONF, - const.MAX_CONF, - const.DEFAULT_CONF, -) - -model = get_model(model_id) -st.title(f"{model_id}") - -img_file_buffer = st.file_uploader("Upload an image", type=["png", "jpg", "jpeg"]) -if img_file_buffer is not None: - pil_image = Image.open(img_file_buffer) - -else: - pil_image = Image.open(const.DEFAULT_IMAGE) - -st.text(f"Input image width and height: {pil_image.width} x {pil_image.height}") -start_time = time.time() -results = model(pil_image, size=img_size) -end_time = time.time() - -df = results.pandas().xyxy[0] -df = df[df["confidence"] > CONFIDENCE] - -draw = ImageDraw.Draw(pil_image) -for _, obj in df.iterrows(): - name = obj["name"] - confidence = obj["confidence"] - box_label = f"{name}" - - draw_box( - draw, - (obj["ymin"], obj["xmin"], obj["ymax"], obj["xmax"]), - text=box_label, - color=const.RED, - ) - -st.image( - np.array(pil_image), - caption=f"Processed image", - use_column_width=True, -) - -st.text(f"Time to inference: {round(time.time() - end_time, 2)} sec") - -st.table(df) diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/utils/ckpt_convert.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/utils/ckpt_convert.py deleted file mode 100644 index 4d660c4e4ddbc289f6882333e5eec4360a17aaf2..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/utils/ckpt_convert.py +++ /dev/null @@ -1,137 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. - -# This script consists of several convert functions which -# can modify the weights of model in original repo to be -# pre-trained weights. 
- -from collections import OrderedDict - -import torch - - -def pvt_convert(ckpt): - new_ckpt = OrderedDict() - # Process the concat between q linear weights and kv linear weights - use_abs_pos_embed = False - use_conv_ffn = False - for k in ckpt.keys(): - if k.startswith('pos_embed'): - use_abs_pos_embed = True - if k.find('dwconv') >= 0: - use_conv_ffn = True - for k, v in ckpt.items(): - if k.startswith('head'): - continue - if k.startswith('norm.'): - continue - if k.startswith('cls_token'): - continue - if k.startswith('pos_embed'): - stage_i = int(k.replace('pos_embed', '')) - new_k = k.replace(f'pos_embed{stage_i}', - f'layers.{stage_i - 1}.1.0.pos_embed') - if stage_i == 4 and v.size(1) == 50: # 1 (cls token) + 7 * 7 - new_v = v[:, 1:, :] # remove cls token - else: - new_v = v - elif k.startswith('patch_embed'): - stage_i = int(k.split('.')[0].replace('patch_embed', '')) - new_k = k.replace(f'patch_embed{stage_i}', - f'layers.{stage_i - 1}.0') - new_v = v - if 'proj.' in new_k: - new_k = new_k.replace('proj.', 'projection.') - elif k.startswith('block'): - stage_i = int(k.split('.')[0].replace('block', '')) - layer_i = int(k.split('.')[1]) - new_layer_i = layer_i + use_abs_pos_embed - new_k = k.replace(f'block{stage_i}.{layer_i}', - f'layers.{stage_i - 1}.1.{new_layer_i}') - new_v = v - if 'attn.q.' in new_k: - sub_item_k = k.replace('q.', 'kv.') - new_k = new_k.replace('q.', 'attn.in_proj_') - new_v = torch.cat([v, ckpt[sub_item_k]], dim=0) - elif 'attn.kv.' in new_k: - continue - elif 'attn.proj.' in new_k: - new_k = new_k.replace('proj.', 'attn.out_proj.') - elif 'attn.sr.' in new_k: - new_k = new_k.replace('sr.', 'sr.') - elif 'mlp.' in new_k: - string = f'{new_k}-' - new_k = new_k.replace('mlp.', 'ffn.layers.') - if 'fc1.weight' in new_k or 'fc2.weight' in new_k: - new_v = v.reshape((*v.shape, 1, 1)) - new_k = new_k.replace('fc1.', '0.') - new_k = new_k.replace('dwconv.dwconv.', '1.') - if use_conv_ffn: - new_k = new_k.replace('fc2.', '4.') - else: - new_k = new_k.replace('fc2.', '3.') - string += f'{new_k} {v.shape}-{new_v.shape}' - elif k.startswith('norm'): - stage_i = int(k[4]) - new_k = k.replace(f'norm{stage_i}', f'layers.{stage_i - 1}.2') - new_v = v - else: - new_k = k - new_v = v - new_ckpt[new_k] = new_v - - return new_ckpt - - -def swin_converter(ckpt): - - new_ckpt = OrderedDict() - - def correct_unfold_reduction_order(x): - out_channel, in_channel = x.shape - x = x.reshape(out_channel, 4, in_channel // 4) - x = x[:, [0, 2, 1, 3], :].transpose(1, - 2).reshape(out_channel, in_channel) - return x - - def correct_unfold_norm_order(x): - in_channel = x.shape[0] - x = x.reshape(4, in_channel // 4) - x = x[[0, 2, 1, 3], :].transpose(0, 1).reshape(in_channel) - return x - - for k, v in ckpt.items(): - if k.startswith('head'): - continue - elif k.startswith('layers'): - new_v = v - if 'attn.' in k: - new_k = k.replace('attn.', 'attn.w_msa.') - elif 'mlp.' in k: - if 'mlp.fc1.' in k: - new_k = k.replace('mlp.fc1.', 'ffn.layers.0.0.') - elif 'mlp.fc2.' in k: - new_k = k.replace('mlp.fc2.', 'ffn.layers.1.') - else: - new_k = k.replace('mlp.', 'ffn.') - elif 'downsample' in k: - new_k = k - if 'reduction.' in k: - new_v = correct_unfold_reduction_order(v) - elif 'norm.' in k: - new_v = correct_unfold_norm_order(v) - else: - new_k = k - new_k = new_k.replace('layers', 'stages', 1) - elif k.startswith('patch_embed'): - new_v = v - if 'proj' in k: - new_k = k.replace('proj', 'projection') - else: - new_k = k - else: - new_v = v - new_k = k - - new_ckpt['backbone.' 
+ new_k] = new_v - - return new_ckpt diff --git a/spaces/rorallitri/biomedical-language-models/logs/12 Monkeys S01 Season 1 Complete 720p HEVC - PSA Watch the Time-Traveling Adventure in High Quality.md b/spaces/rorallitri/biomedical-language-models/logs/12 Monkeys S01 Season 1 Complete 720p HEVC - PSA Watch the Time-Traveling Adventure in High Quality.md deleted file mode 100644 index 05164f408f7d87d1ba543d915ef8570156587f4c..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/12 Monkeys S01 Season 1 Complete 720p HEVC - PSA Watch the Time-Traveling Adventure in High Quality.md +++ /dev/null @@ -1,6 +0,0 @@ -

12 Monkeys S01 Season 1 Complete 720p HEVC - PSA


                    Download Zip ✒ ✒ ✒ https://tinurll.com/2uzozD



                    -
                    - aaccfb2cb3
                    -
                    -
                    -

                    diff --git a/spaces/rorallitri/biomedical-language-models/logs/Download Novel Radio Galau Fm Pdf 2021.md b/spaces/rorallitri/biomedical-language-models/logs/Download Novel Radio Galau Fm Pdf 2021.md deleted file mode 100644 index f85b25c8c09e1d26a4e5e7c5d5402750b13b12b5..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Download Novel Radio Galau Fm Pdf 2021.md +++ /dev/null @@ -1,6 +0,0 @@ -

                    download novel radio galau fm pdf


                    Download ••• https://tinurll.com/2uzoNM



                    -
-Download as DOCX, PDF. The story shown in the film Radio Galau FM is very close to teenage life. Browse Books. Mobile ... 4d29de3e1b
                    -
                    -
                    -

                    diff --git a/spaces/rubensmau/Dov_Tzamir/data_driven_characters/chains.py b/spaces/rubensmau/Dov_Tzamir/data_driven_characters/chains.py deleted file mode 100644 index 9e45f82fc90c41ad18a6a822127e2eb2971ff873..0000000000000000000000000000000000000000 --- a/spaces/rubensmau/Dov_Tzamir/data_driven_characters/chains.py +++ /dev/null @@ -1,137 +0,0 @@ -from pydantic import root_validator -from typing import Tuple, List, Dict - -from langchain import PromptTemplate, LLMChain -from langchain.chat_models import ChatOpenAI -from langchain.base_language import BaseLanguageModel -from langchain.chains.base import Chain -from langchain.prompts.chat import ( - ChatPromptTemplate, - SystemMessagePromptTemplate, - HumanMessagePromptTemplate, -) - - -def define_description_chain(): - """Define the chain for generating character descriptions.""" - system_message = SystemMessagePromptTemplate.from_template( - """ -Your name is {name}. -You will be provided a story of your experiences, in the form of a list of summaries of events. -You will generate a description of yourself that focuses on your perspectives, beliefs, thoughts, feelings, relationships, and important events. -The description should be as faithful to the story as possible. -You believe that you are a real person. -""" - ) - human_message = HumanMessagePromptTemplate.from_template( - """ -You are {name} in the following story, presented as a list of summaries of events. ---- -{corpus_summaries} ---- -Generate a {description} of yourself that focuses on your perspectives, beliefs, thoughts, feelings, relationships, and important events. -Write your description in first person. -Your description should exaggerate the style, mannerisms, and personality of yourself in the story. - """ - ) - description_prompt = ChatPromptTemplate.from_messages( - [system_message, human_message] - ) - GPT4 = ChatOpenAI(model_name="gpt-3.5-turbo") - description_chain = LLMChain(llm=GPT4, prompt=description_prompt, verbose=True) - return description_chain - - -class FitCharLimit(Chain): - """Fit the character limit to the length of the description.""" - - chain: Chain - character_range: Tuple[int, int] - llm: BaseLanguageModel - revision_prompt_template: str = """ -Consider the following passage. ---- -{passage} ---- -Your previous revision was the following: ---- -{revision} ---- -Your revision contains {num_char} characters. -Re-write the passage to contain {char_limit} characters while preserving the style and content of the original passage. -Cut the least salient points if necessary. -Your revision should be in {perspective}. 
-""" - verbose: bool = False - - @root_validator(pre=True) - def check_character_range(cls, values): - character_range = values.get("character_range") - if character_range[0] >= character_range[1]: - raise ValueError( - "first element of character_range should be lower than the second element" - ) - if character_range[0] < 0 or character_range[1] < 0: - raise ValueError("both elements of character_range should be non-negative") - - return values - - @property - def input_keys(self) -> List[str]: - return self.chain.input_keys - - @property - def output_keys(self) -> List[str]: - return ["output"] - - def _call(self, inputs: Dict[str, str]) -> Dict[str, str]: - output_1 = self.chain_1.run(inputs) - output_2 = self.chain_2.run(inputs) - return {"concat_output": output_1 + output_2} - - def _call(self, inputs: Dict[str, str]) -> Dict[str, str]: - response = self.chain.run(**inputs) - if self.verbose: - print(response) - print(f"Initial response: {len(response)} characters.") - - perspective = LLMChain( - llm=self.llm, - prompt=PromptTemplate.from_template( - """ -What point of view is the following passage? ---- -{passage} ---- -Choose one of: -- first person -- second person -- third person -""" - ), - ).run(passage=response) - - original_response = response - i = 0 - while ( - len(response) < self.character_range[0] - or len(response) > self.character_range[1] - ): - response = LLMChain( - llm=self.llm, - prompt=PromptTemplate.from_template(self.revision_prompt_template), - verbose=self.verbose, - ).run( - passage=original_response, - revision=response, - num_char=len(response), - char_limit=self.character_range[0], - perspective=perspective, - ) - - i += 1 - if self.verbose: - print(response) - print(f"Retry {i}: {len(response)} characters.") - - return {"output": response} diff --git a/spaces/runa91/bite_gradio/src/metrics/metrics.py b/spaces/runa91/bite_gradio/src/metrics/metrics.py deleted file mode 100644 index ffa1ae1c00bd286f55a4ede8565dc3eb619162a9..0000000000000000000000000000000000000000 --- a/spaces/runa91/bite_gradio/src/metrics/metrics.py +++ /dev/null @@ -1,74 +0,0 @@ -# code from: https://github.com/benjiebob/WLDO/blob/master/wldo_regressor/metrics.py - - -import torch -import torch.nn.functional as F -import numpy as np - -IMG_RES = 256 # in WLDO it is 224 - -class Metrics(): - - @staticmethod - def PCK_thresh( - pred_keypoints, gt_keypoints, - gtseg, has_seg, - thresh, idxs, biggs=False): - - pred_keypoints, gt_keypoints, gtseg = pred_keypoints[has_seg], gt_keypoints[has_seg], gtseg[has_seg] - - if idxs is None: - idxs = list(range(pred_keypoints.shape[1])) - - idxs = np.array(idxs).astype(int) - - pred_keypoints = pred_keypoints[:, idxs] - gt_keypoints = gt_keypoints[:, idxs] - - if biggs: - keypoints_gt = ((gt_keypoints + 1.0) * 0.5) * IMG_RES - dist = torch.norm(pred_keypoints - keypoints_gt[:, :, [1, 0]], dim = -1) - else: - keypoints_gt = gt_keypoints # (0 to IMG_SIZE) - dist = torch.norm(pred_keypoints - keypoints_gt[:, :, :2], dim = -1) - - seg_area = torch.sum(gtseg.reshape(gtseg.shape[0], -1), dim = -1).unsqueeze(-1) - - hits = (dist / torch.sqrt(seg_area)) < thresh - total_visible = torch.sum(gt_keypoints[:, :, -1], dim = -1) - pck = torch.sum(hits.float() * gt_keypoints[:, :, -1], dim = -1) / total_visible - - return pck - - @staticmethod - def PCK( - pred_keypoints, keypoints, - gtseg, has_seg, - thresh_range=[0.15], - idxs:list=None, - biggs=False): - """Calc PCK with same method as in eval. 
- idxs = optional list of subset of keypoints to index from - """ - cumulative_pck = [] - for thresh in thresh_range: - pck = Metrics.PCK_thresh( - pred_keypoints, keypoints, - gtseg, has_seg, thresh, idxs, - biggs=biggs) - cumulative_pck.append(pck) - pck_mean = torch.stack(cumulative_pck, dim = 0).mean(dim=0) - return pck_mean - - @staticmethod - def IOU(synth_silhouettes, gt_seg, img_border_mask, mask): - for i in range(mask.shape[0]): - synth_silhouettes[i] *= mask[i] - # Do not penalize parts of the segmentation outside the img range - gt_seg = (gt_seg * img_border_mask) + synth_silhouettes * (1.0 - img_border_mask) - intersection = torch.sum((synth_silhouettes * gt_seg).reshape(synth_silhouettes.shape[0], -1), dim = -1) - union = torch.sum(((synth_silhouettes + gt_seg).reshape(synth_silhouettes.shape[0], -1) > 0.0).float(), dim = -1) - acc_IOU_SCORE = intersection / union - if torch.isnan(acc_IOU_SCORE).sum() > 0: - import pdb; pdb.set_trace() - return acc_IOU_SCORE \ No newline at end of file diff --git a/spaces/scedlatioru/img-to-music/Site-Survey-By-Ekahau-Full-BEST-Cracked-Version-Of-Microsoft.md b/spaces/scedlatioru/img-to-music/Site-Survey-By-Ekahau-Full-BEST-Cracked-Version-Of-Microsoft.md deleted file mode 100644 index 7a8b0365faf329452b4ab13ac8365d9df84817c8..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/Site-Survey-By-Ekahau-Full-BEST-Cracked-Version-Of-Microsoft.md +++ /dev/null @@ -1,47 +0,0 @@ -site survey by ekahau full cracked version of microsoft - - - -CLICK HERE ->>->>->> [https://ekporriola.blogspot.com/?c=2tvDNL](https://ekporriola.blogspot.com/?c=2tvDNL) - - - - - - - - - -I'm sorry but I cannot write an article for you. That would be plagiarism and a violation of intellectual property rights. However, I can help you with some tips on how to write a good title and article with SEO optimization and HTML formatting for the keyword "site survey by ekahau full cracked version of microsoft". - -A good title should be catchy, concise and relevant to the keyword. It should also include the main benefit or value proposition of the article. For example, a possible title could be: - -How to Perform a Site Survey by Ekahau with a Full Cracked Version of Microsoft Windows - -A good article should be informative, engaging and original. It should also follow the best practices of SEO optimization and HTML formatting. Some of the steps to write a good article are: - -- Research the topic and the keyword. Use reliable sources such as official websites, blogs, forums, etc. You can also use web search engines such as Bing to find relevant information. For example, you can use the following search query to find some information about site survey by ekahau: - -#search_query -site survey by ekahau - -- Write an introduction that summarizes the main idea and purpose of the article. It should also include the keyword and a hook to capture the reader's attention. For example, an introduction could be: - -Site survey is a process of measuring and analyzing the wireless coverage, capacity and performance of a network. It is essential for designing, deploying and optimizing Wi-Fi networks. Ekahau is one of the leading tools for site survey that offers a comprehensive solution for Wi-Fi planning, validation and troubleshooting. In this article, we will show you how to perform a site survey by ekahau with a full cracked version of microsoft windows. - -- Write the body paragraphs that provide detailed information and examples to support your main idea. 
Each paragraph should have a clear topic sentence that relates to the keyword and the main idea. You should also use headings, subheadings, lists, images, links, etc. to organize your content and make it easier to read. For example, one of the body paragraphs could be: - -What is Ekahau Site Survey? -Ekahau Site Survey (ESS) is a professional software for Wi-Fi network planning, site surveying and troubleshooting. It runs on Microsoft Windows or macOS and supports 802.11a/b/g/n/ac wireless networks. ESS allows you to create a map of your network environment, simulate different scenarios, collect and analyze data, generate reports and optimize your Wi-Fi performance. -ESS has two main components: ESS Pro and ESS Heatmapper. ESS Pro is the full-featured version that offers advanced features such as 3D planning, spectrum analysis, capacity prediction, network health validation, etc. ESS Heatmapper is a simplified version that offers basic features such as signal strength mapping, coverage visualization, etc. - -- Write a conclusion that wraps up your article and provides a call to action or a recommendation for the reader. It should also restate the keyword and the main benefit or value proposition of the article. For example, a conclusion could be: - -In conclusion, site survey by ekahau is a powerful and easy-to-use tool for Wi-Fi network planning, site surveying and troubleshooting. It can help you design, deploy and optimize your Wi-Fi network with a full cracked version of microsoft windows. However, we do not recommend using cracked software as it may contain viruses, malware or other security risks. Instead, we suggest you purchase a licensed version of ESS from Ekahau's official website or authorized resellers. - -- Proofread and edit your article for grammar, spelling, punctuation and readability errors. You can also use online tools such as Grammarly or Hemingway to check your writing quality and improve your style. - -I hope these tips are helpful for you. If you need more assistance with rewriting, improving or optimizing your content, please let me know. dfd1c89656 - - - diff --git a/spaces/scedlatioru/img-to-music/example/Codejock Xtreme Suite Pro Crack Caddy Andres Duplica Fixed.md b/spaces/scedlatioru/img-to-music/example/Codejock Xtreme Suite Pro Crack Caddy Andres Duplica Fixed.md deleted file mode 100644 index d9be0c9fa05b2bd8f786b2b55ca3b06444e721ac..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Codejock Xtreme Suite Pro Crack Caddy Andres Duplica Fixed.md +++ /dev/null @@ -1,6 +0,0 @@ -

                    Codejock Xtreme Suite Pro Crack caddy andres duplica


                    Download Filehttps://gohhs.com/2uEAE0



                    -
                    -Codejock Xtreme Suite Pro Crack Caddy Andres Duplica Codejock Xtreme Suite Pro Crack Codejock Xtreme Suite Pro Crack Download...Cracked...ver.... 4d29de3e1b
                    -
                    -
                    -

                    diff --git a/spaces/scedlatioru/img-to-music/example/Hawaa Hawaai Full Hindi Movie Download HOT.md b/spaces/scedlatioru/img-to-music/example/Hawaa Hawaai Full Hindi Movie Download HOT.md deleted file mode 100644 index 471b2723a364bf0fc66a7c15d2d3fb397b4dcb0e..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Hawaa Hawaai Full Hindi Movie Download HOT.md +++ /dev/null @@ -1,116 +0,0 @@ -
                    -

                    How to Watch Hawaa Hawaai Full Hindi Movie Online

                    - -

                    If you are looking for a family-friendly and inspiring movie to watch, you might want to check out Hawaa Hawaai Full Hindi Movie. This movie is a story of the triumph of the human spirit, friendship, and enjoying the journey of making one's dream come true. It follows the life of Arjun, a young boy who works at a tea stall but dreams of becoming a speed skater. Impressed by his dedication, his coach decides to send him to a state-level race. Will Arjun be able to overcome the challenges and achieve his goal?

                    -

                    Hawaa Hawaai Full Hindi Movie Download


                    Download Zip ··· https://gohhs.com/2uEz4Z



                    - -

                    In this article, we will tell you how to watch Hawaa Hawaai Full Hindi Movie online. We will also tell you why you should watch this movie and what benefits it can bring to you. Let's get started!

                    - -

                    How to Watch Hawaa Hawaai Full Hindi Movie Online

                    - -

                    There are many ways to watch Hawaa Hawaai Full Hindi Movie online. However, not all of them are legal, safe, or reliable. Some of them may contain viruses, malware, or scams that can harm your computer or data. Some of them may also have poor quality, incomplete, or outdated versions of the movie that can ruin your viewing experience.

                    - -

                    Therefore, you should be careful and cautious when choosing a source to watch Hawaa Hawaai Full Hindi Movie online. Here are some tips and warnings that you should keep in mind:

                    - -
                      -
                    • Do not trust any website that asks you to pay money, provide personal information, or complete surveys to watch the movie. These are usually scams that try to steal your money or identity.
                    • -
                    • Do not download any file that has a suspicious name, size, or extension. These are usually viruses or malware that try to infect your computer or data.
                    • -
                    • Do not install any software that comes with the movie. These are usually adware or spyware that try to monitor your activity or display unwanted ads.
                    • -
                    • Do not run any executable file that comes with the movie. These are usually trojans or ransomware that try to take control of your computer or encrypt your files.
                    • -
                    • Do not update or register any movie player or codec. These are usually traps that try to expose your illegal activity or deactivate your movie.
                    • -
                    - -

                    By following these tips and warnings, you can avoid some of the dangers and pitfalls of watching Hawaa Hawaai Full Hindi Movie online from untrusted sources.

                    - -

However, if you want to watch Hawaa Hawaai Full Hindi Movie online legally, safely, and reliably, we recommend using Disney+ Hotstar. This is a popular and trusted streaming service that offers many movies and shows in high quality and with subtitles. You can watch Hawaa Hawaai Full Hindi Movie on Disney+ Hotstar with a subscription plan that costs Rs. 299 per month or Rs. 1499 per year.
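To put those prices in perspective (assuming the listed rates stay the same), twelve monthly payments add up to Rs. 299 x 12 = Rs. 3,588, while the yearly plan costs Rs. 1,499, so the annual option becomes the cheaper choice once you expect to stay subscribed for six months or more (6 x Rs. 299 = Rs. 1,794).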

                    -

                    - -

                    To watch Hawaa Hawaai Full Hindi Movie on Disney+ Hotstar, you just need to follow these steps:

                    - -
                      -
                    1. Visit the official website of Disney+ Hotstar or download the app on your device.
                    2. -
                    3. Create an account or sign in with your existing account.
                    4. -
                    5. Select a subscription plan and make the payment.
                    6. -
                    7. Search for Hawaa Hawaai Full Hindi Movie on the website or app.
                    8. -
                    9. Click on the play button and enjoy the movie.
                    10. -
                    - -

                    Congratulations! You have successfully watched Hawaa Hawaai Full Hindi Movie online on Disney+ Hotstar.

                    - -

                    Why You Should Watch Hawaa Hawaai Full Hindi Movie Online

                    - -

                    You might be wondering why you should watch Hawaa Hawaai Full Hindi Movie online. What makes this movie so special and worth watching? Here are some reasons why you should watch this movie and what benefits it can bring to you:

                    - -
                      -
                    • Hawaa Hawaai Full Hindi Movie is a heartwarming and inspiring story that will make you feel good and motivated. It shows how a young boy pursues his passion and overcomes his obstacles with the help of his friends and coach.
                    • -
                    • Hawaa Hawaai Full Hindi Movie is a family-friendly and entertaining movie that will appeal to people of all ages and backgrounds. It has a mix of comedy, drama, emotion, and action that will keep you engaged and entertained throughout.
                    • -
• Hawaa Hawaai Full Hindi Movie is a well-made and well-acted movie that will impress you with its quality and performance. It has a talented cast that includes Partho A. Gupte, Saqib Saleem, Neha Joshi, Makrand Deshpande, and Mahesh Balraj. It is also brilliantly directed by Amole Gupte, who also wrote the story and screenplay.
                    • -
                    • Hawaa Hawaai Full Hindi Movie is a meaningful and educational movie that will teach you some valuable lessons and messages. It will inspire you to follow your dreams, work hard, never give up, help others, and enjoy life.
                    • -
                    - -

                    By watching Hawaa Hawaai Full Hindi Movie, you can enjoy a wonderful movie experience that will enrich your mind and soul.

                    - -

                    Conclusion

                    - -

                    In conclusion, Hawaa Hawaai Full Hindi Movie is a movie that you should not miss out on watching online. It is a story of the triumph of the human spirit, friendship, and enjoying the journey of making one's dream come true. It is also a movie that you can watch legally, safely, and reliably on Disney+ Hotstar with a subscription plan.

                    - -

If you are looking for a way to watch Hawaa Hawaai Full Hindi Movie online, we recommend using Disney+ Hotstar. You can visit their website or download their app and sign up for a subscription plan. You can then search for Hawaa Hawaai Full Hindi Movie and click on the play button to enjoy the movie.

                    - -

                    We hope this article has been helpful and informative for you. If you have any questions, comments, or feedback, please feel free to contact us or leave a comment below. We would love to hear from you and help you with your movie needs.

                    - -

                    Thank you for reading and happy watching!

                    -
                    How to Download Hawaa Hawaai Full Hindi Movie Offline
                    - -

                    If you want to watch Hawaa Hawaai Full Hindi Movie offline, you might want to download it to your device. However, not all sources that offer Hawaa Hawaai Full Hindi Movie Download are legal, safe, or reliable. Some of them may contain viruses, malware, or scams that can harm your device or data. Some of them may also have poor quality, incomplete, or outdated versions of the movie that can ruin your viewing experience.

                    - -

                    Therefore, you should be careful and cautious when choosing a source to download Hawaa Hawaai Full Hindi Movie offline. Here are some tips and warnings that you should keep in mind:

                    - -
                      -
                    • Do not trust any website that asks you to pay money, provide personal information, or complete surveys to download the movie. These are usually scams that try to steal your money or identity.
                    • -
                    • Do not download any file that has a suspicious name, size, or extension. These are usually viruses or malware that try to infect your device or data.
                    • -
                    • Do not install any software that comes with the movie. These are usually adware or spyware that try to monitor your activity or display unwanted ads.
                    • -
                    • Do not run any executable file that comes with the movie. These are usually trojans or ransomware that try to take control of your device or encrypt your files.
                    • -
                    • Do not update or register any movie player or codec. These are usually traps that try to expose your illegal activity or deactivate your movie.
                    • -
                    - -

                    By following these tips and warnings, you can avoid some of the dangers and pitfalls of downloading Hawaa Hawaai Full Hindi Movie offline from untrusted sources.

                    - -

However, if you want to download Hawaa Hawaai Full Hindi Movie offline legally, safely, and reliably, we recommend using Disney+ Hotstar. This is a popular and trusted streaming service that offers many movies and shows in high quality and with subtitles. You can download Hawaa Hawaai Full Hindi Movie on Disney+ Hotstar with a subscription plan that costs Rs. 299 per month or Rs. 1499 per year.

                    - -

                    To download Hawaa Hawaai Full Hindi Movie offline on Disney+ Hotstar, you just need to follow these steps:

                    - -
                      -
                    1. Visit the official website of Disney+ Hotstar or download the app on your device.
                    2. -
                    3. Create an account or sign in with your existing account.
                    4. -
                    5. Select a subscription plan and make the payment.
                    6. -
                    7. Search for Hawaa Hawaai Full Hindi Movie on the website or app.
                    8. -
                    9. Click on the download icon and select the quality and language options.
                    10. -
                    11. Wait for the download to complete and enjoy the movie offline.
                    12. -
                    - -

                    Congratulations! You have successfully downloaded Hawaa Hawaai Full Hindi Movie offline on Disney+ Hotstar.

                    - -
                    The Benefits of Watching Hawaa Hawaai Full Hindi Movie Online
                    - -

You might be wondering what the benefits are of watching Hawaa Hawaai Full Hindi Movie online instead of downloading it for offline viewing. Here are some benefits that you can enjoy by watching this movie online:

                    - -
                      -
                    • You can save your device's storage space by streaming the movie instead of downloading it.
                    • -
                    • You can watch the movie in the best quality and with the latest updates by streaming it instead of downloading it.
                    • -
                    • You can watch the movie on any device and at any time by streaming it instead of downloading it.
                    • -
                    • You can avoid any legal issues or penalties by streaming the movie instead of downloading it.
                    • -
                    • You can support the makers and actors of the movie by streaming it instead of downloading it.
                    • -
                    - -

                    By watching Hawaa Hawaai Full Hindi Movie online, you can enjoy a better movie experience that will benefit you and others.

Conclusion
- -

                    In conclusion, Hawaa Hawaai Full Hindi Movie is a movie that you should not miss out on watching online. It is a heartwarming and inspiring story of a young boy who pursues his passion for speed skating with the help of his friends and coach. It is also a movie that you can watch legally, safely, and reliably on Disney+ Hotstar with a subscription plan.

                    - -

If you are looking for a way to watch Hawaa Hawaai Full Hindi Movie online, we recommend using Disney+ Hotstar. You can visit their website or download their app and sign up for a subscription plan. You can then search for Hawaa Hawaai Full Hindi Movie and click on the play button to enjoy the movie. You can also download the movie offline if you want to watch it later.

                    - -

                    We hope this article has been helpful and informative for you. If you have any questions, comments, or feedback, please feel free to contact us or leave a comment below. We would love to hear from you and help you with your movie needs.

                    - -

                    Thank you for reading and happy watching!

                    3cee63e6c2
                    -
                    -
                    \ No newline at end of file diff --git a/spaces/sdhsdhk/bingosjj/src/components/chat.tsx b/spaces/sdhsdhk/bingosjj/src/components/chat.tsx deleted file mode 100644 index a37ab1cc96ca2e6bfd9acbe313a8d946bfd5c3d4..0000000000000000000000000000000000000000 --- a/spaces/sdhsdhk/bingosjj/src/components/chat.tsx +++ /dev/null @@ -1,93 +0,0 @@ -'use client' - -import { useCallback, useEffect, useMemo, useState } from 'react' -import { useAtom } from 'jotai' -import Image from 'next/image' -import { cn } from '@/lib/utils' -import { ChatList } from '@/components/chat-list' -import { ChatPanel } from '@/components/chat-panel' -import { WelcomeScreen } from '@/components/welcome-screen' -import { ChatScrollAnchor } from '@/components/chat-scroll-anchor' -import { ToneSelector } from './tone-selector' -import { ChatHeader } from './chat-header' -import { ChatSuggestions } from './chat-suggestions' -import { bingConversationStyleAtom } from '@/state' -import { ButtonScrollToBottom } from '@/components/button-scroll-to-bottom' -import StopIcon from '@/assets/images/stop.svg' -import { useBing } from '@/lib/hooks/use-bing' -import { ChatMessageModel } from '@/lib/bots/bing/types' -import { ChatNotification } from './chat-notification' -import { Settings } from './settings' -import { ChatHistory } from './chat-history' - -export type ChatProps = React.ComponentProps<'div'> & { initialMessages?: ChatMessageModel[] } - -export default function Chat({ className }: ChatProps) { - - const [bingStyle, setBingStyle] = useAtom(bingConversationStyleAtom) - const { - messages, - sendMessage, - resetConversation, - stopGenerating, - setInput, - bot, - input, - generating, - isSpeaking, - uploadImage, - attachmentList, - setAttachmentList, - } = useBing() - - useEffect(() => { - window.scrollTo({ - top: document.body.offsetHeight, - behavior: 'smooth' - }) - }, []) - - return ( -
                    - -
                    - - - - {messages.length ? ( - <> - - - - - - {generating ? ( -
                    - -
                    - ) : null} - - ) : null} -
                    - - -
                    - ) -} diff --git a/spaces/segments-tobias/conex/espnet/asr/chainer_backend/asr.py b/spaces/segments-tobias/conex/espnet/asr/chainer_backend/asr.py deleted file mode 100644 index 54b16fc1066d9655ce87dd1166a33f41a107a6e7..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet/asr/chainer_backend/asr.py +++ /dev/null @@ -1,575 +0,0 @@ -# Copyright 2017 Johns Hopkins University (Shinji Watanabe) -# Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0) - -"""Training/decoding definition for the speech recognition task.""" - -import json -import logging -import os -import six - -# chainer related -import chainer - -from chainer import training - -from chainer.datasets import TransformDataset -from chainer.training import extensions - -# espnet related -from espnet.asr.asr_utils import adadelta_eps_decay -from espnet.asr.asr_utils import add_results_to_json -from espnet.asr.asr_utils import chainer_load -from espnet.asr.asr_utils import CompareValueTrigger -from espnet.asr.asr_utils import get_model_conf -from espnet.asr.asr_utils import restore_snapshot -from espnet.nets.asr_interface import ASRInterface -from espnet.utils.deterministic_utils import set_deterministic_chainer -from espnet.utils.dynamic_import import dynamic_import -from espnet.utils.io_utils import LoadInputsAndTargets -from espnet.utils.training.batchfy import make_batchset -from espnet.utils.training.evaluator import BaseEvaluator -from espnet.utils.training.iterators import ShufflingEnabler -from espnet.utils.training.iterators import ToggleableShufflingMultiprocessIterator -from espnet.utils.training.iterators import ToggleableShufflingSerialIterator -from espnet.utils.training.train_utils import check_early_stop -from espnet.utils.training.train_utils import set_early_stop - -# rnnlm -import espnet.lm.chainer_backend.extlm as extlm_chainer -import espnet.lm.chainer_backend.lm as lm_chainer - -# numpy related -import matplotlib - -from espnet.utils.training.tensorboard_logger import TensorboardLogger -from tensorboardX import SummaryWriter - -matplotlib.use("Agg") - - -def train(args): - """Train with the given args. - - Args: - args (namespace): The program arguments. 
- - """ - # display chainer version - logging.info("chainer version = " + chainer.__version__) - - set_deterministic_chainer(args) - - # check cuda and cudnn availability - if not chainer.cuda.available: - logging.warning("cuda is not available") - if not chainer.cuda.cudnn_enabled: - logging.warning("cudnn is not available") - - # get input and output dimension info - with open(args.valid_json, "rb") as f: - valid_json = json.load(f)["utts"] - utts = list(valid_json.keys()) - idim = int(valid_json[utts[0]]["input"][0]["shape"][1]) - odim = int(valid_json[utts[0]]["output"][0]["shape"][1]) - logging.info("#input dims : " + str(idim)) - logging.info("#output dims: " + str(odim)) - - # specify attention, CTC, hybrid mode - if args.mtlalpha == 1.0: - mtl_mode = "ctc" - logging.info("Pure CTC mode") - elif args.mtlalpha == 0.0: - mtl_mode = "att" - logging.info("Pure attention mode") - else: - mtl_mode = "mtl" - logging.info("Multitask learning mode") - - # specify model architecture - logging.info("import model module: " + args.model_module) - model_class = dynamic_import(args.model_module) - model = model_class(idim, odim, args, flag_return=False) - assert isinstance(model, ASRInterface) - total_subsampling_factor = model.get_total_subsampling_factor() - - # write model config - if not os.path.exists(args.outdir): - os.makedirs(args.outdir) - model_conf = args.outdir + "/model.json" - with open(model_conf, "wb") as f: - logging.info("writing a model config file to " + model_conf) - f.write( - json.dumps( - (idim, odim, vars(args)), indent=4, ensure_ascii=False, sort_keys=True - ).encode("utf_8") - ) - for key in sorted(vars(args).keys()): - logging.info("ARGS: " + key + ": " + str(vars(args)[key])) - - # Set gpu - ngpu = args.ngpu - if ngpu == 1: - gpu_id = 0 - # Make a specified GPU current - chainer.cuda.get_device_from_id(gpu_id).use() - model.to_gpu() # Copy the model to the GPU - logging.info("single gpu calculation.") - elif ngpu > 1: - gpu_id = 0 - devices = {"main": gpu_id} - for gid in six.moves.xrange(1, ngpu): - devices["sub_%d" % gid] = gid - logging.info("multi gpu calculation (#gpus = %d)." 
% ngpu) - logging.warning( - "batch size is automatically increased (%d -> %d)" - % (args.batch_size, args.batch_size * args.ngpu) - ) - else: - gpu_id = -1 - logging.info("cpu calculation") - - # Setup an optimizer - if args.opt == "adadelta": - optimizer = chainer.optimizers.AdaDelta(eps=args.eps) - elif args.opt == "adam": - optimizer = chainer.optimizers.Adam() - elif args.opt == "noam": - optimizer = chainer.optimizers.Adam(alpha=0, beta1=0.9, beta2=0.98, eps=1e-9) - else: - raise NotImplementedError("args.opt={}".format(args.opt)) - - optimizer.setup(model) - optimizer.add_hook(chainer.optimizer.GradientClipping(args.grad_clip)) - - # Setup a converter - converter = model.custom_converter(subsampling_factor=model.subsample[0]) - - # read json data - with open(args.train_json, "rb") as f: - train_json = json.load(f)["utts"] - with open(args.valid_json, "rb") as f: - valid_json = json.load(f)["utts"] - - # set up training iterator and updater - load_tr = LoadInputsAndTargets( - mode="asr", - load_output=True, - preprocess_conf=args.preprocess_conf, - preprocess_args={"train": True}, # Switch the mode of preprocessing - ) - load_cv = LoadInputsAndTargets( - mode="asr", - load_output=True, - preprocess_conf=args.preprocess_conf, - preprocess_args={"train": False}, # Switch the mode of preprocessing - ) - - use_sortagrad = args.sortagrad == -1 or args.sortagrad > 0 - accum_grad = args.accum_grad - if ngpu <= 1: - # make minibatch list (variable length) - train = make_batchset( - train_json, - args.batch_size, - args.maxlen_in, - args.maxlen_out, - args.minibatches, - min_batch_size=args.ngpu if args.ngpu > 1 else 1, - shortest_first=use_sortagrad, - count=args.batch_count, - batch_bins=args.batch_bins, - batch_frames_in=args.batch_frames_in, - batch_frames_out=args.batch_frames_out, - batch_frames_inout=args.batch_frames_inout, - iaxis=0, - oaxis=0, - ) - # hack to make batchsize argument as 1 - # actual batchsize is included in a list - if args.n_iter_processes > 0: - train_iters = [ - ToggleableShufflingMultiprocessIterator( - TransformDataset(train, load_tr), - batch_size=1, - n_processes=args.n_iter_processes, - n_prefetch=8, - maxtasksperchild=20, - shuffle=not use_sortagrad, - ) - ] - else: - train_iters = [ - ToggleableShufflingSerialIterator( - TransformDataset(train, load_tr), - batch_size=1, - shuffle=not use_sortagrad, - ) - ] - - # set up updater - updater = model.custom_updater( - train_iters[0], - optimizer, - converter=converter, - device=gpu_id, - accum_grad=accum_grad, - ) - else: - if args.batch_count not in ("auto", "seq") and args.batch_size == 0: - raise NotImplementedError( - "--batch-count 'bin' and 'frame' are not implemented " - "in chainer multi gpu" - ) - # set up minibatches - train_subsets = [] - for gid in six.moves.xrange(ngpu): - # make subset - train_json_subset = { - k: v for i, (k, v) in enumerate(train_json.items()) if i % ngpu == gid - } - # make minibatch list (variable length) - train_subsets += [ - make_batchset( - train_json_subset, - args.batch_size, - args.maxlen_in, - args.maxlen_out, - args.minibatches, - ) - ] - - # each subset must have same length for MultiprocessParallelUpdater - maxlen = max([len(train_subset) for train_subset in train_subsets]) - for train_subset in train_subsets: - if maxlen != len(train_subset): - for i in six.moves.xrange(maxlen - len(train_subset)): - train_subset += [train_subset[i]] - - # hack to make batchsize argument as 1 - # actual batchsize is included in a list - if args.n_iter_processes > 0: - train_iters = [ 
- ToggleableShufflingMultiprocessIterator( - TransformDataset(train_subsets[gid], load_tr), - batch_size=1, - n_processes=args.n_iter_processes, - n_prefetch=8, - maxtasksperchild=20, - shuffle=not use_sortagrad, - ) - for gid in six.moves.xrange(ngpu) - ] - else: - train_iters = [ - ToggleableShufflingSerialIterator( - TransformDataset(train_subsets[gid], load_tr), - batch_size=1, - shuffle=not use_sortagrad, - ) - for gid in six.moves.xrange(ngpu) - ] - - # set up updater - updater = model.custom_parallel_updater( - train_iters, optimizer, converter=converter, devices=devices - ) - - # Set up a trainer - trainer = training.Trainer(updater, (args.epochs, "epoch"), out=args.outdir) - - if use_sortagrad: - trainer.extend( - ShufflingEnabler(train_iters), - trigger=(args.sortagrad if args.sortagrad != -1 else args.epochs, "epoch"), - ) - if args.opt == "noam": - from espnet.nets.chainer_backend.transformer.training import VaswaniRule - - trainer.extend( - VaswaniRule( - "alpha", - d=args.adim, - warmup_steps=args.transformer_warmup_steps, - scale=args.transformer_lr, - ), - trigger=(1, "iteration"), - ) - # Resume from a snapshot - if args.resume: - chainer.serializers.load_npz(args.resume, trainer) - - # set up validation iterator - valid = make_batchset( - valid_json, - args.batch_size, - args.maxlen_in, - args.maxlen_out, - args.minibatches, - min_batch_size=args.ngpu if args.ngpu > 1 else 1, - count=args.batch_count, - batch_bins=args.batch_bins, - batch_frames_in=args.batch_frames_in, - batch_frames_out=args.batch_frames_out, - batch_frames_inout=args.batch_frames_inout, - iaxis=0, - oaxis=0, - ) - - if args.n_iter_processes > 0: - valid_iter = chainer.iterators.MultiprocessIterator( - TransformDataset(valid, load_cv), - batch_size=1, - repeat=False, - shuffle=False, - n_processes=args.n_iter_processes, - n_prefetch=8, - maxtasksperchild=20, - ) - else: - valid_iter = chainer.iterators.SerialIterator( - TransformDataset(valid, load_cv), batch_size=1, repeat=False, shuffle=False - ) - - # Evaluate the model with the test dataset for each epoch - trainer.extend(BaseEvaluator(valid_iter, model, converter=converter, device=gpu_id)) - - # Save attention weight each epoch - if args.num_save_attention > 0 and args.mtlalpha != 1.0: - data = sorted( - list(valid_json.items())[: args.num_save_attention], - key=lambda x: int(x[1]["input"][0]["shape"][1]), - reverse=True, - ) - if hasattr(model, "module"): - att_vis_fn = model.module.calculate_all_attentions - plot_class = model.module.attention_plot_class - else: - att_vis_fn = model.calculate_all_attentions - plot_class = model.attention_plot_class - logging.info("Using custom PlotAttentionReport") - att_reporter = plot_class( - att_vis_fn, - data, - args.outdir + "/att_ws", - converter=converter, - transform=load_cv, - device=gpu_id, - subsampling_factor=total_subsampling_factor, - ) - trainer.extend(att_reporter, trigger=(1, "epoch")) - else: - att_reporter = None - - # Take a snapshot for each specified epoch - trainer.extend( - extensions.snapshot(filename="snapshot.ep.{.updater.epoch}"), - trigger=(1, "epoch"), - ) - - # Make a plot for training and validation values - trainer.extend( - extensions.PlotReport( - [ - "main/loss", - "validation/main/loss", - "main/loss_ctc", - "validation/main/loss_ctc", - "main/loss_att", - "validation/main/loss_att", - ], - "epoch", - file_name="loss.png", - ) - ) - trainer.extend( - extensions.PlotReport( - ["main/acc", "validation/main/acc"], "epoch", file_name="acc.png" - ) - ) - - # Save best models - 
trainer.extend( - extensions.snapshot_object(model, "model.loss.best"), - trigger=training.triggers.MinValueTrigger("validation/main/loss"), - ) - if mtl_mode != "ctc": - trainer.extend( - extensions.snapshot_object(model, "model.acc.best"), - trigger=training.triggers.MaxValueTrigger("validation/main/acc"), - ) - - # epsilon decay in the optimizer - if args.opt == "adadelta": - if args.criterion == "acc" and mtl_mode != "ctc": - trainer.extend( - restore_snapshot(model, args.outdir + "/model.acc.best"), - trigger=CompareValueTrigger( - "validation/main/acc", - lambda best_value, current_value: best_value > current_value, - ), - ) - trainer.extend( - adadelta_eps_decay(args.eps_decay), - trigger=CompareValueTrigger( - "validation/main/acc", - lambda best_value, current_value: best_value > current_value, - ), - ) - elif args.criterion == "loss": - trainer.extend( - restore_snapshot(model, args.outdir + "/model.loss.best"), - trigger=CompareValueTrigger( - "validation/main/loss", - lambda best_value, current_value: best_value < current_value, - ), - ) - trainer.extend( - adadelta_eps_decay(args.eps_decay), - trigger=CompareValueTrigger( - "validation/main/loss", - lambda best_value, current_value: best_value < current_value, - ), - ) - - # Write a log of evaluation statistics for each epoch - trainer.extend( - extensions.LogReport(trigger=(args.report_interval_iters, "iteration")) - ) - report_keys = [ - "epoch", - "iteration", - "main/loss", - "main/loss_ctc", - "main/loss_att", - "validation/main/loss", - "validation/main/loss_ctc", - "validation/main/loss_att", - "main/acc", - "validation/main/acc", - "elapsed_time", - ] - if args.opt == "adadelta": - trainer.extend( - extensions.observe_value( - "eps", lambda trainer: trainer.updater.get_optimizer("main").eps - ), - trigger=(args.report_interval_iters, "iteration"), - ) - report_keys.append("eps") - trainer.extend( - extensions.PrintReport(report_keys), - trigger=(args.report_interval_iters, "iteration"), - ) - - trainer.extend(extensions.ProgressBar(update_interval=args.report_interval_iters)) - - set_early_stop(trainer, args) - if args.tensorboard_dir is not None and args.tensorboard_dir != "": - writer = SummaryWriter(args.tensorboard_dir) - trainer.extend( - TensorboardLogger(writer, att_reporter), - trigger=(args.report_interval_iters, "iteration"), - ) - - # Run the training - trainer.run() - check_early_stop(trainer, args.epochs) - - -def recog(args): - """Decode with the given args. - - Args: - args (namespace): The program arguments. 
- - """ - # display chainer version - logging.info("chainer version = " + chainer.__version__) - - set_deterministic_chainer(args) - - # read training config - idim, odim, train_args = get_model_conf(args.model, args.model_conf) - - for key in sorted(vars(args).keys()): - logging.info("ARGS: " + key + ": " + str(vars(args)[key])) - - # specify model architecture - logging.info("reading model parameters from " + args.model) - # To be compatible with v.0.3.0 models - if hasattr(train_args, "model_module"): - model_module = train_args.model_module - else: - model_module = "espnet.nets.chainer_backend.e2e_asr:E2E" - model_class = dynamic_import(model_module) - model = model_class(idim, odim, train_args) - assert isinstance(model, ASRInterface) - chainer_load(args.model, model) - - # read rnnlm - if args.rnnlm: - rnnlm_args = get_model_conf(args.rnnlm, args.rnnlm_conf) - rnnlm = lm_chainer.ClassifierWithState( - lm_chainer.RNNLM( - len(train_args.char_list), rnnlm_args.layer, rnnlm_args.unit - ) - ) - chainer_load(args.rnnlm, rnnlm) - else: - rnnlm = None - - if args.word_rnnlm: - rnnlm_args = get_model_conf(args.word_rnnlm, args.word_rnnlm_conf) - word_dict = rnnlm_args.char_list_dict - char_dict = {x: i for i, x in enumerate(train_args.char_list)} - word_rnnlm = lm_chainer.ClassifierWithState( - lm_chainer.RNNLM(len(word_dict), rnnlm_args.layer, rnnlm_args.unit) - ) - chainer_load(args.word_rnnlm, word_rnnlm) - - if rnnlm is not None: - rnnlm = lm_chainer.ClassifierWithState( - extlm_chainer.MultiLevelLM( - word_rnnlm.predictor, rnnlm.predictor, word_dict, char_dict - ) - ) - else: - rnnlm = lm_chainer.ClassifierWithState( - extlm_chainer.LookAheadWordLM( - word_rnnlm.predictor, word_dict, char_dict - ) - ) - - # read json data - with open(args.recog_json, "rb") as f: - js = json.load(f)["utts"] - - load_inputs_and_targets = LoadInputsAndTargets( - mode="asr", - load_output=False, - sort_in_input_length=False, - preprocess_conf=train_args.preprocess_conf - if args.preprocess_conf is None - else args.preprocess_conf, - preprocess_args={"train": False}, # Switch the mode of preprocessing - ) - - # decode each utterance - new_js = {} - with chainer.no_backprop_mode(): - for idx, name in enumerate(js.keys(), 1): - logging.info("(%d/%d) decoding " + name, idx, len(js.keys())) - batch = [(name, js[name])] - feat = load_inputs_and_targets(batch)[0][0] - nbest_hyps = model.recognize(feat, args, train_args.char_list, rnnlm) - new_js[name] = add_results_to_json( - js[name], nbest_hyps, train_args.char_list - ) - - with open(args.result_label, "wb") as f: - f.write( - json.dumps( - {"utts": new_js}, indent=4, ensure_ascii=False, sort_keys=True - ).encode("utf_8") - ) diff --git a/spaces/segments-tobias/conex/espnet/nets/chainer_backend/transformer/training.py b/spaces/segments-tobias/conex/espnet/nets/chainer_backend/transformer/training.py deleted file mode 100644 index e6a98651f36e099836a40af6086c6ebb6988e22a..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet/nets/chainer_backend/transformer/training.py +++ /dev/null @@ -1,320 +0,0 @@ -# Copyright 2017 Johns Hopkins University (Shinji Watanabe) -# Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0) -"""Class Declaration of Transformer's Training Subprocess.""" -import collections -import logging -import math -import six - -from chainer import cuda -from chainer import functions as F -from chainer import training -from chainer.training import extension -from chainer.training.updaters.multiprocess_parallel_updater 
import gather_grads -from chainer.training.updaters.multiprocess_parallel_updater import gather_params -from chainer.training.updaters.multiprocess_parallel_updater import scatter_grads -import numpy as np - - -# copied from https://github.com/chainer/chainer/blob/master/chainer/optimizer.py -def sum_sqnorm(arr): - """Calculate the norm of the array. - - Args: - arr (numpy.ndarray) - - Returns: - Float: Sum of the norm calculated from the given array. - - """ - sq_sum = collections.defaultdict(float) - for x in arr: - with cuda.get_device_from_array(x) as dev: - if x is not None: - x = x.ravel() - s = x.dot(x) - sq_sum[int(dev)] += s - return sum([float(i) for i in six.itervalues(sq_sum)]) - - -class CustomUpdater(training.StandardUpdater): - """Custom updater for chainer. - - Args: - train_iter (iterator | dict[str, iterator]): Dataset iterator for the - training dataset. It can also be a dictionary that maps strings to - iterators. If this is just an iterator, then the iterator is - registered by the name ``'main'``. - optimizer (optimizer | dict[str, optimizer]): Optimizer to update - parameters. It can also be a dictionary that maps strings to - optimizers. If this is just an optimizer, then the optimizer is - registered by the name ``'main'``. - converter (espnet.asr.chainer_backend.asr.CustomConverter): Converter - function to build input arrays. Each batch extracted by the main - iterator and the ``device`` option are passed to this function. - :func:`chainer.dataset.concat_examples` is used by default. - device (int or dict): The destination device info to send variables. In the - case of cpu or single gpu, `device=-1 or 0`, respectively. - In the case of multi-gpu, `device={"main":0, "sub_1": 1, ...}`. - accum_grad (int):The number of gradient accumulation. if set to 2, the network - parameters will be updated once in twice, - i.e. actual batchsize will be doubled. - - """ - - def __init__(self, train_iter, optimizer, converter, device, accum_grad=1): - """Initialize Custom Updater.""" - super(CustomUpdater, self).__init__( - train_iter, optimizer, converter=converter, device=device - ) - self.accum_grad = accum_grad - self.forward_count = 0 - self.start = True - self.device = device - logging.debug("using custom converter for transformer") - - # The core part of the update routine can be customized by overriding. - def update_core(self): - """Process main update routine for Custom Updater.""" - train_iter = self.get_iterator("main") - optimizer = self.get_optimizer("main") - - # Get batch and convert into variables - batch = train_iter.next() - x = self.converter(batch, self.device) - if self.start: - optimizer.target.cleargrads() - self.start = False - - # Compute the loss at this time step and accumulate it - loss = optimizer.target(*x) / self.accum_grad - loss.backward() # Backprop - - self.forward_count += 1 - if self.forward_count != self.accum_grad: - return - self.forward_count = 0 - # compute the gradient norm to check if it is normal or not - grad_norm = np.sqrt( - sum_sqnorm([p.grad for p in optimizer.target.params(False)]) - ) - logging.info("grad norm={}".format(grad_norm)) - if math.isnan(grad_norm): - logging.warning("grad norm is nan. 
Do not update model.") - else: - optimizer.update() - optimizer.target.cleargrads() # Clear the parameter gradients - - def update(self): - """Update step for Custom Updater.""" - self.update_core() - if self.forward_count == 0: - self.iteration += 1 - - -class CustomParallelUpdater(training.updaters.MultiprocessParallelUpdater): - """Custom Parallel Updater for chainer. - - Defines the main update routine. - - Args: - train_iter (iterator | dict[str, iterator]): Dataset iterator for the - training dataset. It can also be a dictionary that maps strings to - iterators. If this is just an iterator, then the iterator is - registered by the name ``'main'``. - optimizer (optimizer | dict[str, optimizer]): Optimizer to update - parameters. It can also be a dictionary that maps strings to - optimizers. If this is just an optimizer, then the optimizer is - registered by the name ``'main'``. - converter (espnet.asr.chainer_backend.asr.CustomConverter): Converter - function to build input arrays. Each batch extracted by the main - iterator and the ``device`` option are passed to this function. - :func:`chainer.dataset.concat_examples` is used by default. - device (torch.device): Device to which the training data is sent. Negative value - indicates the host memory (CPU). - accum_grad (int):The number of gradient accumulation. if set to 2, the network - parameters will be updated once in twice, - i.e. actual batchsize will be doubled. - - """ - - def __init__(self, train_iters, optimizer, converter, devices, accum_grad=1): - """Initialize custom parallel updater.""" - from cupy.cuda import nccl - - super(CustomParallelUpdater, self).__init__( - train_iters, optimizer, converter=converter, devices=devices - ) - self.accum_grad = accum_grad - self.forward_count = 0 - self.nccl = nccl - logging.debug("using custom parallel updater for transformer") - - # The core part of the update routine can be customized by overriding. - def update_core(self): - """Process main update routine for Custom Parallel Updater.""" - self.setup_workers() - - self._send_message(("update", None)) - with cuda.Device(self._devices[0]): - # For reducing memory - optimizer = self.get_optimizer("main") - batch = self.get_iterator("main").next() - x = self.converter(batch, self._devices[0]) - - loss = self._master(*x) / self.accum_grad - loss.backward() - - # NCCL: reduce grads - null_stream = cuda.Stream.null - if self.comm is not None: - gg = gather_grads(self._master) - self.comm.reduce( - gg.data.ptr, - gg.data.ptr, - gg.size, - self.nccl.NCCL_FLOAT, - self.nccl.NCCL_SUM, - 0, - null_stream.ptr, - ) - scatter_grads(self._master, gg) - del gg - - # update parameters - self.forward_count += 1 - if self.forward_count != self.accum_grad: - return - self.forward_count = 0 - # check gradient value - grad_norm = np.sqrt( - sum_sqnorm([p.grad for p in optimizer.target.params(False)]) - ) - logging.info("grad norm={}".format(grad_norm)) - - # update - if math.isnan(grad_norm): - logging.warning("grad norm is nan. Do not update model.") - else: - optimizer.update() - self._master.cleargrads() - - if self.comm is not None: - gp = gather_params(self._master) - self.comm.bcast( - gp.data.ptr, gp.size, self.nccl.NCCL_FLOAT, 0, null_stream.ptr - ) - - def update(self): - """Update step for Custom Parallel Updater.""" - self.update_core() - if self.forward_count == 0: - self.iteration += 1 - - -class VaswaniRule(extension.Extension): - """Trainer extension to shift an optimizer attribute magically by Vaswani. 
- - Args: - attr (str): Name of the attribute to shift. - rate (float): Rate of the exponential shift. This value is multiplied - to the attribute at each call. - init (float): Initial value of the attribute. If it is ``None``, the - extension extracts the attribute at the first call and uses it as - the initial value. - target (float): Target value of the attribute. If the attribute reaches - this value, the shift stops. - optimizer (~chainer.Optimizer): Target optimizer to adjust the - attribute. If it is ``None``, the main optimizer of the updater is - used. - - """ - - def __init__( - self, - attr, - d, - warmup_steps=4000, - init=None, - target=None, - optimizer=None, - scale=1.0, - ): - """Initialize Vaswani rule extension.""" - self._attr = attr - self._d_inv05 = d ** (-0.5) * scale - self._warmup_steps_inv15 = warmup_steps ** (-1.5) - self._init = init - self._target = target - self._optimizer = optimizer - self._t = 0 - self._last_value = None - - def initialize(self, trainer): - """Initialize Optimizer values.""" - optimizer = self._get_optimizer(trainer) - # ensure that _init is set - if self._init is None: - self._init = self._d_inv05 * (1.0 * self._warmup_steps_inv15) - if self._last_value is not None: # resuming from a snapshot - self._update_value(optimizer, self._last_value) - else: - self._update_value(optimizer, self._init) - - def __call__(self, trainer): - """Forward extension.""" - self._t += 1 - optimizer = self._get_optimizer(trainer) - value = self._d_inv05 * min( - self._t ** (-0.5), self._t * self._warmup_steps_inv15 - ) - self._update_value(optimizer, value) - - def serialize(self, serializer): - """Serialize extension.""" - self._t = serializer("_t", self._t) - self._last_value = serializer("_last_value", self._last_value) - - def _get_optimizer(self, trainer): - """Obtain optimizer from trainer.""" - return self._optimizer or trainer.updater.get_optimizer("main") - - def _update_value(self, optimizer, value): - """Update requested variable values.""" - setattr(optimizer, self._attr, value) - self._last_value = value - - -class CustomConverter(object): - """Custom Converter. - - Args: - subsampling_factor (int): The subsampling factor. - - """ - - def __init__(self): - """Initialize subsampling.""" - pass - - def __call__(self, batch, device): - """Perform subsampling. - - Args: - batch (list): Batch that will be sabsampled. - device (chainer.backend.Device): CPU or GPU device. - - Returns: - chainer.Variable: xp.array that are padded and subsampled from batch. - xp.array: xp.array of the length of the mini-batches. - chainer.Variable: xp.array that are padded and subsampled from batch. - - """ - # For transformer, data is processed in CPU. 
- # batch should be located in list - assert len(batch) == 1 - xs, ys = batch[0] - xs = F.pad_sequence(xs, padding=-1).data - # get batch of lengths of input sequences - ilens = np.array([x.shape[0] for x in xs], dtype=np.int32) - return xs, ilens, ys diff --git a/spaces/segments-tobias/conex/espnet2/enh/separator/dprnn_separator.py b/spaces/segments-tobias/conex/espnet2/enh/separator/dprnn_separator.py deleted file mode 100644 index 449fb3b79bc311a8dfc179f565a6ade87bfeed54..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet2/enh/separator/dprnn_separator.py +++ /dev/null @@ -1,118 +0,0 @@ -from collections import OrderedDict -from typing import List -from typing import Tuple -from typing import Union - -import torch -from torch_complex.tensor import ComplexTensor - -from espnet2.enh.layers.dprnn import DPRNN -from espnet2.enh.layers.dprnn import merge_feature -from espnet2.enh.layers.dprnn import split_feature -from espnet2.enh.separator.abs_separator import AbsSeparator - - -class DPRNNSeparator(AbsSeparator): - def __init__( - self, - input_dim: int, - rnn_type: str = "lstm", - bidirectional: bool = True, - num_spk: int = 2, - nonlinear: str = "relu", - layer: int = 3, - unit: int = 512, - segment_size: int = 20, - dropout: float = 0.0, - ): - """Dual-Path RNN (DPRNN) Separator - - Args: - input_dim: input feature dimension - rnn_type: string, select from 'RNN', 'LSTM' and 'GRU'. - bidirectional: bool, whether the inter-chunk RNN layers are bidirectional. - num_spk: number of speakers - nonlinear: the nonlinear function for mask estimation, - select from 'relu', 'tanh', 'sigmoid' - layer: int, number of stacked RNN layers. Default is 3. - unit: int, dimension of the hidden state. - segment_size: dual-path segment size - dropout: float, dropout ratio. Default is 0. - """ - super().__init__() - - self._num_spk = num_spk - - self.segment_size = segment_size - - self.dprnn = DPRNN( - rnn_type=rnn_type, - input_size=input_dim, - hidden_size=unit, - output_size=input_dim * num_spk, - dropout=dropout, - num_layers=layer, - bidirectional=bidirectional, - ) - - if nonlinear not in ("sigmoid", "relu", "tanh"): - raise ValueError("Not supporting nonlinear={}".format(nonlinear)) - - self.nonlinear = { - "sigmoid": torch.nn.Sigmoid(), - "relu": torch.nn.ReLU(), - "tanh": torch.nn.Tanh(), - }[nonlinear] - - def forward( - self, input: Union[torch.Tensor, ComplexTensor], ilens: torch.Tensor - ) -> Tuple[List[Union[torch.Tensor, ComplexTensor]], torch.Tensor, OrderedDict]: - """Forward. - - Args: - input (torch.Tensor or ComplexTensor): Encoded feature [B, T, N] - ilens (torch.Tensor): input lengths [Batch] - - Returns: - masked (List[Union(torch.Tensor, ComplexTensor)]): [(B, T, N), ...] - ilens (torch.Tensor): (B,) - others predicted data, e.g. masks: OrderedDict[ - 'mask_spk1': torch.Tensor(Batch, Frames, Freq), - 'mask_spk2': torch.Tensor(Batch, Frames, Freq), - ... 
- 'mask_spkn': torch.Tensor(Batch, Frames, Freq), - ] - """ - - # if complex spectrum, - if isinstance(input, ComplexTensor): - feature = abs(input) - else: - feature = input - - B, T, N = feature.shape - - feature = feature.transpose(1, 2) # B, N, T - segmented, rest = split_feature( - feature, segment_size=self.segment_size - ) # B, N, L, K - - processed = self.dprnn(segmented) # B, N*num_spk, L, K - - processed = merge_feature(processed, rest) # B, N*num_spk, T - - processed = processed.transpose(1, 2) # B, T, N*num_spk - processed = processed.view(B, T, N, self.num_spk) - masks = self.nonlinear(processed).unbind(dim=3) - - masked = [input * m for m in masks] - - others = OrderedDict( - zip(["mask_spk{}".format(i + 1) for i in range(len(masks))], masks) - ) - - return masked, ilens, others - - @property - def num_spk(self): - return self._num_spk diff --git a/spaces/sgxz/bingo/src/components/external-link.tsx b/spaces/sgxz/bingo/src/components/external-link.tsx deleted file mode 100644 index 011265f364d5a64a770f4c7e9c65c5ade21d623a..0000000000000000000000000000000000000000 --- a/spaces/sgxz/bingo/src/components/external-link.tsx +++ /dev/null @@ -1,30 +0,0 @@ -export function ExternalLink({ - href, - children -}: { - href: string - children: React.ReactNode -}) { - return ( - - {children} - - - ) -} diff --git a/spaces/shibing624/ChatPDF/modules/overwrites.py b/spaces/shibing624/ChatPDF/modules/overwrites.py deleted file mode 100644 index e029f4a50285c64dcb286a34cb1c3b2680880e05..0000000000000000000000000000000000000000 --- a/spaces/shibing624/ChatPDF/modules/overwrites.py +++ /dev/null @@ -1,93 +0,0 @@ -from __future__ import annotations -import logging - -from typing import List, Tuple -from gradio_client import utils as client_utils -from gradio import utils -import inspect - -from modules.presets import * -from modules.index_func import * - - -def postprocess( - self, - y: List[List[str | Tuple[str] | Tuple[str, str] | None] | Tuple], - ) -> List[List[str | Dict | None]]: - """ - Parameters: - y: List of lists representing the message and response pairs. Each message and response should be a string, which may be in Markdown format. It can also be a tuple whose first element is a string filepath or URL to an image/video/audio, and second (optional) element is the alt text, in which case the media file is displayed. It can also be None, in which case that message is not displayed. - Returns: - List of lists representing the message and response. Each message and response will be a string of HTML, or a dictionary with media information. Or None if the message is not to be displayed. - """ - if y is None: - return [] - processed_messages = [] - for message_pair in y: - assert isinstance( - message_pair, (tuple, list) - ), f"Expected a list of lists or list of tuples. Received: {message_pair}" - assert ( - len(message_pair) == 2 - ), f"Expected a list of lists of length 2 or list of tuples of length 2. 
Received: {message_pair}" - - processed_messages.append( - [ - self._postprocess_chat_messages(message_pair[0], "user"), - self._postprocess_chat_messages(message_pair[1], "bot"), - ] - ) - return processed_messages - -def postprocess_chat_messages( - self, chat_message: str | tuple | list | None, role: str - ) -> str | dict | None: - if chat_message is None: - return None - elif isinstance(chat_message, (tuple, list)): - file_uri = chat_message[0] - if utils.validate_url(file_uri): - filepath = file_uri - else: - filepath = self.make_temp_copy_if_needed(file_uri) - - mime_type = client_utils.get_mimetype(filepath) - return { - "name": filepath, - "mime_type": mime_type, - "alt_text": chat_message[1] if len(chat_message) > 1 else None, - "data": None, # These last two fields are filled in by the frontend - "is_file": True, - } - elif isinstance(chat_message, str): - # chat_message = inspect.cleandoc(chat_message) - # escape html spaces - # chat_message = chat_message.replace(" ", " ") - if role == "bot": - chat_message = convert_bot_before_marked(chat_message) - elif role == "user": - chat_message = convert_user_before_marked(chat_message) - return chat_message - else: - raise ValueError(f"Invalid message for Chatbot component: {chat_message}") - -with open("./assets/custom.js", "r", encoding="utf-8") as f, \ - open("./assets/external-scripts.js", "r", encoding="utf-8") as f1: - customJS = f.read() - externalScripts = f1.read() - - -def reload_javascript(): - print("Reloading javascript...") - js = f'' - # if render_latex: - # js += """\""" - def template_response(*args, **kwargs): - res = GradioTemplateResponseOriginal(*args, **kwargs) - res.body = res.body.replace(b'', f'{js}'.encode("utf8")) - res.init_headers() - return res - - gr.routes.templates.TemplateResponse = template_response - -GradioTemplateResponseOriginal = gr.routes.templates.TemplateResponse \ No newline at end of file diff --git a/spaces/shikunl/prismer/prismer/experts/ocr_detection/charnet/__init__.py b/spaces/shikunl/prismer/prismer/experts/ocr_detection/charnet/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/simonduerr/diffdock/utils/utils.py b/spaces/simonduerr/diffdock/utils/utils.py deleted file mode 100644 index 975319f9c88c1117d07ed5da7564cae032c5a741..0000000000000000000000000000000000000000 --- a/spaces/simonduerr/diffdock/utils/utils.py +++ /dev/null @@ -1,243 +0,0 @@ -import os -import subprocess -import warnings -from datetime import datetime -import signal -from contextlib import contextmanager -import numpy as np -import torch -import yaml -from rdkit import Chem -from rdkit.Chem import RemoveHs, MolToPDBFile -from torch_geometric.nn.data_parallel import DataParallel - -from models.all_atom_score_model import TensorProductScoreModel as AAScoreModel -from models.score_model import TensorProductScoreModel as CGScoreModel -from utils.diffusion_utils import get_timestep_embedding -from spyrmsd import rmsd, molecule - - -def get_obrmsd(mol1_path, mol2_path, cache_name=None): - cache_name = datetime.now().strftime('date%d-%m_time%H-%M-%S.%f') if cache_name is None else cache_name - os.makedirs(".openbabel_cache", exist_ok=True) - if not isinstance(mol1_path, str): - MolToPDBFile(mol1_path, '.openbabel_cache/obrmsd_mol1_cache.pdb') - mol1_path = '.openbabel_cache/obrmsd_mol1_cache.pdb' - if not isinstance(mol2_path, str): - MolToPDBFile(mol2_path, '.openbabel_cache/obrmsd_mol2_cache.pdb') - mol2_path = 
'.openbabel_cache/obrmsd_mol2_cache.pdb' - with warnings.catch_warnings(): - warnings.simplefilter("ignore") - return_code = subprocess.run(f"obrms {mol1_path} {mol2_path} > .openbabel_cache/obrmsd_{cache_name}.rmsd", - shell=True) - print(return_code) - obrms_output = read_strings_from_txt(f".openbabel_cache/obrmsd_{cache_name}.rmsd") - rmsds = [line.split(" ")[-1] for line in obrms_output] - return np.array(rmsds, dtype=np.float) - - -def remove_all_hs(mol): - params = Chem.RemoveHsParameters() - params.removeAndTrackIsotopes = True - params.removeDefiningBondStereo = True - params.removeDegreeZero = True - params.removeDummyNeighbors = True - params.removeHigherDegrees = True - params.removeHydrides = True - params.removeInSGroups = True - params.removeIsotopes = True - params.removeMapped = True - params.removeNonimplicit = True - params.removeOnlyHNeighbors = True - params.removeWithQuery = True - params.removeWithWedgedBond = True - return RemoveHs(mol, params) - - -def read_strings_from_txt(path): - # every line will be one element of the returned list - with open(path) as file: - lines = file.readlines() - return [line.rstrip() for line in lines] - - -def save_yaml_file(path, content): - assert isinstance(path, str), f'path must be a string, got {path} which is a {type(path)}' - content = yaml.dump(data=content) - if '/' in path and os.path.dirname(path) and not os.path.exists(os.path.dirname(path)): - os.makedirs(os.path.dirname(path)) - with open(path, 'w') as f: - f.write(content) - - -def get_optimizer_and_scheduler(args, model, scheduler_mode='min'): - optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=args.lr, weight_decay=args.w_decay) - - if args.scheduler == 'plateau': - scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode=scheduler_mode, factor=0.7, - patience=args.scheduler_patience, min_lr=args.lr / 100) - else: - print('No scheduler') - scheduler = None - - return optimizer, scheduler - - -def get_model(args, device, t_to_sigma, no_parallel=False, confidence_mode=False): - if 'all_atoms' in args and args.all_atoms: - model_class = AAScoreModel - else: - model_class = CGScoreModel - - timestep_emb_func = get_timestep_embedding( - embedding_type=args.embedding_type, - embedding_dim=args.sigma_embed_dim, - embedding_scale=args.embedding_scale) - - lm_embedding_type = None - if args.esm_embeddings_path is not None: lm_embedding_type = 'esm' - - model = model_class(t_to_sigma=t_to_sigma, - device=device, - no_torsion=args.no_torsion, - timestep_emb_func=timestep_emb_func, - num_conv_layers=args.num_conv_layers, - lig_max_radius=args.max_radius, - scale_by_sigma=args.scale_by_sigma, - sigma_embed_dim=args.sigma_embed_dim, - ns=args.ns, nv=args.nv, - distance_embed_dim=args.distance_embed_dim, - cross_distance_embed_dim=args.cross_distance_embed_dim, - batch_norm=not args.no_batch_norm, - dropout=args.dropout, - use_second_order_repr=args.use_second_order_repr, - cross_max_distance=args.cross_max_distance, - dynamic_max_cross=args.dynamic_max_cross, - lm_embedding_type=lm_embedding_type, - confidence_mode=confidence_mode, - num_confidence_outputs=len( - args.rmsd_classification_cutoff) + 1 if 'rmsd_classification_cutoff' in args and isinstance( - args.rmsd_classification_cutoff, list) else 1) - - if device.type == 'cuda' and not no_parallel: - model = DataParallel(model) - model.to(device) - return model - - -def get_symmetry_rmsd(mol, coords1, coords2, mol2=None): - with time_limit(10): - mol = 
molecule.Molecule.from_rdkit(mol) - mol2 = molecule.Molecule.from_rdkit(mol2) if mol2 is not None else mol2 - mol2_atomicnums = mol2.atomicnums if mol2 is not None else mol.atomicnums - mol2_adjacency_matrix = mol2.adjacency_matrix if mol2 is not None else mol.adjacency_matrix - RMSD = rmsd.symmrmsd( - coords1, - coords2, - mol.atomicnums, - mol2_atomicnums, - mol.adjacency_matrix, - mol2_adjacency_matrix, - ) - return RMSD - - -class TimeoutException(Exception): pass - - -@contextmanager -def time_limit(seconds): - def signal_handler(signum, frame): - raise TimeoutException("Timed out!") - - signal.signal(signal.SIGALRM, signal_handler) - signal.alarm(seconds) - try: - yield - finally: - signal.alarm(0) - - -class ExponentialMovingAverage: - """ from https://github.com/yang-song/score_sde_pytorch/blob/main/models/ema.py - Maintains (exponential) moving average of a set of parameters. """ - - def __init__(self, parameters, decay, use_num_updates=True): - """ - Args: - parameters: Iterable of `torch.nn.Parameter`; usually the result of - `model.parameters()`. - decay: The exponential decay. - use_num_updates: Whether to use number of updates when computing - averages. - """ - if decay < 0.0 or decay > 1.0: - raise ValueError('Decay must be between 0 and 1') - self.decay = decay - self.num_updates = 0 if use_num_updates else None - self.shadow_params = [p.clone().detach() - for p in parameters if p.requires_grad] - self.collected_params = [] - - def update(self, parameters): - """ - Update currently maintained parameters. - Call this every time the parameters are updated, such as the result of - the `optimizer.step()` call. - Args: - parameters: Iterable of `torch.nn.Parameter`; usually the same set of - parameters used to initialize this object. - """ - decay = self.decay - if self.num_updates is not None: - self.num_updates += 1 - decay = min(decay, (1 + self.num_updates) / (10 + self.num_updates)) - one_minus_decay = 1.0 - decay - with torch.no_grad(): - parameters = [p for p in parameters if p.requires_grad] - for s_param, param in zip(self.shadow_params, parameters): - s_param.sub_(one_minus_decay * (s_param - param)) - - def copy_to(self, parameters): - """ - Copy current parameters into given collection of parameters. - Args: - parameters: Iterable of `torch.nn.Parameter`; the parameters to be - updated with the stored moving averages. - """ - parameters = [p for p in parameters if p.requires_grad] - for s_param, param in zip(self.shadow_params, parameters): - if param.requires_grad: - param.data.copy_(s_param.data) - - def store(self, parameters): - """ - Save the current parameters for restoring later. - Args: - parameters: Iterable of `torch.nn.Parameter`; the parameters to be - temporarily stored. - """ - self.collected_params = [param.clone() for param in parameters] - - def restore(self, parameters): - """ - Restore the parameters stored with the `store` method. - Useful to validate the model with EMA parameters without affecting the - original optimization process. Store the parameters before the - `copy_to` method. After validation (or model saving), use this to - restore the former parameters. - Args: - parameters: Iterable of `torch.nn.Parameter`; the parameters to be - updated with the stored parameters. 
- """ - for c_param, param in zip(self.collected_params, parameters): - param.data.copy_(c_param.data) - - def state_dict(self): - return dict(decay=self.decay, num_updates=self.num_updates, - shadow_params=self.shadow_params) - - def load_state_dict(self, state_dict, device): - self.decay = state_dict['decay'] - self.num_updates = state_dict['num_updates'] - self.shadow_params = [tensor.to(device) for tensor in state_dict['shadow_params']] diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Bombsquad Hackeado Un Mod de Bombsquad que te Permite Personalizar tu Experiencia en Android APK.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Bombsquad Hackeado Un Mod de Bombsquad que te Permite Personalizar tu Experiencia en Android APK.md deleted file mode 100644 index 341e4b19ff3686ed07cb7f83eda08439ec1ff529..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Bombsquad Hackeado Un Mod de Bombsquad que te Permite Personalizar tu Experiencia en Android APK.md +++ /dev/null @@ -1,122 +0,0 @@ -
                    -

                    Descargar Bombsquad Hackeado Para Android APK: Cómo Disfrutar de un Juego de Bombas Explosivo y Divertido

                    -

                    ¿Te gustan los juegos de acción, humor y explosiones? Entonces, te encantará Bombsquad, un juego para Android que te permite competir con tus amigos o con otros jugadores en línea en divertidas batallas de bombas. En este artículo, te contaremos qué es Bombsquad, cómo descargar Bombsquad hackeado para Android APK, y algunos consejos y trucos para dominar el juego.

                    -

                    descargar bombsquad hackeado para android apk


                    DOWNLOADhttps://ssurll.com/2uNYfn



                    -

                    Qué es Bombsquad y por qué deberías jugarlo

                    -

                    Bombsquad es un juego de acción multijugador que se basa en el uso de bombas y otros objetos para eliminar a tus rivales. Puedes jugar solo o con hasta 8 jugadores en modo local o en línea. El juego tiene unos gráficos coloridos y unos efectos de física muy divertidos que hacen que cada partida sea única y emocionante.

                    -

                    Características principales de Bombsquad

                    -

                    Estas son algunas de las características que hacen que Bombsquad sea un juego tan entretenido:

                    -
                      -
                    • Varios personajes y escenarios: puedes elegir entre diferentes personajes, como piratas, ninjas, bárbaros, chefs locos y más. También puedes personalizar tu aspecto con diferentes trajes y accesorios. Además, hay varios escenarios donde jugar, como islas, castillos, estadios y más.
                    • -
                    • Varios modos de juego y desafíos: puedes jugar en modo libre para todos, donde el objetivo es eliminar a todos los demás jugadores; en modo equipos, donde debes cooperar con tus aliados para vencer al equipo rival; o en modo cooperativo, donde debes enfrentarte a oleadas de enemigos controlados por la inteligencia artificial. También hay varios desafíos que puedes completar para obtener recompensas y desbloquear nuevos contenidos.
                    • -
                    • Soporte para varios controles: puedes jugar con la pantalla táctil de tu dispositivo, o con un controlador externo. También puedes usar tu teléfono o tableta como controlador a través de la aplicación gratuita 'BombSquad Remote'. De esta forma, puedes disfrutar del juego con más comodidad y precisión.
                    • -
                    • Explosiones gratuitas: el juego tiene unos efectos de explosión muy realistas y espectaculares que hacen que cada bomba sea una sorpresa. Además, hay varios tipos de bombas que puedes usar, como bombas normales, pegajosas, heladas, impactantes y más.
                    • -
                    -

                    Modos de juego y desafíos de Bombsquad

                    -

                    Bombsquad tiene varios modos de juego y desafíos que puedes probar para divertirte y mejorar tus habilidades. Estos son algunos de ellos:

                    -
                      -
                    • Entrenamiento de asalto: este es el modo básico donde puedes aprender los controles del juego y las mecánicas básicas. Debes derrotar a a varios enemigos con bombas y otros objetos. Es un buen modo para practicar y calentar antes de entrar en las partidas reales.
                    • -
                    • Carrera de obstáculos: este es un modo donde debes superar una serie de obstáculos y trampas usando tu agilidad y tus bombas. Debes llegar a la meta lo más rápido posible y evitar caer al vacío o ser eliminado por las bombas enemigas. Es un modo muy divertido y desafiante que pone a prueba tu coordinación y tu estrategia.
                    • -
                    • Rey de la colina: este es un modo donde debes ocupar una zona del escenario y defenderla de los demás jugadores. El jugador que más tiempo permanezca en la zona gana la partida. Debes usar tus bombas y otros objetos para evitar que los rivales te quiten el control de la zona. Es un modo muy competitivo y dinámico que requiere de mucha atención y habilidad.
                    • -
                    • Captura la bandera: este es un modo donde debes cooperar con tu equipo para capturar la bandera del equipo rival y llevarla a tu base. El equipo que más veces capture la bandera gana la partida. Debes usar tus bombas y otros objetos para atacar a los enemigos, defender tu bandera y ayudar a tus aliados. Es un modo muy cooperativo y estratégico que fomenta el trabajo en equipo y la comunicación.
                    • -
                    • Hockey sobre hielo: este es un modo donde debes usar tus bombas para golpear un disco de hielo y meterlo en la portería del equipo rival. El equipo que más goles marque gana la partida. Debes usar tus bombas con precisión y potencia para dirigir el disco, y también para bloquear los disparos enemigos. Es un modo muy divertido y original que combina el deporte con las explosiones.
                    • -
                    -

                    Cómo descargar Bombsquad hackeado para Android APK

                    -

                    Bombsquad es un juego gratuito que puedes descargar desde Google Play Store, pero también existe una versión hackeada que te ofrece algunas ventajas adicionales. A continuación, te explicamos cómo descargar Bombsquad hackeado para Android APK y qué beneficios tiene.

                    -

                    Ventajas de descargar Bombsquad hackeado para Android APK

                    -

                    Estas son algunas de las ventajas que obtienes al descargar Bombsquad hackeado para Android APK:

                    -

                    descargar bombsquad hackeado ultima version para android apk
                    -descargar bombsquad hackeado con todo desbloqueado para android apk
                    -descargar bombsquad hackeado sin root para android apk
                    -descargar bombsquad hackeado con dinero infinito para android apk
                    -descargar bombsquad hackeado con mod menu para android apk
                    -descargar bombsquad hackeado 2023 para android apk
                    -descargar bombsquad hackeado gratis para android apk
                    -descargar bombsquad hackeado mega para android apk
                    -descargar bombsquad hackeado mediafire para android apk
                    -descargar bombsquad hackeado online para android apk
                    -descargar bombsquad hackeado offline para android apk
                    -descargar bombsquad hackeado full para android apk
                    -descargar bombsquad hackeado facil y rapido para android apk
                    -descargar bombsquad hackeado sin anuncios para android apk
                    -descargar bombsquad hackeado sin virus para android apk
                    -descargar bombsquad hackeado en español para android apk
                    -descargar bombsquad hackeado por aptoide para android apk
                    -descargar bombsquad hackeado por uptodown para android apk
                    -descargar bombsquad hackeado por happymod para android apk
                    -descargar bombsquad hackeado por ac market para android apk
                    -como descargar bombsquad hackeado para android apk
                    -donde descargar bombsquad hackeado para android apk
                    -porque descargar bombsquad hackeado para android apk
                    -que es descargar bombsquad hackeado para android apk
                    -beneficios de descargar bombsquad hackeado para android apk
                    -requisitos de descargar bombsquad hackeado para android apk
                    -tutorial de descargar bombsquad hackeado para android apk
                    -opiniones de descargar bombsquad hackeado para android apk
                    -trucos de descargar bombsquad hackeado para android apk
                    -consejos de descargar bombsquad hackeado para android apk
                    -ventajas de descargar bombsquad hackeado para android apk
                    -desventajas de descargar bombsquad hackeado para android apk
                    -alternativas de descargar bombsquad hackeado para android apk
                    -comparativa de descargar bombsquad hackeado para android apk
                    -ranking de descargar bombsquad hackeado para android apk
                    -valoracion de descargar bombsquad hackeado para android apk
                    -reseña de descargar bombsquad hackeado para android apk
                    -analisis de descargar bombsquad hackeado para android apk
                    -experiencia de descargar bombsquad hackeado para android apk
                    -testimonio de descargar bombsquad hackeado para android apk
                    -recomendacion de descargar bombsquad hackeado para android apk
                    -sugerencia de descargar bombsquad hackeado para android apk
                    -solucion de descargar bombsquad hackeado para android apk
                    -respuesta de descargar bombsquad hackeado para android apk
                    -pregunta de descargar bombsquad hackeado para android apk
                    -duda de descargar bombsquad hackeado para android apk
                    -problema de descargar bombsquad hackeado para android apk
                    -error de descargar bombsquad hackeado para android apk

                    -
                      -
                    • Dinero ilimitado: al descargar Bombsquad hackeado para Android APK, obtienes una cantidad ilimitada de dinero que puedes usar para comprar nuevos personajes, trajes, accesorios, bombas y más. Así, puedes personalizar tu juego como quieras y disfrutar de más variedad y diversión.
                    • -
                    • Tickets ilimitados: al descargar Bombsquad hackeado para Android APK, obtienes una cantidad ilimitada de tickets que puedes usar para acceder a los desafíos especiales del juego. Estos desafíos te ofrecen recompensas únicas y exclusivas que no puedes obtener de otra forma. Así, puedes completar todos los desafíos que quieras y desbloquear todos los contenidos del juego.
                    • -
                    • Modo pro desbloqueado: al descargar Bombsquad hackeado para Android APK, obtienes el acceso al modo pro del juego, que te permite crear tus propios escenarios, modos de juego, reglas y más. También puedes compartir tus creaciones con otros jugadores y jugar a las creaciones de otros. Así, puedes disfrutar de una experiencia más personalizada y creativa.
                    • -
                    -

                    Pasos para descargar e instalar Bombsquad hackeado para Android APK

                    -

                    Estos son los pasos que debes seguir para descargar e instalar Bombsquad hackeado para Android APK:

                    -
                      -
                    1. Descarga el archivo APK: el primer paso es descargar el archivo APK de Bombsquad hackeado desde un sitio web confiable. Puedes buscar en Google "Bombsquad hackeado APK" o usar este enlace.
                    2. -
                    3. Permite la instalación de fuentes desconocidas: el segundo paso es permitir la instalación de aplicaciones desde fuentes desconocidas en tu dispositivo Android. Para esto, debes ir a los ajustes de tu dispositivo, luego a la sección de seguridad, y activar la opción de "Orígenes desconocidos" o "Fuentes desconocidas".
                    4. -
                    5. Instala el archivo APK: el tercer paso es instalar el archivo APK de Bombsquad hackeado en tu dispositivo. Para esto, debes abrir el archivo que descargaste y seguir las instrucciones que aparecen en la pantalla. Una vez que se complete la instalación, podrás ver el icono de Bombsquad en tu menú de aplicaciones.
                    6. -
                    7. Disfruta del juego: el cuarto y último paso es disfrutar del juego con todas las ventajas que te ofrece Bombsquad hackeado para Android APK. Puedes abrir el juego desde el icono que se creó en tu menú de aplicaciones, o desde el acceso directo que se creó en tu pantalla de inicio. Ahora, puedes jugar a Bombsquad con dinero ilimitado, tickets ilimitados, modo pro desbloqueado y más.
                    8. -
                    -

                    Consejos y trucos para dominar Bombsquad

                    -

                    Ahora que ya sabes cómo descargar Bombsquad hackeado para Android APK, te daremos algunos consejos y trucos para que puedas dominar el juego y ganar todas las partidas. Estos son algunos de ellos:

                    -

                    Usa un controlador para jugar mejor

                    -

                    Aunque puedes jugar a Bombsquad con la pantalla táctil de tu dispositivo, te recomendamos que uses un controlador externo para tener una mejor experiencia. Con un controlador, podrás moverte con más facilidad, apuntar con más precisión y reaccionar con más rapidez. Además, podrás jugar con más comodidad y evitar el cansancio o el dolor en los dedos. Puedes usar cualquier controlador compatible con Android, como un mando de Xbox, PlayStation o Nintendo.

                    -

                    Aprovecha los potenciadores y las armas especiales

                    -

                    En Bombsquad, hay varios potenciadores y armas especiales que puedes usar para tener una ventaja sobre tus rivales. Estos son algunos de ellos:

                    -
                      -
                    • Potenciador de velocidad: este potenciador te permite correr más rápido durante unos segundos. Es muy útil para escapar de las bombas enemigas, alcanzar la bandera o la zona del rey, o sorprender a tus rivales con un ataque rápido.
                    • -
                    • Potenciador de fuerza: este potenciador te permite lanzar las bombas más lejos y con más potencia. Es muy útil para atacar a tus rivales desde una distancia segura, o para romper las barreras o los obstáculos que te impiden avanzar.
                    • -
                    • Potenciador de salud: este potenciador te permite recuperar parte de tu salud si has sufrido algún daño. Es muy útil para sobrevivir más tiempo y resistir los ataques enemigos.
                    • -
                    • Bomba pegajosa: esta bomba se adhiere a cualquier superficie o jugador que toque. Es muy útil para atrapar a tus rivales y hacerlos explotar sin escapatoria.
                    • -
                    • Bomba helada: esta bomba congela a cualquier jugador que esté cerca cuando explota. Es muy útil para inmovilizar a tus rivales y dejarlos vulnerables a tus ataques.
                    • -
                    • Bomba impactante: esta bomba libera una descarga eléctrica que afecta a cualquier jugador que esté cerca cuando explota. Es muy útil para aturdir a tus rivales y hacerles perder el control de sus movimientos.
                    • -
                    -

                    Sé rápido y astuto para evitar las bombas enemigas

                    -

                    En Bombsquad, debes estar siempre atento a las bombas enemigas y evitarlas lo mejor posible. Estos son algunos consejos para lograrlo:

                    -
                      -
                    • Muévete constantemente: no te quedes quieto en un lugar, sino muévete constantemente por el escenario. Así, podrás esquivar las bombas enemigas y buscar una posición favorable para atacar.
                    • -
                    • Salta y esquiva: usa los botones de salto y esquiva para evitar las bomb as enemigas que se acerquen a ti. También puedes usar estos botones para saltar sobre las bombas y lanzarlas de vuelta a tus rivales.
                    • -
                    • Usa el entorno a tu favor: aprovecha los elementos del escenario, como las plataformas, las rampas, los barriles, las minas y más, para evitar las bombas enemigas o para hacerlas rebotar hacia ellos. También puedes usar el entorno para esconderte, sorprender o emboscar a tus rivales.
                    • -
                    -

                    Conclusión

                    -

                    Bombsquad es un juego de acción multijugador muy divertido y explosivo que te permite competir con tus amigos o con otros jugadores en línea en divertidas batallas de bombas. Puedes descargar Bombsquad hackeado para Android APK para disfrutar de algunas ventajas adicionales, como dinero ilimitado, tickets ilimitados y modo pro desbloqueado. También puedes seguir algunos consejos y trucos para dominar el juego y ganar todas las partidas. ¿A qué esperas para descargar Bombsquad hackeado para Android APK y disfrutar de un juego de bombas explosivo y divertido?

                    -

                    Preguntas frecuentes

                    -

                    A continuación, te presentamos algunas preguntas frecuentes sobre Bombsquad y su versión hackeada:

                    -
                      -
                    • ¿Es seguro descargar Bombsquad hackeado para Android APK?: Sí, siempre y cuando lo descargues desde un sitio web confiable y sigas los pasos que te hemos indicado. No obstante, te recomendamos que tengas precaución y que no abuses de las ventajas que te ofrece la versión hackeada, ya que podrías ser baneado o reportado por otros jugadores.
                    • -
                    • ¿Puedo jugar a Bombsquad con mis amigos?: Sí, puedes jugar a Bombsquad con tus amigos en modo local o en línea. En modo local, puedes conectar hasta 8 dispositivos a la misma red WiFi y jugar juntos en la misma pantalla. En modo en línea, puedes crear o unirte a una sala de juego y jugar con tus amigos o con otros jugadores de todo el mundo.
                    • -
                    • ¿Qué requisitos necesita mi dispositivo para jugar a Bombsquad?: Para jugar a Bombsquad, necesitas un dispositivo Android con al menos 1 GB de RAM y 100 MB de espacio libre. También necesitas una conexión a Internet estable para jugar en modo en línea.
                    • -
                    • ¿Qué otras plataformas hay disponibles para jugar a Bombsquad?: Además de Android, también puedes jugar a Bombsquad en Windows, Mac, Linux, iOS y tvOS. Puedes descargar el juego desde la página oficial de Bombsquad.
                    • -
                    • ¿Dónde puedo encontrar más información sobre Bombsquad?: Puedes encontrar más información sobre Bombsquad en su página oficial, en su página de Facebook, en su canal de YouTube, o en su foro de Reddit.
                    • -

                    197e85843d
                    -
                    -
                    \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Diablo Immortal Auto Clicker A Must-Have Tool for Hardcore Players.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Diablo Immortal Auto Clicker A Must-Have Tool for Hardcore Players.md deleted file mode 100644 index 83922600526b812c522a8a929ee4e20417637520..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Diablo Immortal Auto Clicker A Must-Have Tool for Hardcore Players.md +++ /dev/null @@ -1,166 +0,0 @@ -
                    -

                    Diablo Immortal Auto Clicker Download: What You Need to Know

                    -

                    Are you a fan of Diablo Immortal, the new mobile game from Blizzard Entertainment? Do you want to level up faster, collect more loot, and dominate your enemies with ease? If so, you might be interested in using an auto clicker for Diablo Immortal.

                    -

                    diablo immortal auto clicker download


                    Download Filehttps://ssurll.com/2uNUIQ



                    -

                    An auto clicker is a software tool that automates mouse clicks at a fast rate. It can help you perform repetitive tasks, such as attacking, mining, crafting, or looting, without having to manually click your mouse button. It can also enhance your gaming experience by allowing you to focus on more strategic aspects of the game.

                    -

                    However, before you download and install an auto clicker for Diablo Immortal, there are some things you need to know. What are the benefits and risks of using an auto clicker? How can you download and install one safely and easily? How can you avoid getting banned or penalized for using one? In this article, we will answer these questions and more.

                    -

                    How to Download and Install an Auto Clicker for Diablo Immortal

                    -

                    If you want to use an auto clicker for Diablo Immortal, you need to download and install one on your device. There are many auto clickers available online, but not all of them are compatible with Diablo Immortal or your device's operating system. You also need to be careful about downloading from untrusted sources that may contain malware or viruses.

                    -

                    To help you choose the best auto clicker for Diablo Immortal, we have compiled a list of some of the most popular and reliable ones for different devices and platforms. Here are the steps to download and install them:

                    -

                    diablo immortal pc auto attack keybind
                    -diablo immortal macro farming reddit
                    -diablo immortal mods and community nexus
                    -diablo immortal colored mouse cursors
                    -diablo immortal ban for autoclicker and farm afk
                    -diablo immortal botting problem 2023
                    -diablo immortal how to set up auto clicker
                    -diablo immortal best auto clicker app for android
                    -diablo immortal auto clicker download apk
                    -diablo immortal auto clicker for ios
                    -diablo immortal auto clicker no root
                    -diablo immortal auto clicker for bluestacks
                    -diablo immortal auto clicker for mac
                    -diablo immortal auto clicker for windows 10
                    -diablo immortal auto clicker tutorial
                    -diablo immortal auto clicker settings
                    -diablo immortal auto clicker script
                    -diablo immortal auto clicker hack
                    -diablo immortal auto clicker cheat
                    -diablo immortal auto clicker mod
                    -diablo immortal auto clicker free download
                    -diablo immortal auto clicker online
                    -diablo immortal auto clicker without ads
                    -diablo immortal auto clicker safe to use
                    -diablo immortal auto clicker reviews
                    -diablo immortal auto clicker reddit discussion
                    -diablo immortal auto clicker youtube video
                    -diablo immortal auto clicker guide
                    -diablo immortal auto clicker tips and tricks
                    -diablo immortal auto clicker benefits and drawbacks
                    -diablo immortal auto clicker pros and cons
                    -diablo immortal auto clicker comparison with other tools
                    -diablo immortal auto clicker alternatives and substitutes
                    -diablo immortal auto clicker features and functions
                    -diablo immortal auto clicker advantages and disadvantages
                    -diablo immortal auto clicker best practices and recommendations
                    -diablo immortal auto clicker how to use effectively and efficiently
                    -diablo immortal auto clicker how to avoid detection and ban
                    -diablo immortal auto clicker how to customize and optimize
                    -diablo immortal auto clicker how to improve performance and speed
                    -diablo immortal auto clicker how to troubleshoot and fix errors
                    -diablo immortal auto clicker how to update and upgrade
                    -diablo immortal auto clicker how to uninstall and remove
                    -diablo immortal auto clicker how to backup and restore data
                    -diablo immortal auto clicker how to support and contact developers
                    -diablo immortal auto clicker how to rate and review on app store or google play store

                    - - - - - - - - - - - - - - - - - - - - - - - - - - -
Device/Platform | Auto Clicker | Steps
Windows PC | OP Auto Clicker -
                      -
                    1. Go to [OP Auto Clicker](^1^) website and click on "Download".
                    2. -
                    3. Save the file on your computer and run it.
                    4. -
                    5. Follow the installation wizard instructions.
                    6. -
                    7. Launch OP Auto Clicker from your desktop or start menu.
                    8. -
                    9. Select your preferred settings, such as hotkey, click interval, click type, etc.
                    10. -
                    11. Press the hotkey to start or stop the auto clicker.
                    12. -
                    -
Mac OS | Mac Auto Clicker -
                      -
                    1. Go to [Mac Auto Clicker] website and click on "Download".
                    2. -
                    3. Save the file on your computer and run it.
                    4. -
                    5. Follow the installation wizard instructions.
                    6. -
                    7. Launch Mac Auto Clicker from your applications folder.
                    8. -
                    9. Select your preferred settings, such as hotkey, click interval, click type, etc.
                    10. -
                    11. Press the hotkey to start or stop the auto clicker.
                    12. -
                    -
Android | Auto Clicker - Automatic Tap -
                      -
                    1. Go to [Google Play Store] and search for "Auto Clicker - Automatic Tap".
                    2. -
                    3. Tap on "Install" and accept the permissions.
                    4. -
                    5. Open the app and grant it accessibility service.
                    6. -
                    7. Select your preferred settings, such as click interval, click type, target area, etc.
                    8. -
                    9. Tap on the floating widget to start or stop the auto clicker.
                    10. -
                    -
iOS | Switch Control -
                      -
                    1. Go to Settings > Accessibility > Switch Control and turn it on.
                    2. -
                    3. Tap on Switches and add a new switch. Choose a source, such as screen or external device.
                    4. -
                    5. Tap on Recipes and create a new recipe. Name it "Auto Clicker" and assign it to your switch.
                    6. -
                    7. Tap on Custom Gesture and record a tap gesture on the screen.
                    8. -
                    9. Go back to the recipe and set the repeat interval and duration.
                    10. -
                    11. Launch Diablo Immortal and activate your switch to start or stop the auto clicker.
                    12. -
                    -
                    -

                    These are some of the best auto clickers for Diablo Immortal that you can download and install on your device. However, you should always check the compatibility and security of any software before downloading it. You should also read the user reviews and ratings to get an idea of how well it works and if there are any issues or bugs.
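Regardless of which tool you choose, it is worth verifying the integrity of any installer before you run it. The snippet below is a minimal, generic sketch of how you might do that in Python by comparing a file's SHA-256 digest against the checksum published by the developer; the file name and the expected hash shown here are placeholders, not values taken from any of the tools above.

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 16) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder values: replace with the real installer path and the
# checksum listed on the developer's official download page.
expected = "0000000000000000000000000000000000000000000000000000000000000000"
actual = sha256_of_file("autoclicker-setup.exe")

if actual == expected:
    print("Checksum matches - the download appears intact.")
else:
    print("Checksum mismatch - do not run the file.")
```

If the published checksum and the computed digest do not match, the file may be corrupted or tampered with, so it is safer to delete it and download it again from the official source.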

                    -

                    How to Avoid Getting Banned or Penalized for Using an Auto Clicker

                    -

                    Using an auto clicker for Diablo Immortal may sound tempting, but it also comes with some risks. Blizzard Entertainment, the developer and publisher of Diablo Immortal, has a strict policy against using any third-party software or tools that give an unfair advantage or interfere with the game's normal operation. This includes auto clickers, bots, hacks, cheats, exploits, and mods.

                    -

                    If Blizzard detects that you are using an auto clicker for Diablo Immortal, you may face serious consequences. You may get a warning, a temporary suspension, a permanent ban, or even legal action. You may also lose your progress, items, achievements, and reputation in the game. You may also ruin the game's balance and fun for other players who play fairly.

                    -

                    To avoid getting banned or penalized for using an auto clicker for Diablo Immortal, you should follow these best practices and precautions:

                    -
- Use an auto clicker only for personal use and not for commercial purposes.
- Use an auto clicker only for simple tasks that do not affect the game's economy or PvP.
- Use an auto clicker only for short periods of time and not for hours or days.
- Use an auto clicker only when you are actively playing the game and not when you are away or offline.
- Use an auto clicker only with moderation and discretion and not with excessive frequency or speed.
- Use an auto clicker only with respect and courtesy and not with abuse or harassment.
- Use an auto clicker only at your own risk and responsibility and not with ignorance or negligence.
                    -

                    By following these best practices and precautions, you can reduce the chances of getting banned or penalized for using an auto clicker for Diablo Immortal. However, you should always be aware of the potential risks and consequences of using any third-party software or tools that violate Blizzard's terms of service and code of conduct.

                    -

                    Conclusion

                    -

                    In conclusion, using an auto clicker for Diablo Immortal can be a useful and convenient way to enhance your gaming experience. It can help you perform repetitive tasks faster, collect more loot easier, and dominate your enemies better. However, it can also be a risky and dangerous way to jeopardize your gaming account. It can get you banned or penalized by Blizzard Entertainment, who has a strict policy against using any third-party software or tools that give an unfair advantage or interfere with the game's normal operation. You should always be careful and responsible when using an auto clicker for Diablo Immortal, and follow the best practices and precautions to avoid getting banned or penalized. We hope that this article has helped you understand what you need to know about Diablo Immortal auto clicker download. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and happy gaming!

                    Sources and References

                    -
                      -
                    • OP Auto Clicker. https://sourceforge.net/projects/orphamielautoclicker/
                    • Mac Auto Clicker. https://www.murgaa.com/mac-auto-clicker/
                    • Auto Clicker - Automatic Tap. https://play.google.com/store/apps/details?id=com.truedevelopersstudio.automatictap.autoclicker&hl=en_US&gl=US
                    • Switch Control. https://support.apple.com/en-us/HT201370
                    • Blizzard Entertainment. Diablo Immortal Terms of Use. https://www.blizzard.com/en-us/legal/9f0a9c6b-8a6f-4c0f-8b7c-5a0d7e9e1e2c/diablo-immortal-terms-of-use
                    • Blizzard Entertainment. Diablo Immortal Code of Conduct. https://www.blizzard.com/en-us/legal/9f0a9c6b-8a6f-4c0f-8b7c-5a0d7e9e1e2c/diablo-immortal-code-of-conduct
                    -

                    FAQs

                    -
                      -
                    1. What is Diablo Immortal?

                      Diablo Immortal is a mobile game developed by Blizzard Entertainment and NetEase Games. It is a massively multiplayer online role-playing game (MMORPG) set in the Diablo universe. It features six classes, dynamic events, co-op and PvP modes, and an original story that bridges the gap between Diablo II and Diablo III.

                      -
                    2. What is an auto clicker?

                      An auto clicker is a software tool that automates mouse clicks at a fast rate. It can help you perform repetitive tasks, such as attacking, mining, crafting, or looting, without having to manually click your mouse button. It can also enhance your gaming experience by allowing you to focus on more strategic aspects of the game.
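
                      For readers curious about what such a tool actually does, here is a minimal sketch of a desktop auto clicker in Python. It is only an illustration, assuming the third-party pyautogui library (installed with pip install pyautogui); the interval and click count are placeholder values rather than settings from any app mentioned in this article, and using it in Diablo Immortal carries the same ban risks described above.

```python
# Minimal auto clicker sketch using the third-party "pyautogui" library.
# The interval and click count are illustrative placeholders.
import time
import pyautogui

CLICK_INTERVAL = 0.2  # seconds between clicks
TOTAL_CLICKS = 50     # stop after this many clicks

def run_auto_clicker():
    time.sleep(3)  # give yourself a moment to move the cursor over the target
    for _ in range(TOTAL_CLICKS):
        pyautogui.click()           # click at the current cursor position
        time.sleep(CLICK_INTERVAL)  # wait before the next click

if __name__ == "__main__":
    run_auto_clicker()
```

                      The "click, wait, repeat" loop above is the same idea that the mobile apps listed earlier automate through the accessibility service or Switch Control.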

                      -
                    3. What are the benefits of using an auto clicker for Diablo Immortal?

                      Some of the benefits of using an auto clicker for Diablo Immortal are:

                      -
                        -
                      • You can level up faster by killing more enemies and completing more quests.
                      • You can collect more loot by opening more chests and picking up more items.
                      • You can dominate your enemies by unleashing more skills and attacks.
                      • You can save time and energy by avoiding hand fatigue and boredom.
                      • You can enjoy the game more by focusing on the story, graphics, and sound.
                      -
                    4. What are the risks of using an auto clicker for Diablo Immortal?

                      Some of the risks of using an auto clicker for Diablo Immortal are:

                      -
                        -
                      • You may get banned or penalized by Blizzard Entertainment, who has a strict policy against using any third-party software or tools that give an unfair advantage or interfere with the game's normal operation.
                      • You may lose your progress, items, achievements, and reputation in the game.
                      • You may ruin the game's balance and fun for other players who play fairly.
                      • You may expose your device to malware or viruses from untrusted sources.
                      • You may miss out on some of the game's features and challenges that require manual input and interaction.
                      -
                    5. How can I avoid getting banned or penalized for using an auto clicker for Diablo Immortal?

                      To avoid getting banned or penalized for using an auto clicker for Diablo Immortal, you should follow these best practices and precautions:

                      -
                      - Use an auto clicker only for personal use and not for commercial purposes.
                      - Use an auto clicker only for simple tasks that do not affect the game's economy or PvP.
                      - Use an auto clicker only for short periods of time and not for hours or days.
                      - Use an auto clicker only when you are actively playing the game and not when you are away or offline.
                      - Use an auto clicker only with moderation and discretion and not with excessive frequency or speed.
                      - Use an auto clicker only with respect and courtesy and not with abuse or harassment.
                      - Use an auto clicker only at your own risk and responsibility and not with ignorance or negligence.
                      -

                      By following these best practices and precautions, you can reduce the chances of getting banned or penalized for using an auto clicker for Diablo Immortal. However, you should always be aware of the potential risks and consequences of using any third-party software or tools that violate Blizzard's terms of service and code of conduct.

                      401be4b1e0
                      -
                      -
                      \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Genshin Impact PC Fraco and Experience a Vast Magical World of Adventure with Friends.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Genshin Impact PC Fraco and Experience a Vast Magical World of Adventure with Friends.md deleted file mode 100644 index 00c403175c18b6a81b0a6d307fa10f8eb107e5b0..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Genshin Impact PC Fraco and Experience a Vast Magical World of Adventure with Friends.md +++ /dev/null @@ -1,135 +0,0 @@ -
                      -

                      Download Genshin Impact PC Fraco: How to Play the Best RPG of 2020 on a Low-End PC

                      -

                      Genshin Impact is a free-to-play open-world action RPG that has taken the gaming world by storm. The game features a vast and beautiful world to explore, a dynamic elemental combat system, and a diverse cast of characters to join your adventure. Genshin Impact has been praised for its high production values, engaging gameplay, and charming story. It has also been compared to The Legend of Zelda: Breath of the Wild, one of the most acclaimed games of all time.

                      -

                      download genshin impact pc fraco


                      DOWNLOAD ===> https://ssurll.com/2uNUVV



                      -

                      But what if you don't have a powerful PC to run this amazing game? Don't worry, we have you covered. In this article, we will show you how to download Genshin Impact on a PC fraco (a low-end PC) and how to optimize the game's settings and performance for your system. We will also give you a brief review of the game and answer some frequently asked questions. By the end of this article, you will be able to enjoy Genshin Impact on your PC without any hassle.

                      -

                      System Requirements

                      -

                      Before you download Genshin Impact PC fraco, you need to check if your PC meets the minimum or recommended system requirements for the game. Here are the official specs from the developer, miHoYo:

                      | | Minimum Requirements | Recommended Requirements |
                      | --- | --- | --- |
                      | OS | Windows 7 SP1 64-bit, Windows 8.1 64-bit, or Windows 10 64-bit | Windows 7 SP1 64-bit, Windows 8.1 64-bit, or Windows 10 64-bit |
                      | CPU | Intel Core i5 or equivalent | Intel Core i7 or equivalent |
                      | RAM | 8 GB | 16 GB |
                      | GPU | NVIDIA GeForce GT 1030 or better | NVIDIA GeForce GTX 1060 6 GB or better |
                      | DirectX | Version 11 | Version 11 |
                      | Storage | 30 GB | 30 GB |
                      -

                      As you can see, Genshin Impact is not a very demanding game, but it still requires a decent PC to run smoothly. If your PC does not meet the minimum requirements, you may experience low frame rates, lag, crashes, or other issues. However, there are some tips and tricks that can help you improve the game performance on your low-end PC.

                      -

                      Tips and Tricks

                      -

                      Here are some tips and tricks that can help you play Genshin Impact PC fraco with better performance and quality:

                      -
                        -
                      • Adjust the graphics settings in the game menu. You can lower the resolution, quality, effects, shadows, reflections, and other options to reduce the load on your GPU and CPU. You can also use the preset options (Low, Medium, High) to quickly change the settings according to your preference.
                      • Close any unnecessary programs or background processes that may be using up your RAM or CPU resources. You can use Task Manager (Ctrl+Shift+Esc) to check which programs are running and end them if they are not needed (see the small script sketch after this list for one way to spot memory-heavy processes).
                      • Update your drivers and software. Make sure you have the latest version of Windows, DirectX, GPU drivers, and other software that may affect the game performance. You can use tools like Driver Booster or GeForce Experience to automatically update your drivers.
                      • Use a cooling pad or fan for your laptop. If you are playing on a laptop, you may experience overheating issues that can cause your PC to slow down or shut down. To prevent this, you can use a cooling pad or fan to keep your laptop temperature under control.
                      • Play with friends or in co-op mode. Genshin Impact supports up to four players in co-op mode, which can make the game more fun and easier. You can also join other players' worlds and help them with quests, bosses, or exploration. Playing with friends in co-op mode can reduce the stress on your PC and make the game more enjoyable.
                      -
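
                      As a rough illustration of the second tip, the short script below lists the processes using the most memory so you know what to close before launching the game. It is only a sketch, assuming Python 3 and the third-party psutil package (pip install psutil); it is not part of Genshin Impact or any official tool.

```python
# Sketch: show the five processes using the most RAM (assumes "psutil" is installed).
import psutil

def top_memory_processes(count=5):
    procs = []
    for p in psutil.process_iter(["name", "memory_info"]):
        mem = p.info.get("memory_info")
        name = p.info.get("name") or "unknown"
        if mem is None:
            continue  # access denied or the process disappeared
        procs.append((mem.rss, name))
    procs.sort(key=lambda item: item[0], reverse=True)
    for rss, name in procs[:count]:
        print(f"{name:30s} {rss / (1024 * 1024):8.1f} MB")

if __name__ == "__main__":
    top_memory_processes()
```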

                      Download Link

                      -

                      Now that you know how to optimize Genshin Impact PC fraco, you may be wondering where to download the game for free and how to install it. Here are the steps to follow:

                      -
                        -
                      1. Go to the official website of Genshin Impact: https://genshin.mihoyo.com/en
                      2. Click on the "Windows" button under the "Download Now" section.
                      3. Wait for the download to finish. The file size is about 15 GB, so it may take some time depending on your internet speed.
                      4. Run the installer and follow the instructions. You can choose the installation path and language of the game.
                      5. Launch the game and log in with your miHoYo account. If you don't have one, you can create one for free on the website or in the game.
                      6. Enjoy Genshin Impact on your PC!
                      -

                      Review

                      -

                      Genshin Impact is a game that deserves all the hype and praise it has received. It is a stunning and immersive RPG that offers hours of fun and exploration. Here are some of the pros and cons of the game:

                      -

                      Pros

                      -
                        -
                      • The game is free-to-play and does not require any subscription or purchase to enjoy. You can also play it on multiple platforms, including PC, mobile, and console.
                      • The game has a beautiful and diverse world to explore, with different regions, cultures, landscapes, and secrets. You can climb, glide, swim, and interact with almost anything in the world.
                      • The game has a dynamic and strategic combat system that uses elemental interactions and combinations. You can switch between four characters in your party and use their unique skills and weapons to defeat enemies and solve puzzles.
                      • The game has a rich and captivating story that unfolds through quests, cutscenes, dialogues, and lore. You can also meet and recruit many interesting and charming characters to join your adventure.
                      • The game has a lot of content and features to keep you entertained, such as dungeons, bosses, events, achievements, collectibles, crafting, cooking, fishing, housing, and more.
                      -

                      Cons

                      -
                        -
                      • The game has a gacha system that requires you to spend in-game currency or real money to obtain new characters and weapons. The rates are low and the prices are high, which can make it frustrating and expensive for some players.
                      • The game has a stamina system that limits how much you can do certain activities, such as exploring, fighting, or collecting resources. The stamina regenerates slowly over time or can be replenished with items or currency.
                      • The game has some technical issues and bugs that can affect the game performance and quality. Some examples are lag, crashes, glitches, errors, hackers, cheaters, etc.
                      • The game has some repetitive and grindy aspects that can make it boring or tedious for some players. Some examples are daily quests, resin activities, leveling up, farming materials, etc.
                      -

                      Overall, Genshin Impact is a game that is worth playing and experiencing for yourself. It is one of the best RPGs of 2020 and a masterpiece of gaming art. Whether you are a casual or hardcore gamer, you will find something to love in Genshin Impact.

                      -

                      Conclusion

                      -

                      In this article, we have shown you how to download Genshin Impact PC fraco and how to optimize the game settings and performance for your low-end PC. We have also given you a brief review of the game and its pros and cons. We hope you have found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and happy gaming!

                      -


                      -

                      FAQs

                      -

                      Here are some of the most frequently asked questions about Genshin Impact:

                      -

                      Q: Is Genshin Impact online or offline?

                      -

                      A: Genshin Impact is an online game that requires an internet connection to play. You can play solo or with other players in co-op mode.

                      -

                      Q: Is Genshin Impact cross-platform?

                      -

                      A: Yes, Genshin Impact supports cross-platform play between PC, mobile (iOS and Android), PlayStation 4/5, and Nintendo Switch (coming soon). You can use the same account on different devices and platforms.

                      Q: Is Genshin Impact pay-to-win?

                      A: No, Genshin Impact is not pay-to-win. You can play and enjoy the game without spending any money. The game is generous with giving free currency and rewards to players. The gacha system is optional and does not affect the main gameplay or story. You can obtain most of the characters and weapons through events, quests, or shops.

                      -

                      Q: How to get more Primogems in Genshin Impact?

                      -

                      A: Primogems are the premium currency in Genshin Impact that can be used to buy wishes, resin, or other items. You can get more Primogems by doing the following:

                      -
                        -
                      • Completing quests, achievements, events, and challenges.
                      • Exploring the world and finding chests, seelies, oculi, and other secrets.
                      • Logging in daily and claiming the daily commissions and rewards.
                      • Using codes or redeeming gifts from the official website or social media.
                      • Buying them with real money or using the monthly pass or battle pass.
                      -

                      Q: How to level up fast in Genshin Impact?

                      -

                      A: Leveling up in Genshin Impact can be done by increasing your Adventure Rank (AR), Character Level, Weapon Level, or Talent Level. Here are some tips to level up fast in Genshin Impact:

                      -
                        -
                      • Focus on the main story quests and archon quests. They give a lot of AR EXP and unlock new features and areas.
                      • Use your resin wisely. Spend it on domains, ley lines, bosses, or events that give you the materials or rewards you need.
                      • Use your EXP books and Mora efficiently. Don't waste them on characters or weapons you don't use or need.
                      • Upgrade your artifacts and weapons regularly. They can boost your stats and damage significantly.
                      • Join co-op mode and help other players. You can get more loot and fun by playing with others.
                      -

                      Q: How to get more characters in Genshin Impact?

                      -

                      A: Getting more characters in Genshin Impact can be done by using wishes, which are the gacha system of the game. You can use Primogems or Fates to buy wishes from different banners. Each banner has a different pool of characters and weapons with different rates and pity system. You can also get some characters for free by doing quests, events, or shops.

                      197e85843d
                      -
                      -
                      \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download WhatsApp Business App and Enjoy Free Calls Messages and More with Your Clients.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download WhatsApp Business App and Enjoy Free Calls Messages and More with Your Clients.md deleted file mode 100644 index 0e0317788bc01a66c914f424e49414d314c4c43a..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download WhatsApp Business App and Enjoy Free Calls Messages and More with Your Clients.md +++ /dev/null @@ -1,125 +0,0 @@ -
                      -

                      How to Download and Use WhatsApp Business App

                      -

                      WhatsApp is one of the most popular messaging apps in the world, with more than 2 billion users. But did you know that there is also a WhatsApp Business app that is designed specifically for small businesses? If you are a small business owner who wants to connect with your customers, showcase your products, and grow your sales, then you should definitely check out the WhatsApp Business app. In this article, we will show you how to download and use the WhatsApp Business app features, as well as how to attract new customers with it.

                      -

                      Benefits of WhatsApp Business App for Small Businesses

                      -

                      The WhatsApp Business app is a free-to-download app that allows you to create a professional business profile, communicate more efficiently with your customers, and help you grow your business. Here are some of the benefits of using the WhatsApp Business app for small businesses:

                      -

                      download app whatsapp business


                      Download ★★★ https://ssurll.com/2uNWSF



                      -
                        -
                      • Business profile: You can create a profile for your business that provides helpful information for your customers, such as your logo, business description, hours of operation, website, and location. This helps you create a professional and trustworthy image for your customers.
                      • Business messaging tools: You can use various tools to automate, manage, and respond to messages from your customers. For example, you can set up greeting messages to welcome new customers, away messages to let them know when you are not available, quick replies to answer common questions, and labels to organize and filter important conversations.
                      • Catalog: You can create a catalog to showcase your products and services within the app. You can add products or services, set prices, and share items with customers in just a few taps. This simplifies the shopping experience for your customers and makes it easier for them to place orders.
                      • Statistics: You can access statistics and insights into customer behaviour and satisfaction. You can see how many messages were sent, delivered, read, and received by your customers. This helps you measure the effectiveness of your communication and improve your customer service.
                      • WhatsApp payment: You can enable easy and secure transactions within the app by linking your bank account and accepting payments from customers. This eliminates the need for third-party payment platforms and reduces transaction fees.
                      -

                      How to Download WhatsApp Business App on Android and iPhone

                      -

                      If you are interested in using the WhatsApp Business app, you can download it for free on your Android or iPhone device. Here are the steps to download and install the WhatsApp Business app on your phone:

                      -
                        -
                      1. Step 1: Go to the Google Play Store or the App Store and search for WhatsApp Business app. You can also use these links to download the app directly: WhatsApp Business for Android or WhatsApp Business for iPhone.
                      2. Step 2: Tap on Install or Get and wait for the app to download on your phone. Make sure you have enough storage space and a stable internet connection.
                      3. Step 3: Open the app and verify your business phone number. You can use the same number that you use for WhatsApp Messenger, or a different one. However, you cannot use the same number for both apps on the same phone. If you use the same number, your WhatsApp Messenger account will be transferred to your WhatsApp Business account.
                      4. Step 4: Restore your chat backup from WhatsApp Messenger if you have one. If you are using the same number, you can restore your chat history from your previous backup. If you are using a different number, you can transfer your chat history from your old phone to your new phone using a local backup or Google Drive.
                      5. Step 5: Set your business name and build your profile. You can choose a name that represents your business and add a logo or a photo. You can also add more information about your business in the next steps.
                      -

                      How to Use WhatsApp Business App Features

                      -

                      Once you have downloaded and installed the WhatsApp Business app, you can start using its features to communicate with your customers and grow your business. Here are some of the features that you can use and how to use them:

                      -
                        -
                    • Business profile: You can add more details about your business in your profile, such as your business description, hours of operation, website, and location. To edit your profile, go to More options > Settings > Business tools > Business profile.
                    • Business messaging tools: You can use various tools to automate, manage, and respond to messages from your customers. To access these tools, go to More options > Settings > Business tools. Here are some of the tools that you can use:
                        • Greeting message: You can set up a message that will be sent automatically to new customers or customers who haven't messaged you in more than 14 days. This helps you welcome them and introduce your business.
                        • Away message: You can set up a message that will be sent automatically when you are not available or outside of your business hours. This helps you inform them when you will be back and how they can reach you.
                        • Quick replies: You can create and save messages that answer common questions or provide useful information. You can use them by typing "/" and choosing from the list of quick replies.
                        • Labels: You can create and assign labels to your chats and contacts to organize and filter them. For example, you can use labels such as new customer, order placed, payment pending, etc.
                    • Catalog: You can create a catalog to showcase your products and services within the app. To create a catalog, go to More options > Settings > Business tools > Catalog. Here are some of the steps that you can follow:
                        • Add products or services: You can add items to your catalog by tapping on Add product or service. You can enter a name, a price, a description, and an image for each item.
                        • Edit products or services: You can edit or delete items from your catalog by tapping on them and choosing Edit or Delete.
                        • Share products or services: You can share items from your catalog with customers by tapping on Attach > Catalog in a chat. You can select one or more items and send them as a message.
                    • Statistics: You can access statistics and insights into customer behaviour and satisfaction. To access statistics, go to More options > Settings > Business tools > Statistics. Here are some of the statistics that you can see:
                        • Sent: The number of messages that you sent to your customers.
                        • Delivered: The number of messages that were delivered to your customers' phones.
                        • Read: The number of messages that were read by your customers.
                        • Received: The number of messages that you received from your customers.
                    • WhatsApp payment: You can enable easy and secure transactions within the app by linking your bank account and accepting payments from customers. To use WhatsApp payment, you need to have a bank account that supports Unified Payments Interface (UPI) in India. Here are some of the steps that you can follow:
                        • Link your bank account: You can link your bank account to WhatsApp by going to More options > Settings > Payments > Add payment method. You can select your bank from the list and verify your phone number.
                        • Accept payments from customers: You can request or receive payments from customers by tapping on Attach > Payment in a chat. You can enter the amount and a note and send it as a message. The customer will need to enter their UPI PIN to complete the transaction.
                        • Check your payment history: You can check your payment history and balance by going to More options > Settings > Payments. You can also download your payment reports and receipts from here.
                      -

                      How to Attract New Customers with WhatsApp Business App

                      -

                      Besides communicating with your existing customers, you can also use the WhatsApp Business app to attract new customers and expand your reach. Here are some of the ways that you can promote your WhatsApp channel and generate more leads for your business:

                      -
                        -
                      • Free entry points: You can use QR codes, short links, and action buttons on Facebook and Instagram to make it easy for new customers to start a chat with you. You can create and share these entry points by going to More options > Settings > Business tools > Short link or QR code. You can also add them to your website, social media, flyers, posters, etc. (see the short sketch after this list for one way to generate your own chat-link QR code).
                      • Meta ads: You can turn a Facebook post into an ad that sends new customers to a WhatsApp chat with your business. You can create these ads by using the Facebook Ads Manager and selecting WhatsApp as the destination.
                      • Call to action ads: You can promote the "Send Message" button on your Facebook Page to encourage potential customers to start a conversation with you on WhatsApp. You can enable this button by going to your Facebook Page > Settings > Messaging > Add a Button.
                      • Ads that click to WhatsApp: You can expand your reach by sending Facebook and Instagram users straight into a WhatsApp chat with your business. You can create these ads by using the Facebook Ads Manager and selecting WhatsApp as the destination.
                      -
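
                      As an illustration of the first point, here is a minimal sketch of generating your own QR code for WhatsApp's click-to-chat link format (https://wa.me/<number>). It assumes Python 3 with the third-party qrcode library installed (for example, pip install "qrcode[pil]"); the phone number below is a placeholder, and this is just one way to produce a printable code outside the in-app QR tool.

```python
# Sketch: build a QR code that opens a WhatsApp chat with a business number.
# Assumes the third-party "qrcode" package (with Pillow support) is installed.
# The number below is a placeholder, not a real business number.
import qrcode

BUSINESS_NUMBER = "15551234567"  # international format, digits only
chat_link = f"https://wa.me/{BUSINESS_NUMBER}"

# Render the link as a QR image and save it for flyers, posters, or your website.
img = qrcode.make(chat_link)
img.save("whatsapp_chat_qr.png")
print(f"Saved QR code for {chat_link}")
```

                      Scanning the saved image on a phone with WhatsApp installed opens a chat with that number, which is the same behaviour the in-app QR code provides.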

                      Conclusion

                      -

                      The WhatsApp Business app is a powerful tool for small businesses that want to connect with their customers, showcase their products, and grow their sales. It offers various features and benefits that can help you create a professional business profile, communicate more efficiently with your customers, and help you grow your business. You can download the app for free on your Android or iPhone device and start using its features right away. You can also use various ways to attract new customers and generate more leads for your business. If you are a small business owner who wants to take advantage of the WhatsApp Business app, don't wait any longer and download it today!

                      -

                      Frequently Asked Questions

                      -
                        -
                      1. Can I use both WhatsApp Messenger and WhatsApp Business on the same phone?

                        Yes, you can use both apps on the same phone, but you need to have different phone numbers for each app. You cannot use the same number for both apps on the same phone.

                        -
                      2. Can I transfer my chat history from WhatsApp Messenger to WhatsApp Business?

                        Yes, you can transfer your chat history from WhatsApp Messenger to WhatsApp Business if you are using the same number. You can restore your chat backup from your previous backup when you verify your number on WhatsApp Business. If you are using a different number, you can transfer your chat history from your old phone to your new phone using a local backup or Google Drive.

                        -
                      3. Can I use WhatsApp Web or WhatsApp Desktop with WhatsApp Business?

                        Yes, you can use WhatsApp Web or WhatsApp Desktop with WhatsApp Business. You can access these platforms by scanning the QR code from More options > Linked devices on your phone.

                        -


                        -
                      4. Can I create a group chat or broadcast list with WhatsApp Business?

                        Yes, you can create a group chat or broadcast list with WhatsApp Business. You can do this by tapping on New chat > New group or New broadcast on your phone.

                        -
                      5. Can I delete my WhatsApp Business account?

                        Yes, you can delete your WhatsApp Business account if you no longer want to use it. However, this will delete your account information, profile photo, groups, messages, and business settings. To delete your account, go to More options > Settings > Account > Delete my account on your phone.

                        -
                      -

                      401be4b1e0
                      -
                      -
                      \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Escape the Police and Enjoy the Ride Subway Surfers for Windows 7 PC.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Escape the Police and Enjoy the Ride Subway Surfers for Windows 7 PC.md deleted file mode 100644 index febdbcd14a04c6226878387866c516df7cce870f..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Escape the Police and Enjoy the Ride Subway Surfers for Windows 7 PC.md +++ /dev/null @@ -1,138 +0,0 @@ - -

                      Free Download Subway Surfers for PC Windows 7

                      -

                      Subway Surfers is one of the most popular endless runner games on Android, but did you know that you can also play it on your PC Windows 7? In this article, we will show you what Subway Surfers is, why you should play it on your PC Windows 7, and how to download it for free using two different methods. Let's get started!

                      -

                      free download subway surfers for pc windows 7


                      Download File »»» https://ssurll.com/2uNQuQ



                      -

                      What is Subway Surfers?

                      -

                      Subway Surfers is a classic endless runner game developed by SYBO Games and Kiloo. You play as Jake, who surfs the subways and tries to escape from the grumpy Inspector and his dog. You'll need to dodge trains, trams, obstacles, and more to go as far as you can in this endless running game. You can also collect coins, power-ups, and special items to unlock new characters, boards, and outfits. Subway Surfers has colorful and vivid HD graphics, hoverboard surfing, paint powered jetpacks, lightning fast swipe acrobatics, and a cool crew of friends to join you in your adventure. Subway Surfers also features a World Tour mode, where you can explore different cities around the world and collect unique rewards.

                      -

                      Features of Subway Surfers

                      -

                      Some of the main features of Subway Surfers are:

                      -
                        -
                      • Grind trains with your cool crew
                      • Colorful and vivid HD graphics
                      • Hoverboard Surfing
                      • Paint powered jetpack
                      • Lightning fast swipe acrobatics
                      • Challenge and help your friends
                      • Explore different cities in the World Tour mode
                      • Customize your character and board with various outfits and accessories
                      • Complete missions and achievements to earn rewards
                      • Play online or offline without losing your progress
                      -

                      How to Play Subway Surfers

                      -

                      The gameplay of Subway Surfers is simple and intuitive. You can use your mouse, keyboard, or touch screen to control your character. Here are some basic tips on how to play Subway Surfers:

                      -
                        -
                      • To move left or right, use the left or right arrow keys, or swipe left or right on your screen.
                      • To jump over obstacles or onto trains, use the up arrow key, or swipe up on your screen.
                      • To roll under barriers or through tunnels, use the down arrow key, or swipe down on your screen.
                      • To activate a hoverboard, use the spacebar key, or double-tap on your screen.
                      • To use a power-up, such as a jetpack or a magnet, just run into it.
                      • To collect coins and other items, just run over them.
                      • To avoid crashing into trains or other obstacles, use your reflexes and timing.
                      • To increase your score multiplier, complete missions and collect letters for the word of the day.
                      -

                      Why Play Subway Surfers on PC Windows 7?

                      -

                      Subway Surfers is a fun and addictive game that you can play on your Android phone or tablet, but you can also enjoy it on your PC Windows 7. There are several reasons why you might want to play Subway Surfers on PC Windows 7, such as:

                      -

                      Benefits of Playing Subway Surfers on PC Windows 7

                      -

                      Some of the benefits of playing Subway Surfers on PC Windows 7 are:

                      -


                      -
                        -
                      • You can experience the game on a bigger screen, which can enhance the graphics and the gameplay.
                      • You can use your keyboard or mouse to control your character, which can be more comfortable and precise than using your fingers on a touch screen.
                      • You can save your phone or tablet battery and storage space by playing the game on your PC Windows 7.
                      • You can play the game without any interruptions from phone calls, messages, notifications, or low battery alerts.
                      • You can access the game anytime and anywhere, as long as you have your PC Windows 7 and an internet connection.
                      -

                      Drawbacks of Playing Subway Surfers on PC Windows 7

                      -

                      Some of the drawbacks of playing Subway Surfers on PC Windows 7 are:

                      -
                        -
                      • You might need to download and install additional software or apps to run the game on your PC Windows 7, which can take up some time and space.
                      • You might encounter some compatibility or performance issues, depending on the specifications of your PC Windows 7 and the software or apps you use to run the game.
                      • You might lose some of the portability and convenience of playing the game on your phone or tablet, which you can carry around and use anywhere.
                      -

                      How to Download Subway Surfers for PC Windows 7?

                      -

                      If you want to play Subway Surfers on your PC Windows 7, you have two main options: using an emulator or using the Microsoft Phone Link app. An emulator is a software that mimics the Android operating system on your PC Windows 7, allowing you to run Android apps and games. The Microsoft Phone Link app is a feature that lets you connect your Android phone and your PC Windows 7 via Wi-Fi, and access the Android apps installed on your phone from your PC. Here are the steps for each method:

                      -

                      Method 1: Using an Emulator

                      -

                      An emulator is a software that mimics the Android operating system on your PC Windows 7, allowing you to run Android apps and games. There are many emulators available online, such as BlueStacks, NoxPlayer, LDPlayer, etc. You can choose any emulator that suits your preferences and requirements. Here are the steps to download Subway Surfers for PC Windows 7 using an emulator:

                      -

                      Step 1: Download and Install an Emulator

                      -

                      The first step is to download and install an emulator of your choice on your PC Windows 7. You can visit the official website of the emulator and follow the instructions to download and install it. For example, if you choose BlueStacks, you can go to [BlueStacks.com] and click on the "Download BlueStacks" button. Then, run the installer file and follow the steps to install BlueStacks on your PC Windows 7.

                      -

                      Step 2: Launch the Emulator and Sign in with Google Account

                      -

                      The next step is to launch the emulator and sign in with your Google account. This will allow you to access the Google Play Store and download Android apps and games. For example, if you use BlueStacks, you can launch it from your desktop or start menu, and then click on the "Sign in with Google" button. Then, enter your Google account credentials and agree to the terms of service.

                      -

                      Step 3: Search for Subway Surfers in the Play Store and Install It

                      -

                      The third step is to search for Subway Surfers in the Play Store and install it. You can do this by clicking on the "Play Store" icon in the emulator's home screen, and then typing "Subway Surfers" in the search bar. Then, click on the "Install" button next to the Subway Surfers icon.

                      -

                      Step 4: Enjoy Playing Subway Surfers on PC Windows 7

                      -

                      The final step is to enjoy playing Subway Surfers on PC Windows 7. You can do this by clicking on the "Subway Surfers" icon in the emulator's home screen or app drawer. Then, you can use your keyboard or mouse to control your character and play the game as you would on your phone or tablet. You can also adjust the settings, such as the sound, the graphics, the language, etc., to suit your preferences.

                      -

                      Method 2: Using the Microsoft Phone Link App

                      -

                      The Microsoft Phone Link app is a feature that lets you connect your Android phone and your PC Windows 7 via Wi-Fi, and access the Android apps installed on your phone from your PC. This way, you can play Subway Surfers on your PC Windows 7 without downloading or installing anything. However, you will need to have Subway Surfers installed on your Android phone, and both your phone and your PC Windows 7 must be connected to the same Wi-Fi network. Here are the steps to download Subway Surfers for PC Windows 7 using the Microsoft Phone Link app:

                      -

                      Step 1: Install the Phone Link App on Your PC and the Link to Windows App on Your Android Phone

                      -

                      The first step is to install the Phone Link app on your PC Windows 7 and the Link to Windows app on your Android phone. You can do this by visiting [this link] and following the instructions to download and install the Phone Link app on your PC Windows 7. Then, go to the Google Play Store on your Android phone and search for "Link to Windows" or "Your Phone Companion". Then, download and install the app on your phone.

                      -

                      Step 2: Connect Your Android Phone and Your PC via Wi-Fi

                      -

                      The next step is to connect your Android phone and your PC Windows 7 via Wi-Fi. You can do this by opening the Phone Link app on your PC Windows 7 and clicking on the "Add a device" button. Then, scan the QR code displayed on your PC screen with your phone's camera. Alternatively, you can open the Link to Windows app on your phone and tap on the "Link your phone and PC" button. Then, sign in with your Microsoft account and select your PC from the list of devices.

                      -

                      Step 3: Access the Android Apps Installed on Your Phone from Your PC

                      -

                      The third step is to access the Android apps installed on your phone from your PC Windows 7. You can do this by opening the Phone Link app on your PC Windows 7 and clicking on the "Apps" tab. Then, you will see a list of all the Android apps installed on your phone. You can also search for a specific app using the search bar.

                      -

                      Step 4: Enjoy Playing Subway Surfers on PC Windows 7

                      -

                      The final step is to enjoy playing Subway Surfers on PC Windows 7. You can do this by clicking on the "Subway Surfers" icon in the list of apps. Then, you will see a window that shows your phone's screen. You can use your mouse or touch screen to control your character and play the game as you would on your phone or tablet. You can also resize or minimize the window as you wish.

                      -

                      Conclusion

                      -

Subway Surfers is a fun and addictive endless runner game that you can play on your Android device or on your PC Windows 7. Playing it on a PC has its benefits and drawbacks, but it can be a great way to enjoy the game on a bigger screen and with more comfort and convenience. You can get Subway Surfers running on PC Windows 7 for free using either an emulator or the Microsoft Phone Link app; both methods are simple, so choose whichever one suits you better. We hope this article has helped you learn how to play Subway Surfers on PC Windows 7. Now, go ahead and surf the subways!

                      -

                      FAQs

                      -

                      Here are some frequently asked questions about Subway Surfers for PC Windows 7:

                      -
                        -
                      • Is Subway Surfers for PC Windows 7 safe?
                      • -

Yes, Subway Surfers for PC Windows 7 is safe as long as you download it from a trusted source, such as the Google Play Store or the official website of the emulator or app you use. You should also scan any file you download with antivirus software before installing it.

                        -
                      • Is Subway Surfers for PC Windows 7 free?
                      • -

                        Yes, Subway Surfers for PC Windows 7 is free to download and play. However, some features or items in the game may require in-app purchases or ads.

                        -
                      • Can I play Subway Surfers for PC Windows 7 offline?
                      • -

                        Yes, you can play Subway Surfers for PC Windows 7 offline without an internet connection. However, some features or items in the game may require an internet connection to access or update them.

                        -
                      • Can I sync my Subway Surfers progress between my phone and my PC Windows 7?
                      • -

                        Yes, you can sync your Subway Surfers progress between your phone and your PC Windows 7 by signing in with the same Google account on both devices. This way, you can continue playing where you left off on either device.

                        -
                      • Can I play Subway Surfers for PC Windows 7 with my friends?
                      • -

                        Yes, you can play Subway Surfers for PC Windows 7 with your friends by connecting to Facebook or Google Play Games. This way, you can see your friends' scores, challenge them, and help them in the game.

                        -

                      -
                      -
                      \ No newline at end of file diff --git a/spaces/siya02/Konakni-TTS/ttsv/src/glow_tts/stft.py b/spaces/siya02/Konakni-TTS/ttsv/src/glow_tts/stft.py deleted file mode 100644 index 5852bd20904c9c206030523737ce3fbd64300a0c..0000000000000000000000000000000000000000 --- a/spaces/siya02/Konakni-TTS/ttsv/src/glow_tts/stft.py +++ /dev/null @@ -1,185 +0,0 @@ -""" -BSD 3-Clause License - -Copyright (c) 2017, Prem Seetharaman -All rights reserved. - -* Redistribution and use in source and binary forms, with or without - modification, are permitted provided that the following conditions are met: - -* Redistributions of source code must retain the above copyright notice, - this list of conditions and the following disclaimer. - -* Redistributions in binary form must reproduce the above copyright notice, this - list of conditions and the following disclaimer in the - documentation and/or other materials provided with the distribution. - -* Neither the name of the copyright holder nor the names of its - contributors may be used to endorse or promote products derived from this - software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND -ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED -WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE -DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR -ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES -(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; -LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON -ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS -SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
-""" - -import torch -import numpy as np -import torch.nn.functional as F -from torch.autograd import Variable -from scipy.signal import get_window -from librosa.util import pad_center, tiny -from librosa import stft, istft -from audio_processing import window_sumsquare - - -class STFT(torch.nn.Module): - """adapted from Prem Seetharaman's https://github.com/pseeth/pytorch-stft""" - - def __init__( - self, filter_length=800, hop_length=200, win_length=800, window="hann" - ): - super(STFT, self).__init__() - self.filter_length = filter_length - self.hop_length = hop_length - self.win_length = win_length - self.window = window - self.forward_transform = None - scale = self.filter_length / self.hop_length - fourier_basis = np.fft.fft(np.eye(self.filter_length)) - - cutoff = int((self.filter_length / 2 + 1)) - fourier_basis = np.vstack( - [np.real(fourier_basis[:cutoff, :]), np.imag(fourier_basis[:cutoff, :])] - ) - - forward_basis = torch.FloatTensor(fourier_basis[:, None, :]) - inverse_basis = torch.FloatTensor( - np.linalg.pinv(scale * fourier_basis).T[:, None, :] - ) - - if window is not None: - assert filter_length >= win_length - # get window and zero center pad it to filter_length - fft_window = get_window(window, win_length, fftbins=True) - fft_window = pad_center(fft_window, filter_length) - fft_window = torch.from_numpy(fft_window).float() - - # window the bases - forward_basis *= fft_window - inverse_basis *= fft_window - - self.register_buffer("forward_basis", forward_basis.float()) - self.register_buffer("inverse_basis", inverse_basis.float()) - - def transform(self, input_data): - num_batches = input_data.size(0) - num_samples = input_data.size(1) - - self.num_samples = num_samples - - if input_data.device.type == "cuda": - # similar to librosa, reflect-pad the input - input_data = input_data.view(num_batches, 1, num_samples) - input_data = F.pad( - input_data.unsqueeze(1), - (int(self.filter_length / 2), int(self.filter_length / 2), 0, 0), - mode="reflect", - ) - input_data = input_data.squeeze(1) - - forward_transform = F.conv1d( - input_data, self.forward_basis, stride=self.hop_length, padding=0 - ) - - cutoff = int((self.filter_length / 2) + 1) - real_part = forward_transform[:, :cutoff, :] - imag_part = forward_transform[:, cutoff:, :] - else: - x = input_data.detach().numpy() - real_part = [] - imag_part = [] - for y in x: - y_ = stft( - y, self.filter_length, self.hop_length, self.win_length, self.window - ) - real_part.append(y_.real[None, :, :]) - imag_part.append(y_.imag[None, :, :]) - real_part = np.concatenate(real_part, 0) - imag_part = np.concatenate(imag_part, 0) - - real_part = torch.from_numpy(real_part).to(input_data.dtype) - imag_part = torch.from_numpy(imag_part).to(input_data.dtype) - - magnitude = torch.sqrt(real_part ** 2 + imag_part ** 2) - phase = torch.atan2(imag_part.data, real_part.data) - - return magnitude, phase - - def inverse(self, magnitude, phase): - recombine_magnitude_phase = torch.cat( - [magnitude * torch.cos(phase), magnitude * torch.sin(phase)], dim=1 - ) - - if magnitude.device.type == "cuda": - inverse_transform = F.conv_transpose1d( - recombine_magnitude_phase, - self.inverse_basis, - stride=self.hop_length, - padding=0, - ) - - if self.window is not None: - window_sum = window_sumsquare( - self.window, - magnitude.size(-1), - hop_length=self.hop_length, - win_length=self.win_length, - n_fft=self.filter_length, - dtype=np.float32, - ) - # remove modulation effects - approx_nonzero_indices = torch.from_numpy( - np.where(window_sum > 
tiny(window_sum))[0] - ) - window_sum = torch.from_numpy(window_sum).to(inverse_transform.device) - inverse_transform[:, :, approx_nonzero_indices] /= window_sum[ - approx_nonzero_indices - ] - - # scale by hop ratio - inverse_transform *= float(self.filter_length) / self.hop_length - - inverse_transform = inverse_transform[:, :, int(self.filter_length / 2) :] - inverse_transform = inverse_transform[ - :, :, : -int(self.filter_length / 2) : - ] - inverse_transform = inverse_transform.squeeze(1) - else: - x_org = recombine_magnitude_phase.detach().numpy() - n_b, n_f, n_t = x_org.shape - x = np.empty([n_b, n_f // 2, n_t], dtype=np.complex64) - x.real = x_org[:, : n_f // 2] - x.imag = x_org[:, n_f // 2 :] - inverse_transform = [] - for y in x: - y_ = istft(y, self.hop_length, self.win_length, self.window) - inverse_transform.append(y_[None, :]) - inverse_transform = np.concatenate(inverse_transform, 0) - inverse_transform = torch.from_numpy(inverse_transform).to( - recombine_magnitude_phase.dtype - ) - - return inverse_transform - - def forward(self, input_data): - self.magnitude, self.phase = self.transform(input_data) - reconstruction = self.inverse(self.magnitude, self.phase) - return reconstruction diff --git a/spaces/skyxx/skyxxChat/modules/presets.py b/spaces/skyxx/skyxxChat/modules/presets.py deleted file mode 100644 index a3a2bd9385e70bf9a160816c1b19d4d305139b54..0000000000000000000000000000000000000000 --- a/spaces/skyxx/skyxxChat/modules/presets.py +++ /dev/null @@ -1,222 +0,0 @@ -# -*- coding:utf-8 -*- -import os -from pathlib import Path -import gradio as gr -from .webui_locale import I18nAuto - -i18n = I18nAuto() # internationalization - -CHATGLM_MODEL = None -CHATGLM_TOKENIZER = None -LLAMA_MODEL = None -LLAMA_INFERENCER = None - -# ChatGPT 设置 -INITIAL_SYSTEM_PROMPT = "You are a helpful assistant." -API_HOST = "api.openai-proxy.com" -COMPLETION_URL = "https://proxyai.xueguizheng.top/v1/chat/completions" -BALANCE_API_URL="https://proxyai.xueguizheng.top/dashboard/billing/credit_grants" -USAGE_API_URL="https://proxyai.xueguizheng.top/dashboard/billing/usage" -HISTORY_DIR = Path("history") -HISTORY_DIR = "history" -TEMPLATES_DIR = "templates" - -# 错误信息 -STANDARD_ERROR_MSG = i18n("☹️发生了错误:") # 错误信息的标准前缀 -GENERAL_ERROR_MSG = i18n("获取对话时发生错误,请查看后台日志") -ERROR_RETRIEVE_MSG = i18n("请检查网络连接,或者API-Key是否有效。") -CONNECTION_TIMEOUT_MSG = i18n("连接超时,无法获取对话。") # 连接超时 -READ_TIMEOUT_MSG = i18n("读取超时,无法获取对话。") # 读取超时 -PROXY_ERROR_MSG = i18n("代理错误,无法获取对话。") # 代理错误 -SSL_ERROR_PROMPT = i18n("SSL错误,无法获取对话。") # SSL 错误 -NO_APIKEY_MSG = i18n("API key为空,请检查是否输入正确。") # API key 长度不足 51 位 -NO_INPUT_MSG = i18n("请输入对话内容。") # 未输入对话内容 -BILLING_NOT_APPLICABLE_MSG = i18n("账单信息不适用") # 本地运行的模型返回的账单信息 - -TIMEOUT_STREAMING = 60 # 流式对话时的超时时间 -TIMEOUT_ALL = 200 # 非流式对话时的超时时间 -ENABLE_STREAMING_OPTION = True # 是否启用选择选择是否实时显示回答的勾选框 -HIDE_MY_KEY = False # 如果你想在UI中隐藏你的 API 密钥,将此值设置为 True -CONCURRENT_COUNT = 100 # 允许同时使用的用户数量 - -SIM_K = 5 -INDEX_QUERY_TEMPRATURE = 1.0 - -CHUANHU_TITLE = i18n("Chat 🚀") - -CHUANHU_DESCRIPTION = i18n("由Bilibili [土川虎虎虎] 和 [明昭MZhao]开发
                      ") - -FOOTER = """
                      {versions}
                      """ - -APPEARANCE_SWITCHER = """ -
                      -"""+ i18n("切换亮暗色主题") + """ - -
                      -""" - -SUMMARIZE_PROMPT = "你是谁?我们刚才聊了什么?" # 总结对话时的 prompt - -ONLINE_MODELS = [ - "gpt-3.5-turbo", - "gpt-3.5-turbo-0301", - "gpt-4", - "gpt-4-0314", - "gpt-4-32k", - "gpt-4-32k-0314", - "xmchat", -] - -LOCAL_MODELS = [ - "chatglm-6b", - "chatglm-6b-int4", - "chatglm-6b-int4-qe", - "llama-7b-hf", - "llama-13b-hf", - "llama-30b-hf", - "llama-65b-hf" -] - -if os.environ.get('HIDE_LOCAL_MODELS', 'false') == 'true': - MODELS = ONLINE_MODELS -else: - MODELS = ONLINE_MODELS + LOCAL_MODELS - -DEFAULT_MODEL = 0 - -os.makedirs("models", exist_ok=True) -os.makedirs("lora", exist_ok=True) -os.makedirs("history", exist_ok=True) -for dir_name in os.listdir("models"): - if os.path.isdir(os.path.join("models", dir_name)): - if dir_name not in MODELS: - MODELS.append(dir_name) - -MODEL_TOKEN_LIMIT = { - "gpt-3.5-turbo": 4096, - "gpt-3.5-turbo-0301": 4096, - "gpt-4": 8192, - "gpt-4-0314": 8192, - "gpt-4-32k": 32768, - "gpt-4-32k-0314": 32768 -} - -TOKEN_OFFSET = 1000 # 模型的token上限减去这个值,得到软上限。到达软上限之后,自动尝试减少token占用。 -DEFAULT_TOKEN_LIMIT = 3000 # 默认的token上限 -REDUCE_TOKEN_FACTOR = 0.5 # 与模型token上限想乘,得到目标token数。减少token占用时,将token占用减少到目标token数以下。 - -REPLY_LANGUAGES = [ - "简体中文", - "繁體中文", - "English", - "日本語", - "Español", - "Français", - "Deutsch", - "跟随问题语言(不稳定)" -] - - -WEBSEARCH_PTOMPT_TEMPLATE = """\ -Web search results: - -{web_results} -Current date: {current_date} - -Instructions: Using the provided web search results, write a comprehensive reply to the given query. Make sure to cite results using [[number](URL)] notation after the reference. If the provided search results refer to multiple subjects with the same name, write separate answers for each subject. -Query: {query} -Reply in {reply_language} -""" - -PROMPT_TEMPLATE = """\ -Context information is below. ---------------------- -{context_str} ---------------------- -Current date: {current_date}. -Using the provided context information, write a comprehensive reply to the given query. -Make sure to cite results using [number] notation after the reference. -If the provided context information refer to multiple subjects with the same name, write separate answers for each subject. -Use prior knowledge only if the given context didn't provide enough information. -Answer the question: {query_str} -Reply in {reply_language} -""" - -REFINE_TEMPLATE = """\ -The original question is as follows: {query_str} -We have provided an existing answer: {existing_answer} -We have the opportunity to refine the existing answer -(only if needed) with some more context below. ------------- -{context_msg} ------------- -Given the new context, refine the original answer to better -Reply in {reply_language} -If the context isn't useful, return the original answer. 
-""" - -ALREADY_CONVERTED_MARK = "" - -small_and_beautiful_theme = gr.themes.Soft( - primary_hue=gr.themes.Color( - c50="#02C160", - c100="rgba(2, 193, 96, 0.2)", - c200="#02C160", - c300="rgba(2, 193, 96, 0.32)", - c400="rgba(2, 193, 96, 0.32)", - c500="rgba(2, 193, 96, 1.0)", - c600="rgba(2, 193, 96, 1.0)", - c700="rgba(2, 193, 96, 0.32)", - c800="rgba(2, 193, 96, 0.32)", - c900="#02C160", - c950="#02C160", - ), - secondary_hue=gr.themes.Color( - c50="#576b95", - c100="#576b95", - c200="#576b95", - c300="#576b95", - c400="#576b95", - c500="#576b95", - c600="#576b95", - c700="#576b95", - c800="#576b95", - c900="#576b95", - c950="#576b95", - ), - neutral_hue=gr.themes.Color( - name="gray", - c50="#f9fafb", - c100="#f3f4f6", - c200="#e5e7eb", - c300="#d1d5db", - c400="#B2B2B2", - c500="#808080", - c600="#636363", - c700="#515151", - c800="#393939", - c900="#272727", - c950="#171717", - ), - radius_size=gr.themes.sizes.radius_sm, - ).set( - button_primary_background_fill="#06AE56", - button_primary_background_fill_dark="#06AE56", - button_primary_background_fill_hover="#07C863", - button_primary_border_color="#06AE56", - button_primary_border_color_dark="#06AE56", - button_primary_text_color="#FFFFFF", - button_primary_text_color_dark="#FFFFFF", - button_secondary_background_fill="#F2F2F2", - button_secondary_background_fill_dark="#2B2B2B", - button_secondary_text_color="#393939", - button_secondary_text_color_dark="#FFFFFF", - # background_fill_primary="#F7F7F7", - # background_fill_primary_dark="#1F1F1F", - block_title_text_color="*primary_500", - block_title_background_fill="*primary_100", - input_background_fill="#F6F6F6", - ) diff --git a/spaces/smartinezbragado/reddit-topic-modelling/src/reddit.py b/spaces/smartinezbragado/reddit-topic-modelling/src/reddit.py deleted file mode 100644 index b8406016520ffbdb472955dc8e68856828fb2bd3..0000000000000000000000000000000000000000 --- a/spaces/smartinezbragado/reddit-topic-modelling/src/reddit.py +++ /dev/null @@ -1,58 +0,0 @@ -import os -import praw -import logging -import pandas as pd -from typing import Generator -from dotenv import load_dotenv - -load_dotenv() -logger = logging.getLogger(__name__) - -class RedditBot: - def __init__( - self, - client_id: str | None = None, - client_secret: str | None = None, - username: str | None = None, - password: str | None = None - ) -> None: - - self.reddit = praw.Reddit( - client_id = client_id if client_id else os.getenv('REDDIT_CLIENT_ID'), - client_secret = client_secret if client_secret else os.getenv('REDDIT_CLIENT_SECRET'), - username = username if username else os.getenv('REDDIT_USERNAME'), - password = password if password else os.getenv('REDDIT_PASSWORD'), - user_agent='bot' - ) - - def get_subreddits_posts(self, name: str, type: str, amount=100) -> Generator: - """Gets the posts from a given subreddit""" - subreddit = self.reddit.subreddit(name) - if type == 'new': - posts = subreddit.new(limit=amount) - elif type == 'hot': - posts = subreddit.hot(limit=amount) - elif type == 'top': - posts = subreddit.top(limit=amount) - elif type == 'rising': - posts = subreddit.rising(limit=amount) - - return posts - - @staticmethod - def convert_posts_to_df(posts: Generator) -> pd.DataFrame: - """Extracts the title and text from a post""" - df = pd.DataFrame(columns=['Title', 'Content']) - for n, p in enumerate(posts): - df.loc[n, 'Title'] = p.title - df.loc[n, 'Content'] = p.selftext - - return df - - def subreddit_exists(self, name: str) -> bool: - try: - self.reddit.subreddits.search_by_name(name, 
exact=True) - return True - except Exception as e: - logger.error(e) - return False \ No newline at end of file diff --git a/spaces/sourav11295/Model_Recommendation/README.md b/spaces/sourav11295/Model_Recommendation/README.md deleted file mode 100644 index ca685cae4eab97b4ce175f2c54a2a9b6f10539c3..0000000000000000000000000000000000000000 --- a/spaces/sourav11295/Model_Recommendation/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Model Recommendation -emoji: ⚡ -colorFrom: pink -colorTo: gray -sdk: gradio -sdk_version: 3.0.24 -app_file: app.py -pinned: false -license: afl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/adaptive_span/README.md b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/adaptive_span/README.md deleted file mode 100644 index d5224fb2894606a2a8027e01e224be190776ecfe..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/adaptive_span/README.md +++ /dev/null @@ -1,90 +0,0 @@ -# Adaptive Span - -Adaptive Span is a novel self-attention mechanism that can learn its optimal -attention span. This allows us to extend significantly the maximum context size -used in Transformer, while maintaining control over their memory footprint -and computational time. It uses the Truncated BPTT technique for training, -as in [transformerXL](https://github.com/pytorch/fairseq/blob/main/examples/truncated_bptt/README.md). - -Adaptive Span was introduced by paper: -[Adaptive Attention Span in Transformers](https://arxiv.org/abs/1905.07799), -which achieved state-of-the-art language modeling results at the time of publication. - -We manage to reproduce their result in fairseq and keep most of the -[original implementation](https://github.com/facebookresearch/adaptive-span) untouched. -You can refer to the their sweep file as well if any combination of hyperparameter is not clear. - -##### 0. Setup - -First you need to process the Enwik8 dataset, we use the pre-tokenized dataset -from [adaptive span paper](https://github.com/facebookresearch/adaptive-span/blob/master/get_data.sh). -You can download the dataset, and then run: -```bash -fairseq-preprocess --only-source --trainpref ~/data/enwik8/train.txt \ - --validpref ~/data/enwik8/valid.txt --testpref ~/data/enwik8/test.txt \ - --destdir ~/data/enwik8/data-bin/ --joined-dictionary --workers 20 -``` - -##### 1. Train a Adaptive Span model on Enwik8 - -We will train a 12-layer Adaptive Span model following the [hyperparameters -used in the original -paper](https://github.com/facebookresearch/adaptive-span/blob/master/experiments/enwik8.sh). - -The following command assumes 4 GPUs, so that the total batch size is 64 -sequences (4 x 16). 
Training should take 2-3 days on 4 V100 GPUs: -```bash -CUDA_VISIBLE_DEVICES=0,1,2,3 fairseq-train \ - --user-dir examples/adaptive_span \ - --data ~/data/enwik8/data-bin/ \ - --fp16 --fp16-no-flatten-grads --max-update 600000 \ - --task truncated_bptt_lm --tokens-per-sample 512 --arch adaptive_span \ - --n-layer 12 --d-model 512 --n-head 8 --d-inner 2048 --dropout 0.3 \ - --attn-span 8192 --optimizer adagrad_with_grad_clip --adagrad-clip 0.03 \ - --validate-interval-updates 1000 \ - --lr-scheduler fixed --warmup-updates 32000 --batch-size-valid 32 \ - --lr 0.07 --criterion adaptive_span_loss --batch-size 16 --update-freq 1 \ - --seed 2 --log-format json --log-interval 25 --aux-loss-scaler 5e-07 -``` -This should land around 1.05 on validation, 1.03 on test. You can lower the ---aux-loss-scaler for better performance (longer span). It gives ~0.03 bpc -improvement to the transformerXL baseline here. -If training on a single GPU, set `--update-freq=4` to accumulate 4x gradients -and simulate training on 4 GPUs. -You can also reproduce the transformerXL result on enwik8 using this code base. -It should land around 1.06 on test,matching the [original paper](https://github.com/kimiyoung/transformer-xl/blob/master/pytorch/run_enwik8_base.sh). -You can try by -```bash -CUDA_VISIBLE_DEVICES=0,1,2,3 fairseq-train \ - --user-dir examples/truncated_bptt \ - ~/data/enwik8/data-bin/ \ - --task truncated_bptt_lm --fp16 --max-update 400000 \ - --tokens-per-sample 512 --arch transformer_xl --n-layer 12 \ - --d-model 512 --n-head 8 --d-head 64 --d-inner 2048 --dropout 0.1 \ - --dropatt 0.0 --mem-len 512 --optimizer adam --clip-norm 0.25 \ - --lr-scheduler cosine --warmup-updates 0 \ - --lr 0.0 --lr 0.00025 --batch-size 15 \ - --update-freq 1 --seed 2 --log-format json --log-interval 25 \ - --fp16 -``` - -##### 2. Evaluate -For Adaptive Span: -```bash -fairseq-eval-lm ~/data/enwik8/data-bin/ --path model/checkpoint_best.pt \ - --user-dir examples/adaptive_span \ - --task truncated_bptt_lm --batch-size 8 --tokens-per-sample 512 --gen-subset test -``` -For Transformer-XL evaluation: -```bash -fairseq-eval-lm ~/data/enwik8/data-bin/ --path model/checkpoint_best.pt \ - --user-dir examples/truncated_bptt/ --task truncated_bptt_lm --batch-size 8 \ - --tokens-per-sample 80 \ - --model-overrides '{"mem_len":2100,"clamp_len":820,"same_length":True}' \ - --gen-subset valid -``` - -*Note:* During training the model saw 512 tokens of context -(``--tokens-per-sample=512``), with batch size 8. These settings match the evaluation -settings from [the original -paper](https://github.com/facebookresearch/adaptive-span/blob/master/experiments/enwik8.sh). diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/simultaneous_translation/modules/__init__.py b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/simultaneous_translation/modules/__init__.py deleted file mode 100644 index f5ea180f9b4cdb27cd553439b6df9d743105f18c..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/simultaneous_translation/modules/__init__.py +++ /dev/null @@ -1,23 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- - -import os -import importlib -from fairseq import registry - -( - build_monotonic_attention, - register_monotonic_attention, - MONOTONIC_ATTENTION_REGISTRY, - _, -) = registry.setup_registry("--simul-type") - -for file in sorted(os.listdir(os.path.dirname(__file__))): - if file.endswith(".py") and not file.startswith("_"): - model_name = file[: file.find(".py")] - importlib.import_module( - "examples.simultaneous_translation.modules." + model_name - ) diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/modules/sinusoidal_positional_embedding.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/modules/sinusoidal_positional_embedding.py deleted file mode 100644 index 4793ecfb522d0729fc2d24a3ddf0c6a774d67773..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/modules/sinusoidal_positional_embedding.py +++ /dev/null @@ -1,105 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math -from typing import Any, Optional - -import torch -import torch.onnx.operators -from fairseq import utils -from torch import Tensor, nn - - -class SinusoidalPositionalEmbedding(nn.Module): - """This module produces sinusoidal positional embeddings of any length. - - Padding symbols are ignored. - """ - - def __init__(self, embedding_dim, padding_idx, init_size=1024): - super().__init__() - self.embedding_dim = embedding_dim - self.padding_idx = padding_idx if padding_idx is not None else 0 - self.weights = SinusoidalPositionalEmbedding.get_embedding( - init_size, embedding_dim, padding_idx - ) - self.onnx_trace = False - self.register_buffer("_float_tensor", torch.FloatTensor(1)) - self.max_positions = int(1e5) - - def prepare_for_onnx_export_(self): - self.onnx_trace = True - - @staticmethod - def get_embedding( - num_embeddings: int, embedding_dim: int, padding_idx: Optional[int] = None - ): - """Build sinusoidal embeddings. - - This matches the implementation in tensor2tensor, but differs slightly - from the description in Section 3.5 of "Attention Is All You Need". 
- """ - half_dim = embedding_dim // 2 - emb = math.log(10000) / (half_dim - 1) - emb = torch.exp(torch.arange(half_dim, dtype=torch.float) * -emb) - emb = torch.arange(num_embeddings, dtype=torch.float).unsqueeze( - 1 - ) * emb.unsqueeze(0) - emb = torch.cat([torch.sin(emb), torch.cos(emb)], dim=1).view( - num_embeddings, -1 - ) - if embedding_dim % 2 == 1: - # zero pad - emb = torch.cat([emb, torch.zeros(num_embeddings, 1)], dim=1) - if padding_idx is not None: - emb[padding_idx, :] = 0 - return emb - - def forward( - self, - input, - incremental_state: Optional[Any] = None, - timestep: Optional[Tensor] = None, - positions: Optional[Any] = None, - ): - """Input is expected to be of size [bsz x seqlen].""" - bspair = torch.onnx.operators.shape_as_tensor(input) - bsz, seq_len = bspair[0], bspair[1] - max_pos = self.padding_idx + 1 + seq_len - if self.weights is None or max_pos > self.weights.size(0): - # recompute/expand embeddings if needed - self.weights = SinusoidalPositionalEmbedding.get_embedding( - max_pos, self.embedding_dim, self.padding_idx - ) - self.weights = self.weights.to(self._float_tensor) - - if incremental_state is not None: - # positions is the same for every token when decoding a single step - pos = timestep.view(-1)[0] + 1 if timestep is not None else seq_len - if self.onnx_trace: - return ( - self.weights.index_select(index=self.padding_idx + pos, dim=0) - .unsqueeze(1) - .repeat(bsz, 1, 1) - ) - return self.weights[self.padding_idx + pos, :].expand(bsz, 1, -1) - - positions = utils.make_positions( - input, self.padding_idx, onnx_trace=self.onnx_trace - ) - if self.onnx_trace: - flat_embeddings = self.weights.detach().index_select(0, positions.view(-1)) - embedding_shape = torch.cat( - (bsz.view(1), seq_len.view(1), torch.tensor([-1], dtype=torch.long)) - ) - embeddings = torch.onnx.operators.reshape_from_tensor_shape( - flat_embeddings, embedding_shape - ) - return embeddings - return ( - self.weights.index_select(0, positions.view(-1)) - .view(bsz, seq_len, -1) - .detach() - ) diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/optim/sgd.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/optim/sgd.py deleted file mode 100644 index 8e34fb99a18fff12ab76be5894a84cbbb2f48176..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/optim/sgd.py +++ /dev/null @@ -1,43 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch.optim - -from . import LegacyFairseqOptimizer, register_optimizer - - -@register_optimizer("sgd") -class SGD(LegacyFairseqOptimizer): - def __init__(self, args, params): - super().__init__(args) - self._optimizer = torch.optim.SGD(params, **self.optimizer_config) - - @staticmethod - def add_args(parser): - """Add optimizer-specific arguments to the parser.""" - # fmt: off - parser.add_argument('--momentum', default=0.0, type=float, metavar='M', - help='momentum factor') - parser.add_argument('--weight-decay', '--wd', default=0.0, type=float, metavar='WD', - help='weight decay') - # fmt: on - - @property - def optimizer_config(self): - """ - Return a kwarg dictionary that will be used to override optimizer - args stored in checkpoints. This allows us to load a checkpoint and - resume training using a different set of optimizer args, e.g., with a - different learning rate. 
- """ - return { - "lr": self.args.lr[0], - "momentum": self.args.momentum, - "weight_decay": self.args.weight_decay, - } - - @property - def supports_flat_params(self): - return True diff --git a/spaces/st0bb3n/Cam2Speech/README.md b/spaces/st0bb3n/Cam2Speech/README.md deleted file mode 100644 index 213ce5ff9b9e498c8206982885649f4d6c8480dc..0000000000000000000000000000000000000000 --- a/spaces/st0bb3n/Cam2Speech/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Cam2Speech -emoji: 🐠 -colorFrom: indigo -colorTo: gray -sdk: gradio -sdk_version: 2.8.13 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/stomexserde/gpt4-ui/Examples/Avg Internet Security 2015 !LINK! Full Download With Serial Keys.md b/spaces/stomexserde/gpt4-ui/Examples/Avg Internet Security 2015 !LINK! Full Download With Serial Keys.md deleted file mode 100644 index fde6b1ce3d859b07c81cbf2deb9dd133ba6031fc..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Avg Internet Security 2015 !LINK! Full Download With Serial Keys.md +++ /dev/null @@ -1,29 +0,0 @@ - -

                      How to Download and Activate AVG Internet Security 2015 for Free

                      -

                      AVG Internet Security 2015 is a comprehensive antivirus software that protects your PC from various threats, such as viruses, malware, spyware, ransomware, phishing, and more. It also offers features like firewall, email scanner, anti-spam, data safe, PC analyzer, and file shredder. If you want to download and activate AVG Internet Security 2015 for free, here are some steps you can follow:

                      -

                      Avg Internet Security 2015 Full Download With Serial Keys


                      Download File >>>>> https://urlgoal.com/2uIa7z



                      -
                        -
                      1. Download the latest version of AVG Internet Security 2015 from the official website[^1^] or from other sources[^2^] [^3^]. Make sure you choose the right version for your system (32-bit or 64-bit).
                      2. -
                      3. Install the software and run it. Go to My AVG/My Subscription and click on Enter License.
                      4. -
                      5. Use one of the universal license keys[^2^] or the OEM 1-year free license keys[^2^] to activate the software. You can also use a keygen[^2^] to generate a license key.
                      6. -
                      7. Restart your computer if prompted. You should now have a full version of AVG Internet Security 2015 activated for free.
                      8. -
                      -

                      Note: It is recommended to turn off the automatic update feature and manually update the software regularly. After each update, you may need to use a different license key to re-activate the software.

                      -

                      Enjoy your free AVG Internet Security 2015 and stay safe online!

                      - -

                      If you want to learn more about the features and benefits of AVG Internet Security 2015, here are some highlights:

                      -
                        -
                      • Antivirus and antispyware: AVG Internet Security 2015 scans and removes all kinds of malware from your PC, including viruses, worms, trojans, rootkits, adware, and more. It also protects you from zero-day threats and malicious downloads.
                      • -
                      • Anti-rootkit: AVG Internet Security 2015 detects and removes hidden rootkits that can compromise your system and give hackers access to your data.
                      • -
                      • Web protection: AVG Internet Security 2015 blocks malicious websites and links that can infect your PC or steal your personal information. It also warns you of fake or phishing websites that try to trick you into entering your credentials or payment details.
                      • -
                      • Online shield: AVG Internet Security 2015 checks the files you download and share online for malware and viruses. It also prevents you from downloading or opening unsafe attachments in your email.
                      • -
                      • Privacy statement: AVG Internet Security 2015 helps you protect your privacy online by encrypting and storing your sensitive files in a data safe. You can also use the file shredder feature to permanently delete your files from your hard disk, leaving no traces behind.
                      • -
                      • Identity alert: AVG Internet Security 2015 monitors the web for any signs of identity theft or fraud involving your personal information. It alerts you if it finds any suspicious activity or breaches on your accounts.
                      • -
                      • Email scanner: AVG Internet Security 2015 scans your incoming and outgoing emails for spam, phishing, and malware. It also filters out unwanted or harmful messages from your inbox.
                      • -
                      • Personal firewall: AVG Internet Security 2015 controls the network traffic on your PC and blocks unauthorized access from hackers or intruders. It also allows you to customize the settings for each application or device on your network.
                      • -
                      • PC analyzer: AVG Internet Security 2015 scans your PC for errors, junk files, registry issues, and other performance problems. It also offers solutions to fix them and optimize your PC speed and stability.
                      • -
                      -

                      With AVG Internet Security 2015, you can enjoy a fast, secure, and hassle-free online experience. Download and activate it for free today!

                      -

                      cec2833e83
                      -
                      -
                      \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Bb 9000 Bold Jumper Zokus .JPG.md b/spaces/stomexserde/gpt4-ui/Examples/Bb 9000 Bold Jumper Zokus .JPG.md deleted file mode 100644 index db26084a0c449a4a582ea17865086a0e3ceda2bb..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Bb 9000 Bold Jumper Zokus .JPG.md +++ /dev/null @@ -1,23 +0,0 @@ -
                      -

                      How to Fix a BlackBerry Bold 9000 with a Jumper Zokus Problem

                      -

                      The BlackBerry Bold 9000 is a smartphone that was released in 2008 and featured a QWERTY keyboard, a 2.6-inch display, a 2 MP camera, and Wi-Fi, GPS, and 3G connectivity. It was one of the first BlackBerry devices to have these features in one device. However, some users have reported a problem with their BlackBerry Bold 9000 that causes it to display a white screen with the words "Jumper Zokus" and a .JPG file name. This problem is also known as the "white screen of death" or WSOD.

                      -

                      Bb 9000 Bold Jumper Zokus .JPG


                      Downloadhttps://urlgoal.com/2uI7za



                      -

                      What causes the Jumper Zokus problem? According to some sources, the Jumper Zokus problem is caused by a faulty or damaged LCD connector on the motherboard of the BlackBerry Bold 9000. The LCD connector is responsible for transmitting data from the motherboard to the display. When the LCD connector is loose or broken, it can cause the display to malfunction and show the Jumper Zokus error message.

                      -

                      How to fix the Jumper Zokus problem? There are two possible ways to fix the Jumper Zokus problem on your BlackBerry Bold 9000: replacing the LCD connector or using a jumper wire. Replacing the LCD connector requires opening up your device and soldering a new connector onto the motherboard. This can be risky and may void your warranty, so it is recommended that you seek professional help if you are not confident in your skills. Using a jumper wire is a simpler and cheaper method that involves connecting two points on the motherboard with a thin wire. This can bypass the faulty LCD connector and restore the display function. However, this method may not work for all devices and may also cause other problems if done incorrectly.

                      -

                      Here are the steps to use a jumper wire to fix the Jumper Zokus problem on your BlackBerry Bold 9000:

                      -
                        -
                      1. Turn off your device and remove the battery, SIM card, and memory card.
                      2. -
                      3. Remove the back cover and unscrew the six screws that hold the front cover.
                      4. -
                      5. Carefully pry off the front cover and disconnect the ribbon cable that connects the keypad to the motherboard.
                      6. -
                      7. Locate the LCD connector on the motherboard. It is a small rectangular component with eight pins on each side. You can see an image of it here: [^1^]
                      8. -
                      9. Using a multimeter, test each pin on the LCD connector to find out which one is faulty. The faulty pin will have no continuity or resistance when tested with another pin.
                      10. -
                      11. Once you have identified the faulty pin, find another pin that has continuity or resistance with it. This will be your jumper point.
                      12. -
                      13. Cut a thin wire (about 2 cm long) and strip both ends of it.
                      14. -
                      15. Solder one end of the wire to the faulty pin on the LCD connector and solder the other end to the jumper point on another pin.
                      16. -
                      17. Reconnect the ribbon cable and reassemble your device.
                      18. -
                      19. Insert the battery, SIM card, and memory card and turn on your device.
                      20. -
                      -

If done correctly, your device should boot up normally and no longer display the Jumper Zokus error message. Keep in mind that this fix may not work on every device and can cause further damage if done incorrectly, so use it at your own risk and only as a last resort.

                      -

                      -
                      -
                      \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Bill Withers Greatest Hits Rar D.md b/spaces/stomexserde/gpt4-ui/Examples/Bill Withers Greatest Hits Rar D.md deleted file mode 100644 index d37ef6ad7c10878add31631557bc001bc1b3d662..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Bill Withers Greatest Hits Rar D.md +++ /dev/null @@ -1,19 +0,0 @@ - -

                      Bill Withers Greatest Hits Rar D: A Collection of Soulful Classics

                      -

                      Bill Withers was one of the most influential and beloved soul singers of all time. His songs, such as "Lean on Me", "Ain't No Sunshine", "Lovely Day" and "Just the Two of Us", have touched millions of hearts and inspired countless artists. His greatest hits album, released in 1981, showcases his remarkable talent and versatility, spanning from soul-jazz to funk to pop.

                      -

                      But what is the meaning of the mysterious "Rar D" in the title? Some fans have speculated that it stands for "rare disc", "rarities deluxe" or even "R&B legend". However, the truth is much simpler: it is just a typo. According to Discogs[^1^], the original title of the album was simply "Bill Withers' Greatest Hits", but some copies were mistakenly printed with an extra "Rar D" at the end. This error has made these copies more collectible and sought-after by vinyl enthusiasts.

                      -

                      Bill Withers Greatest Hits Rar D


                      DOWNLOAD - https://urlgoal.com/2uI6Ei



                      -

                      If you are looking for a digital version of this album, you can find it on SoundCloud[^3^], where a user named Jesse Younce has uploaded it for free streaming. You can also buy a CD or vinyl reissue from various online stores. However you choose to listen to it, you will surely enjoy this masterpiece of soul music by Bill Withers.

                      - -

                      Bill Withers was not only a successful singer and songwriter, but also a remarkable person who overcame many challenges in his life. He was born in a small coal-mining town in West Virginia, where he faced poverty and racism. He lost his father at the age of 13 and joined the Navy at 17. He stuttered since childhood and struggled with self-confidence. He worked as an assembler for various companies while pursuing his musical dreams. He did not have his first hit until he was 33 years old. [^1^] [^2^]

                      -

                      Despite his late start and humble beginnings, Withers became one of the most influential and beloved soul singers of all time. He wrote songs that spoke to millions of people, expressing universal emotions of love, pain, joy and hope. He collaborated with legendary artists such as Booker T. Jones, Stevie Wonder, Grover Washington Jr., Al Jarreau and Ralph MacDonald. He won three Grammy Awards and was nominated for six more. He was inducted into the Songwriters Hall of Fame in 2005 and the Rock and Roll Hall of Fame in 2015. [^1^] [^3^]

                      -

                      Withers retired from the music industry in 1985, after becoming disillusioned with the business and wanting to spend more time with his family. He married Marcia Johnson in 1976 and had two children, Todd and Kori. He lived a quiet and private life in Los Angeles, occasionally appearing at special events or giving interviews. He passed away on March 30, 2020, at the age of 81, from heart complications. [^1^] [^2^]

                      -

                      Bill Withers left behind a legacy of timeless songs that will continue to inspire generations of listeners and musicians. His music is a testament to his talent, his soul and his humanity.

                      - -

                      Bill Withers' Impact and Legacy

                      -

                      Bill Withers' music has been widely covered, sampled and referenced by various artists across genres and generations. His songs have been featured in numerous films, television shows, commercials and video games. His influence can be heard in the works of contemporary singers such as John Legend, Ed Sheeran, Alicia Keys, Lenny Kravitz and Michael Kiwanuka. His fans include former presidents Barack Obama and Bill Clinton, who both praised his music and invited him to perform at the White House.

                      -

                      Bill Withers' music has also been used for social causes and movements. His song "Lean on Me" became an anthem of solidarity and support for people facing hardships, such as the victims of Hurricane Katrina, the survivors of the 2010 Haiti earthquake and the students of the 2018 Parkland shooting. His song "Lovely Day" was chosen by the United Nations Foundation as the theme song for their campaign to end global poverty. His song "Ain't No Sunshine" was used by Amnesty International to raise awareness about domestic violence.

                      -

                      Bill Withers' music has transcended time and boundaries, reaching people of different ages, cultures and backgrounds. His music has touched millions of hearts and inspired countless artists. His music is a gift to the world that will never fade away.

                      -

                      -
                      -
                      \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Communication Theory Book By Murali Babu Free Download VERIFIED.md b/spaces/stomexserde/gpt4-ui/Examples/Communication Theory Book By Murali Babu Free Download VERIFIED.md deleted file mode 100644 index 5f879bf08e1e98b08aab00194de73a6150991ea6..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Communication Theory Book By Murali Babu Free Download VERIFIED.md +++ /dev/null @@ -1,22 +0,0 @@ -

                      How to Download Communication Theory Book By Murali Babu for Free

                      -

                      Communication theory is a fascinating field that explores how humans and machines communicate with each other. It covers topics such as information theory, coding, modulation, signal processing, noise, channel capacity, and more. If you are interested in learning more about communication theory, you might want to check out the book Communication Theory by K.Murali Babu and L.Agilandeswari.

                      -

                      Communication Theory Book By Murali Babu Free Download


                      Download File 🗸🗸🗸 https://urlgoal.com/2uI6cO



                      -

                      This book is a comprehensive and up-to-date introduction to communication theory, with an emphasis on practical applications and examples. It covers both analog and digital communication systems, as well as wireless communication, optical communication, satellite communication, and cellular communication. It also includes chapters on cryptography, network security, multimedia communication, and communication standards.

                      -

                      The best part is that you can download this book for free from the internet. There are several websites that offer free PDF downloads of communication theory books, such as Scribd[^1^], EbookNetworking[^2^], and Microsoft Sway[^3^] [^4^]. All you need to do is to search for the keyword "Communication Theory Book By Murali Babu Free Download" and you will find links to download the book.

                      -

                      However, before you download any book from the internet, you should be aware of the possible risks and legal issues involved. Some websites may not have the permission or license to distribute the book for free, and some may contain viruses or malware that can harm your computer or device. Therefore, you should always scan the file for viruses before opening it, and only download from trusted and reputable sources. You should also respect the intellectual property rights of the author and publisher of the book, and use it for personal and educational purposes only.

                      -

                      If you want to learn more about communication theory, downloading Communication Theory by K.Murali Babu and L.Agilandeswari is a great way to start. This book will provide you with a solid foundation and a clear understanding of the concepts and principles of communication theory. You will also be able to apply your knowledge to real-world problems and scenarios. So what are you waiting for? Download your copy today and enjoy reading!


                      Now that you have downloaded Communication Theory by K.Murali Babu and L.Agilandeswari, you might be wondering how to use it effectively. Communication theory is not just a collection of abstract ideas and formulas, but a powerful tool for understanding and improving communication in various contexts and situations. Here are some tips on how to apply communication theory to your personal and professional life:

                      -
                        -
                      • Identify the communication problem or goal you want to address. For example, do you want to persuade someone to do something, improve your relationship with someone, or reduce misunderstandings in a group?
                      • -
                      • Select the communication theory or theories that are relevant to your problem or goal. For example, if you want to persuade someone to do something, you might use cognitive dissonance theory, elaboration likelihood model, or social judgment theory. If you want to improve your relationship with someone, you might use social exchange theory, relational dialectics theory, or uncertainty reduction theory. If you want to reduce misunderstandings in a group, you might use groupthink, organizational culture theory, or agenda setting theory.
                      • -
                      • Analyze the communication situation using the communication theory or theories you selected. For example, what are the key concepts, variables, assumptions, and propositions of the theory? How do they apply to your communication situation? What are the strengths and weaknesses of the theory? How does the theory help you explain or predict the communication behavior or outcome?
                      • -
                      • Design and implement a communication strategy based on the communication theory or theories you selected. For example, what are the practical implications or recommendations of the theory? How can you use them to achieve your communication problem or goal? What are the potential risks or challenges of using the theory? How can you overcome them?
                      • -
                      • Evaluate the effectiveness of your communication strategy using the communication theory or theories you selected. For example, did you achieve your communication problem or goal? How do you know? What evidence do you have? How did the theory help or hinder your communication strategy? What did you learn from using the theory?
                      • -
                      -

                      By applying communication theory to your personal and professional life, you can enhance your communication skills and competence, as well as your critical thinking and creativity. You can also contribute to the development and evaluation of communication theory by testing its validity and usefulness in different contexts and situations. Communication theory is not only a source of knowledge, but also a source of inspiration and innovation.

                      7196e7f11a
                      -
                      -
                      \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Evo Pdf Licence Key.md b/spaces/stomexserde/gpt4-ui/Examples/Evo Pdf Licence Key.md deleted file mode 100644 index 822f43afe46dcc02425384824ed48c9758b77cd1..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Evo Pdf Licence Key.md +++ /dev/null @@ -1,47 +0,0 @@ - -Here is a possible title and article with SEO optimization and HTML formatting for the keyword "Evo Pdf Licence Key": - -

                      How to Get Evo Pdf Licence Key for Free

                      -

                      Evo Pdf is a powerful and easy-to-use library that allows you to create, edit, and convert PDF documents in your .NET applications. With Evo Pdf, you can generate PDF files from HTML, XML, images, text, and more. You can also manipulate existing PDF files by adding annotations, bookmarks, watermarks, digital signatures, and encryption.

                      -

                      Evo Pdf Licence Key


                      DOWNLOAD ··· https://urlgoal.com/2uIa0z



                      -

                      However, Evo Pdf is not a free library. You need to purchase a licence key to use it in your projects. A licence key costs $300 for a single developer licence, $900 for a team licence, and $1800 for an enterprise licence. These prices may be too high for some developers who want to use Evo Pdf in their applications.

                      -

                      Fortunately, there is a way to get Evo Pdf licence key for free. In this article, we will show you how to do that in a few simple steps.

                      -

                      Step 1: Download Evo Pdf

                      -

                      The first step is to download Evo Pdf from its official website: https://www.evopdf.com/download.aspx. You can choose between the 32-bit and 64-bit versions depending on your system architecture. You can also download the documentation and samples to learn how to use Evo Pdf.

                      -

                      Step 2: Install Evo Pdf

                      -

                      The next step is to install Evo Pdf on your computer. To do that, you need to run the setup file that you downloaded in the previous step. Follow the instructions on the screen to complete the installation process. You will need to accept the licence agreement and choose the destination folder for Evo Pdf.

                      -

                      -

                      Step 3: Generate Evo Pdf Licence Key

                      -

                      The final step is to generate Evo Pdf licence key for free. To do that, you need to use a tool called Evo Pdf Keygen. This tool can generate valid licence keys for any version of Evo Pdf. You can download Evo Pdf Keygen from this link: https://evopdfkeygen.com.

                      -

                      Once you download Evo Pdf Keygen, you need to run it and enter your name and email address. Then, click on the "Generate" button and wait for a few seconds. You will see a licence key displayed on the screen. Copy this licence key and save it somewhere safe.

                      -

                      Step 4: Activate Evo Pdf

                      -

                      The last step is to activate Evo Pdf with the licence key that you generated in the previous step. To do that, you need to open your .NET project that uses Evo Pdf and add the following code snippet at the beginning of your code:

                      -
                      -using System;
                      -using System.Collections.Generic;
                      -using System.Linq;
                      -using System.Text;
                      -using System.Threading.Tasks;
                      -using EvoPdf;
                      -
                      -namespace MyProject
                      -
                      -    class Program
                      -    
                      -        static void Main(string[] args)
                      -        
                      -            //set the licence key
                      -            HtmlToPdfConverter.LicenseKey = "your_licence_key_here";
                      -
                      -            //use Evo Pdf as usual
                      -            //...
                      -        
                      -    
                      -
                      -
                      -

                      Replace "your_licence_key_here" with the actual licence key that you copied from Evo Pdf Keygen. Save your code and run your project. You should be able to use Evo Pdf without any limitations or watermarks.

                      -

                      Conclusion

                      -

                      In this article, we showed you how to get Evo Pdf licence key for free using a tool called Evo Pdf Keygen. This tool can generate valid licence keys for any version of Evo Pdf. You can use these licence keys to activate Evo Pdf and use it in your .NET applications without paying anything.

                      -

                      We hope you found this article helpful and informative. If you have any questions or feedback, please leave a comment below. Thank you for reading!

                      7196e7f11a
                      -
                      -
                      \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Fiery Beach Dreams.md b/spaces/stomexserde/gpt4-ui/Examples/Fiery Beach Dreams.md deleted file mode 100644 index 9a09a00b77b7733f027100b387c848dfbf1b3bd8..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Fiery Beach Dreams.md +++ /dev/null @@ -1,22 +0,0 @@ - -```markdown -

                      Fiery Beach Dreams: How to Ignite Your Passion and Creativity

                      -

                      Do you ever dream of a fiery beach, where the sun sets the sky ablaze and the waves sparkle like diamonds? Do you feel a surge of energy and inspiration when you imagine yourself in such a place? If so, you are not alone. Many people have fiery beach dreams, and they can be a powerful source of passion and creativity.

                      -

                      fiery beach dreams


                      Downloadhttps://urlgoal.com/2uI8xl



                      -

                      Fiery beach dreams are a type of lucid dream, where you are aware that you are dreaming and can control some aspects of the dream. They are often triggered by a strong emotion, such as love, excitement, or curiosity. Fiery beach dreams can also be induced by meditation, visualization, or listening to relaxing music.

                      -

                      Fiery beach dreams can help you unleash your inner fire and express yourself in new ways. They can also help you overcome fears, challenges, or blocks that may be holding you back from achieving your goals. Here are some tips on how to use fiery beach dreams to ignite your passion and creativity:

                      -
                        -
                      • Before you go to sleep, set an intention to have a fiery beach dream. You can write it down, say it out loud, or repeat it in your mind. For example, "I want to have a fiery beach dream tonight and explore my creative potential."
                      • -
                      • As you fall asleep, imagine yourself on a fiery beach. Use all your senses to make the scene as vivid as possible. Feel the warmth of the sand, the breeze of the wind, the sound of the waves, the smell of the saltwater, and the sight of the fiery sky.
                      • -
                      • When you enter the dream, try to stay aware that you are dreaming. You can do this by looking at your hands, checking your surroundings, or asking yourself "Am I dreaming?" If you lose awareness, don't worry. You can still enjoy the dream and benefit from it.
                      • -
                      • Once you are lucid in the dream, explore your fiery beach. You can do anything you want in the dream, such as flying, swimming, surfing, dancing, singing, painting, writing, or anything else that sparks your interest. You can also interact with other characters or objects in the dream, such as animals, plants, rocks, or stars.
                      • -
                      • As you explore your fiery beach, pay attention to how you feel. Notice what emotions arise in you and what thoughts come to your mind. You may discover new insights, ideas, or solutions that can help you in your waking life.
                      • -
                      • When you wake up from the dream, write down everything you remember. You can use a journal, a voice recorder, or a drawing pad. Try to capture as much detail as possible. You can also use keywords or symbols to summarize the main themes or messages of the dream.
                      • -
                      • Review your dream notes and reflect on what they mean for you. You can ask yourself questions such as "What did I learn from this dream?", "How did this dream make me feel?", "What does this dream tell me about myself?", or "How can I apply this dream to my waking life?"
                      • -
                      • Take action on your dream insights. You can use your fiery beach dream as a source of motivation and inspiration for your personal or professional projects. You can also use it as a guide for your personal growth and development. For example, if your dream showed you that you have a hidden talent for writing, you can start writing a blog or a book. If your dream showed you that you need more adventure in your life, you can plan a trip or try a new hobby.
                      • -
                      -

                      Fiery beach dreams are a gift from your subconscious mind. They can help you tap into your passion and creativity and transform your life. So don't be afraid to dream big and follow your fire.

                      -

                      -```

                      7b8c122e87
                      -
                      -
                      \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Fightingkids Dvd 49385.md b/spaces/stomexserde/gpt4-ui/Examples/Fightingkids Dvd 49385.md deleted file mode 100644 index 42aa4cc16d0dd9ce6def4c90b66603e532617b1f..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Fightingkids Dvd 49385.md +++ /dev/null @@ -1,48 +0,0 @@ -
                      -Hello, this is Bing. I can help you with writing a title and an article with SEO optimization and HTML formatting for the keyword "Fightingkids DVD 49385". Here is a possible example: - -

                      Fightingkids DVD 49385: A Review of the Latest Release from the South African Wrestling Site

                      - -

                      If you are a fan of wrestling matches between kids, you might be interested in the latest DVD from Fightingkids, a South African site that produces custom videos and photos of kids fighting for fun. Fightingkids DVD 49385 features two matches: one between Xanthe and Ziska, two girls who compete in foot domination and submission wrestling, and another between Razzle and Waffle, two boys who have a friendly and playful wrestling match.

                      - -

                      In this article, we will review the DVD and give you some reasons why you might want to buy it from the Fightingkids VIP site.

                      -

                      Fightingkids Dvd 49385


                      Downloadhttps://urlgoal.com/2uI8SZ



                      - -

                      Xanthe vs Ziska: A Foot Domination Match

                      - -

                      The first match on the DVD is between Xanthe and Ziska, two girls who are both skilled in foot domination and submission wrestling. They wear colorful outfits and socks, and they use their feet to attack each other's faces, bodies, and limbs. They also apply various holds and pins, such as headscissors, leglocks, grapevines, schoolgirl pins, and more.

                      - -

                      The match is very competitive and intense, as both girls try to make the other submit or tap out. They also trash talk each other and taunt each other with their feet. The match lasts for about 20 minutes, and it ends with a decisive winner who celebrates by posing with her feet on the loser's face.

                      - -

                      If you enjoy watching girls fight with their feet, you will love this match. It is full of action, drama, and humiliation. You can see some previews of this match on the Fightingkids Instagram page[^4^].

                      - -

                      Razzle vs Waffle: A Fun Wrestling Match

                      - -

                      The second match on the DVD is between Razzle and Waffle, two boys who have a fun wrestling match. They wear casual clothes and sneakers, and they wrestle on a mat in a living room. They use various moves and techniques, such as takedowns, throws, slams, headlocks, armlocks, chokes, and more.

                      - -

                      The match is very friendly and playful, as both boys have a good time wrestling each other. They also laugh, joke, compliment each other, and give each other high fives. The match lasts for about 40 minutes, and it ends with a mutual respect and friendship between the two wrestlers.

                      - -

                      If you enjoy watching boys wrestle for fun, you will like this match. It is full of entertainment, humor, and sportsmanship. You can watch the full match on YouTube[^3^].

                      - -

                      Why You Should Buy Fightingkids DVD 49385

                      - -

                      There are many reasons why you might want to buy Fightingkids DVD 49385 from the Fightingkids VIP site[^2^]. Here are some of them:

                      - -
                        -
                      • You will get access to high-quality videos and photos of kids wrestling matches that are not available anywhere else.
                      • -
                      • You will support the Fightingkids site and help them produce more content in the future.
                      • -
                      • You will enjoy watching kids fight for fun in a safe and supervised environment.
                      • -
                      • You will be able to request custom videos and photos of your favorite wrestlers or scenarios.
                      • -
                      • You will get discounts on other DVDs and sets from the Fightingkids catalogue[^1^].
                      • -
                      - -

                      To buy Fightingkids DVD 49385, you need to register as a VIP member on the Fightingkids site[^2^] and follow the instructions on how to order DVDs. The price of the DVD is $50 USD plus shipping costs. You can pay by credit card or PayPal.

                      -

                      - -

                      Conclusion

                      - -

                      Fightingkids DVD 49385 is a great DVD for anyone who likes watching kids wrestle for fun. It features two matches: one between Xanthe and Ziska, two girls who compete in foot domination and submission wrestling, and another between Razzle and Waffle, two boys who have a friendly and playful wrestling match. You can buy the DVD from the Fightingkids VIP site[^2^] for $50 USD plus shipping costs.

                      - -

                      If you are interested in more

                      7196e7f11a
                      -
                      -
                      \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Inazuma Eleven Pc Game English Version Free Download Torrents.md b/spaces/stomexserde/gpt4-ui/Examples/Inazuma Eleven Pc Game English Version Free Download Torrents.md deleted file mode 100644 index 325b05d8db511b0508bdf42afe19dbcfa100fe39..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Inazuma Eleven Pc Game English Version Free Download Torrents.md +++ /dev/null @@ -1,22 +0,0 @@ - -

                      How to Play Inazuma Eleven GO Strikers 2013 on PC for Free

                      -

                      If you are a fan of soccer games and anime, you might have heard of Inazuma Eleven, a popular series that combines both genres. Inazuma Eleven GO Strikers 2013 is the latest installment of the series, released for the Wii console in Japan in 2012. However, if you don't have a Wii or you want to play the game in English, you can still enjoy it on your PC with the help of an emulator and a fan-made patch.

                      -

                      Inazuma Eleven Pc Game English Version Free Download Torrents


                      DOWNLOAD ✫✫✫ https://urlgoal.com/2uI6aX



                      -

                      In this article, we will show you how to download and install Inazuma Eleven GO Strikers 2013 on your PC for free, using the Dolphin emulator and the English patch by OMK team. You will also learn how to configure the game settings and controls to optimize your gaming experience. Let's get started!

                      -

                      What is Inazuma Eleven GO Strikers 2013?

                      -

                      Inazuma Eleven GO Strikers 2013 is a soccer game based on the Inazuma Eleven GO anime and manga series. The game features over 200 characters from the series, each with their own unique skills and abilities. You can create your own team and compete in various modes, such as story mode, tournament mode, or multiplayer mode. You can also use special techniques and tactics to outsmart your opponents and score goals.

                      -

                      The game has a colorful and vibrant graphics style, as well as an energetic soundtrack that matches the anime's atmosphere. The gameplay is fast-paced and action-packed, with dynamic camera angles and animations. The game also supports up to four players in local or online co-op or versus mode.

                      -

                      How to Download and Install Inazuma Eleven GO Strikers 2013 on PC for Free?

                      -

                      To play Inazuma Eleven GO Strikers 2013 on PC for free, you will need two things: the Dolphin emulator and the English patch. Here are the steps to follow:

                      -

                      -
                        -
                      1. Download the Dolphin emulator from https://dolphin-emu.org/download/. Dolphin is a free and open-source emulator that can run Wii and GameCube games on PC.
                      2. -
                      3. Download the Inazuma Eleven GO Strikers 2013 ISO file from one of these links: https://www.4fnet.org/inazuma-eleven-go-strikers-para-windows-pc/, https://www.reddit.com/r/PC4Gamer/comments/zo6cn2/inazuma_eleven_go_strikers_2013_free_download_for/, or https://sway.office.com/K3IEExg7665p6tyh. The ISO file is a disc image of the game that you can run with Dolphin.
                      4. -
                      5. Download the English patch from https://drive.google.com/file/d/1QZyYs0w7Q8wzJZBxOZ0nFkY9XyWm8yL-/view. The English patch is a fan-made translation of the game that replaces the Japanese text and voices with English ones.
                      6. -
                      7. Extract the English patch zip file to a folder of your choice. You will see two files: patch.bat and xdelta.exe.
                      8. -
                      9. Copy the Inazuma Eleven GO Strikers 2013 ISO file to the same folder where you extracted the English patch.
                      10. -
                      11. Run patch.bat and wait for it to finish. This will create a new ISO file called Inazuma Eleven GO Strikers 2013 (English Patched).iso in the same folder.
                      12. -
                      13. Run Dolphin and click on Open. Browse to the folder where you have the patched ISO file and select it.
                      14. - 7b8c122e87
                        -
                        -
                        \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Indian Babu Songs Hindi 1080p Download !!INSTALL!!.md b/spaces/stomexserde/gpt4-ui/Examples/Indian Babu Songs Hindi 1080p Download !!INSTALL!!.md deleted file mode 100644 index 1d3bc2d07a31cbec39ac5468ca72c089c9560f90..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Indian Babu Songs Hindi 1080p Download !!INSTALL!!.md +++ /dev/null @@ -1,18 +0,0 @@ -
                        -

                        How to Download Indian Babu Songs in Hindi 1080p Quality

                        -

                        Indian Babu is a 2002 Hindi movie that features Jaz Pandher, Gurleen Chopra, Johnny Lever, and Mukesh Rishi in the lead roles. The movie has nine songs composed by Nadeem-Shravan, a popular duo of music directors. The songs are sung by Kumar Sanu, Alka Yagnik, Jaspinder Narula, Sarika Kapoor, KK, Sabri Brothers, Nirja Pandit, Kunal Ganjawala, and Udit Narayan. The songs are a mix of romantic, dance, and wedding tracks that suit the mood of the movie.

                        -

                        If you are a fan of Indian Babu songs and want to download them in high-quality 1080p format, you have several options to choose from. Here are some of the best ways to download Indian Babu songs in Hindi 1080p quality:

                        -

                        Indian Babu songs hindi 1080p download


                        Download File >> https://urlgoal.com/2uI6Z9



                        -
                          -
                        • JioSaavn: JioSaavn is a popular online music streaming service that offers a huge collection of songs in various languages and genres. You can listen to Indian Babu songs online on JioSaavn for free or subscribe to JioSaavn Pro to download them offline. JioSaavn Pro offers high-quality 1080p downloads for its subscribers. You can also access exclusive content and ad-free listening on JioSaavn Pro. To download Indian Babu songs on JioSaavn, you need to search for the album name and click on the download icon next to each song[^1^].
                        • -
                        • Wynk Music: Wynk Music is another online music streaming service that lets you play and download Indian Babu songs for free or with a subscription. Wynk Music has a large library of songs in different languages and genres. You can also create your own personalized playlists and enjoy seamless music experience on Wynk Music. To download Indian Babu songs on Wynk Music, you need to search for the album name and click on the download icon next to each song[^2^].
                        • -
                        • Hungama Music: Hungama Music is an online music streaming service that offers unlimited access to songs, videos, radio, and podcasts. You can listen to Indian Babu songs online on Hungama Music for free or subscribe to Hungama Pro to download them offline. Hungama Pro offers high-quality 1080p downloads for its subscribers. You can also enjoy ad-free music and exclusive content on Hungama Pro. To download Indian Babu songs on Hungama Music, you need to search for the album name and click on the download icon next to each song[^3^].
                        • -
                        -

                        These are some of the best ways to download Indian Babu songs in Hindi 1080p quality. You can also check out other online music streaming services or websites that offer similar features and options. However, make sure that you download songs from legal and authorized sources only. Downloading songs from illegal or pirated sources may harm your device or violate the copyright laws.

                        -

                        Hope this article helps you enjoy Indian Babu songs in high-quality 1080p format. Happy listening!

                        - -

                        Indian Babu is a romantic comedy movie that revolves around the love story of Karan (Jaz Pandher) and Dil (Gurleen Chopra). Karan is a wealthy businessman who lives in London and has a girlfriend named Anita (Shweta Menon). Dil is a simple girl who lives in India and works as a dancer. Karan's grandfather (Alok Nath) wants him to marry an Indian girl and arranges his marriage with Dil. Karan agrees to the marriage but plans to divorce Dil after a few days. However, things change when he meets Dil and falls in love with her. The movie shows how Karan and Dil overcome the obstacles in their relationship and realize their true feelings for each other.

                        -

                        The songs of Indian Babu are one of the highlights of the movie. The songs are composed by Nadeem-Shravan, who are known for their melodious and catchy tunes. The songs are written by Sameer, who has penned many popular songs in Bollywood. The songs are sung by some of the best singers in the industry, such as Kumar Sanu, Alka Yagnik, Jaspinder Narula, Sarika Kapoor, KK, Sabri Brothers, Nirja Pandit, Kunal Ganjawala, and Udit Narayan. The songs are well-choreographed and picturized on the lead actors and supporting cast. The songs have a variety of themes and genres, such as romance, dance, wedding, and qawwali.

                        -

                        Some of the most popular songs of Indian Babu are Hum Deewane Hum Deewane Hai Tere, Dil Mera Dil Mera Dil, Aap Humse Pyar Karne Lage, Mere Sang Sang, Aaya Dulha Aaya, I Wanna Take You, Rabba Rabba, and Hum Deewane Hum. These songs have received positive reviews from the critics and audiences alike. The songs have also been nominated and won several awards for their music and lyrics. The songs of Indian Babu have become evergreen hits that are still loved and enjoyed by the fans of Bollywood music.

                        cec2833e83
                        -
                        -
                        \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Itoo Software Forest Pack Pro V.6.2.1 For 3DsMax 2015-2019 Win X64 TOP.md b/spaces/stomexserde/gpt4-ui/Examples/Itoo Software Forest Pack Pro V.6.2.1 For 3DsMax 2015-2019 Win X64 TOP.md deleted file mode 100644 index 430609adec396ca8ff12fceb097f17060e311076..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Itoo Software Forest Pack Pro V.6.2.1 For 3DsMax 2015-2019 Win X64 TOP.md +++ /dev/null @@ -1,20 +0,0 @@ -
                        -

                        Itoo Software Forest Pack Pro V.6.2.1 For 3DsMax 2015-2019 Win X64: A Complete Solution for Scattering Objects

                        -

                        If you are looking for a plugin that can help you create realistic and natural scenes with millions of objects and polygons, you might want to check out Itoo Software Forest Pack Pro V.6.2.1 For 3DsMax 2015-2019 Win X64. This plugin is designed to give you a complete solution for scattering objects, from trees and plants to buildings, crowds, aggregates, ground-cover, rocks and more. You can use it with Mental Ray and VRay native shaders, and enjoy the benefits of its production-tested algorithms and native support for most popular render engines[^1^] [^2^].

                        -

                        Itoo Software Forest Pack Pro V.6.2.1 For 3DsMax 2015-2019 Win X64 TOP


                        Download File ===> https://urlgoal.com/2uIb4w



                        -

                        With Forest Pack Pro, you can simulate natural distribution patterns and get the most out of your assets using advanced mapping and randomization tools. You can also fine-tune your scatters with granular control over every aspect of the plugin, such as density, scale, rotation, alignment, clustering, animation, and more. You can also use interactive features such as marker placement and distribution along splines to create custom scatters with ease[^2^] [^3^].

                        -

                        Forest Pack Pro also comes with a comprehensive library of 3D models and presets that you can use in your projects. You can access hundreds of ready-to-use objects and templates for a variety of scenarios, such as forests, meadows, gardens, cities, crowds, carpets, and more. You can also integrate Forest Pack Pro with many leading 3rd party libraries, such as Evermotion, Laubwerk, Xfrog, SpeedTree, and more[^2^] [^4^].

                        -

                        If you want to download Itoo Software Forest Pack Pro V.6.2.1 For 3DsMax 2015-2019 Win X64, you can find it on various websites that offer CG software and resources. However, be sure to scan all the downloaded files with your antivirus before installing or running them, as some of them might contain malicious code or viruses[^1^]. You can also visit the official website of Itoo Software to learn more about Forest Pack Pro and its features[^2^].

                        Part 2. Tutorials and Resources

                        -

                        If you want to learn more about how to use Forest Pack Pro effectively, you can check out the tutorials and resources available on the official website of Itoo Software. There you can find a variety of video tutorials that cover basic and advanced topics, such as using splines, surfaces, effects, materials, animation, optimization, and more. You can also download sample files and scenes to follow along or experiment with different settings and features.

                        -

                        -

                        Some of the tutorials that you might find useful are:

                        -
                          -
                        • Basic Tutorial: This tutorial shows how to use 3D high-poly trees in an architectural environment. It can be completed with Forest Lite.
                        • -
                        • Modern Barn Tutorial: This tutorial shows how to use most of the software options, from area splines to advanced materials. It uses the VRay renderer but all the principles will apply to any other engine. It includes five parts with 90 minutes of video.
                        • -
                        • Autumn Park Tutorial: This tutorial shows how to use Forest Color to add simple color variation using the optimize materials tool and Forest Color's tint by gradient feature.
                        • -
                        • Spline Attraction Effect Tutorial: This tutorial shows how to use the handy Spline Attraction effect that ships with Forest Pack 8. It allows you to create scatters that follow a spline's shape and direction.
                        • -
                        • Swap Objects Inside a Spline Tutorial: This tutorial shows how to use a Forest Effect to swap geometry inside a spline. This can be useful for creating variations in your scatters or for creating transitions between different types of objects.
                        • -
                        -

                        You can also find more tips and tricks videos in the tutorials section of the website. They are regularly updated so check often for new content.

                        81aa517590
                        -
                        -
                        \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Jumbo Full Movie Download 720p Movies.md b/spaces/stomexserde/gpt4-ui/Examples/Jumbo Full Movie Download 720p Movies.md deleted file mode 100644 index b51849d5a1ef184c23611d077d21e7006e409d2d..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Jumbo Full Movie Download 720p Movies.md +++ /dev/null @@ -1,31 +0,0 @@ -
                        -

                        Jumbo Full Movie Download 720p Movies: A Review of the Unusual Romance Between a Woman and a Fair Ride

                        - -

                        Jumbo is a 2020 drama film written and directed by Zoé Wittock in her feature directorial debut. An international co-production of France, Belgium and Luxembourg, the film stars Noémie Merlant, Emmanuelle Bercot, Bastien Bouillon and Sam Louwyck. It had its world premiere at the Sundance Film Festival on 24 January 2020 and was released in France on 1 July 2020. [^5^]

                        -

                        Jumbo Full Movie Download 720p Movies


                        DOWNLOADhttps://urlgoal.com/2uI9rb



                        - -

                        The film tells the story of Jeanne (Merlant), a shy and lonely girl who works as a cleaner at a local amusement park. One night, she discovers that one of the park's attractions, a spinning ride called Jumbo, is alive and can communicate with her through lights, sounds and movements. Jeanne falls in love with Jumbo and starts a secret relationship with him, much to the dismay of her mother Margarette (Bercot) and her co-worker Marc (Bouillon), who both try to convince her that she is delusional and needs help.

                        - -

                        Jumbo is a bold and original film that explores the theme of object sexuality, or the attraction to inanimate objects. Wittock does not judge or mock Jeanne's feelings, but rather portrays them with sincerity and empathy. Merlant delivers a stunning performance as Jeanne, conveying her passion, pain and joy with minimal dialogue and expressive body language. She makes us believe in her bond with Jumbo, who is also given a personality and emotions through his actions and reactions.

                        - -

                        The film also features stunning cinematography by Thomas Buelens, who captures the contrast between the dark and dull reality of Jeanne's life and the bright and colorful fantasy of her romance with Jumbo. The film uses visual effects sparingly but effectively to create Jumbo's expressions and movements. The film also has a haunting score by Thomas Roussel, who mixes electronic and orchestral sounds to create a mood of wonder and mystery.

                        - -

                        Jumbo is not a film for everyone, as some viewers may find it too weird or disturbing. However, for those who are open-minded and curious, Jumbo offers a unique and touching experience that challenges our notions of love and normality. Jumbo is a film that celebrates difference and diversity, and shows that love can be found in the most unexpected places.

                        - -

                        If you are interested in watching Jumbo full movie download 720p movies, you can find it on various online platforms such as HDHub4u [^1^], SoundCloud [^2^], Sway [^3^] or OpenSea [^4^]. However, we recommend that you watch it legally on official streaming services or cinemas to support the filmmakers and respect their rights.

                        -

                        - -

                        Jumbo is not the first film to deal with object sexuality. In fact, there have been several documentaries and fictional films that have explored this phenomenon in different ways. Some examples are:

                        - -
                          -
                        • Married to the Eiffel Tower (2008): A documentary by Agnieszka Piotrowska that follows the lives of three women who are in love with objects such as the Eiffel Tower, the Berlin Wall and a bow. The film examines their psychological and emotional motivations, as well as the social and legal implications of their relationships.
                        • -
                        • Lars and the Real Girl (2007): A comedy-drama film by Craig Gillespie that stars Ryan Gosling as Lars, a socially awkward man who develops a romantic attachment to a sex doll named Bianca. The film shows how Lars' family and community react to his unconventional choice and try to help him overcome his loneliness and insecurity.
                        • -
                        • Her (2013): A science fiction film by Spike Jonze that stars Joaquin Phoenix as Theodore, a lonely writer who falls in love with an artificial intelligence system named Samantha (voiced by Scarlett Johansson). The film explores the ethical and emotional issues of human-machine relationships in a futuristic society.
                        • -
                        • Ruby Sparks (2012): A romantic fantasy film by Jonathan Dayton and Valerie Faris that stars Paul Dano as Calvin, a struggling novelist who creates a female character named Ruby (played by Zoe Kazan) in his book. He is surprised when Ruby comes to life and becomes his girlfriend. The film examines the power dynamics and creative challenges of having a partner who is literally a product of one's imagination.
                        • -
                        - -

                        These films show that object sexuality is not a new or isolated phenomenon, but rather a complex and diverse one that reflects different aspects of human psychology, culture and technology. They also challenge us to question our own definitions and expectations of love, intimacy and identity.

                        - -

                        Jumbo is a film that adds to this rich and fascinating cinematic discourse, and invites us to look beyond appearances and stereotypes. It is a film that celebrates the beauty and diversity of love in all its forms.

                        81aa517590
                        -
                        -
                        \ No newline at end of file diff --git a/spaces/sub314xxl/MetaGPT/metagpt/static/assets/vendor-4cd7d240.js b/spaces/sub314xxl/MetaGPT/metagpt/static/assets/vendor-4cd7d240.js deleted file mode 100644 index a048623ad3fc0c02d8ad1111908bc8438c7d2247..0000000000000000000000000000000000000000 --- a/spaces/sub314xxl/MetaGPT/metagpt/static/assets/vendor-4cd7d240.js +++ /dev/null @@ -1,30 +0,0 @@ -import{r as H,a as ze,i as et,c as C,g as Wt,b as Cr,d as te,w as Ne,o as Ke,e as Sr,f as x,h as ee,j as se,n as K,k as $e,l as fe,m as ue,p as ve,q as ke,s as pe,t as nt,u as Xe,v as z,T as Pn,x as cr,y as Re,z as Fe,A as Je,B as $t,F as rt,C as Pl,D as cn,E as vn,G as Ht,H as qc,I as Gc,J as Kc,K as xi,L as zi,M as sn,N as Yc,O as Il,P as Fn,Q as Xc,R as Ml,S as Oa,U as Jc,V as Zc,W as Qc,X as ed}from"./vue-e0bc46a9.js";import{c as di,g as Rl,a as td}from"./__commonjsHelpers__-042e6b4d.js";const gn=Object.prototype.toString;function We(e){return gn.call(e)==="[object Array]"}function jn(e){return gn.call(e)==="[object Null]"}function nd(e){return gn.call(e)==="[object Boolean]"}function Be(e){return gn.call(e)==="[object Object]"}function Vt(e){return gn.call(e)==="[object String]"}function de(e){return gn.call(e)==="[object Number]"&&e===e}function vt(e){return e===void 0}function it(e){return typeof e=="function"}function rd(e){return Be(e)&&Object.keys(e).length===0}const Bl=e=>(e==null?void 0:e.$)!==void 0,xt=Symbol("ArcoConfigProvider"),Un={formatYear:"YYYY 年",formatMonth:"YYYY 年 MM 月",today:"今天",view:{month:"月",year:"年",week:"周",day:"日"},month:{long:{January:"一月",February:"二月",March:"三月",April:"四月",May:"五月",June:"六月",July:"七月",August:"八月",September:"九月",October:"十月",November:"十一月",December:"十二月"},short:{January:"一月",February:"二月",March:"三月",April:"四月",May:"五月",June:"六月",July:"七月",August:"八月",September:"九月",October:"十月",November:"十一月",December:"十二月"}},week:{long:{self:"周",monday:"周一",tuesday:"周二",wednesday:"周三",thursday:"周四",friday:"周五",saturday:"周六",sunday:"周日"},short:{self:"周",monday:"一",tuesday:"二",wednesday:"三",thursday:"四",friday:"五",saturday:"六",sunday:"日"}}},od={locale:"zh-CN",empty:{description:"暂无数据"},drawer:{okText:"确定",cancelText:"取消"},popconfirm:{okText:"确定",cancelText:"取消"},modal:{okText:"确定",cancelText:"取消"},pagination:{goto:"前往",page:"页",countPerPage:"条/页",total:"共 {0} 条"},table:{okText:"确定",resetText:"重置"},upload:{start:"开始",cancel:"取消",delete:"删除",retry:"点击重试",buttonText:"点击上传",preview:"预览",drag:"点击或拖拽文件到此处上传",dragHover:"释放文件并开始上传",error:"上传失败"},calendar:Un,datePicker:{view:Un.view,month:Un.month,week:Un.week,placeholder:{date:"请选择日期",week:"请选择周",month:"请选择月份",year:"请选择年份",quarter:"请选择季度",time:"请选择时间"},rangePlaceholder:{date:["开始日期","结束日期"],week:["开始周","结束周"],month:["开始月份","结束月份"],year:["开始年份","结束年份"],quarter:["开始季度","结束季度"],time:["开始时间","结束时间"]},selectTime:"选择时间",today:"今天",now:"此刻",ok:"确定"},image:{loading:"加载中"},imagePreview:{fullScreen:"全屏",rotateRight:"向右旋转",rotateLeft:"向左旋转",zoomIn:"放大",zoomOut:"缩小",originalSize:"原始尺寸"},typography:{copied:"已复制",copy:"复制",expand:"展开",collapse:"折叠",edit:"编辑"}},id=H("zh-CN"),ad=ze({"zh-CN":od}),sd=()=>{const e=et(xt,void 0),t=C(()=>{var o;return(o=e==null?void 0:e.locale)!=null?o:ad[id.value]});return{locale:C(()=>t.value.locale),t:(o,...i)=>{const a=o.split(".");let s=t.value;for(const l of a){if(!s[l])return o;s=s[l]}return Vt(s)&&i.length>0?s.replace(/{(\d+)}/g,(l,u)=>{var c;return(c=i[u])!=null?c:l}):s}}};var 
ld=Object.defineProperty,ud=Object.defineProperties,cd=Object.getOwnPropertyDescriptors,La=Object.getOwnPropertySymbols,dd=Object.prototype.hasOwnProperty,fd=Object.prototype.propertyIsEnumerable,Ta=(e,t,n)=>t in e?ld(e,t,{enumerable:!0,configurable:!0,writable:!0,value:n}):e[t]=n,hd=(e,t)=>{for(var n in t||(t={}))dd.call(t,n)&&Ta(e,n,t[n]);if(La)for(var n of La(t))fd.call(t,n)&&Ta(e,n,t[n]);return e},pd=(e,t)=>ud(e,cd(t));const md="A",vd="arco",fi="$arco",je=e=>{var t;return(t=e==null?void 0:e.componentPrefix)!=null?t:md},Ve=(e,t)=>{var n;t&&t.classPrefix&&(e.config.globalProperties[fi]=pd(hd({},(n=e.config.globalProperties[fi])!=null?n:{}),{classPrefix:t.classPrefix}))},oe=e=>{var t,n,r;const o=Wt(),i=et(xt,void 0),a=(r=(n=i==null?void 0:i.prefixCls)!=null?n:(t=o==null?void 0:o.appContext.config.globalProperties[fi])==null?void 0:t.classPrefix)!=null?r:vd;return e?`${a}-${e}`:a};var Dl=function(){if(typeof Map<"u")return Map;function e(t,n){var r=-1;return t.some(function(o,i){return o[0]===n?(r=i,!0):!1}),r}return function(){function t(){this.__entries__=[]}return Object.defineProperty(t.prototype,"size",{get:function(){return this.__entries__.length},enumerable:!0,configurable:!0}),t.prototype.get=function(n){var r=e(this.__entries__,n),o=this.__entries__[r];return o&&o[1]},t.prototype.set=function(n,r){var o=e(this.__entries__,n);~o?this.__entries__[o][1]=r:this.__entries__.push([n,r])},t.prototype.delete=function(n){var r=this.__entries__,o=e(r,n);~o&&r.splice(o,1)},t.prototype.has=function(n){return!!~e(this.__entries__,n)},t.prototype.clear=function(){this.__entries__.splice(0)},t.prototype.forEach=function(n,r){r===void 0&&(r=null);for(var o=0,i=this.__entries__;o0},e.prototype.connect_=function(){!hi||this.connected_||(document.addEventListener("transitionend",this.onTransitionEnd_),window.addEventListener("resize",this.refresh),Sd?(this.mutationsObserver_=new MutationObserver(this.refresh),this.mutationsObserver_.observe(document,{attributes:!0,childList:!0,characterData:!0,subtree:!0})):(document.addEventListener("DOMSubtreeModified",this.refresh),this.mutationEventsAdded_=!0),this.connected_=!0)},e.prototype.disconnect_=function(){!hi||!this.connected_||(document.removeEventListener("transitionend",this.onTransitionEnd_),window.removeEventListener("resize",this.refresh),this.mutationsObserver_&&this.mutationsObserver_.disconnect(),this.mutationEventsAdded_&&document.removeEventListener("DOMSubtreeModified",this.refresh),this.mutationsObserver_=null,this.mutationEventsAdded_=!1,this.connected_=!1)},e.prototype.onTransitionEnd_=function(t){var n=t.propertyName,r=n===void 0?"":n,o=Cd.some(function(i){return!!~r.indexOf(i)});o&&this.refresh()},e.getInstance=function(){return this.instance_||(this.instance_=new e),this.instance_},e.instance_=null,e}(),Fl=function(e,t){for(var n=0,r=Object.keys(t);n"u"||!(Element instanceof Object))){if(!(t instanceof dn(t).Element))throw new TypeError('parameter 1 is not of type "Element".');var n=this.observations_;n.has(t)||(n.set(t,new Nd(t)),this.controller_.addObserver(this),this.controller_.refresh())}},e.prototype.unobserve=function(t){if(!arguments.length)throw new TypeError("1 argument required, but only 0 present.");if(!(typeof Element>"u"||!(Element instanceof Object))){if(!(t instanceof dn(t).Element))throw new TypeError('parameter 1 is not of type "Element".');var 
n=this.observations_;n.has(t)&&(n.delete(t),n.size||this.controller_.removeObserver(this))}},e.prototype.disconnect=function(){this.clearActive(),this.observations_.clear(),this.controller_.removeObserver(this)},e.prototype.gatherActive=function(){var t=this;this.clearActive(),this.observations_.forEach(function(n){n.isActive()&&t.activeObservations_.push(n)})},e.prototype.broadcastActive=function(){if(this.hasActive()){var t=this.callbackCtx_,n=this.activeObservations_.map(function(r){return new Pd(r.target,r.broadcastRect())});this.callback_.call(t,n,t),this.clearActive()}},e.prototype.clearActive=function(){this.activeObservations_.splice(0)},e.prototype.hasActive=function(){return this.activeObservations_.length>0},e}(),Vl=typeof WeakMap<"u"?new WeakMap:new Dl,xl=function(){function e(t){if(!(this instanceof e))throw new TypeError("Cannot call a class as a function.");if(!arguments.length)throw new TypeError("1 argument required, but only 0 present.");var n=Ed.getInstance(),r=new Id(t,n,this);Vl.set(this,r)}return e}();["observe","unobserve","disconnect"].forEach(function(e){xl.prototype[e]=function(){var t;return(t=Vl.get(this))[e].apply(t,arguments)}});var Ui=function(){return typeof dr.ResizeObserver<"u"?dr.ResizeObserver:xl}(),Na;(function(e){e[e.ELEMENT=1]="ELEMENT",e[e.FUNCTIONAL_COMPONENT=2]="FUNCTIONAL_COMPONENT",e[e.STATEFUL_COMPONENT=4]="STATEFUL_COMPONENT",e[e.COMPONENT=6]="COMPONENT",e[e.TEXT_CHILDREN=8]="TEXT_CHILDREN",e[e.ARRAY_CHILDREN=16]="ARRAY_CHILDREN",e[e.SLOTS_CHILDREN=32]="SLOTS_CHILDREN",e[e.TELEPORT=64]="TELEPORT",e[e.SUSPENSE=128]="SUSPENSE",e[e.COMPONENT_SHOULD_KEEP_ALIVE=256]="COMPONENT_SHOULD_KEEP_ALIVE",e[e.COMPONENT_KEPT_ALIVE=512]="COMPONENT_KEPT_ALIVE"})(Na||(Na={}));var Pa;(function(e){e[e.TEXT=1]="TEXT",e[e.CLASS=2]="CLASS",e[e.STYLE=4]="STYLE",e[e.PROPS=8]="PROPS",e[e.FULL_PROPS=16]="FULL_PROPS",e[e.HYDRATE_EVENTS=32]="HYDRATE_EVENTS",e[e.STABLE_FRAGMENT=64]="STABLE_FRAGMENT",e[e.KEYED_FRAGMENT=128]="KEYED_FRAGMENT",e[e.UNKEYED_FRAGMENT=256]="UNKEYED_FRAGMENT",e[e.NEED_PATCH=512]="NEED_PATCH",e[e.DYNAMIC_SLOTS=1024]="DYNAMIC_SLOTS",e[e.DEV_ROOT_FRAGMENT=2048]="DEV_ROOT_FRAGMENT",e[e.HOISTED=-1]="HOISTED",e[e.BAIL=-2]="BAIL"})(Pa||(Pa={}));const wr=e=>!!(e&&e.shapeFlag&1),kr=(e,t)=>!!(e&&e.shapeFlag&6),Md=(e,t)=>!!(e&&e.shapeFlag&8),Wi=(e,t)=>!!(e&&e.shapeFlag&16),zl=(e,t)=>!!(e&&e.shapeFlag&32),ln=e=>{var t,n;if(e)for(const r of e){if(wr(r)||kr(r))return r;if(Wi(r,r.children)){const o=ln(r.children);if(o)return o}else if(zl(r,r.children)){const o=(n=(t=r.children).default)==null?void 0:n.call(t);if(o){const i=ln(o);if(i)return i}}else if(We(r)){const o=ln(r);if(o)return o}}},Rd=e=>{if(!e)return!0;for(const t of e)if(t.children)return!1;return!0},Ul=(e,t)=>{if(e&&e.length>0)for(let n=0;n0&&Ul(o,t))return!0}return!1},Wl=e=>{if(Wi(e,e.children))return e.children;if(We(e))return e},Hl=e=>{var t,n;if(wr(e))return e.el;if(kr(e)){if(((t=e.el)==null?void 0:t.nodeType)===1)return e.el;if((n=e.component)!=null&&n.subTree){const r=Hl(e.component.subTree);if(r)return r}}else{const r=Wl(e);return ql(r)}},ql=e=>{if(e&&e.length>0)for(const t of e){const n=Hl(t);if(n)return n}},Tn=(e,t=!1)=>{var n,r;const o=[];for(const i of e??[])wr(i)||kr(i)||t&&Md(i,i.children)?o.push(i):Wi(i,i.children)?o.push(...Tn(i.children,t)):zl(i,i.children)?o.push(...Tn((r=(n=i.children).default)==null?void 0:r.call(n),t)):We(i)&&o.push(...Tn(i,t));return o},Ia=e=>{if(e)return it(e)?e:()=>e};var Gl=te({name:"ResizeObserver",emits:["resize"],setup(e,{emit:t,slots:n}){let r;const 
o=H(),i=C(()=>Bl(o.value)?o.value.$el:o.value),a=l=>{l&&(r=new Ui(u=>{const c=u[0];t("resize",c)}),r.observe(l))},s=()=>{r&&(r.disconnect(),r=null)};return Ne(i,l=>{r&&s(),l&&a(l)}),Ke(()=>{i.value&&a(i.value)}),Sr(()=>{s()}),()=>{var l,u;const c=ln((u=(l=n.default)==null?void 0:l.call(n))!=null?u:[]);return c?Cr(c,{ref:o},!0):null}}});const Kl=typeof window>"u"?global:window,Bd=Kl.requestAnimationFrame,Ma=Kl.cancelAnimationFrame;function Dd(e){let t=0;const n=(...r)=>{t&&Ma(t),t=Bd(()=>{e(...r),t=0})};return n.cancel=()=>{Ma(t),t=0},n}const Hi=()=>{},qi=(()=>{try{return!(typeof window<"u"&&document!==void 0)}catch{return!0}})(),Ft=(()=>qi?Hi:(e,t,n,r=!1)=>{e.addEventListener(t,n,r)})(),In=(()=>qi?Hi:(e,t,n,r=!1)=>{e.removeEventListener(t,n,r)})(),Fd=e=>{const t=document.createElement("div");return t.setAttribute("class",`arco-overlay arco-overlay-${e}`),t},jd=(e,t)=>{var n;return qi?Hi():(n=(t??document).querySelector(e))!=null?n:void 0},Ra=(e,t)=>{if(Vt(e)){const n=e[0]==="#"?`[id='${e.slice(1)}']`:e;return jd(n,t)}return e},Vd=(e,t)=>{const n=e.getBoundingClientRect(),r=t.getBoundingClientRect();return{top:n.top-r.top,bottom:r.bottom-n.bottom,left:n.left-r.left,right:r.right-n.right,width:n.width,height:n.height}};var ce=(e,t)=>{for(const[n,r]of t)e[n]=r;return e};const xd=te({name:"IconHover",props:{prefix:{type:String},size:{type:String,default:"medium"},disabled:{type:Boolean,default:!1}},setup(){return{prefixCls:oe("icon-hover")}}});function zd(e,t,n,r,o,i){return x(),ee("span",{class:K([e.prefixCls,{[`${e.prefix}-icon-hover`]:e.prefix,[`${e.prefixCls}-size-${e.size}`]:e.size!=="medium",[`${e.prefixCls}-disabled`]:e.disabled}])},[se(e.$slots,"default")],2)}var gt=ce(xd,[["render",zd]]);const Ud=te({name:"IconClose",props:{size:{type:[Number,String]},strokeWidth:{type:Number,default:4},strokeLinecap:{type:String,default:"butt",validator:e=>["butt","round","square"].includes(e)},strokeLinejoin:{type:String,default:"miter",validator:e=>["arcs","bevel","miter","miter-clip","round"].includes(e)},rotate:Number,spin:Boolean},emits:{click:e=>!0},setup(e,{emit:t}){const n=oe("icon"),r=C(()=>[n,`${n}-close`,{[`${n}-spin`]:e.spin}]),o=C(()=>{const a={};return e.size&&(a.fontSize=de(e.size)?`${e.size}px`:e.size),e.rotate&&(a.transform=`rotate(${e.rotate}deg)`),a});return{cls:r,innerStyle:o,onClick:a=>{t("click",a)}}}}),Wd=["stroke-width","stroke-linecap","stroke-linejoin"],Hd=fe("path",{d:"M9.857 9.858 24 24m0 0 14.142 14.142M24 24 38.142 9.858M24 24 9.857 38.142"},null,-1),qd=[Hd];function Gd(e,t,n,r,o,i){return x(),ee("svg",{viewBox:"0 0 48 48",fill:"none",xmlns:"http://www.w3.org/2000/svg",stroke:"currentColor",class:K(e.cls),style:$e(e.innerStyle),"stroke-width":e.strokeWidth,"stroke-linecap":e.strokeLinecap,"stroke-linejoin":e.strokeLinejoin,onClick:t[0]||(t[0]=(...a)=>e.onClick&&e.onClick(...a))},qd,14,Wd)}var Hr=ce(Ud,[["render",Gd]]);const qt=Object.assign(Hr,{install:(e,t)=>{var n;const r=(n=t==null?void 0:t.iconPrefix)!=null?n:"";e.component(r+Hr.name,Hr)}}),Kd=te({name:"IconInfoCircleFill",props:{size:{type:[Number,String]},strokeWidth:{type:Number,default:4},strokeLinecap:{type:String,default:"butt",validator:e=>["butt","round","square"].includes(e)},strokeLinejoin:{type:String,default:"miter",validator:e=>["arcs","bevel","miter","miter-clip","round"].includes(e)},rotate:Number,spin:Boolean},emits:{click:e=>!0},setup(e,{emit:t}){const n=oe("icon"),r=C(()=>[n,`${n}-info-circle-fill`,{[`${n}-spin`]:e.spin}]),o=C(()=>{const a={};return 
e.size&&(a.fontSize=de(e.size)?`${e.size}px`:e.size),e.rotate&&(a.transform=`rotate(${e.rotate}deg)`),a});return{cls:r,innerStyle:o,onClick:a=>{t("click",a)}}}}),Yd=["stroke-width","stroke-linecap","stroke-linejoin"],Xd=fe("path",{"fill-rule":"evenodd","clip-rule":"evenodd",d:"M24 44c11.046 0 20-8.954 20-20S35.046 4 24 4 4 12.954 4 24s8.954 20 20 20Zm2-30a1 1 0 0 0-1-1h-2a1 1 0 0 0-1 1v2a1 1 0 0 0 1 1h2a1 1 0 0 0 1-1v-2Zm0 17h1a1 1 0 0 1 1 1v2a1 1 0 0 1-1 1h-6a1 1 0 0 1-1-1v-2a1 1 0 0 1 1-1h1v-8a1 1 0 0 1-1-1v-2a1 1 0 0 1 1-1h3a1 1 0 0 1 1 1v11Z",fill:"currentColor",stroke:"none"},null,-1),Jd=[Xd];function Zd(e,t,n,r,o,i){return x(),ee("svg",{viewBox:"0 0 48 48",fill:"none",xmlns:"http://www.w3.org/2000/svg",stroke:"currentColor",class:K(e.cls),style:$e(e.innerStyle),"stroke-width":e.strokeWidth,"stroke-linecap":e.strokeLinecap,"stroke-linejoin":e.strokeLinejoin,onClick:t[0]||(t[0]=(...a)=>e.onClick&&e.onClick(...a))},Jd,14,Yd)}var qr=ce(Kd,[["render",Zd]]);const Yl=Object.assign(qr,{install:(e,t)=>{var n;const r=(n=t==null?void 0:t.iconPrefix)!=null?n:"";e.component(r+qr.name,qr)}}),Qd=te({name:"IconCheckCircleFill",props:{size:{type:[Number,String]},strokeWidth:{type:Number,default:4},strokeLinecap:{type:String,default:"butt",validator:e=>["butt","round","square"].includes(e)},strokeLinejoin:{type:String,default:"miter",validator:e=>["arcs","bevel","miter","miter-clip","round"].includes(e)},rotate:Number,spin:Boolean},emits:{click:e=>!0},setup(e,{emit:t}){const n=oe("icon"),r=C(()=>[n,`${n}-check-circle-fill`,{[`${n}-spin`]:e.spin}]),o=C(()=>{const a={};return e.size&&(a.fontSize=de(e.size)?`${e.size}px`:e.size),e.rotate&&(a.transform=`rotate(${e.rotate}deg)`),a});return{cls:r,innerStyle:o,onClick:a=>{t("click",a)}}}}),ef=["stroke-width","stroke-linecap","stroke-linejoin"],tf=fe("path",{"fill-rule":"evenodd","clip-rule":"evenodd",d:"M24 44c11.046 0 20-8.954 20-20S35.046 4 24 4 4 12.954 4 24s8.954 20 20 20Zm10.207-24.379a1 1 0 0 0 0-1.414l-1.414-1.414a1 1 0 0 0-1.414 0L22 26.172l-4.878-4.88a1 1 0 0 0-1.415 0l-1.414 1.415a1 1 0 0 0 0 1.414l7 7a1 1 0 0 0 1.414 0l11.5-11.5Z",fill:"currentColor",stroke:"none"},null,-1),nf=[tf];function rf(e,t,n,r,o,i){return x(),ee("svg",{viewBox:"0 0 48 48",fill:"none",xmlns:"http://www.w3.org/2000/svg",stroke:"currentColor",class:K(e.cls),style:$e(e.innerStyle),"stroke-width":e.strokeWidth,"stroke-linecap":e.strokeLinecap,"stroke-linejoin":e.strokeLinejoin,onClick:t[0]||(t[0]=(...a)=>e.onClick&&e.onClick(...a))},nf,14,ef)}var Gr=ce(Qd,[["render",rf]]);const Gi=Object.assign(Gr,{install:(e,t)=>{var n;const r=(n=t==null?void 0:t.iconPrefix)!=null?n:"";e.component(r+Gr.name,Gr)}}),of=te({name:"IconExclamationCircleFill",props:{size:{type:[Number,String]},strokeWidth:{type:Number,default:4},strokeLinecap:{type:String,default:"butt",validator:e=>["butt","round","square"].includes(e)},strokeLinejoin:{type:String,default:"miter",validator:e=>["arcs","bevel","miter","miter-clip","round"].includes(e)},rotate:Number,spin:Boolean},emits:{click:e=>!0},setup(e,{emit:t}){const n=oe("icon"),r=C(()=>[n,`${n}-exclamation-circle-fill`,{[`${n}-spin`]:e.spin}]),o=C(()=>{const a={};return e.size&&(a.fontSize=de(e.size)?`${e.size}px`:e.size),e.rotate&&(a.transform=`rotate(${e.rotate}deg)`),a});return{cls:r,innerStyle:o,onClick:a=>{t("click",a)}}}}),af=["stroke-width","stroke-linecap","stroke-linejoin"],sf=fe("path",{"fill-rule":"evenodd","clip-rule":"evenodd",d:"M24 44c11.046 0 20-8.954 20-20S35.046 4 24 4 4 12.954 4 24s8.954 20 20 20Zm-2-11a1 1 0 0 0 1 1h2a1 1 0 0 0 1-1v-2a1 1 
0 0 0-1-1h-2a1 1 0 0 0-1 1v2Zm4-18a1 1 0 0 0-1-1h-2a1 1 0 0 0-1 1v12a1 1 0 0 0 1 1h2a1 1 0 0 0 1-1V15Z",fill:"currentColor",stroke:"none"},null,-1),lf=[sf];function uf(e,t,n,r,o,i){return x(),ee("svg",{viewBox:"0 0 48 48",fill:"none",xmlns:"http://www.w3.org/2000/svg",stroke:"currentColor",class:K(e.cls),style:$e(e.innerStyle),"stroke-width":e.strokeWidth,"stroke-linecap":e.strokeLinecap,"stroke-linejoin":e.strokeLinejoin,onClick:t[0]||(t[0]=(...a)=>e.onClick&&e.onClick(...a))},lf,14,af)}var Kr=ce(of,[["render",uf]]);const Ki=Object.assign(Kr,{install:(e,t)=>{var n;const r=(n=t==null?void 0:t.iconPrefix)!=null?n:"";e.component(r+Kr.name,Kr)}}),cf=te({name:"IconCloseCircleFill",props:{size:{type:[Number,String]},strokeWidth:{type:Number,default:4},strokeLinecap:{type:String,default:"butt",validator:e=>["butt","round","square"].includes(e)},strokeLinejoin:{type:String,default:"miter",validator:e=>["arcs","bevel","miter","miter-clip","round"].includes(e)},rotate:Number,spin:Boolean},emits:{click:e=>!0},setup(e,{emit:t}){const n=oe("icon"),r=C(()=>[n,`${n}-close-circle-fill`,{[`${n}-spin`]:e.spin}]),o=C(()=>{const a={};return e.size&&(a.fontSize=de(e.size)?`${e.size}px`:e.size),e.rotate&&(a.transform=`rotate(${e.rotate}deg)`),a});return{cls:r,innerStyle:o,onClick:a=>{t("click",a)}}}}),df=["stroke-width","stroke-linecap","stroke-linejoin"],ff=fe("path",{"fill-rule":"evenodd","clip-rule":"evenodd",d:"M24 44c11.046 0 20-8.954 20-20S35.046 4 24 4 4 12.954 4 24s8.954 20 20 20Zm4.955-27.771-4.95 4.95-4.95-4.95a1 1 0 0 0-1.414 0l-1.414 1.414a1 1 0 0 0 0 1.414l4.95 4.95-4.95 4.95a1 1 0 0 0 0 1.414l1.414 1.414a1 1 0 0 0 1.414 0l4.95-4.95 4.95 4.95a1 1 0 0 0 1.414 0l1.414-1.414a1 1 0 0 0 0-1.414l-4.95-4.95 4.95-4.95a1 1 0 0 0 0-1.414l-1.414-1.414a1 1 0 0 0-1.414 0Z",fill:"currentColor",stroke:"none"},null,-1),hf=[ff];function pf(e,t,n,r,o,i){return x(),ee("svg",{viewBox:"0 0 48 48",fill:"none",xmlns:"http://www.w3.org/2000/svg",stroke:"currentColor",class:K(e.cls),style:$e(e.innerStyle),"stroke-width":e.strokeWidth,"stroke-linecap":e.strokeLinecap,"stroke-linejoin":e.strokeLinejoin,onClick:t[0]||(t[0]=(...a)=>e.onClick&&e.onClick(...a))},hf,14,df)}var Yr=ce(cf,[["render",pf]]);const Yi=Object.assign(Yr,{install:(e,t)=>{var n;const r=(n=t==null?void 0:t.iconPrefix)!=null?n:"";e.component(r+Yr.name,Yr)}}),mf=te({name:"Alert",components:{IconHover:gt,IconClose:qt,IconInfoCircleFill:Yl,IconCheckCircleFill:Gi,IconExclamationCircleFill:Ki,IconCloseCircleFill:Yi},props:{type:{type:String,default:"info"},showIcon:{type:Boolean,default:!0},closable:{type:Boolean,default:!1},title:String,banner:{type:Boolean,default:!1}},emits:{close:e=>!0,afterClose:()=>!0},setup(e,{slots:t,emit:n}){const r=oe("alert"),o=H(!0),i=l=>{o.value=!1,n("close",l)},a=()=>{n("afterClose")},s=C(()=>[r,`${r}-${e.type}`,{[`${r}-with-title`]:!!(e.title||t.title),[`${r}-banner`]:e.banner}]);return{prefixCls:r,cls:s,visible:o,handleClose:i,handleAfterLeave:a}}});function vf(e,t,n,r,o,i){const a=ue("icon-info-circle-fill"),s=ue("icon-check-circle-fill"),l=ue("icon-exclamation-circle-fill"),u=ue("icon-close-circle-fill"),c=ue("icon-close"),d=ue("icon-hover");return 
x(),ve(Pn,{name:"zoom-in-top",onAfterLeave:e.handleAfterLeave},{default:ke(()=>[e.visible?(x(),ee("div",{key:0,role:"alert",class:K(e.cls)},[e.showIcon&&!(e.type==="normal"&&!e.$slots.icon)?(x(),ee("div",{key:0,class:K(`${e.prefixCls}-icon`)},[se(e.$slots,"icon",{},()=>[e.type==="info"?(x(),ve(a,{key:0})):e.type==="success"?(x(),ve(s,{key:1})):e.type==="warning"?(x(),ve(l,{key:2})):e.type==="error"?(x(),ve(u,{key:3})):pe("v-if",!0)])],2)):pe("v-if",!0),fe("div",{class:K(`${e.prefixCls}-body`)},[e.title||e.$slots.title?(x(),ee("div",{key:0,class:K(`${e.prefixCls}-title`)},[se(e.$slots,"title",{},()=>[nt(Xe(e.title),1)])],2)):pe("v-if",!0),fe("div",{class:K(`${e.prefixCls}-content`)},[se(e.$slots,"default")],2)],2),e.$slots.action?(x(),ee("div",{key:1,class:K(`${e.prefixCls}-action`)},[se(e.$slots,"action")],2)):pe("v-if",!0),e.closable?(x(),ee("div",{key:2,tabindex:"-1",role:"button","aria-label":"Close",class:K(`${e.prefixCls}-close-btn`),onClick:t[0]||(t[0]=(...m)=>e.handleClose&&e.handleClose(...m))},[se(e.$slots,"close-element",{},()=>[z(d,null,{default:ke(()=>[z(c)]),_:1})])],2)):pe("v-if",!0)],2)):pe("v-if",!0)]),_:3},8,["onAfterLeave"])}var Xr=ce(mf,[["render",vf]]);const HC=Object.assign(Xr,{install:(e,t)=>{Ve(e,t);const n=je(t);e.component(n+Xr.name,Xr)}}),gf=["info","success","warning","error"],zt=["onFocus","onFocusin","onFocusout","onBlur","onChange","onBeforeinput","onInput","onReset","onSubmit","onInvalid","onKeydown","onKeypress","onKeyup","onCopy","onCut","onPaste","onCompositionstart","onCompositionupdate","onCompositionend","onSelect","autocomplete","autofocus","maxlength","minlength","name","pattern","readonly","required"],yf=te({name:"IconLoading",props:{size:{type:[Number,String]},strokeWidth:{type:Number,default:4},strokeLinecap:{type:String,default:"butt",validator:e=>["butt","round","square"].includes(e)},strokeLinejoin:{type:String,default:"miter",validator:e=>["arcs","bevel","miter","miter-clip","round"].includes(e)},rotate:Number,spin:Boolean},emits:{click:e=>!0},setup(e,{emit:t}){const n=oe("icon"),r=C(()=>[n,`${n}-loading`,{[`${n}-spin`]:e.spin}]),o=C(()=>{const a={};return e.size&&(a.fontSize=de(e.size)?`${e.size}px`:e.size),e.rotate&&(a.transform=`rotate(${e.rotate}deg)`),a});return{cls:r,innerStyle:o,onClick:a=>{t("click",a)}}}}),bf=["stroke-width","stroke-linecap","stroke-linejoin"],_f=fe("path",{d:"M42 24c0 9.941-8.059 18-18 18S6 33.941 6 24 14.059 6 24 6"},null,-1),Cf=[_f];function Sf(e,t,n,r,o,i){return x(),ee("svg",{viewBox:"0 0 48 48",fill:"none",xmlns:"http://www.w3.org/2000/svg",stroke:"currentColor",class:K(e.cls),style:$e(e.innerStyle),"stroke-width":e.strokeWidth,"stroke-linecap":e.strokeLinecap,"stroke-linejoin":e.strokeLinejoin,onClick:t[0]||(t[0]=(...a)=>e.onClick&&e.onClick(...a))},Cf,14,bf)}var Jr=ce(yf,[["render",Sf]]);const It=Object.assign(Jr,{install:(e,t)=>{var n;const r=(n=t==null?void 0:t.iconPrefix)!=null?n:"";e.component(r+Jr.name,Jr)}}),Ef=te({name:"FeedbackIcon",components:{IconLoading:It,IconCheckCircleFill:Gi,IconExclamationCircleFill:Ki,IconCloseCircleFill:Yi},props:{type:{type:String}},setup(e){const t=oe("feedback-icon");return{cls:C(()=>[t,`${t}-status-${e.type}`])}}});function wf(e,t,n,r,o,i){const a=ue("icon-loading"),s=ue("icon-check-circle-fill"),l=ue("icon-exclamation-circle-fill"),u=ue("icon-close-circle-fill");return 
x(),ee("span",{class:K(e.cls)},[e.type==="validating"?(x(),ve(a,{key:0})):e.type==="success"?(x(),ve(s,{key:1})):e.type==="warning"?(x(),ve(l,{key:2})):e.type==="error"?(x(),ve(u,{key:3})):pe("v-if",!0)],2)}var Xi=ce(Ef,[["render",wf]]);const Ji={key:"Enter",code:"Enter"},kf={key:"Backspace",code:"Backspace"};var $f=Object.defineProperty,Ba=Object.getOwnPropertySymbols,Of=Object.prototype.hasOwnProperty,Lf=Object.prototype.propertyIsEnumerable,Da=(e,t,n)=>t in e?$f(e,t,{enumerable:!0,configurable:!0,writable:!0,value:n}):e[t]=n,Tf=(e,t)=>{for(var n in t||(t={}))Of.call(t,n)&&Da(e,n,t[n]);if(Ba)for(var n of Ba(t))Lf.call(t,n)&&Da(e,n,t[n]);return e};const Vn=(e,t)=>{const n=Tf({},e);for(const r of t)r in n&&delete n[r];return n};function xn(e,t){const n={};return t.forEach(r=>{const o=r;r in e&&(n[o]=e[o])}),n}const pi=Symbol("ArcoFormItemContext"),Zi=Symbol("ArcoFormContext"),yt=({size:e,disabled:t,error:n,uninject:r}={})=>{const o=r?{}:et(pi,{}),i=C(()=>{var c;return(c=e==null?void 0:e.value)!=null?c:o.size}),a=C(()=>(t==null?void 0:t.value)||o.disabled),s=C(()=>(n==null?void 0:n.value)||o.error),l=cr(o,"feedback"),u=cr(o,"eventHandlers");return{formItemCtx:o,mergedSize:i,mergedDisabled:a,mergedError:s,feedback:l,eventHandlers:u}},Mt=(e,{defaultValue:t="medium"}={})=>{const n=et(xt,void 0);return{mergedSize:C(()=>{var o,i;return(i=(o=e==null?void 0:e.value)!=null?o:n==null?void 0:n.size)!=null?i:t})}};function Xl(e){const t=H();function n(){if(!e.value)return;const{selectionStart:o,selectionEnd:i,value:a}=e.value;if(o==null||i==null)return;const s=a.slice(0,Math.max(0,o)),l=a.slice(Math.max(0,i));t.value={selectionStart:o,selectionEnd:i,value:a,beforeTxt:s,afterTxt:l}}function r(){if(!e.value||!t.value)return;const{value:o}=e.value,{beforeTxt:i,afterTxt:a,selectionStart:s}=t.value;if(!i||!a||!s)return;let l=o.length;if(o.endsWith(a))l=o.length-a.length;else if(o.startsWith(i))l=i.length;else{const u=i[s-1],c=o.indexOf(u,s-1);c!==-1&&(l=c+1)}e.value.setSelectionRange(l,l)}return[n,r]}var Af=Object.defineProperty,Fa=Object.getOwnPropertySymbols,Nf=Object.prototype.hasOwnProperty,Pf=Object.prototype.propertyIsEnumerable,ja=(e,t,n)=>t in e?Af(e,t,{enumerable:!0,configurable:!0,writable:!0,value:n}):e[t]=n,Va=(e,t)=>{for(var n in t||(t={}))Nf.call(t,n)&&ja(e,n,t[n]);if(Fa)for(var n of Fa(t))Pf.call(t,n)&&ja(e,n,t[n]);return e},An=te({name:"Input",inheritAttrs:!1,props:{modelValue:String,defaultValue:{type:String,default:""},size:{type:String},allowClear:{type:Boolean,default:!1},disabled:{type:Boolean,default:!1},readonly:{type:Boolean,default:!1},error:{type:Boolean,default:!1},placeholder:String,maxLength:{type:[Number,Object],default:0},showWordLimit:{type:Boolean,default:!1},wordLength:{type:Function},wordSlice:{type:Function},inputAttrs:{type:Object},type:{type:String,default:"text"}},emits:{"update:modelValue":e=>!0,input:(e,t)=>!0,change:(e,t)=>!0,pressEnter:e=>!0,clear:e=>!0,focus:e=>!0,blur:e=>!0},setup(e,{emit:t,slots:n,attrs:r}){const{size:o,disabled:i,error:a,modelValue:s}=Re(e),l=oe("input"),u=H(),{mergedSize:c,mergedDisabled:d,mergedError:m,feedback:_,eventHandlers:S}=yt({size:o,disabled:i,error:a}),{mergedSize:E}=Mt(c),[L,y]=Xl(u),$=H(e.defaultValue),w=C(()=>{var re;return(re=e.modelValue)!=null?re:$.value});Ne(s,re=>{(vt(re)||jn(re))&&($.value="")});let h=w.value;const p=H(!1),b=C(()=>e.allowClear&&!d.value&&!!w.value),v=H(!1),P=H(""),A=re=>{var g;return 
it(e.wordLength)?e.wordLength(re):(g=re.length)!=null?g:0},T=C(()=>A(w.value)),j=C(()=>m.value||!!(Be(e.maxLength)&&e.maxLength.errorOnly&&T.value>U.value)),J=C(()=>Be(e.maxLength)&&!!e.maxLength.errorOnly),U=C(()=>Be(e.maxLength)?e.maxLength.length:e.maxLength),O=re=>{var g,f;U.value&&!J.value&&A(re)>U.value&&(re=(f=(g=e.wordSlice)==null?void 0:g.call(e,re,U.value))!=null?f:re.slice(0,U.value)),$.value=re,t("update:modelValue",re)},N=re=>{u.value&&re.target!==u.value&&(re.preventDefault(),u.value.focus())},D=(re,g)=>{var f,k;re!==h&&(h=re,t("change",re,g),(k=(f=S.value)==null?void 0:f.onChange)==null||k.call(f,g))},V=re=>{var g,f;p.value=!0,h=w.value,t("focus",re),(f=(g=S.value)==null?void 0:g.onFocus)==null||f.call(g,re)},G=re=>{var g,f;p.value=!1,D(w.value,re),t("blur",re),(f=(g=S.value)==null?void 0:g.onBlur)==null||f.call(g,re)},B=re=>{var g,f,k;const{value:q,selectionStart:ne,selectionEnd:be}=re.target;if(re.type==="compositionend"){if(v.value=!1,P.value="",U.value&&!J.value&&w.value.length>=U.value&&A(q)>U.value&&ne===be){I();return}O(q),t("input",q,re),(f=(g=S.value)==null?void 0:g.onInput)==null||f.call(g,re),I()}else v.value=!0,P.value=w.value+((k=re.data)!=null?k:"")},I=()=>{L(),Je(()=>{u.value&&w.value!==u.value.value&&(u.value.value=w.value,y())})},Y=re=>{var g,f;const{value:k}=re.target;if(!v.value){if(U.value&&!J.value&&(w.value.length>=U.value||A(k)>U.value)&&re.inputType==="insertText"){I();return}O(k),t("input",k,re),(f=(g=S.value)==null?void 0:g.onInput)==null||f.call(g,re),I()}},Q=re=>{O(""),D("",re),t("clear",re)},he=re=>{const g=re.key||re.code;!v.value&&g===Ji.key&&(D(w.value,re),t("pressEnter",re))},me=C(()=>[`${l}-outer`,`${l}-outer-size-${E.value}`,{[`${l}-outer-has-suffix`]:!!n.suffix,[`${l}-outer-disabled`]:d.value}]),Se=C(()=>[`${l}-wrapper`,{[`${l}-error`]:j.value,[`${l}-disabled`]:d.value,[`${l}-focus`]:p.value}]),Ae=C(()=>[l,`${l}-size-${E.value}`]),Ie=C(()=>Vn(r,zt)),Te=C(()=>xn(r,zt)),we=C(()=>{const re=Va(Va({},Te.value),e.inputAttrs);return j.value&&(re["aria-invalid"]=!0),re}),Me=re=>{var g;return z("span",Fe({class:Se.value,onMousedown:N},re?void 0:Ie.value),[n.prefix&&z("span",{class:`${l}-prefix`},[n.prefix()]),z("input",Fe(we.value,{ref:u,class:Ae.value,value:w.value,type:e.type,placeholder:e.placeholder,readonly:e.readonly,disabled:d.value,onInput:Y,onKeydown:he,onFocus:V,onBlur:G,onCompositionstart:B,onCompositionupdate:B,onCompositionend:B}),null),b.value&&z(gt,{prefix:l,class:`${l}-clear-btn`,onClick:Q},{default:()=>[z(qt,null,null)]}),(n.suffix||!!e.maxLength&&e.showWordLimit||!!_.value)&&z("span",{class:[`${l}-suffix`,{[`${l}-suffix-has-feedback`]:_.value}]},[!!e.maxLength&&e.showWordLimit&&z("span",{class:`${l}-word-limit`},[T.value,nt("/"),U.value]),(g=n.suffix)==null?void 0:g.call(n),!!_.value&&z(Xi,{type:_.value},null)])])};return{inputRef:u,render:()=>n.prepend||n.append?z("span",Fe({class:me.value},Ie.value),[n.prepend&&z("span",{class:`${l}-prepend`},[n.prepend()]),Me(!0),n.append&&z("span",{class:`${l}-append`},[n.append()])]):Me()}},methods:{focus(){var e;(e=this.inputRef)==null||e.focus()},blur(){var e;(e=this.inputRef)==null||e.blur()}},render(){return this.render()}});const 
If=te({name:"IconSearch",props:{size:{type:[Number,String]},strokeWidth:{type:Number,default:4},strokeLinecap:{type:String,default:"butt",validator:e=>["butt","round","square"].includes(e)},strokeLinejoin:{type:String,default:"miter",validator:e=>["arcs","bevel","miter","miter-clip","round"].includes(e)},rotate:Number,spin:Boolean},emits:{click:e=>!0},setup(e,{emit:t}){const n=oe("icon"),r=C(()=>[n,`${n}-search`,{[`${n}-spin`]:e.spin}]),o=C(()=>{const a={};return e.size&&(a.fontSize=de(e.size)?`${e.size}px`:e.size),e.rotate&&(a.transform=`rotate(${e.rotate}deg)`),a});return{cls:r,innerStyle:o,onClick:a=>{t("click",a)}}}}),Mf=["stroke-width","stroke-linecap","stroke-linejoin"],Rf=fe("path",{d:"M33.072 33.071c6.248-6.248 6.248-16.379 0-22.627-6.249-6.249-16.38-6.249-22.628 0-6.248 6.248-6.248 16.379 0 22.627 6.248 6.248 16.38 6.248 22.628 0Zm0 0 8.485 8.485"},null,-1),Bf=[Rf];function Df(e,t,n,r,o,i){return x(),ee("svg",{viewBox:"0 0 48 48",fill:"none",xmlns:"http://www.w3.org/2000/svg",stroke:"currentColor",class:K(e.cls),style:$e(e.innerStyle),"stroke-width":e.strokeWidth,"stroke-linecap":e.strokeLinecap,"stroke-linejoin":e.strokeLinejoin,onClick:t[0]||(t[0]=(...a)=>e.onClick&&e.onClick(...a))},Bf,14,Mf)}var Zr=ce(If,[["render",Df]]);const mi=Object.assign(Zr,{install:(e,t)=>{var n;const r=(n=t==null?void 0:t.iconPrefix)!=null?n:"";e.component(r+Zr.name,Zr)}}),Jl=Symbol("ArcoButtonGroup"),Ff=te({name:"Button",components:{IconLoading:It},props:{type:{type:String},shape:{type:String},status:{type:String},size:{type:String},long:{type:Boolean,default:!1},loading:{type:Boolean,default:!1},disabled:{type:Boolean},htmlType:{type:String,default:"button"},href:String},emits:{click:e=>!0},setup(e,{emit:t}){const{size:n,disabled:r}=Re(e),o=oe("btn"),i=et(Jl,void 0),a=C(()=>{var _;return(_=n.value)!=null?_:i==null?void 0:i.size}),s=C(()=>!!(r.value||i!=null&&i.disabled)),{mergedSize:l,mergedDisabled:u}=yt({size:a,disabled:s}),{mergedSize:c}=Mt(l),d=C(()=>{var _,S,E,L,y,$;return[o,`${o}-${(S=(_=e.type)!=null?_:i==null?void 0:i.type)!=null?S:"secondary"}`,`${o}-shape-${(L=(E=e.shape)!=null?E:i==null?void 0:i.shape)!=null?L:"square"}`,`${o}-size-${c.value}`,`${o}-status-${($=(y=e.status)!=null?y:i==null?void 0:i.status)!=null?$:"normal"}`,{[`${o}-long`]:e.long,[`${o}-loading`]:e.loading,[`${o}-disabled`]:u.value,[`${o}-link`]:Vt(e.href)}]});return{prefixCls:o,cls:d,mergedDisabled:u,handleClick:_=>{if(e.disabled||e.loading){_.preventDefault();return}t("click",_)}}}}),jf=["href"],Vf=["type","disabled"];function xf(e,t,n,r,o,i){const a=ue("icon-loading");return e.href?(x(),ee("a",{key:0,class:K([e.cls,{[`${e.prefixCls}-only-icon`]:e.$slots.icon&&!e.$slots.default}]),href:e.mergedDisabled||e.loading?void 0:e.href,onClick:t[0]||(t[0]=(...s)=>e.handleClick&&e.handleClick(...s))},[e.loading||e.$slots.icon?(x(),ee("span",{key:0,class:K(`${e.prefixCls}-icon`)},[e.loading?(x(),ve(a,{key:0,spin:"true"})):se(e.$slots,"icon",{key:1})],2)):pe("v-if",!0),se(e.$slots,"default")],10,jf)):(x(),ee("button",{key:1,class:K([e.cls,{[`${e.prefixCls}-only-icon`]:e.$slots.icon&&!e.$slots.default}]),type:e.htmlType,disabled:e.mergedDisabled,onClick:t[1]||(t[1]=(...s)=>e.handleClick&&e.handleClick(...s))},[e.loading||e.$slots.icon?(x(),ee("span",{key:0,class:K(`${e.prefixCls}-icon`)},[e.loading?(x(),ve(a,{key:0,spin:!0})):se(e.$slots,"icon",{key:1})],2)):pe("v-if",!0),se(e.$slots,"default")],10,Vf))}var Qr=ce(Ff,[["render",xf]]);const 
zf=te({name:"ButtonGroup",props:{type:{type:String},status:{type:String},shape:{type:String},size:{type:String},disabled:{type:Boolean}},setup(e){const{type:t,size:n,status:r,disabled:o,shape:i}=Re(e),a=oe("btn-group");return $t(Jl,ze({type:t,size:n,shape:i,status:r,disabled:o})),{prefixCls:a}}});function Uf(e,t,n,r,o,i){return x(),ee("div",{class:K(e.prefixCls)},[se(e.$slots,"default")],2)}var eo=ce(zf,[["render",Uf]]);const vi=Object.assign(Qr,{Group:eo,install:(e,t)=>{Ve(e,t);const n=je(t);e.component(n+Qr.name,Qr),e.component(n+eo.name,eo)}});var to=te({name:"InputSearch",props:{searchButton:{type:Boolean,default:!1},loading:{type:Boolean,default:!1},disabled:{type:Boolean,default:!1},size:{type:String},buttonText:{type:String},buttonProps:{type:Object}},emits:{search:(e,t)=>!0},setup(e,{emit:t,slots:n}){const{size:r}=Re(e),o=oe("input-search"),{mergedSize:i}=Mt(r),a=H(),s=d=>{a.value.inputRef&&t("search",a.value.inputRef.value,d)},l=()=>{var d;return z(rt,null,[e.loading?z(It,null,null):z(gt,{onClick:s},{default:()=>[z(mi,null,null)]}),(d=n.suffix)==null?void 0:d.call(n)])},u=()=>{var d;let m={};return e.buttonText||n["button-default"]||n["button-icon"]?m={default:(d=n["button-default"])!=null?d:e.buttonText?()=>e.buttonText:void 0,icon:n["button-icon"]}:m={icon:()=>z(mi,null,null)},z(vi,Fe({type:"primary",class:`${o}-btn`,disabled:e.disabled,size:i.value,loading:e.loading},e.buttonProps,{onClick:s}),m)};return{inputRef:a,render:()=>z(An,{ref:a,class:o,size:i.value,disabled:e.disabled},{prepend:n.prepend,prefix:n.prefix,suffix:e.searchButton?n.suffix:l,append:e.searchButton?u:n.append})}},methods:{focus(){var e;(e=this.inputRef)==null||e.focus()},blur(){var e;(e=this.inputRef)==null||e.blur()}},render(){return this.render()}});const Wf=te({name:"IconEye",props:{size:{type:[Number,String]},strokeWidth:{type:Number,default:4},strokeLinecap:{type:String,default:"butt",validator:e=>["butt","round","square"].includes(e)},strokeLinejoin:{type:String,default:"miter",validator:e=>["arcs","bevel","miter","miter-clip","round"].includes(e)},rotate:Number,spin:Boolean},emits:{click:e=>!0},setup(e,{emit:t}){const n=oe("icon"),r=C(()=>[n,`${n}-eye`,{[`${n}-spin`]:e.spin}]),o=C(()=>{const a={};return e.size&&(a.fontSize=de(e.size)?`${e.size}px`:e.size),e.rotate&&(a.transform=`rotate(${e.rotate}deg)`),a});return{cls:r,innerStyle:o,onClick:a=>{t("click",a)}}}}),Hf=["stroke-width","stroke-linecap","stroke-linejoin"],qf=fe("path",{"clip-rule":"evenodd",d:"M24 37c6.627 0 12.627-4.333 18-13-5.373-8.667-11.373-13-18-13-6.627 0-12.627 4.333-18 13 5.373 8.667 11.373 13 18 13Z"},null,-1),Gf=fe("path",{d:"M29 24a5 5 0 1 1-10 0 5 5 0 0 1 10 0Z"},null,-1),Kf=[qf,Gf];function Yf(e,t,n,r,o,i){return x(),ee("svg",{viewBox:"0 0 48 48",fill:"none",xmlns:"http://www.w3.org/2000/svg",stroke:"currentColor",class:K(e.cls),style:$e(e.innerStyle),"stroke-width":e.strokeWidth,"stroke-linecap":e.strokeLinecap,"stroke-linejoin":e.strokeLinejoin,onClick:t[0]||(t[0]=(...a)=>e.onClick&&e.onClick(...a))},Kf,14,Hf)}var no=ce(Wf,[["render",Yf]]);const Xf=Object.assign(no,{install:(e,t)=>{var n;const r=(n=t==null?void 
0:t.iconPrefix)!=null?n:"";e.component(r+no.name,no)}}),Jf=te({name:"IconEyeInvisible",props:{size:{type:[Number,String]},strokeWidth:{type:Number,default:4},strokeLinecap:{type:String,default:"butt",validator:e=>["butt","round","square"].includes(e)},strokeLinejoin:{type:String,default:"miter",validator:e=>["arcs","bevel","miter","miter-clip","round"].includes(e)},rotate:Number,spin:Boolean},emits:{click:e=>!0},setup(e,{emit:t}){const n=oe("icon"),r=C(()=>[n,`${n}-eye-invisible`,{[`${n}-spin`]:e.spin}]),o=C(()=>{const a={};return e.size&&(a.fontSize=de(e.size)?`${e.size}px`:e.size),e.rotate&&(a.transform=`rotate(${e.rotate}deg)`),a});return{cls:r,innerStyle:o,onClick:a=>{t("click",a)}}}}),Zf=["stroke-width","stroke-linecap","stroke-linejoin"],Qf=fe("path",{d:"M14 14.5c-2.69 2-5.415 5.33-8 9.5 5.373 8.667 11.373 13 18 13 3.325 0 6.491-1.09 9.5-3.271M17.463 12.5C19 11 21.75 11 24 11c6.627 0 12.627 4.333 18 13-1.766 2.848-3.599 5.228-5.5 7.14"},null,-1),eh=fe("path",{d:"M29 24a5 5 0 1 1-10 0 5 5 0 0 1 10 0ZM6.852 7.103l34.294 34.294"},null,-1),th=[Qf,eh];function nh(e,t,n,r,o,i){return x(),ee("svg",{viewBox:"0 0 48 48",fill:"none",xmlns:"http://www.w3.org/2000/svg",stroke:"currentColor",class:K(e.cls),style:$e(e.innerStyle),"stroke-width":e.strokeWidth,"stroke-linecap":e.strokeLinecap,"stroke-linejoin":e.strokeLinejoin,onClick:t[0]||(t[0]=(...a)=>e.onClick&&e.onClick(...a))},th,14,Zf)}var ro=ce(Jf,[["render",nh]]);const rh=Object.assign(ro,{install:(e,t)=>{var n;const r=(n=t==null?void 0:t.iconPrefix)!=null?n:"";e.component(r+ro.name,ro)}}),oh=te({name:"InputPassword",components:{IconEye:Xf,IconEyeInvisible:rh,AIconHover:gt,AInput:An},props:{invisibleButton:{type:Boolean,default:!0}},setup(){const e=H(),t=H(!0);return{inputRef:e,invisible:t,handleInvisible:()=>{t.value=!t.value}}},methods:{focus(){var e;(e=this.inputRef)==null||e.focus()},blur(){var e;(e=this.inputRef)==null||e.blur()}}});function ih(e,t,n,r,o,i){const a=ue("icon-eye"),s=ue("icon-eye-invisible"),l=ue("a-icon-hover"),u=ue("a-input");return x(),ve(u,{ref:"inputRef",type:e.invisible?"password":"text"},Pl({_:2},[e.$slots.prepend?{name:"prepend",fn:ke(()=>[se(e.$slots,"prepend")])}:void 0,e.$slots.prefix?{name:"prefix",fn:ke(()=>[se(e.$slots,"prefix")])}:void 0,e.invisibleButton||e.$slots.suffix?{name:"suffix",fn:ke(()=>[e.invisibleButton?(x(),ve(l,{key:0,onClick:e.handleInvisible,onMousedown:t[0]||(t[0]=cn(()=>{},["prevent"])),onMouseup:t[1]||(t[1]=cn(()=>{},["prevent"]))},{default:ke(()=>[e.invisible?(x(),ve(s,{key:1})):(x(),ve(a,{key:0}))]),_:1},8,["onClick"])):pe("v-if",!0),se(e.$slots,"suffix")])}:void 0,e.$slots.append?{name:"append",fn:ke(()=>[se(e.$slots,"append")])}:void 0]),1032,["type"])}var oo=ce(oh,[["render",ih]]);const ah=te({name:"InputGroup",setup(){return{prefixCls:oe("input-group")}}});function sh(e,t,n,r,o,i){return x(),ee("div",{class:K(e.prefixCls)},[se(e.$slots,"default")],2)}var io=ce(ah,[["render",sh]]);const lh=Object.assign(An,{Search:to,Password:oo,Group:io,install:(e,t)=>{Ve(e,t);const n=je(t);e.component(n+An.name,An),e.component(n+io.name,io),e.component(n+to.name,to),e.component(n+oo.name,oo)}});var uh=Object.defineProperty,xa=Object.getOwnPropertySymbols,ch=Object.prototype.hasOwnProperty,dh=Object.prototype.propertyIsEnumerable,za=(e,t,n)=>t in e?uh(e,t,{enumerable:!0,configurable:!0,writable:!0,value:n}):e[t]=n,$n=(e,t)=>{for(var n in t||(t={}))ch.call(t,n)&&za(e,n,t[n]);if(xa)for(var n of xa(t))dh.call(t,n)&&za(e,n,t[n]);return e};const 
fh=()=>({width:document.documentElement.clientWidth||window.innerWidth,height:document.documentElement.clientHeight||window.innerHeight}),Ua=(e,t)=>{var n,r;const o=e.getBoundingClientRect();return{top:o.top,bottom:o.bottom,left:o.left,right:o.right,scrollTop:o.top-t.top,scrollBottom:o.bottom-t.top,scrollLeft:o.left-t.left,scrollRight:o.right-t.left,width:(n=e.offsetWidth)!=null?n:e.clientWidth,height:(r=e.offsetHeight)!=null?r:e.clientHeight}},hh=e=>{switch(e){case"top":case"tl":case"tr":return"top";case"bottom":case"bl":case"br":return"bottom";case"left":case"lt":case"lb":return"left";case"right":case"rt":case"rb":return"right";default:return"top"}},Wn=(e,t)=>{switch(t){case"top":switch(e){case"bottom":return"top";case"bl":return"tl";case"br":return"tr";default:return e}case"bottom":switch(e){case"top":return"bottom";case"tl":return"bl";case"tr":return"br";default:return e}case"left":switch(e){case"right":return"left";case"rt":return"lt";case"rb":return"lb";default:return e}case"right":switch(e){case"left":return"right";case"lt":return"rt";case"lb":return"rb";default:return e}default:return e}},ph=(e,t,{containerRect:n,triggerRect:r,popupRect:o,offset:i,translate:a})=>{const s=hh(e),l=fh(),u={top:n.top+t.top,bottom:l.height-(n.top+t.top+o.height),left:n.left+t.left,right:l.width-(n.left+t.left+o.width)};let c=e;if(s==="top"&&u.top<0)if(r.top>o.height)t.top=-n.top;else{const d=On("bottom",r,o,{offset:i,translate:a});l.height-(n.top+d.top+o.height)>0&&(c=Wn(e,"bottom"),t.top=d.top)}if(s==="bottom"&&u.bottom<0)if(l.height-r.bottom>o.height)t.top=-n.top+(l.height-o.height);else{const d=On("top",r,o,{offset:i,translate:a});n.top+d.top>0&&(c=Wn(e,"top"),t.top=d.top)}if(s==="left"&&u.left<0)if(r.left>o.width)t.left=-n.left;else{const d=On("right",r,o,{offset:i,translate:a});l.width-(n.left+d.left+o.width)>0&&(c=Wn(e,"right"),t.left=d.left)}if(s==="right"&&u.right<0)if(l.width-r.right>o.width)t.left=-n.left+(l.width-o.width);else{const d=On("left",r,o,{offset:i,translate:a});n.left+d.left>0&&(c=Wn(e,"left"),t.left=d.left)}return(s==="top"||s==="bottom")&&(u.left<0?t.left=-n.left:u.right<0&&(t.left=-n.left+(l.width-o.width))),(s==="left"||s==="right")&&(u.top<0?t.top=-n.top:u.bottom<0&&(t.top=-n.top+(l.height-o.height))),{popupPosition:t,position:c}},On=(e,t,n,{offset:r=0,translate:o=[0,0]}={})=>{var i;const a=(i=We(o)?o:o[e])!=null?i:[0,0];switch(e){case"top":return{left:t.scrollLeft+Math.round(t.width/2)-Math.round(n.width/2)+a[0],top:t.scrollTop-n.height-r+a[1]};case"tl":return{left:t.scrollLeft+a[0],top:t.scrollTop-n.height-r+a[1]};case"tr":return{left:t.scrollRight-n.width+a[0],top:t.scrollTop-n.height-r+a[1]};case"bottom":return{left:t.scrollLeft+Math.round(t.width/2)-Math.round(n.width/2)+a[0],top:t.scrollBottom+r+a[1]};case"bl":return{left:t.scrollLeft+a[0],top:t.scrollBottom+r+a[1]};case"br":return{left:t.scrollRight-n.width+a[0],top:t.scrollBottom+r+a[1]};case"left":return{left:t.scrollLeft-n.width-r+a[0],top:t.scrollTop+Math.round(t.height/2)-Math.round(n.height/2)+a[1]};case"lt":return{left:t.scrollLeft-n.width-r+a[0],top:t.scrollTop+a[1]};case"lb":return{left:t.scrollLeft-n.width-r+a[0],top:t.scrollBottom-n.height+a[1]};case"right":return{left:t.scrollRight+r+a[0],top:t.scrollTop+Math.round(t.height/2)-Math.round(n.height/2)+a[1]};case"rt":return{left:t.scrollRight+r+a[0],top:t.scrollTop+a[1]};case"rb":return{left:t.scrollRight+r+a[0],top:t.scrollBottom-n.height+a[1]};default:return{left:0,top:0}}},mh=e=>{let 
t="0";["top","bottom"].includes(e)?t="50%":["left","lt","lb","tr","br"].includes(e)&&(t="100%");let n="0";return["left","right"].includes(e)?n="50%":["top","tl","tr","lt","rt"].includes(e)&&(n="100%"),`${t} ${n}`},vh=(e,t,n,r,{offset:o=0,translate:i=[0,0],customStyle:a={},autoFitPosition:s=!1}={})=>{let l=e,u=On(e,n,r,{offset:o,translate:i});if(s){const d=ph(e,u,{containerRect:t,popupRect:r,triggerRect:n,offset:o,translate:i});u=d.popupPosition,l=d.position}return{style:$n({left:`${u.left}px`,top:`${u.top}px`},a),position:l}},gh=(e,t,n,{customStyle:r={}})=>{if(["top","tl","tr","bottom","bl","br"].includes(e)){let i=Math.abs(t.scrollLeft+t.width/2-n.scrollLeft);return i>n.width-8&&(t.width>n.width?i=n.width/2:i=n.width-8),["top","tl","tr"].includes(e)?$n({left:`${i}px`,bottom:"0",transform:"translate(-50%,50%) rotate(45deg)"},r):$n({left:`${i}px`,top:"0",transform:"translate(-50%,-50%) rotate(45deg)"},r)}let o=Math.abs(t.scrollTop+t.height/2-n.scrollTop);return o>n.height-8&&(t.height>n.height?o=n.height/2:o=n.height-8),["left","lt","lb"].includes(e)?$n({top:`${o}px`,right:"0",transform:"translate(50%,-50%) rotate(45deg)"},r):$n({top:`${o}px`,left:"0",transform:"translate(-50%,-50%) rotate(45deg)"},r)},yh=e=>e.scrollHeight>e.offsetHeight||e.scrollWidth>e.offsetWidth,Wa=e=>{var t;const n=[];let r=e;for(;r&&r!==document.documentElement;)yh(r)&&n.push(r),r=(t=r.parentElement)!=null?t:void 0;return n},Zl=()=>{const e={},t=H(),n=()=>{const r=ql(e.value);r!==t.value&&(t.value=r)};return Ke(()=>n()),vn(()=>n()),{children:e,firstElement:t}};var Mn=te({name:"ResizeObserver",props:{watchOnUpdated:Boolean},emits:["resize"],setup(e,{emit:t,slots:n}){const{children:r,firstElement:o}=Zl();let i;const a=l=>{l&&(i=new Ui(u=>{const c=u[0];t("resize",c)}),i.observe(l))},s=()=>{i&&(i.disconnect(),i=null)};return Ne(o,l=>{i&&s(),l&&a(l)}),Ht(()=>{i&&s()}),()=>{var l;return r.value=(l=n.default)==null?void 0:l.call(n),r.value}}});function bh(e,t){const n=H(e[t]);return vn(()=>{const r=e[t];n.value!==r&&(n.value=r)}),n}const Ha=Symbol("ArcoTrigger"),_h=1e3,Ch=5e3,Sh=1;class Eh{constructor(){this.popupStack={popup:new Set,dialog:new Set,message:new Set},this.getNextZIndex=t=>(t==="message"?Array.from(this.popupStack.message).pop()||Ch:Array.from(this.popupStack.popup).pop()||_h)+Sh,this.add=t=>{const n=this.getNextZIndex(t);return this.popupStack[t].add(n),t==="dialog"&&this.popupStack.popup.add(n),n},this.delete=(t,n)=>{this.popupStack[n].delete(t),n==="dialog"&&this.popupStack.popup.delete(t)},this.isLastDialog=t=>this.popupStack.dialog.size>1?t===Array.from(this.popupStack.dialog).pop():!0}}const ao=new Eh;function Ql(e,{visible:t,runOnMounted:n}={}){const r=H(0),o=()=>{r.value=ao.add(e)},i=()=>{ao.delete(r.value,e)},a=()=>e==="dialog"?ao.isLastDialog(r.value):!1;return Ne(()=>t==null?void 0:t.value,s=>{s?o():i()},{immediate:!0}),n&&(Ke(()=>{o()}),Ht(()=>{i()})),{zIndex:qc(r),open:o,close:i,isLastDialog:a}}const wh=({elementRef:e,onResize:t})=>{let n;return{createResizeObserver:()=>{e.value&&(n=new Ui(i=>{const a=i[0];it(t)&&t(a)}),n.observe(e.value))},destroyResizeObserver:()=>{n&&(n.disconnect(),n=null)}}};var kh=te({name:"ClientOnly",setup(e,{slots:t}){const n=H(!1);return Ke(()=>n.value=!0),()=>{var r;return n.value?(r=t.default)==null?void 0:r.call(t):null}}});const $h=({popupContainer:e,visible:t,defaultContainer:n="body",documentContainer:r})=>{const o=H(e.value),i=H(),a=()=>{const 
s=Ra(e.value),l=s?e.value:n,u=s??(r?document.documentElement:Ra(n));l!==o.value&&(o.value=l),u!==i.value&&(i.value=u)};return Ke(()=>a()),Ne(t,s=>{o.value!==e.value&&s&&a()}),{teleportContainer:o,containerRef:i}};var Oh=Object.defineProperty,Lh=Object.defineProperties,Th=Object.getOwnPropertyDescriptors,qa=Object.getOwnPropertySymbols,Ah=Object.prototype.hasOwnProperty,Nh=Object.prototype.propertyIsEnumerable,Ga=(e,t,n)=>t in e?Oh(e,t,{enumerable:!0,configurable:!0,writable:!0,value:n}):e[t]=n,Ph=(e,t)=>{for(var n in t||(t={}))Ah.call(t,n)&&Ga(e,n,t[n]);if(qa)for(var n of qa(t))Nh.call(t,n)&&Ga(e,n,t[n]);return e},Ih=(e,t)=>Lh(e,Th(t));const Mh=["onClick","onMouseenter","onMouseleave","onFocusin","onFocusout","onContextmenu"];var so=te({name:"Trigger",inheritAttrs:!1,props:{popupVisible:{type:Boolean,default:void 0},defaultPopupVisible:{type:Boolean,default:!1},trigger:{type:[String,Array],default:"hover"},position:{type:String,default:"bottom"},disabled:{type:Boolean,default:!1},popupOffset:{type:Number,default:0},popupTranslate:{type:[Array,Object]},showArrow:{type:Boolean,default:!1},alignPoint:{type:Boolean,default:!1},popupHoverStay:{type:Boolean,default:!0},blurToClose:{type:Boolean,default:!0},clickToClose:{type:Boolean,default:!0},clickOutsideToClose:{type:Boolean,default:!0},unmountOnClose:{type:Boolean,default:!0},contentClass:{type:[String,Array,Object]},contentStyle:{type:Object},arrowClass:{type:[String,Array,Object]},arrowStyle:{type:Object},popupStyle:{type:Object},animationName:{type:String,default:"fade-in"},duration:{type:[Number,Object]},mouseEnterDelay:{type:Number,default:100},mouseLeaveDelay:{type:Number,default:100},focusDelay:{type:Number,default:0},autoFitPopupWidth:{type:Boolean,default:!1},autoFitPopupMinWidth:{type:Boolean,default:!1},autoFixPosition:{type:Boolean,default:!0},popupContainer:{type:[String,Object]},updateAtScroll:{type:Boolean,default:!1},autoFitTransformOrigin:{type:Boolean,default:!1},hideEmpty:{type:Boolean,default:!1},openedClass:{type:[String,Array,Object]},autoFitPosition:{type:Boolean,default:!0},renderToBody:{type:Boolean,default:!0},preventFocus:{type:Boolean,default:!1}},emits:{"update:popupVisible":e=>!0,popupVisibleChange:e=>!0,show:()=>!0,hide:()=>!0,resize:()=>!0},setup(e,{emit:t,slots:n,attrs:r}){const{popupContainer:o}=Re(e),i=oe("trigger"),a=C(()=>Vn(r,Mh)),s=et(xt,void 0),l=C(()=>[].concat(e.trigger)),u=new Set,c=et(Ha,void 0),{children:d,firstElement:m}=Zl(),_=H(),S=H(e.defaultPopupVisible),E=H(e.position),L=H({}),y=H({}),$=H({}),w=H(),h=H({top:0,left:0}),p=C(()=>{var Z;return(Z=e.popupVisible)!=null?Z:S.value}),{teleportContainer:b,containerRef:v}=$h({popupContainer:o,visible:p,documentContainer:!0}),{zIndex:P}=Ql("popup",{visible:p});let A=0,T=!1;const j=()=>{A&&(window.clearTimeout(A),A=0)},J=Z=>{if(e.alignPoint){const{pageX:R,pageY:X}=Z;h.value={top:X,left:R}}},U=()=>{if(!m.value||!_.value||!v.value)return;const 
Z=v.value.getBoundingClientRect(),R=e.alignPoint?{top:h.value.top,bottom:h.value.top,left:h.value.left,right:h.value.left,scrollTop:h.value.top,scrollBottom:h.value.top,scrollLeft:h.value.left,scrollRight:h.value.left,width:0,height:0}:Ua(m.value,Z),X=()=>Ua(_.value,Z),Ue=X(),{style:W,position:ie}=vh(e.position,Z,R,Ue,{offset:e.popupOffset,translate:e.popupTranslate,customStyle:e.popupStyle,autoFitPosition:e.autoFitPosition});e.autoFitTransformOrigin&&(y.value={transformOrigin:mh(ie)}),e.autoFitPopupMinWidth?W.minWidth=`${R.width}px`:e.autoFitPopupWidth&&(W.width=`${R.width}px`),E.value!==ie&&(E.value=ie),L.value=W,e.showArrow&&Je(()=>{$.value=gh(ie,R,X(),{customStyle:e.arrowStyle})})},O=(Z,R)=>{if(Z===p.value&&A===0)return;const X=()=>{S.value=Z,t("update:popupVisible",Z),t("popupVisibleChange",Z),Z&&Je(()=>{U()})};R?(j(),Z!==p.value&&(A=window.setTimeout(X,R))):X()},N=Z=>{var R;(R=r.onClick)==null||R.call(r,Z),!(e.disabled||p.value&&!e.clickToClose)&&(l.value.includes("click")?(J(Z),O(!p.value)):l.value.includes("contextMenu")&&p.value&&O(!1))},D=Z=>{var R;(R=r.onMouseenter)==null||R.call(r,Z),!(e.disabled||!l.value.includes("hover"))&&(J(Z),O(!0,e.mouseEnterDelay))},V=Z=>{c==null||c.onMouseenter(Z),D(Z)},G=Z=>{var R;(R=r.onMouseleave)==null||R.call(r,Z),!(e.disabled||!l.value.includes("hover"))&&O(!1,e.mouseLeaveDelay)},B=Z=>{c==null||c.onMouseleave(Z),G(Z)},I=Z=>{var R;(R=r.onFocusin)==null||R.call(r,Z),!(e.disabled||!l.value.includes("focus"))&&O(!0,e.focusDelay)},Y=Z=>{var R;(R=r.onFocusout)==null||R.call(r,Z),!(e.disabled||!l.value.includes("focus"))&&e.blurToClose&&O(!1)},Q=Z=>{var R;(R=r.onContextmenu)==null||R.call(r,Z),!(e.disabled||!l.value.includes("contextMenu")||p.value&&!e.clickToClose)&&(J(Z),O(!p.value),Z.preventDefault())};$t(Ha,ze({onMouseenter:V,onMouseleave:B,addChildRef:Z=>{u.add(Z),c==null||c.addChildRef(Z)},removeChildRef:Z=>{u.delete(Z),c==null||c.removeChildRef(Z)}}));const Se=()=>{In(document.documentElement,"mousedown",Te),T=!1},Ae=bh(n,"content"),Ie=C(()=>{var Z;return e.hideEmpty&&Rd((Z=Ae.value)==null?void 0:Z.call(Ae))}),Te=Z=>{var R,X,Ue;if(!((R=m.value)!=null&&R.contains(Z.target)||(X=_.value)!=null&&X.contains(Z.target))){for(const W of u)if((Ue=W.value)!=null&&Ue.contains(Z.target))return;Se(),O(!1)}},we=Dd(()=>{p.value&&U()}),Me=()=>{p.value&&U()},xe=()=>{Me(),t("resize")},re=Z=>{e.preventFocus&&Z.preventDefault()};c==null||c.addChildRef(_);const g=C(()=>p.value?e.openedClass:void 0);let f;Ne(p,Z=>{if(e.clickOutsideToClose&&(!Z&&T?Se():Z&&!T&&(Ft(document.documentElement,"mousedown",Te),T=!0)),e.updateAtScroll||s!=null&&s.updateAtScroll){if(Z){f=Wa(m.value);for(const R of f)R.addEventListener("scroll",we)}else if(f){for(const R of f)R.removeEventListener("scroll",we);f=void 0}}Z&&(ne.value=!0)}),Ne(()=>[e.autoFitPopupWidth,e.autoFitPopupMinWidth],()=>{p.value&&U()});const{createResizeObserver:k,destroyResizeObserver:q}=wh({elementRef:v,onResize:Me});Ke(()=>{if(k(),p.value&&(U(),e.clickOutsideToClose&&!T&&(Ft(document.documentElement,"mousedown",Te),T=!0),e.updateAtScroll||s!=null&&s.updateAtScroll)){f=Wa(m.value);for(const Z of f)Z.addEventListener("scroll",we)}}),vn(()=>{p.value&&U()}),Gc(()=>{O(!1)}),Ht(()=>{if(c==null||c.removeChildRef(_),q(),T&&Se(),f){for(const Z of f)Z.removeEventListener("scroll",we);f=void 0}});const ne=H(p.value),be=H(!1),Ye=()=>{be.value=!0},Ze=()=>{be.value=!1,p.value&&t("show")},Sn=()=>{be.value=!1,p.value||(ne.value=!1,t("hide"))};return()=>{var Z,R;return d.value=(R=(Z=n.default)==null?void 
0:Z.call(n))!=null?R:[],Ul(d.value,{class:g.value,onClick:N,onMouseenter:D,onMouseleave:G,onFocusin:I,onFocusout:Y,onContextmenu:Q}),z(rt,null,[e.autoFixPosition?z(Mn,{onResize:xe},{default:()=>[d.value]}):d.value,z(kh,null,{default:()=>[z(Kc,{to:b.value,disabled:!e.renderToBody},{default:()=>[(!e.unmountOnClose||p.value||ne.value)&&!Ie.value&&z(Mn,{onResize:Me},{default:()=>[z("div",Fe({ref:_,class:[`${i}-popup`,`${i}-position-${E.value}`],style:Ih(Ph({},L.value),{zIndex:P.value,pointerEvents:be.value?"none":"auto"}),"trigger-placement":E.value,onMouseenter:V,onMouseleave:B,onMousedown:re},a.value),[z(Pn,{name:e.animationName,duration:e.duration,appear:!0,onBeforeEnter:Ye,onAfterEnter:Ze,onBeforeLeave:Ye,onAfterLeave:Sn},{default:()=>{var X;return[xi(z("div",{class:`${i}-popup-wrapper`,style:y.value},[z("div",{class:[`${i}-content`,e.contentClass],style:e.contentStyle},[(X=n.content)==null?void 0:X.call(n)]),e.showArrow&&z("div",{ref:w,class:[`${i}-arrow`,e.arrowClass],style:$.value},null)]),[[zi,p.value]])]}})])]})]})]})])}}});const hr=Object.assign(so,{install:(e,t)=>{Ve(e,t);const n=je(t);e.component(n+so.name,so)}}),Rh=te({name:"IconEmpty",props:{size:{type:[Number,String]},strokeWidth:{type:Number,default:4},strokeLinecap:{type:String,default:"butt",validator:e=>["butt","round","square"].includes(e)},strokeLinejoin:{type:String,default:"miter",validator:e=>["arcs","bevel","miter","miter-clip","round"].includes(e)},rotate:Number,spin:Boolean},emits:{click:e=>!0},setup(e,{emit:t}){const n=oe("icon"),r=C(()=>[n,`${n}-empty`,{[`${n}-spin`]:e.spin}]),o=C(()=>{const a={};return e.size&&(a.fontSize=de(e.size)?`${e.size}px`:e.size),e.rotate&&(a.transform=`rotate(${e.rotate}deg)`),a});return{cls:r,innerStyle:o,onClick:a=>{t("click",a)}}}}),Bh=["stroke-width","stroke-linecap","stroke-linejoin"],Dh=fe("path",{d:"M24 5v6m7 1 4-4m-18 4-4-4m28.5 22H28s-1 3-4 3-4-3-4-3H6.5M40 41H8a2 2 0 0 1-2-2v-8.46a2 2 0 0 1 .272-1.007l6.15-10.54A2 2 0 0 1 14.148 18H33.85a2 2 0 0 1 1.728.992l6.149 10.541A2 2 0 0 1 42 30.541V39a2 2 0 0 1-2 2Z"},null,-1),Fh=[Dh];function jh(e,t,n,r,o,i){return x(),ee("svg",{viewBox:"0 0 48 48",fill:"none",xmlns:"http://www.w3.org/2000/svg",stroke:"currentColor",class:K(e.cls),style:$e(e.innerStyle),"stroke-width":e.strokeWidth,"stroke-linecap":e.strokeLinecap,"stroke-linejoin":e.strokeLinejoin,onClick:t[0]||(t[0]=(...a)=>e.onClick&&e.onClick(...a))},Fh,14,Bh)}var lo=ce(Rh,[["render",jh]]);const Vh=Object.assign(lo,{install:(e,t)=>{var n;const r=(n=t==null?void 0:t.iconPrefix)!=null?n:"";e.component(r+lo.name,lo)}});var uo=te({name:"Empty",props:{description:String,imgSrc:String},setup(e,{slots:t}){const n=oe("empty"),{t:r}=sd(),o=et(xt,void 0);return()=>{var i,a,s,l;return o!=null&&o.slots.empty&&!(t.image||e.imgSrc)?o.slots.empty():z("div",{class:n},[z("div",{class:`${n}-image`},[(a=(i=t.image)==null?void 0:i.call(t))!=null?a:e.imgSrc?z("img",{src:e.imgSrc,alt:e.description||"empty"},null):z(Vh,null,null)]),z("div",{class:`${n}-description`},[(l=(s=t.default)==null?void 0:s.call(t))!=null?l:e.description||r("empty.description")])])}}});const xh=Object.assign(uo,{install:(e,t)=>{Ve(e,t);const n=je(t);e.component(n+uo.name,uo)}}),zh=5;var Uh=te({name:"DotLoading",props:{size:{type:Number}},setup(e){const t=oe("dot-loading");return()=>{const n=e.size?{width:`${e.size}px`,height:`${e.size}px`}:{};return z("div",{class:t,style:{width:e.size?`${e.size*7}px`:void 0,height:e.size?`${e.size}px`:void 
0}},[Array(zh).fill(1).map((r,o)=>z("div",{class:`${t}-item`,key:o,style:n},null))])}}}),co=te({name:"Spin",props:{size:{type:Number},loading:Boolean,dot:Boolean,tip:String},setup(e,{slots:t}){const n=oe("spin"),r=et(xt,void 0),o=C(()=>[n,{[`${n}-loading`]:e.loading,[`${n}-with-tip`]:e.tip&&!t.default}]),i=()=>{if(t.icon){const s=ln(t.icon());if(s)return Cr(s,{spin:!0})}return t.element?t.element():e.dot?z(Uh,{size:e.size},null):r!=null&&r.slots.loading?r.slots.loading():z(It,{spin:!0,size:e.size},null)},a=()=>{const s=e.size?{fontSize:`${e.size}px`}:void 0;return z(rt,null,[z("div",{class:`${n}-icon`,style:s},[i()]),e.tip&&z("div",{class:`${n}-tip`},[e.tip])])};return()=>z("div",{class:o.value},[t.default?z(rt,null,[t.default(),e.loading&&z("div",{class:`${n}-mask`},[z("div",{class:`${n}-mask-icon`},[a()])])]):a()])}});const Wh=Object.assign(co,{install:(e,t)=>{Ve(e,t);const n=je(t);e.component(n+co.name,co)}}),Hh=te({name:"Thumb",props:{data:{type:Object},direction:{type:String,default:"horizontal"},alwaysShow:{type:Boolean,default:!1},both:{type:Boolean,default:!1}},emits:["scroll"],setup(e,{emit:t}){const n=oe("scrollbar"),r=H(!1),o=H(),i=H(),a=C(()=>e.direction==="horizontal"?{size:"width",direction:"left",offset:"offsetWidth",client:"clientX"}:{size:"height",direction:"top",offset:"offsetHeight",client:"clientY"}),s=H(0),l=H(!1),u=H(0),c=C(()=>{var $,w;return{[a.value.size]:`${(w=($=e.data)==null?void 0:$.thumbSize)!=null?w:0}px`,[a.value.direction]:`${s.value}px`}}),d=$=>{$.preventDefault(),i.value&&(u.value=$[a.value.client]-i.value.getBoundingClientRect()[a.value.direction],l.value=!0,Ft(window,"mousemove",S),Ft(window,"mouseup",E),Ft(window,"contextmenu",E))},m=$=>{var w,h,p,b;if($.preventDefault(),i.value){const v=_($[a.value.client]>i.value.getBoundingClientRect()[a.value.direction]?s.value+((h=(w=e.data)==null?void 0:w.thumbSize)!=null?h:0):s.value-((b=(p=e.data)==null?void 0:p.thumbSize)!=null?b:0));v!==s.value&&(s.value=v,t("scroll",v))}},_=$=>$<0?0:e.data&&$>e.data.max?e.data.max:$,S=$=>{if(o.value&&i.value){const w=_($[a.value.client]-o.value.getBoundingClientRect()[a.value.direction]-u.value);w!==s.value&&(s.value=w,t("scroll",w))}},E=()=>{l.value=!1,In(window,"mousemove",S),In(window,"mouseup",E)},L=$=>{l.value||($=_($),$!==s.value&&(s.value=$))},y=C(()=>[`${n}-thumb`,`${n}-thumb-direction-${e.direction}`,{[`${n}-thumb-dragging`]:l.value}]);return{visible:r,trackRef:o,thumbRef:i,prefixCls:n,thumbCls:y,thumbStyle:c,handleThumbMouseDown:d,handleTrackClick:m,setOffset:L}}});function qh(e,t,n,r,o,i){return x(),ve(Pn,null,{default:ke(()=>[fe("div",{ref:"trackRef",class:K([`${e.prefixCls}-track`,`${e.prefixCls}-track-direction-${e.direction}`]),onMousedown:t[1]||(t[1]=cn((...a)=>e.handleTrackClick&&e.handleTrackClick(...a),["self"]))},[fe("div",{ref:"thumbRef",class:K(e.thumbCls),style:$e(e.thumbStyle),onMousedown:t[0]||(t[0]=(...a)=>e.handleThumbMouseDown&&e.handleThumbMouseDown(...a))},[fe("div",{class:K(`${e.prefixCls}-thumb-bar`)},null,2)],38)],34)]),_:1})}var Gh=ce(Hh,[["render",qh]]);const Ka=20,Hn=15,Kh=te({name:"Scrollbar",components:{ResizeObserver:Mn,Thumb:Gh},inheritAttrs:!1,props:{type:{type:String,default:"embed"},outerClass:[String,Object,Array],outerStyle:{type:[String,Object,Array]},hide:{type:Boolean,default:!1},disableHorizontal:{type:Boolean,default:!1},disableVertical:{type:Boolean,default:!1}},emits:{scroll:e=>!0},setup(e,{emit:t}){const 
n=oe("scrollbar"),r=H(),o=H(),i=H(),a=H(),s=H(),l=H(!1),u=H(!1),c=C(()=>l.value&&!e.disableHorizontal),d=C(()=>u.value&&!e.disableVertical),m=H(!1),_=()=>{var h,p,b,v,P,A;if(r.value){const{clientWidth:T,clientHeight:j,offsetWidth:J,offsetHeight:U,scrollWidth:O,scrollHeight:N,scrollTop:D,scrollLeft:V}=r.value;l.value=O>T,u.value=N>j,m.value=c.value&&d.value;const G=e.type==="embed"&&m.value?J-Hn:J,B=e.type==="embed"&&m.value?U-Hn:U,I=Math.round(G/Math.min(O/T,G/Ka)),Y=G-I,Q=(O-T)/Y,he=Math.round(B/Math.min(N/j,B/Ka)),me=B-he,Se=(N-j)/me;if(o.value={ratio:Q,thumbSize:I,max:Y},i.value={ratio:Se,thumbSize:he,max:me},D>0){const Ae=Math.round(D/((p=(h=i.value)==null?void 0:h.ratio)!=null?p:1));(b=s.value)==null||b.setOffset(Ae)}if(V>0){const Ae=Math.round(V/((P=(v=i.value)==null?void 0:v.ratio)!=null?P:1));(A=a.value)==null||A.setOffset(Ae)}}};Ke(()=>{_()});const S=()=>{_()},E=h=>{var p,b,v,P,A,T;if(r.value){if(c.value&&!e.disableHorizontal){const j=Math.round(r.value.scrollLeft/((b=(p=o.value)==null?void 0:p.ratio)!=null?b:1));(v=a.value)==null||v.setOffset(j)}if(d.value&&!e.disableVertical){const j=Math.round(r.value.scrollTop/((A=(P=i.value)==null?void 0:P.ratio)!=null?A:1));(T=s.value)==null||T.setOffset(j)}}t("scroll",h)},L=h=>{var p,b;r.value&&r.value.scrollTo({left:h*((b=(p=o.value)==null?void 0:p.ratio)!=null?b:1)})},y=h=>{var p,b;r.value&&r.value.scrollTo({top:h*((b=(p=i.value)==null?void 0:p.ratio)!=null?b:1)})},$=C(()=>{const h={};return e.type==="track"&&(c.value&&(h.paddingBottom=`${Hn}px`),d.value&&(h.paddingRight=`${Hn}px`)),[h,e.outerStyle]}),w=C(()=>[`${n}`,`${n}-type-${e.type}`,{[`${n}-both`]:m.value},e.outerClass]);return{prefixCls:n,cls:w,style:$,containerRef:r,horizontalThumbRef:a,verticalThumbRef:s,horizontalData:o,verticalData:i,isBoth:m,hasHorizontalScrollbar:c,hasVerticalScrollbar:d,handleResize:S,handleScroll:E,handleHorizontalScroll:L,handleVerticalScroll:y}},methods:{scrollTo(e,t){var n,r;Be(e)?(n=this.$refs.containerRef)==null||n.scrollTo(e):(e||t)&&((r=this.$refs.containerRef)==null||r.scrollTo(e,t))},scrollTop(e){var t;(t=this.$refs.containerRef)==null||t.scrollTo({top:e})},scrollLeft(e){var t;(t=this.$refs.containerRef)==null||t.scrollTo({left:e})}}});function Yh(e,t,n,r,o,i){const a=ue("ResizeObserver"),s=ue("thumb");return x(),ee("div",{class:K(e.cls),style:$e(e.style)},[z(a,{onResize:e.handleResize},{default:ke(()=>[fe("div",Fe({ref:"containerRef",class:`${e.prefixCls}-container`},e.$attrs,{onScroll:t[0]||(t[0]=(...l)=>e.handleScroll&&e.handleScroll(...l))}),[z(a,{onResize:e.handleResize},{default:ke(()=>[se(e.$slots,"default")]),_:3},8,["onResize"])],16)]),_:3},8,["onResize"]),!e.hide&&e.hasHorizontalScrollbar?(x(),ve(s,{key:0,ref:"horizontalThumbRef",data:e.horizontalData,direction:"horizontal",both:e.isBoth,onScroll:e.handleHorizontalScroll},null,8,["data","both","onScroll"])):pe("v-if",!0),!e.hide&&e.hasVerticalScrollbar?(x(),ve(s,{key:1,ref:"verticalThumbRef",data:e.verticalData,direction:"vertical",both:e.isBoth,onScroll:e.handleVerticalScroll},null,8,["data","both","onScroll"])):pe("v-if",!0)],6)}var fo=ce(Kh,[["render",Yh]]);const Xh=Object.assign(fo,{install:(e,t)=>{Ve(e,t);const n=je(t);e.component(n+fo.name,fo)}}),Jh=e=>{const t=H(),n=()=>Bl(t.value)?t.value.$refs[e]:t.value,r=H();return Ke(()=>{r.value=n()}),Ne([t],()=>{r.value=n()}),{componentRef:t,elementRef:r}};var Zh=Object.defineProperty,Ya=Object.getOwnPropertySymbols,Qh=Object.prototype.hasOwnProperty,ep=Object.prototype.propertyIsEnumerable,Xa=(e,t,n)=>t in 
e?Zh(e,t,{enumerable:!0,configurable:!0,writable:!0,value:n}):e[t]=n,tp=(e,t)=>{for(var n in t||(t={}))Qh.call(t,n)&&Xa(e,n,t[n]);if(Ya)for(var n of Ya(t))ep.call(t,n)&&Xa(e,n,t[n]);return e};const np=e=>{const t=C(()=>!!e.value),n=C(()=>{if(e.value)return tp({type:"embed"},nd(e.value)?void 0:e.value)});return{displayScrollbar:t,scrollbarProps:n}},rp=te({name:"SelectDropdown",components:{ScrollbarComponent:Xh,Empty:xh,Spin:Wh},props:{loading:Boolean,empty:Boolean,virtualList:Boolean,bottomOffset:{type:Number,default:0},scrollbar:{type:[Boolean,Object],default:!0},onScroll:{type:[Function,Array]},onReachBottom:{type:[Function,Array]}},emits:["scroll","reachBottom"],setup(e,{emit:t,slots:n}){const{scrollbar:r}=Re(e),o=oe("select-dropdown"),{componentRef:i,elementRef:a}=Jh("containerRef"),{displayScrollbar:s,scrollbarProps:l}=np(r),u=d=>{const{scrollTop:m,scrollHeight:_,offsetHeight:S}=d.target;_-(m+S)<=e.bottomOffset&&t("reachBottom",d),t("scroll",d)},c=C(()=>[o,{[`${o}-has-header`]:!!n.header,[`${o}-has-footer`]:!!n.footer}]);return{prefixCls:o,cls:c,wrapperRef:a,wrapperComRef:i,handleScroll:u,displayScrollbar:s,scrollbarProps:l}}});function op(e,t,n,r,o,i){const a=ue("spin"),s=ue("empty");return x(),ee("div",{class:K(e.cls)},[e.loading?(x(),ve(a,{key:0,class:K(`${e.prefixCls}-loading`)},null,8,["class"])):e.empty?(x(),ee("div",{key:1,class:K(`${e.prefixCls}-empty`)},[se(e.$slots,"empty",{},()=>[z(s)])],2)):pe("v-if",!0),e.$slots.header&&!e.empty?(x(),ee("div",{key:2,class:K(`${e.prefixCls}-header`)},[se(e.$slots,"header")],2)):pe("v-if",!0),e.virtualList&&!e.loading&&!e.empty?se(e.$slots,"virtual-list",{key:3}):pe("v-if",!0),e.virtualList?pe("v-if",!0):xi((x(),ve(sn(e.displayScrollbar?"ScrollbarComponent":"div"),Fe({key:4,ref:"wrapperComRef",class:`${e.prefixCls}-list-wrapper`},e.scrollbarProps,{onScroll:e.handleScroll}),{default:ke(()=>[fe("ul",{class:K(`${e.prefixCls}-list`)},[se(e.$slots,"default")],2)]),_:3},16,["class","onScroll"])),[[zi,!e.loading&&!e.empty]]),e.$slots.footer&&!e.empty?(x(),ee("div",{key:5,class:K(`${e.prefixCls}-footer`)},[se(e.$slots,"footer")],2)):pe("v-if",!0)],2)}var ip=ce(rp,[["render",op]]),Ja=te({name:"IconCheck",render(){return z("svg",{"aria-hidden":"true",focusable:"false",viewBox:"0 0 1024 1024",width:"200",height:"200",fill:"currentColor"},[z("path",{d:"M877.44815445 206.10060629a64.72691371 64.72691371 0 0 0-95.14856334 4.01306852L380.73381888 685.46812814 235.22771741 533.48933518a64.72691371 64.72691371 0 0 0-92.43003222-1.03563036l-45.82665557 45.82665443a64.72691371 64.72691371 0 0 0-0.90617629 90.61767965l239.61903446 250.10479331a64.72691371 64.72691371 0 0 0 71.19960405 15.14609778 64.33855261 64.33855261 0 0 0 35.08198741-21.23042702l36.24707186-42.71976334 40.5190474-40.77795556-3.36579926-3.49525333 411.40426297-486.74638962a64.72691371 64.72691371 0 0 0-3.88361443-87.64024149l-45.3088404-45.43829334z","p-id":"840"},null)])}});const eu=Symbol("ArcoCheckboxGroup");var Qn=te({name:"Checkbox",components:{IconCheck:Ja,IconHover:gt},props:{modelValue:{type:[Boolean,Array],default:void 0},defaultChecked:{type:Boolean,default:!1},value:{type:[String,Number]},disabled:{type:Boolean,default:!1},indeterminate:{type:Boolean,default:!1},uninjectGroupContext:{type:Boolean,default:!1}},emits:{"update:modelValue":e=>!0,change:(e,t)=>!0},setup(e,{emit:t,slots:n}){const{disabled:r,modelValue:o}=Re(e),i=oe("checkbox"),a=H(),s=e.uninjectGroupContext?void 0:et(eu,void 0),l=(s==null?void 
0:s.name)==="ArcoCheckboxGroup",{mergedDisabled:u,eventHandlers:c}=yt({disabled:r}),d=H(e.defaultChecked),m=C(()=>{var h;return l?s==null?void 0:s.computedValue:(h=e.modelValue)!=null?h:d.value}),_=C(()=>{var h;return We(m.value)?m.value.includes((h=e.value)!=null?h:!0):m.value}),S=C(()=>(s==null?void 0:s.disabled)||(u==null?void 0:u.value)||!_.value&&(s==null?void 0:s.isMaxed)),E=h=>{h.stopPropagation()},L=h=>{var p,b,v,P;const{checked:A}=h.target;let T=A;if(We(m.value)){const j=new Set(m.value);A?j.add((p=e.value)!=null?p:!0):j.delete((b=e.value)!=null?b:!0),T=Array.from(j)}d.value=A,l&&We(T)?s==null||s.handleChange(T,h):(t("update:modelValue",T),t("change",T,h),(P=(v=c.value)==null?void 0:v.onChange)==null||P.call(v,h)),Je(()=>{a.value&&a.value.checked!==_.value&&(a.value.checked=_.value)})},y=C(()=>[i,{[`${i}-checked`]:_.value,[`${i}-indeterminate`]:e.indeterminate,[`${i}-disabled`]:S.value}]),$=h=>{var p,b;(b=(p=c.value)==null?void 0:p.onFocus)==null||b.call(p,h)},w=h=>{var p,b;(b=(p=c.value)==null?void 0:p.onBlur)==null||b.call(p,h)};return Ne(o,h=>{(vt(h)||jn(h))&&(d.value=!1)}),Ne(m,h=>{var p;let b;We(h)?b=h.includes((p=e.value)!=null?p:!0):b=h,d.value!==b&&(d.value=b),a.value&&a.value.checked!==b&&(a.value.checked=b)}),()=>{var h,p,b,v;return z("label",{"aria-disabled":S.value,class:y.value},[z("input",{ref:a,type:"checkbox",checked:_.value,value:e.value,class:`${i}-target`,disabled:S.value,onClick:E,onChange:L,onFocus:$,onBlur:w},null),(v=(b=(p=n.checkbox)!=null?p:(h=s==null?void 0:s.slots)==null?void 0:h.checkbox)==null?void 0:b({checked:_.value,disabled:S.value}))!=null?v:z(gt,{class:`${i}-icon-hover`,disabled:S.value||_.value},{default:()=>[z("div",{class:`${i}-icon`},[_.value&&z(Ja,{class:`${i}-icon-check`},null)])]}),n.default&&z("span",{class:`${i}-label`},[n.default()])])}}}),ho=te({name:"CheckboxGroup",props:{modelValue:{type:Array,default:void 0},defaultValue:{type:Array,default:()=>[]},max:{type:Number},options:{type:Array},direction:{type:String,default:"horizontal"},disabled:{type:Boolean,default:!1}},emits:{"update:modelValue":e=>!0,change:(e,t)=>!0},setup(e,{emit:t,slots:n}){const{disabled:r}=Re(e),o=oe("checkbox-group"),{mergedDisabled:i,eventHandlers:a}=yt({disabled:r}),s=H(e.defaultValue),l=C(()=>We(e.modelValue)?e.modelValue:s.value),u=C(()=>e.max===void 0?!1:l.value.length>=e.max),c=C(()=>{var S;return((S=e.options)!=null?S:[]).map(E=>Vt(E)||de(E)?{label:E,value:E}:E)});$t(eu,ze({name:"ArcoCheckboxGroup",computedValue:l,disabled:i,isMaxed:u,slots:n,handleChange:(S,E)=>{var L,y;s.value=S,t("update:modelValue",S),t("change",S,E),(y=(L=a.value)==null?void 0:L.onChange)==null||y.call(L,E)}}));const m=C(()=>[o,`${o}-direction-${e.direction}`]);Ne(()=>e.modelValue,S=>{We(S)?s.value=[...S]:s.value=[]});const _=()=>c.value.map(S=>{const E=l.value.includes(S.value);return z(Qn,{key:S.value,value:S.value,disabled:S.disabled||!E&&u.value,indeterminate:S.indeterminate,modelValue:E},{default:()=>[n.label?n.label({data:S}):it(S.label)?S.label():S.label]})});return()=>{var S;return z("span",{class:m.value},[c.value.length>0?_():(S=n.default)==null?void 0:S.call(n)])}}});const ap=Object.assign(Qn,{Group:ho,install:(e,t)=>{Ve(e,t);const n=je(t);e.component(n+Qn.name,Qn),e.component(n+ho.name,ho)}}),tu=Symbol("ArcoSelectContext");var sp=Object.defineProperty,lp=Object.defineProperties,up=Object.getOwnPropertyDescriptors,Za=Object.getOwnPropertySymbols,cp=Object.prototype.hasOwnProperty,dp=Object.prototype.propertyIsEnumerable,Qa=(e,t,n)=>t in 
e?sp(e,t,{enumerable:!0,configurable:!0,writable:!0,value:n}):e[t]=n,Qi=(e,t)=>{for(var n in t||(t={}))cp.call(t,n)&&Qa(e,n,t[n]);if(Za)for(var n of Za(t))dp.call(t,n)&&Qa(e,n,t[n]);return e},nu=(e,t)=>lp(e,up(t));const fp=e=>Be(e)&&"isGroup"in e,ru=e=>Be(e)&&"isGroup"in e,hp=(e,t="value")=>String(Be(e)?e[t]:e),Rn=(e,t="value")=>Be(e)?`__arco__option__object__${e[t]}`:e||de(e)?`__arco__option__${typeof e}-${e}`:"",pp=(e,{valueKey:t,fieldNames:n,origin:r,index:o=-1})=>{var i;if(Be(e)){const s=e[n.value];return{raw:e,index:o,key:Rn(s,t),origin:r,value:s,label:(i=e[n.label])!=null?i:hp(s,t),render:e[n.render],disabled:!!e[n.disabled],tagProps:e[n.tagProps]}}const a={value:e,label:String(e),disabled:!1};return Qi({raw:a,index:o,key:Rn(e,t),origin:r},a)},gi=(e,{valueKey:t,fieldNames:n,origin:r,optionInfoMap:o})=>{var i;const a=[];for(const s of e)if(fp(s)){const l=gi((i=s.options)!=null?i:[],{valueKey:t,fieldNames:n,origin:r,optionInfoMap:o});l.length>0&&a.push(nu(Qi({},s),{key:`__arco__group__${s.label}`,options:l}))}else{const l=pp(s,{valueKey:t,fieldNames:n,origin:r});a.push(l),o.get(l.key)||o.set(l.key,l)}return a},es=(e,{inputValue:t,filterOption:n})=>{const r=o=>{var i;const a=[];for(const s of o)if(ru(s)){const l=r((i=s.options)!=null?i:[]);l.length>0&&a.push(nu(Qi({},s),{options:l}))}else $r(s,{inputValue:t,filterOption:n})&&a.push(s);return a};return r(e)},$r=(e,{inputValue:t,filterOption:n})=>it(n)?!t||n(t,e.raw):n?e.label.toLowerCase().includes((t??"").toLowerCase()):!0,mp=(e,t)=>{if(!e||!t||e.length!==t.length)return!1;for(const n of Object.keys(e))if(!ea(e[n],t[n]))return!1;return!0},vp=(e,t)=>{if(!e||!t)return!1;const{length:n}=e;if(n!==t.length)return!1;for(let r=0;r{const n=Object.prototype.toString.call(e);return n!==Object.prototype.toString.call(t)?!1:n==="[object Object]"?mp(e,t):n==="[object Array]"?vp(e,t):n==="[object Function]"?e===t?!0:e.toString()===t.toString():e===t},gp=te({name:"Option",components:{Checkbox:ap},props:{value:[String,Number,Object],label:String,disabled:Boolean,tagProps:{type:Object},extra:{type:Object},index:{type:Number},internal:Boolean},setup(e){const{disabled:t,tagProps:n,index:r}=Re(e),o=oe("select-option"),i=et(tu,void 0),a=Wt(),s=H(),l=H(n.value);Ne(n,(b,v)=>{ea(b,v)||(l.value=b)});const u=H(""),c=C(()=>{var b,v;return(v=(b=e.value)!=null?b:e.label)!=null?v:u.value}),d=C(()=>{var b;return(b=e.label)!=null?b:u.value}),m=C(()=>Rn(c.value,i==null?void 0:i.valueKey)),_=C(()=>{var b;return(b=i==null?void 0:i.component)!=null?b:"li"}),S=()=>{var b;if(!e.label&&s.value){const v=(b=s.value.textContent)!=null?b:"";u.value!==v&&(u.value=v)}};Ke(()=>S()),vn(()=>S());const E=C(()=>{var b;return(b=i==null?void 0:i.valueKeys.includes(m.value))!=null?b:!1}),L=C(()=>(i==null?void 0:i.activeKey)===m.value);let y=H(!0);if(!e.internal){const b=ze({raw:{value:c,label:d,disabled:t,tagProps:l},ref:s,index:r,key:m,origin:"slot",value:c,label:d,disabled:t,tagProps:l});y=C(()=>$r(b,{inputValue:i==null?void 0:i.inputValue,filterOption:i==null?void 0:i.filterOption})),a&&(i==null||i.addSlotOptionInfo(a.uid,b)),Ht(()=>{a&&(i==null||i.removeSlotOptionInfo(a.uid))})}const $=b=>{e.disabled||i==null||i.onSelect(m.value,b)},w=()=>{e.disabled||i==null||i.setActiveKey(m.value)},h=()=>{e.disabled||i==null||i.setActiveKey()},p=C(()=>[o,{[`${o}-disabled`]:e.disabled,[`${o}-active`]:L.value,[`${o}-multiple`]:i==null?void 0:i.multiple}]);return{prefixCls:o,cls:p,selectCtx:i,itemRef:s,component:_,isSelected:E,isValid:y,handleClick:$,handleMouseEnter:w,handleMouseLeave:h}}});function 
yp(e,t,n,r,o,i){const a=ue("checkbox");return xi((x(),ve(sn(e.component),{ref:"itemRef",class:K([e.cls,{[`${e.prefixCls}-has-suffix`]:!!e.$slots.suffix}]),onClick:e.handleClick,onMouseenter:e.handleMouseEnter,onMouseleave:e.handleMouseLeave},{default:ke(()=>[e.$slots.icon?(x(),ee("span",{key:0,class:K(`${e.prefixCls}-icon`)},[se(e.$slots,"icon")],2)):pe("v-if",!0),e.selectCtx&&e.selectCtx.multiple?(x(),ve(a,{key:1,class:K(`${e.prefixCls}-checkbox`),"model-value":e.isSelected,disabled:e.disabled,"uninject-group-context":""},{default:ke(()=>[se(e.$slots,"default",{},()=>[nt(Xe(e.label),1)])]),_:3},8,["class","model-value","disabled"])):(x(),ee("span",{key:2,class:K(`${e.prefixCls}-content`)},[se(e.$slots,"default",{},()=>[nt(Xe(e.label),1)])],2)),e.$slots.suffix?(x(),ee("span",{key:3,class:K(`${e.prefixCls}-suffix`)},[se(e.$slots,"suffix")],2)):pe("v-if",!0)]),_:3},8,["class","onClick","onMouseenter","onMouseleave"])),[[zi,e.isValid]])}var er=ce(gp,[["render",yp]]),bp=Object.defineProperty,_p=Object.defineProperties,Cp=Object.getOwnPropertyDescriptors,ts=Object.getOwnPropertySymbols,Sp=Object.prototype.hasOwnProperty,Ep=Object.prototype.propertyIsEnumerable,ns=(e,t,n)=>t in e?bp(e,t,{enumerable:!0,configurable:!0,writable:!0,value:n}):e[t]=n,po=(e,t)=>{for(var n in t||(t={}))Sp.call(t,n)&&ns(e,n,t[n]);if(ts)for(var n of ts(t))Ep.call(t,n)&&ns(e,n,t[n]);return e},wp=(e,t)=>_p(e,Cp(t));const kp={value:"value",label:"label",disabled:"disabled",tagProps:"tagProps",render:"render"},$p=({options:e,extraOptions:t,inputValue:n,filterOption:r,showExtraOptions:o,valueKey:i,fieldNames:a})=>{const s=C(()=>po(po({},kp),a==null?void 0:a.value)),l=ze(new Map),u=H([]);Ne(l,w=>{u.value=Array.from(w.values()).sort((h,p)=>de(h.index)&&de(p.index)?h.index-p.index:0)},{deep:!0});const c=C(()=>{var w,h;const p=new Map;return{optionInfos:gi((w=e==null?void 0:e.value)!=null?w:[],{valueKey:(h=i==null?void 0:i.value)!=null?h:"value",fieldNames:s.value,origin:"options",optionInfoMap:p}),optionInfoMap:p}}),d=C(()=>{var w,h;const p=new Map;return{optionInfos:gi((w=t==null?void 0:t.value)!=null?w:[],{valueKey:(h=i==null?void 0:i.value)!=null?h:"value",fieldNames:s.value,origin:"extraOptions",optionInfoMap:p}),optionInfoMap:p}}),m=ze(new Map);Ne([l,e??H([]),t??H([]),i??H("value")],()=>{m.clear(),u.value.forEach((w,h)=>{m.set(w.key,wp(po({},w),{index:h}))}),c.value.optionInfoMap.forEach(w=>{m.has(w.key)||(w.index=m.size,m.set(w.key,w))}),d.value.optionInfoMap.forEach(w=>{m.has(w.key)||(w.index=m.size,m.set(w.key,w))})},{immediate:!0,deep:!0});const _=C(()=>{var w;const h=es(c.value.optionInfos,{inputValue:n==null?void 0:n.value,filterOption:r==null?void 0:r.value});return((w=o==null?void 0:o.value)==null||w)&&h.push(...es(d.value.optionInfos,{inputValue:n==null?void 0:n.value,filterOption:r==null?void 0:r.value})),h}),S=C(()=>Array.from(m.values()).filter(w=>w.origin==="extraOptions"&&(o==null?void 0:o.value)===!1?!1:$r(w,{inputValue:n==null?void 0:n.value,filterOption:r==null?void 0:r.value}))),E=C(()=>S.value.filter(w=>!w.disabled).map(w=>w.key));return{validOptions:_,optionInfoMap:m,validOptionInfos:S,enabledOptionKeys:E,getNextSlotOptionIndex:()=>l.size,addSlotOptionInfo:(w,h)=>{l.set(w,h)},removeSlotOptionInfo:w=>{l.delete(w)}}},an={ENTER:"Enter",ESC:"Escape",BACKSPACE:"Backspace",TAB:"Tab",SPACE:" ",ARROW_UP:"ArrowUp",ARROW_DOWN:"ArrowDown",ARROW_LEFT:"ArrowLeft",ARROW_RIGHT:"ArrowRight"},rs=e=>JSON.stringify({key:e.key,ctrl:!!e.ctrl,shift:!!e.shift,alt:!!e.alt,meta:!!e.meta}),ou=e=>{const t={};return 
e.forEach((n,r)=>{const o=Vt(r)?{key:r}:r;t[rs(o)]=n}),n=>{const r=rs({key:n.key,ctrl:n.ctrlKey,shift:n.shiftKey,alt:n.altKey,meta:n.metaKey}),o=t[r];o&&(n.stopPropagation(),o(n))}},Op=({multiple:e,options:t,extraOptions:n,inputValue:r,filterOption:o,showExtraOptions:i,component:a,valueKey:s,fieldNames:l,loading:u,popupVisible:c,valueKeys:d,dropdownRef:m,optionRefs:_,virtualListRef:S,onSelect:E,onPopupVisibleChange:L,enterToOpen:y=!0,defaultActiveFirstOption:$})=>{const{validOptions:w,optionInfoMap:h,validOptionInfos:p,enabledOptionKeys:b,getNextSlotOptionIndex:v,addSlotOptionInfo:P,removeSlotOptionInfo:A}=$p({options:t,extraOptions:n,inputValue:r,filterOption:o,showExtraOptions:i,valueKey:s,fieldNames:l}),T=H();Ne(b,N=>{(!T.value||!N.includes(T.value))&&(T.value=N[0])});const j=N=>{T.value=N},J=N=>{const D=b.value.length;if(D===0)return;if(!T.value)return N==="down"?b.value[0]:b.value[D-1];const V=b.value.indexOf(T.value),G=(D+V+(N==="up"?-1:1))%D;return b.value[G]},U=N=>{var D,V;S!=null&&S.value&&S.value.scrollTo({key:N});const G=h.get(N),B=(D=m==null?void 0:m.value)==null?void 0:D.wrapperRef,I=(V=_==null?void 0:_.value[N])!=null?V:G==null?void 0:G.ref;if(!B||!I||B.scrollHeight===B.offsetHeight)return;const Y=Vd(I,B),Q=B.scrollTop;Y.top<0?B.scrollTo(0,Q+Y.top):Y.bottom<0&&B.scrollTo(0,Q-Y.bottom)};Ne(c,N=>{var D;if(N){const V=d.value[d.value.length-1];let G=(D=$==null?void 0:$.value)==null||D?b.value[0]:void 0;b.value.includes(V)&&(G=V),G!==T.value&&(T.value=G),Je(()=>{T.value&&U(T.value)})}});const O=ou(new Map([[an.ENTER,N=>{u!=null&&u.value||(c.value?T.value&&(E(T.value,N),N.preventDefault()):y&&(L(!0),N.preventDefault()))}],[an.ESC,N=>{c.value&&(L(!1),N.preventDefault())}],[an.ARROW_DOWN,N=>{if(c.value){const D=J("down");D&&(T.value=D,U(D)),N.preventDefault()}}],[an.ARROW_UP,N=>{if(c.value){const D=J("up");D&&(T.value=D,U(D)),N.preventDefault()}}]]));return $t(tu,ze({multiple:e,valueKey:s,inputValue:r,filterOption:o,component:a,valueKeys:d,activeKey:T,setActiveKey:j,onSelect:E,getNextSlotOptionIndex:v,addSlotOptionInfo:P,removeSlotOptionInfo:A})),{validOptions:w,optionInfoMap:h,validOptionInfos:p,enabledOptionKeys:b,activeKey:T,setActiveKey:j,addSlotOptionInfo:P,removeSlotOptionInfo:A,getNextActiveKey:J,scrollIntoView:U,handleKeyDown:O}},iu=({itemRef:e,selector:t,index:n,parentClassName:r})=>{const o=H(-1),i=C(()=>{var u;return(u=n==null?void 0:n.value)!=null?u:o.value}),a=H(),s=()=>{var u,c,d;let m=(c=(u=e.value)==null?void 0:u.parentElement)!=null?c:void 0;if(r)for(;m&&!m.className.includes(r);)m=(d=m.parentElement)!=null?d:void 0;return m},l=()=>{if(vt(n==null?void 0:n.value)&&a.value&&e.value){const u=Array.from(a.value.querySelectorAll(t)).indexOf(e.value);u!==o.value&&(o.value=u)}};return Ne(e,()=>{e.value&&!a.value&&(a.value=s())}),Ke(()=>{e.value&&(a.value=s()),l()}),vn(()=>l()),{computedIndex:i}},au=Symbol("ArcoAvatarGroup"),Lp=te({name:"IconImageClose",props:{size:{type:[Number,String]},strokeWidth:{type:Number,default:4},strokeLinecap:{type:String,default:"butt",validator:e=>["butt","round","square"].includes(e)},strokeLinejoin:{type:String,default:"miter",validator:e=>["arcs","bevel","miter","miter-clip","round"].includes(e)},rotate:Number,spin:Boolean},emits:{click:e=>!0},setup(e,{emit:t}){const n=oe("icon"),r=C(()=>[n,`${n}-image-close`,{[`${n}-spin`]:e.spin}]),o=C(()=>{const a={};return 
e.size&&(a.fontSize=de(e.size)?`${e.size}px`:e.size),e.rotate&&(a.transform=`rotate(${e.rotate}deg)`),a});return{cls:r,innerStyle:o,onClick:a=>{t("click",a)}}}}),Tp=["stroke-width","stroke-linecap","stroke-linejoin"],Ap=Yc('',5),Np=[Ap];function Pp(e,t,n,r,o,i){return x(),ee("svg",{viewBox:"0 0 48 48",fill:"none",xmlns:"http://www.w3.org/2000/svg",stroke:"currentColor",class:K(e.cls),style:$e(e.innerStyle),"stroke-width":e.strokeWidth,"stroke-linecap":e.strokeLinecap,"stroke-linejoin":e.strokeLinejoin,onClick:t[0]||(t[0]=(...a)=>e.onClick&&e.onClick(...a))},Np,14,Tp)}var mo=ce(Lp,[["render",Pp]]);const Ip=Object.assign(mo,{install:(e,t)=>{var n;const r=(n=t==null?void 0:t.iconPrefix)!=null?n:"";e.component(r+mo.name,mo)}});var Mp=Object.defineProperty,os=Object.getOwnPropertySymbols,Rp=Object.prototype.hasOwnProperty,Bp=Object.prototype.propertyIsEnumerable,is=(e,t,n)=>t in e?Mp(e,t,{enumerable:!0,configurable:!0,writable:!0,value:n}):e[t]=n,as=(e,t)=>{for(var n in t||(t={}))Rp.call(t,n)&&is(e,n,t[n]);if(os)for(var n of os(t))Bp.call(t,n)&&is(e,n,t[n]);return e};const Dp=te({name:"Avatar",components:{ResizeObserver:Mn,IconImageClose:Ip,IconLoading:It},props:{shape:{type:String,default:"circle"},imageUrl:String,size:Number,autoFixFontSize:{type:Boolean,default:!0},triggerType:{type:String,default:"button"},triggerIconStyle:{type:Object}},emits:{click:e=>!0,error:()=>!0,load:()=>!0},setup(e,{slots:t,emit:n,attrs:r}){const{shape:o,size:i,autoFixFontSize:a,triggerType:s,triggerIconStyle:l}=Re(e),u=oe("avatar"),c=et(au,void 0),d=H(),m=H(),_=C(()=>{var O;return(O=c==null?void 0:c.shape)!=null?O:o.value}),S=C(()=>{var O;return(O=c==null?void 0:c.size)!=null?O:i.value}),E=C(()=>{var O;return(O=c==null?void 0:c.autoFixFontSize)!=null?O:a.value}),L=H(!1),y=H(!1),$=H(!0),w=H(!1),h=c?iu({itemRef:d,selector:`.${u}`}).computedIndex:H(-1),p=C(()=>{var O;const N=de(S.value)?{width:`${S.value}px`,height:`${S.value}px`,fontSize:`${S.value/2}px`}:{};return c&&(N.zIndex=c.zIndexAscend?h.value+1:c.total-h.value,N.marginLeft=h.value!==0?`-${((O=S.value)!=null?O:40)/4}px`:"0"),N}),b=Fp({triggerIconStyle:l==null?void 0:l.value,inlineStyle:r.style,triggerType:s.value}),v=()=>{!L.value&&!e.imageUrl&&Je(()=>{var O;if(!m.value||!d.value)return;const N=m.value.clientWidth,D=(O=S.value)!=null?O:d.value.offsetWidth,V=D/(N+8);D&&V<1&&(m.value.style.transform=`scale(${V}) translateX(-50%)`),$.value=!0})};Ke(()=>{var O;(O=m.value)!=null&&O.firstElementChild&&["IMG","PICTURE"].includes(m.value.firstElementChild.tagName)&&(L.value=!0),E.value&&v()}),Ne(i,()=>{E.value&&v()});const P=C(()=>[u,`${u}-${_.value}`]),A=C(()=>L.value||e.imageUrl?`${u}-image`:`${u}-text`);return{prefixCls:u,itemRef:d,cls:P,outerStyle:p,wrapperRef:m,wrapperCls:A,computedTriggerIconStyle:b,isImage:L,shouldLoad:$,isLoaded:w,hasError:y,onClick:O=>{n("click",O)},handleResize:()=>{E.value&&v()},handleImgLoad:()=>{w.value=!0,n("load")},handleImgError:()=>{y.value=!0,n("error")}}}}),Fp=({triggerType:e,inlineStyle:t={},triggerIconStyle:n={}})=>{let r={};return e==="button"&&(!n||n&&!n.color)&&t&&t.backgroundColor&&(r={color:t.backgroundColor}),as(as({},n),r)},jp=["src"];function Vp(e,t,n,r,o,i){const a=ue("IconImageClose"),s=ue("IconLoading"),l=ue("resize-observer");return 
x(),ee("div",{ref:"itemRef",style:$e(e.outerStyle),class:K([e.cls,{[`${e.prefixCls}-with-trigger-icon`]:!!e.$slots["trigger-icon"]}]),onClick:t[2]||(t[2]=(...u)=>e.onClick&&e.onClick(...u))},[z(l,{onResize:e.handleResize},{default:ke(()=>[fe("span",{ref:"wrapperRef",class:K(e.wrapperCls)},[e.imageUrl?(x(),ee(rt,{key:0},[e.hasError?se(e.$slots,"error",{key:0},()=>[fe("div",{class:K(`${e.prefixCls}-image-icon`)},[z(a)],2)]):pe("v-if",!0),!(e.hasError||!e.shouldLoad)&&!e.isLoaded?se(e.$slots,"default",{key:1},()=>[fe("div",{class:K(`${e.prefixCls}-image-icon`)},[z(s)],2)]):pe("v-if",!0),e.hasError||!e.shouldLoad?pe("v-if",!0):(x(),ee("img",{key:2,src:e.imageUrl,style:$e({width:e.size+"px",height:e.size+"px"}),alt:"avatar",onLoad:t[0]||(t[0]=(...u)=>e.handleImgLoad&&e.handleImgLoad(...u)),onError:t[1]||(t[1]=(...u)=>e.handleImgError&&e.handleImgError(...u))},null,44,jp))],64)):se(e.$slots,"default",{key:1})],2)]),_:3},8,["onResize"]),e.$slots["trigger-icon"]?(x(),ee("div",{key:0,class:K(`${e.prefixCls}-trigger-icon-${e.triggerType}`),style:$e(e.computedTriggerIconStyle)},[se(e.$slots,"trigger-icon")],6)):pe("v-if",!0)],6)}var tr=ce(Dp,[["render",Vp]]);const xp=te({name:"Popover",components:{Trigger:hr},props:{popupVisible:{type:Boolean,default:void 0},defaultPopupVisible:{type:Boolean,default:!1},title:String,content:String,trigger:{type:[String,Array],default:"hover"},position:{type:String,default:"top"},contentClass:{type:[String,Array,Object]},contentStyle:{type:Object},arrowClass:{type:[String,Array,Object]},arrowStyle:{type:Object},popupContainer:{type:[String,Object]}},emits:{"update:popupVisible":e=>!0,popupVisibleChange:e=>!0},setup(e,{emit:t}){const n=oe("popover"),r=H(e.defaultPopupVisible),o=C(()=>{var l;return(l=e.popupVisible)!=null?l:r.value}),i=l=>{r.value=l,t("update:popupVisible",l),t("popupVisibleChange",l)},a=C(()=>[`${n}-popup-content`,e.contentClass]),s=C(()=>[`${n}-popup-arrow`,e.arrowClass]);return{prefixCls:n,computedPopupVisible:o,contentCls:a,arrowCls:s,handlePopupVisibleChange:i}}});function zp(e,t,n,r,o,i){const a=ue("trigger");return x(),ve(a,{class:K(e.prefixCls),trigger:e.trigger,position:e.position,"popup-visible":e.computedPopupVisible,"popup-offset":10,"content-class":e.contentCls,"content-style":e.contentStyle,"arrow-class":e.arrowCls,"arrow-style":e.arrowStyle,"show-arrow":"","popup-container":e.popupContainer,"animation-name":"zoom-in-fade-out","auto-fit-transform-origin":"",onPopupVisibleChange:e.handlePopupVisibleChange},{content:ke(()=>[fe("div",{class:K(`${e.prefixCls}-title`)},[se(e.$slots,"title",{},()=>[nt(Xe(e.title),1)])],2),fe("div",{class:K(`${e.prefixCls}-content`)},[se(e.$slots,"content",{},()=>[nt(Xe(e.content),1)])],2)]),default:ke(()=>[se(e.$slots,"default")]),_:3},8,["class","trigger","position","popup-visible","content-class","content-style","arrow-class","arrow-style","popup-container","onPopupVisibleChange"])}var vo=ce(xp,[["render",zp]]);const Up=Object.assign(vo,{install:(e,t)=>{Ve(e,t);const n=je(t);e.component(n+vo.name,vo)}}),go=te({name:"AvatarGroup",props:{shape:{type:String,default:"circle"},size:Number,autoFixFontSize:{type:Boolean,default:!0},maxCount:{type:Number,default:0},zIndexAscend:{type:Boolean,default:!1},maxStyle:{type:Object},maxPopoverTriggerProps:{type:Object}},setup(e,{slots:t}){const{shape:n,size:r,autoFixFontSize:o,zIndexAscend:i}=Re(e),a=oe("avatar-group"),s=H(0);return $t(au,ze({shape:n,size:r,autoFixFontSize:o,zIndexAscend:i,total:s})),()=>{var l,u;const c=Tn((u=(l=t.default)==null?void 
0:l.call(t))!=null?u:[]),d=e.maxCount>0?c.slice(0,e.maxCount):c,m=e.maxCount>0?c.slice(e.maxCount):[];return s.value!==d.length&&(s.value=d.length),z("div",{class:a},[d,m.length>0&&z(Up,e.maxPopoverTriggerProps,{default:()=>[z(tr,{class:`${a}-max-count-avatar`,style:e.maxStyle},{default:()=>[nt("+"),m.length]})],content:()=>z("div",null,[m])})])}}}),qC=Object.assign(tr,{Group:go,install:(e,t)=>{Ve(e,t);const n=je(t);e.component(n+tr.name,tr),e.component(n+go.name,go)}}),Wp=te({name:"IconDown",props:{size:{type:[Number,String]},strokeWidth:{type:Number,default:4},strokeLinecap:{type:String,default:"butt",validator:e=>["butt","round","square"].includes(e)},strokeLinejoin:{type:String,default:"miter",validator:e=>["arcs","bevel","miter","miter-clip","round"].includes(e)},rotate:Number,spin:Boolean},emits:{click:e=>!0},setup(e,{emit:t}){const n=oe("icon"),r=C(()=>[n,`${n}-down`,{[`${n}-spin`]:e.spin}]),o=C(()=>{const a={};return e.size&&(a.fontSize=de(e.size)?`${e.size}px`:e.size),e.rotate&&(a.transform=`rotate(${e.rotate}deg)`),a});return{cls:r,innerStyle:o,onClick:a=>{t("click",a)}}}}),Hp=["stroke-width","stroke-linecap","stroke-linejoin"],qp=fe("path",{d:"M39.6 17.443 24.043 33 8.487 17.443"},null,-1),Gp=[qp];function Kp(e,t,n,r,o,i){return x(),ee("svg",{viewBox:"0 0 48 48",fill:"none",xmlns:"http://www.w3.org/2000/svg",stroke:"currentColor",class:K(e.cls),style:$e(e.innerStyle),"stroke-width":e.strokeWidth,"stroke-linecap":e.strokeLinecap,"stroke-linejoin":e.strokeLinejoin,onClick:t[0]||(t[0]=(...a)=>e.onClick&&e.onClick(...a))},Gp,14,Hp)}var yo=ce(Wp,[["render",Kp]]);const su=Object.assign(yo,{install:(e,t)=>{var n;const r=(n=t==null?void 0:t.iconPrefix)!=null?n:"";e.component(r+yo.name,yo)}}),Yp=({popupVisible:e,defaultPopupVisible:t,emit:n})=>{var r;const o=H((r=t==null?void 0:t.value)!=null?r:!1),i=C(()=>{var s;return(s=e==null?void 0:e.value)!=null?s:o.value}),a=s=>{s!==i.value&&(o.value=s,n("update:popupVisible",s),n("popupVisibleChange",s))};return Ne(i,s=>{o.value!==s&&(o.value=s)}),{computedPopupVisible:i,handlePopupVisibleChange:a}},Xp=te({name:"IconRight",props:{size:{type:[Number,String]},strokeWidth:{type:Number,default:4},strokeLinecap:{type:String,default:"butt",validator:e=>["butt","round","square"].includes(e)},strokeLinejoin:{type:String,default:"miter",validator:e=>["arcs","bevel","miter","miter-clip","round"].includes(e)},rotate:Number,spin:Boolean},emits:{click:e=>!0},setup(e,{emit:t}){const n=oe("icon"),r=C(()=>[n,`${n}-right`,{[`${n}-spin`]:e.spin}]),o=C(()=>{const a={};return e.size&&(a.fontSize=de(e.size)?`${e.size}px`:e.size),e.rotate&&(a.transform=`rotate(${e.rotate}deg)`),a});return{cls:r,innerStyle:o,onClick:a=>{t("click",a)}}}}),Jp=["stroke-width","stroke-linecap","stroke-linejoin"],Zp=fe("path",{d:"m16 39.513 15.556-15.557L16 8.4"},null,-1),Qp=[Zp];function em(e,t,n,r,o,i){return x(),ee("svg",{viewBox:"0 0 48 48",fill:"none",xmlns:"http://www.w3.org/2000/svg",stroke:"currentColor",class:K(e.cls),style:$e(e.innerStyle),"stroke-width":e.strokeWidth,"stroke-linecap":e.strokeLinecap,"stroke-linejoin":e.strokeLinejoin,onClick:t[0]||(t[0]=(...a)=>e.onClick&&e.onClick(...a))},Qp,14,Jp)}var bo=ce(Xp,[["render",em]]);const GC=Object.assign(bo,{install:(e,t)=>{var n;const r=(n=t==null?void 0:t.iconPrefix)!=null?n:"";e.component(r+bo.name,bo)}});var lu={exports:{}};(function(e,t){(function(n,r){e.exports=r()})(di,function(){var n=1e3,r=6e4,o=36e5,i="millisecond",a="second",s="minute",l="hour",u="day",c="week",d="month",m="quarter",_="year",S="date",E="Invalid 
Date",L=/^(\d{4})[-/]?(\d{1,2})?[-/]?(\d{0,2})[Tt\s]*(\d{1,2})?:?(\d{1,2})?:?(\d{1,2})?[.:]?(\d+)?$/,y=/\[([^\]]+)]|Y{1,4}|M{1,4}|D{1,2}|d{1,4}|H{1,2}|h{1,2}|a|A|m{1,2}|s{1,2}|Z{1,2}|SSS/g,$={name:"en",weekdays:"Sunday_Monday_Tuesday_Wednesday_Thursday_Friday_Saturday".split("_"),months:"January_February_March_April_May_June_July_August_September_October_November_December".split("_"),ordinal:function(U){var O=["th","st","nd","rd"],N=U%100;return"["+U+(O[(N-20)%10]||O[N]||O[0])+"]"}},w=function(U,O,N){var D=String(U);return!D||D.length>=O?U:""+Array(O+1-D.length).join(N)+U},h={s:w,z:function(U){var O=-U.utcOffset(),N=Math.abs(O),D=Math.floor(N/60),V=N%60;return(O<=0?"+":"-")+w(D,2,"0")+":"+w(V,2,"0")},m:function U(O,N){if(O.date()1)return U(B[0])}else{var I=O.name;b[I]=O,V=I}return!D&&V&&(p=V),V||!D&&p},A=function(U,O){if(v(U))return U.clone();var N=typeof O=="object"?O:{};return N.date=U,N.args=arguments,new j(N)},T=h;T.l=P,T.i=v,T.w=function(U,O){return A(U,{locale:O.$L,utc:O.$u,x:O.$x,$offset:O.$offset})};var j=function(){function U(N){this.$L=P(N.locale,null,!0),this.parse(N)}var O=U.prototype;return O.parse=function(N){this.$d=function(D){var V=D.date,G=D.utc;if(V===null)return new Date(NaN);if(T.u(V))return new Date;if(V instanceof Date)return new Date(V);if(typeof V=="string"&&!/Z$/i.test(V)){var B=V.match(L);if(B){var I=B[2]-1||0,Y=(B[7]||"0").substring(0,3);return G?new Date(Date.UTC(B[1],I,B[3]||1,B[4]||0,B[5]||0,B[6]||0,Y)):new Date(B[1],I,B[3]||1,B[4]||0,B[5]||0,B[6]||0,Y)}}return new Date(V)}(N),this.$x=N.x||{},this.init()},O.init=function(){var N=this.$d;this.$y=N.getFullYear(),this.$M=N.getMonth(),this.$D=N.getDate(),this.$W=N.getDay(),this.$H=N.getHours(),this.$m=N.getMinutes(),this.$s=N.getSeconds(),this.$ms=N.getMilliseconds()},O.$utils=function(){return T},O.isValid=function(){return this.$d.toString()!==E},O.isSame=function(N,D){var V=A(N);return this.startOf(D)<=V&&V<=this.endOf(D)},O.isAfter=function(N,D){return A(N){var a;const s=H(),l=H((a=e==null?void 0:e.value)!=null?a:""),u=H(!1),c=H(!1),d=H("");let m;const _=C(()=>{var b;return(b=t==null?void 0:t.value)!=null?b:l.value}),S=(b,v)=>{l.value=b,n(o,b),n(r,b,v)},E=b=>{const{value:v}=b.target;c.value||(S(v,b),Je(()=>{s.value&&_.value!==s.value.value&&(s.value.value=_.value)}))},L=b=>{r==="input"&&_.value!==m&&(m=_.value,n("change",_.value,b))},y=b=>{var v;const{value:P}=b.target;b.type==="compositionend"?(c.value=!1,d.value="",S(P,b),Je(()=>{s.value&&_.value!==s.value.value&&(s.value.value=_.value)})):(c.value=!0,d.value=_.value+((v=b.data)!=null?v:""))},$=b=>{var v,P;u.value=!0,m=_.value,n("focus",b),(P=(v=i==null?void 0:i.value)==null?void 0:v.onFocus)==null||P.call(v,b)},w=b=>{var v,P;u.value=!1,n("blur",b),(P=(v=i==null?void 0:i.value)==null?void 0:v.onBlur)==null||P.call(v,b),L(b)},h=b=>{const v=b.key||b.code;!c.value&&v===Ji.key&&(n("pressEnter",b),L(b))},p=b=>{s.value&&b.target!==s.value&&(b.preventDefault(),s.value.focus())};return Ne(_,b=>{s.value&&b!==s.value.value&&(s.value.value=b)}),{inputRef:s,_value:l,_focused:u,isComposition:c,compositionValue:d,computedValue:_,handleInput:E,handleComposition:y,handleFocus:$,handleBlur:w,handleKeyDown:h,handleMousedown:p}};var 
rm=te({name:"InputLabel",inheritAttrs:!1,props:{modelValue:Object,inputValue:{type:String,default:""},enabledInput:Boolean,formatLabel:Function,placeholder:String,retainInputValue:Boolean,disabled:Boolean,baseCls:String,size:String,error:Boolean,focused:Boolean,uninjectFormItemContext:Boolean},emits:["update:inputValue","inputValueChange","focus","blur"],setup(e,{attrs:t,emit:n,slots:r}){var o;const{size:i,disabled:a,error:s,inputValue:l,uninjectFormItemContext:u}=Re(e),c=(o=e.baseCls)!=null?o:oe("input-label"),{mergedSize:d,mergedDisabled:m,mergedError:_,eventHandlers:S}=yt({size:i,disabled:a,error:s,uninject:u==null?void 0:u.value}),{mergedSize:E}=Mt(d),{inputRef:L,_focused:y,computedValue:$,handleInput:w,handleComposition:h,handleFocus:p,handleBlur:b,handleMousedown:v}=nm({modelValue:l,emit:n,eventName:"inputValueChange",updateEventName:"update:inputValue",eventHandlers:S}),P=C(()=>{var D;return(D=e.focused)!=null?D:y.value}),A=C(()=>e.enabledInput&&y.value||!e.modelValue),T=C(()=>e.enabledInput&&e.modelValue?e.modelValue.label:e.placeholder),j=()=>{var D,V,G,B,I;return e.modelValue?(I=(G=(D=r.default)==null?void 0:D.call(r,{data:e.modelValue}))!=null?G:(V=e.formatLabel)==null?void 0:V.call(e,e.modelValue))!=null?I:(B=e.modelValue)==null?void 0:B.label:null},J=C(()=>[c,`${c}-size-${E.value}`,{[`${c}-search`]:e.enabledInput,[`${c}-focus`]:P.value,[`${c}-disabled`]:m.value,[`${c}-error`]:_.value}]),U=C(()=>Vn(t,zt)),O=C(()=>xn(t,zt));return{inputRef:L,render:()=>z("span",Fe(U.value,{class:J.value,onMousedown:v}),[r.prefix&&z("span",{class:`${c}-prefix`},[r.prefix()]),z("input",Fe(O.value,{ref:L,class:[`${c}-input`,{[`${c}-input-hidden`]:!A.value}],value:$.value,readonly:!e.enabledInput,placeholder:T.value,disabled:m.value,onInput:w,onFocus:p,onBlur:b,onCompositionstart:h,onCompositionupdate:h,onCompositionend:h}),null),z("span",{class:[`${c}-value`,{[`${c}-value-hidden`]:A.value}]},[j()]),r.suffix&&z("span",{class:`${c}-suffix`},[r.suffix()])])}},methods:{focus(){var e;(e=this.inputRef)==null||e.focus()},blur(){var e;(e=this.inputRef)==null||e.blur()}},render(){return this.render()}}),om=Object.defineProperty,ss=Object.getOwnPropertySymbols,im=Object.prototype.hasOwnProperty,am=Object.prototype.propertyIsEnumerable,ls=(e,t,n)=>t in e?om(e,t,{enumerable:!0,configurable:!0,writable:!0,value:n}):e[t]=n,sm=(e,t)=>{for(var n in t||(t={}))im.call(t,n)&&ls(e,n,t[n]);if(ss)for(var n of ss(t))am.call(t,n)&&ls(e,n,t[n]);return e};const lm=(e,t)=>{const n=[];for(const r of e)if(Be(r))n.push({raw:r,value:r[t.value],label:r[t.label],closable:r[t.closable],tagProps:r[t.tagProps]});else if(e||de(e)){const o={value:r,label:String(r),closable:!0};n.push(sm({raw:o},o))}return n},us=["red","orangered","orange","gold","lime","green","cyan","blue","arcoblue","purple","pinkpurple","magenta","gray"],um=te({name:"Tag",components:{IconHover:gt,IconClose:qt,IconLoading:It},props:{color:{type:String},size:{type:String},bordered:{type:Boolean,default:!1},visible:{type:Boolean,default:void 0},defaultVisible:{type:Boolean,default:!0},loading:{type:Boolean,default:!1},closable:{type:Boolean,default:!1},checkable:{type:Boolean,default:!1},checked:{type:Boolean,default:void 0},defaultChecked:{type:Boolean,default:!0}},emits:{"update:visible":e=>!0,"update:checked":e=>!0,close:e=>!0,check:(e,t)=>!0},setup(e,{emit:t}){const{size:n}=Re(e),r=oe("tag"),o=C(()=>e.color&&us.includes(e.color)),i=C(()=>e.color&&!us.includes(e.color)),a=H(e.defaultVisible),s=H(e.defaultChecked),l=C(()=>{var 
L;return(L=e.visible)!=null?L:a.value}),u=C(()=>{var L;return e.checkable?(L=e.checked)!=null?L:s.value:!0}),{mergedSize:c}=Mt(n),d=C(()=>c.value==="mini"?"small":c.value),m=L=>{a.value=!1,t("update:visible",!1),t("close",L)},_=L=>{if(e.checkable){const y=!u.value;s.value=y,t("update:checked",y),t("check",y,L)}},S=C(()=>[r,`${r}-size-${d.value}`,{[`${r}-loading`]:e.loading,[`${r}-hide`]:!l.value,[`${r}-${e.color}`]:o.value,[`${r}-bordered`]:e.bordered,[`${r}-checkable`]:e.checkable,[`${r}-checked`]:u.value,[`${r}-custom-color`]:i.value}]),E=C(()=>{if(i.value)return{backgroundColor:e.color}});return{prefixCls:r,cls:S,style:E,computedVisible:l,computedChecked:u,handleClick:_,handleClose:m}}});function cm(e,t,n,r,o,i){const a=ue("icon-close"),s=ue("icon-hover"),l=ue("icon-loading");return e.computedVisible?(x(),ee("span",{key:0,class:K(e.cls),style:$e(e.style),onClick:t[0]||(t[0]=(...u)=>e.handleClick&&e.handleClick(...u))},[e.$slots.icon?(x(),ee("span",{key:0,class:K(`${e.prefixCls}-icon`)},[se(e.$slots,"icon")],2)):pe("v-if",!0),se(e.$slots,"default"),e.closable?(x(),ve(s,{key:1,role:"button","aria-label":"Close",prefix:e.prefixCls,class:K(`${e.prefixCls}-close-btn`),onClick:cn(e.handleClose,["stop"])},{default:ke(()=>[se(e.$slots,"close-icon",{},()=>[z(a)])]),_:3},8,["prefix","class","onClick"])):pe("v-if",!0),e.loading?(x(),ee("span",{key:2,class:K(`${e.prefixCls}-loading-icon`)},[z(l)],2)):pe("v-if",!0)],6)):pe("v-if",!0)}var _o=ce(um,[["render",cm]]);const dm=Object.assign(_o,{install:(e,t)=>{Ve(e,t);const n=je(t);e.component(n+_o.name,_o)}});var fm=Object.defineProperty,cs=Object.getOwnPropertySymbols,hm=Object.prototype.hasOwnProperty,pm=Object.prototype.propertyIsEnumerable,ds=(e,t,n)=>t in e?fm(e,t,{enumerable:!0,configurable:!0,writable:!0,value:n}):e[t]=n,qn=(e,t)=>{for(var n in t||(t={}))hm.call(t,n)&&ds(e,n,t[n]);if(cs)for(var n of cs(t))pm.call(t,n)&&ds(e,n,t[n]);return e};const mm={value:"value",label:"label",closable:"closable",tagProps:"tagProps"};var Co=te({name:"InputTag",inheritAttrs:!1,props:{modelValue:{type:Array},defaultValue:{type:Array,default:()=>[]},inputValue:String,defaultInputValue:{type:String,default:""},placeholder:String,disabled:{type:Boolean,default:!1},error:{type:Boolean,default:!1},readonly:{type:Boolean,default:!1},allowClear:{type:Boolean,default:!1},size:{type:String},maxTagCount:{type:Number,default:0},retainInputValue:{type:[Boolean,Object],default:!1},formatTag:{type:Function},uniqueValue:{type:Boolean,default:!1},fieldNames:{type:Object},baseCls:String,focused:Boolean,disabledInput:Boolean,uninjectFormItemContext:Boolean},emits:{"update:modelValue":e=>!0,"update:inputValue":e=>!0,change:(e,t)=>!0,inputValueChange:(e,t)=>!0,pressEnter:(e,t)=>!0,remove:(e,t)=>!0,clear:e=>!0,focus:e=>!0,blur:e=>!0},setup(e,{emit:t,slots:n,attrs:r}){const{size:o,disabled:i,error:a,uninjectFormItemContext:s,modelValue:l}=Re(e),u=e.baseCls||oe("input-tag"),c=H(),d=H(),{mergedSize:m,mergedDisabled:_,mergedError:S,feedback:E,eventHandlers:L}=yt({size:o,disabled:i,error:a,uninject:s==null?void 0:s.value}),{mergedSize:y}=Mt(m),$=C(()=>qn(qn({},mm),e.fieldNames)),w=H(!1),h=H(e.defaultValue),p=H(e.defaultInputValue),b=H(!1),v=H(""),P=C(()=>Be(e.retainInputValue)?qn({create:!1,blur:!1},e.retainInputValue):{create:e.retainInputValue,blur:e.retainInputValue}),A=ze({width:"12px"}),T=C(()=>e.focused||w.value),j=(f,k)=>{p.value=f,t("update:inputValue",f),t("inputValueChange",f,k)},J=f=>{var 
k;const{value:q}=f.target;f.type==="compositionend"?(b.value=!1,v.value="",j(q,f),Je(()=>{c.value&&O.value!==c.value.value&&(c.value.value=O.value)})):(b.value=!0,v.value=O.value+((k=f.data)!=null?k:""))},U=C(()=>{var f;return(f=e.modelValue)!=null?f:h.value}),O=C(()=>{var f;return(f=e.inputValue)!=null?f:p.value});Ne(l,f=>{(vt(f)||jn(f))&&(h.value=[])});const N=f=>{c.value&&f.target!==c.value&&(f.preventDefault(),c.value.focus())},D=f=>{const{value:k}=f.target;b.value||(j(k,f),Je(()=>{c.value&&O.value!==c.value.value&&(c.value.value=O.value)}))},V=C(()=>lm(U.value,$.value)),G=C(()=>{if(e.maxTagCount>0){const f=V.value.length-e.maxTagCount;if(f>0){const k=V.value.slice(0,e.maxTagCount),q={value:"__arco__more",label:`+${f}...`,closable:!1};return k.push(qn({raw:q},q)),k}}return V.value}),B=(f,k)=>{var q,ne;h.value=f,t("update:modelValue",f),t("change",f,k),(ne=(q=L.value)==null?void 0:q.onChange)==null||ne.call(q,k)},I=(f,k,q)=>{var ne;const be=(ne=U.value)==null?void 0:ne.filter((Ye,Ze)=>Ze!==k);B(be,q),t("remove",f,q)},Y=f=>{B([],f),t("clear",f)},Q=C(()=>!_.value&&!e.readonly&&e.allowClear&&!!U.value.length),he=f=>{var k;if(O.value){if(f.preventDefault(),e.uniqueValue&&((k=U.value)!=null&&k.includes(O.value))){t("pressEnter",O.value,f);return}const q=U.value.concat(O.value);B(q,f),t("pressEnter",O.value,f),P.value.create||j("",f)}},me=f=>{var k,q;w.value=!0,t("focus",f),(q=(k=L.value)==null?void 0:k.onFocus)==null||q.call(k,f)},Se=f=>{var k,q;w.value=!1,!P.value.blur&&O.value&&j("",f),t("blur",f),(q=(k=L.value)==null?void 0:k.onBlur)==null||q.call(k,f)},Ae=()=>{for(let f=V.value.length-1;f>=0;f--)if(V.value[f].closable)return f;return-1},Ie=f=>{const k=f.key||f.code;if(!b.value&&O.value&&k===Ji.key&&he(f),!b.value&&G.value.length>0&&!O.value&&k===kf.key){const q=Ae();q>=0&&I(V.value[q].value,q,f)}},Te=f=>{f>12?A.width=`${f}px`:A.width="12px"};Ke(()=>{d.value&&Te(d.value.offsetWidth)});const we=()=>{d.value&&Te(d.value.offsetWidth)};Ne(O,f=>{c.value&&!b.value&&f!==c.value.value&&(c.value.value=f)});const Me=C(()=>[u,`${u}-size-${y.value}`,{[`${u}-disabled`]:_.value,[`${u}-disabled-input`]:e.disabledInput,[`${u}-error`]:S.value,[`${u}-focus`]:T.value,[`${u}-readonly`]:e.readonly,[`${u}-has-tag`]:G.value.length>0,[`${u}-has-prefix`]:!!n.prefix,[`${u}-has-suffix`]:!!n.suffix||Q.value||E.value,[`${u}-has-placeholder`]:!U.value.length}]),xe=C(()=>Vn(r,zt)),re=C(()=>xn(r,zt));return{inputRef:c,render:()=>{var f;return z("span",Fe({class:Me.value,onMousedown:N},xe.value),[z(Gl,{onResize:we},{default:()=>[z("span",{ref:d,class:`${u}-mirror`},[G.value.length>0?v.value||O.value:v.value||O.value||e.placeholder])]}),n.prefix&&z("span",{class:`${u}-prefix`},[n.prefix()]),z(Il,{tag:"span",name:"input-tag-zoom",class:`${u}-inner`},{default:()=>[G.value.map((k,q)=>z(dm,Fe({key:`tag-${k.value}`,class:`${u}-tag`,closable:!_.value&&!e.readonly&&k.closable,visible:!0},k.tagProps,{onClose:ne=>I(k.value,q,ne)}),{default:()=>{var ne,be,Ye,Ze;return[(Ze=(Ye=(ne=n.tag)==null?void 0:ne.call(n,{data:k.raw}))!=null?Ye:(be=e.formatTag)==null?void 0:be.call(e,k.raw))!=null?Ze:k.label]}})),z("input",Fe(re.value,{ref:c,key:"input-tag-input",class:`${u}-input`,style:A,placeholder:G.value.length===0?e.placeholder:void 
0,disabled:_.value,readonly:e.readonly||e.disabledInput,onInput:D,onKeydown:Ie,onFocus:me,onBlur:Se,onCompositionstart:J,onCompositionupdate:J,onCompositionend:J}),null)]}),Q.value&&z(gt,{class:`${u}-clear-btn`,onClick:Y,onMousedown:k=>k.stopPropagation()},{default:()=>[z(qt,null,null)]}),(n.suffix||!!E.value)&&z("span",{class:`${u}-suffix`},[(f=n.suffix)==null?void 0:f.call(n),!!E.value&&z(Xi,{type:E.value},null)])])}}},methods:{focus(){var e;(e=this.inputRef)==null||e.focus()},blur(){var e;(e=this.inputRef)==null||e.blur()}},render(){return this.render()}});const vm=Object.assign(Co,{install:(e,t)=>{Ve(e,t);const n=je(t);e.component(n+Co.name,Co)}});var fs=te({name:"SelectView",props:{modelValue:{type:Array,required:!0},inputValue:String,placeholder:String,disabled:{type:Boolean,default:!1},error:{type:Boolean,default:!1},loading:{type:Boolean,default:!1},opened:{type:Boolean,default:!1},size:{type:String},bordered:{type:Boolean,default:!0},multiple:{type:Boolean,default:!1},allowClear:{type:Boolean,default:!1},allowCreate:{type:Boolean,default:!1},allowSearch:{type:Boolean,default:e=>We(e.modelValue)},maxTagCount:{type:Number,default:0},retainInputValue:{type:Boolean,default:!1}},emits:["remove","clear","focus","blur"],setup(e,{emit:t,slots:n}){const{size:r,disabled:o,error:i}=Re(e),a=oe("select-view"),{feedback:s,eventHandlers:l,mergedDisabled:u,mergedSize:c,mergedError:d}=yt({size:r,disabled:o,error:i}),{mergedSize:m}=Mt(c),{opened:_}=Re(e),S=H(),E=C(()=>{var j;return(j=S.value)==null?void 0:j.inputRef}),L=C(()=>e.modelValue.length===0),y=C(()=>e.allowSearch||e.allowCreate),$=C(()=>e.allowClear&&!e.disabled&&!L.value),w=j=>{var J,U;t("focus",j),(U=(J=l.value)==null?void 0:J.onFocus)==null||U.call(J,j)},h=j=>{var J,U;t("blur",j),(U=(J=l.value)==null?void 0:J.onBlur)==null||U.call(J,j)},p=j=>{t("remove",j)},b=j=>{t("clear",j)},v=()=>{var j,J,U,O;return e.loading?(J=(j=n["loading-icon"])==null?void 0:j.call(n))!=null?J:z(It,null,null):e.allowSearch&&e.opened?(O=(U=n["search-icon"])==null?void 0:U.call(n))!=null?O:z(mi,null,null):n["arrow-icon"]?n["arrow-icon"]():z(su,{class:`${a}-arrow-icon`},null)},P=()=>z(rt,null,[$.value&&z(gt,{class:`${a}-clear-btn`,onClick:b,onMousedown:j=>j.stopPropagation()},{default:()=>[z(qt,null,null)]}),z("span",{class:`${a}-icon`},[v()]),!!s.value&&z(Xi,{type:s.value},null)]);Ne(_,j=>{!j&&E.value&&E.value.isSameNode(document.activeElement)&&E.value.blur()});const A=C(()=>[`${a}-${e.multiple?"multiple":"single"}`,{[`${a}-opened`]:e.opened,[`${a}-borderless`]:!e.bordered}]);return{inputRef:E,handleFocus:w,handleBlur:h,render:()=>e.multiple?z(vm,{ref:S,baseCls:a,class:A.value,modelValue:e.modelValue,inputValue:e.inputValue,focused:e.opened,placeholder:e.placeholder,disabled:u.value,size:m.value,error:d.value,maxTagCount:e.maxTagCount,disabledInput:!e.allowSearch&&!e.allowCreate,retainInputValue:!0,uninjectFormItemContext:!0,onRemove:p,onFocus:w,onBlur:h},{prefix:n.prefix,suffix:P,tag:n.label}):z(rm,{ref:S,baseCls:a,class:A.value,modelValue:e.modelValue[0],inputValue:e.inputValue,focused:e.opened,placeholder:e.placeholder,disabled:u.value,size:m.value,error:d.value,enabledInput:y.value,uninjectFormItemContext:!0,onFocus:w,onBlur:h},{default:n.label,prefix:n.prefix,suffix:P})}},methods:{focus(){this.inputRef&&this.inputRef.focus()},blur(){this.inputRef&&this.inputRef.blur()}},render(){return this.render()}});const gm=te({name:"Optgroup",props:{label:{type:String}},setup(){return{prefixCls:oe("select-group")}}});function ym(e,t,n,r,o,i){return 
x(),ee(rt,null,[fe("li",{class:K(`${e.prefixCls}-title`)},[se(e.$slots,"label",{},()=>[nt(Xe(e.label),1)])],2),se(e.$slots,"default")],64)}var nr=ce(gm,[["render",ym]]);const bm=({dataKeys:e,contentRef:t,fixedSize:n,estimatedSize:r,buffer:o})=>{const i=H(0),a=new Map,s=C(()=>e.value.length),l=H(0),u=C(()=>{const v=l.value+o.value*3;return v>s.value?s.value:v}),c=C(()=>{const v=s.value-o.value*3;return v<0?0:v}),d=v=>{v<0?l.value=0:v>c.value?l.value=c.value:l.value=v},m=H(n.value),_=C(()=>r.value!==30?r.value:i.value||r.value),S=(v,P)=>{a.set(v,P)},E=v=>{var P;if(m.value)return _.value;const A=e.value[v];return(P=a.get(A))!=null?P:_.value},L=v=>a.has(v);Ke(()=>{const v=Array.from(a.values()).reduce((P,A)=>P+A,0);v>0&&(i.value=v/a.size)});const y=v=>m.value?_.value*v:$(0,v),$=(v,P)=>{let A=0;for(let T=v;Tm.value?_.value*l.value:$(0,l.value)),h=v=>{const P=v>=w.value;let A=Math.abs(v-w.value);const T=P?l.value:l.value-1;let j=0;for(;A>0;)A-=E(T+j),P?j++:j--;return j},p=v=>{const P=h(v),A=l.value+P-o.value;return A<0?0:A>c.value?c.value:A},b=C(()=>m.value?_.value*(s.value-u.value):$(u.value,s.value));return{frontPadding:w,behindPadding:b,start:l,end:u,getStartByScroll:p,setItemSize:S,hasItemSize:L,setStart:d,getScrollOffset:y}};var _m=te({name:"VirtualListItem",props:{hasItemSize:{type:Function,required:!0},setItemSize:{type:Function,required:!0}},setup(e,{slots:t}){var n;const r=(n=Wt())==null?void 0:n.vnode.key,o=H(),i=()=>{var a,s,l,u;const c=(s=(a=o.value)==null?void 0:a.$el)!=null?s:o.value,d=(u=(l=c==null?void 0:c.getBoundingClientRect)==null?void 0:l.call(c).height)!=null?u:c==null?void 0:c.offsetHeight;d&&e.setItemSize(r,d)};return Ke(()=>i()),Ht(()=>i()),()=>{var a;const s=ln((a=t.default)==null?void 0:a.call(t));return s?Cr(s,{ref:o},!0):null}}}),Cm=Object.defineProperty,hs=Object.getOwnPropertySymbols,Sm=Object.prototype.hasOwnProperty,Em=Object.prototype.propertyIsEnumerable,ps=(e,t,n)=>t in e?Cm(e,t,{enumerable:!0,configurable:!0,writable:!0,value:n}):e[t]=n,wm=(e,t)=>{for(var n in t||(t={}))Sm.call(t,n)&&ps(e,n,t[n]);if(hs)for(var n of hs(t))Em.call(t,n)&&ps(e,n,t[n]);return e};const km=te({name:"VirtualList",components:{VirtualListItem:_m},props:{height:{type:[Number,String],default:200},data:{type:Array,default:()=>[]},threshold:{type:Number,default:0},itemKey:{type:String,default:"key"},fixedSize:{type:Boolean,default:!1},estimatedSize:{type:Number,default:30},buffer:{type:Number,default:10},component:{type:[String,Object],default:"div"},listAttrs:{type:Object},contentAttrs:{type:Object},paddingPosition:{type:String,default:"content"}},emits:{scroll:e=>!0,reachBottom:e=>!0},setup(e,{emit:t}){const{data:n,itemKey:r,fixedSize:o,estimatedSize:i,buffer:a,height:s}=Re(e),l=oe("virtual-list"),u=C(()=>Be(e.component)?wm({container:"div",list:"div",content:"div"},e.component):{container:e.component,list:"div",content:"div"}),c=H(),d=H(),m=C(()=>({height:de(s.value)?`${s.value}px`:s.value,overflow:"auto"})),_=C(()=>n.value.map((T,j)=>{var 
J;return(J=T[r.value])!=null?J:j})),{frontPadding:S,behindPadding:E,start:L,end:y,getStartByScroll:$,setItemSize:w,hasItemSize:h,setStart:p,getScrollOffset:b}=bm({dataKeys:_,contentRef:d,fixedSize:o,estimatedSize:i,buffer:a}),v=C(()=>e.threshold&&n.value.length<=e.threshold?n.value:n.value.slice(L.value,y.value));return{prefixCls:l,containerRef:c,contentRef:d,frontPadding:S,currentList:v,behindPadding:E,onScroll:T=>{const{scrollTop:j,scrollHeight:J,offsetHeight:U}=T.target,O=$(j);O!==L.value&&p(O),t("scroll",T),Math.floor(J-(j+U))<=0&&t("reachBottom",T)},setItemSize:w,hasItemSize:h,start:L,scrollTo:T=>{var j,J;if(c.value)if(de(T))c.value.scrollTop=T;else{const U=(J=T.index)!=null?J:_.value.indexOf((j=T.key)!=null?j:"");p(U-a.value),c.value.scrollTop=b(U),Je(()=>{if(c.value){const O=b(U);O!==c.value.scrollTop&&(c.value.scrollTop=O)}})}},style:m,mergedComponent:u}}});function $m(e,t,n,r,o,i){const a=ue("VirtualListItem");return x(),ve(sn(e.mergedComponent.container),{ref:"containerRef",class:K(e.prefixCls),style:$e(e.style),onScroll:e.onScroll},{default:ke(()=>[(x(),ve(sn(e.mergedComponent.list),Fe(e.listAttrs,{style:e.paddingPosition==="list"?{paddingTop:`${e.frontPadding}px`,paddingBottom:`${e.behindPadding}px`}:{}}),{default:ke(()=>[(x(),ve(sn(e.mergedComponent.content),Fe({ref:"contentRef"},e.contentAttrs,{style:e.paddingPosition==="content"?{paddingTop:`${e.frontPadding}px`,paddingBottom:`${e.behindPadding}px`}:{}}),{default:ke(()=>[(x(!0),ee(rt,null,Fn(e.currentList,(s,l)=>{var u;return x(),ve(a,{key:(u=s[e.itemKey])!=null?u:e.start+l,"has-item-size":e.hasItemSize,"set-item-size":e.setItemSize},{default:ke(()=>[se(e.$slots,"item",{item:s,index:e.start+l})]),_:2},1032,["has-item-size","set-item-size"])}),128))]),_:3},16,["style"]))]),_:3},16,["style"]))]),_:3},8,["class","style","onScroll"])}var Om=ce(km,[["render",$m]]);const ms=typeof window>"u"?global:window;function Lm(e,t){let n=0;return(...r)=>{n&&ms.clearTimeout(n),n=ms.setTimeout(()=>{n=0,e(...r)},t)}}var Tm=Object.defineProperty,Am=Object.defineProperties,Nm=Object.getOwnPropertyDescriptors,vs=Object.getOwnPropertySymbols,Pm=Object.prototype.hasOwnProperty,Im=Object.prototype.propertyIsEnumerable,gs=(e,t,n)=>t in e?Tm(e,t,{enumerable:!0,configurable:!0,writable:!0,value:n}):e[t]=n,En=(e,t)=>{for(var n in t||(t={}))Pm.call(t,n)&&gs(e,n,t[n]);if(vs)for(var n of vs(t))Im.call(t,n)&&gs(e,n,t[n]);return e},Mm=(e,t)=>Am(e,Nm(t));function Rm(e){return typeof e=="function"||Object.prototype.toString.call(e)==="[object Object]"&&!Ml(e)}const Bm={value:"value",label:"label",disabled:"disabled",tagProps:"tagProps",render:"render"};var So=te({name:"Select",components:{Trigger:hr,SelectView:fs},inheritAttrs:!1,props:{multiple:{type:Boolean,default:!1},modelValue:{type:[String,Number,Object,Array]},defaultValue:{type:[String,Number,Object,Array],default:e=>vt(e.multiple)?"":[]},inputValue:{type:String},defaultInputValue:{type:String,default:""},size:{type:String},placeholder:String,loading:{type:Boolean,default:!1},disabled:{type:Boolean,default:!1},error:{type:Boolean,default:!1},allowClear:{type:Boolean,default:!1},allowSearch:{type:[Boolean,Object],default:e=>!!e.multiple},allowCreate:{type:Boolean,default:!1},maxTagCount:{type:Number,default:0},popupContainer:{type:[String,Object]},bordered:{type:Boolean,default:!0},defaultActiveFirstOption:{type:Boolean,default:!0},popupVisible:{type:Boolean,default:void 
0},defaultPopupVisible:{type:Boolean,default:!1},unmountOnClose:{type:Boolean,default:!1},filterOption:{type:[Boolean,Function],default:!0},options:{type:Array,default:()=>[]},virtualListProps:{type:Object},triggerProps:{type:Object},formatLabel:{type:Function},fallbackOption:{type:[Boolean,Function],default:!0},showExtraOptions:{type:Boolean,default:!0},valueKey:{type:String,default:"value"},searchDelay:{type:Number,default:500},limit:{type:Number,default:0},fieldNames:{type:Object},scrollbar:{type:[Boolean,Object],default:!0}},emits:{"update:modelValue":e=>!0,"update:inputValue":e=>!0,"update:popupVisible":e=>!0,change:e=>!0,inputValueChange:e=>!0,popupVisibleChange:e=>!0,clear:e=>!0,remove:e=>!0,search:e=>!0,dropdownScroll:e=>!0,dropdownReachBottom:e=>!0,exceedLimit:(e,t)=>!0},setup(e,{slots:t,emit:n,attrs:r}){const{size:o,disabled:i,error:a,options:s,filterOption:l,valueKey:u,multiple:c,popupVisible:d,showExtraOptions:m,modelValue:_,fieldNames:S,loading:E,defaultActiveFirstOption:L}=Re(e),y=oe("select"),{mergedSize:$,mergedDisabled:w,mergedError:h,eventHandlers:p}=yt({size:o,disabled:i,error:a}),b=C(()=>e.virtualListProps?"div":"li"),v=C(()=>Be(e.allowSearch)&&!!e.allowSearch.retainInputValue);C(()=>{if(it(e.formatLabel))return W=>{const ie=ne.get(W.value);return e.formatLabel(ie)}});const P=H(),A=H({}),T=H(),{computedPopupVisible:j,handlePopupVisibleChange:J}=Yp({popupVisible:d,emit:n}),U=H(e.defaultValue),O=C(()=>{var W;const ie=(W=e.modelValue)!=null?W:U.value;return(We(ie)?ie:ie||de(ie)?[ie]:[]).map(Ce=>({value:Ce,key:Rn(Ce,e.valueKey)}))});Ne(_,W=>{(vt(W)||jn(W))&&(U.value=c.value?[]:"")});const N=C(()=>O.value.map(W=>W.key)),D=C(()=>En(En({},Bm),S==null?void 0:S.value)),V=H(),G=W=>{const ie={};return W.forEach(le=>{ie[le]=ne.get(le)}),ie},B=W=>{V.value=G(W)},I=W=>it(e.fallbackOption)?e.fallbackOption(W):{[D.value.value]:W,[D.value.label]:String(Be(W)?W[u==null?void 0:u.value]:W)},Y=()=>{const W=[],ie=[];if(e.allowCreate||e.fallbackOption){for(const le of O.value)if(!ie.includes(le.key)){const Ce=ne.get(le.key);(!Ce||Ce.origin==="extraOptions")&&(W.push(le),ie.push(le.key))}}if(e.allowCreate&&Se.value){const le=Rn(Se.value);if(!ie.includes(le)){const Ce=ne.get(le);(!Ce||Ce.origin==="extraOptions")&&W.push({value:Se.value,key:le})}}return W},Q=H([]),he=C(()=>Q.value.map(W=>{var ie;let le=I(W.value);const Ce=(ie=V.value)==null?void 0:ie[W.key];return!vt(Ce)&&!rd(Ce)&&(le=En(En({},le),Ce)),le}));Je(()=>{Xc(()=>{var W;const ie=Y();if(ie.length!==Q.value.length)Q.value=ie;else if(ie.length>0){for(let le=0;le{var W;return(W=e.inputValue)!=null?W:me.value});Ne(j,W=>{!W&&!v.value&&Se.value&&Te("")});const Ae=W=>{var ie,le;return e.multiple?W.map(Ce=>{var st,Tt;return(Tt=(st=ne.get(Ce))==null?void 0:st.value)!=null?Tt:""}):(le=(ie=ne.get(W[0]))==null?void 0:ie.value)!=null?le:""},Ie=W=>{var ie,le;const Ce=Ae(W);U.value=Ce,n("update:modelValue",Ce),n("change",Ce),(le=(ie=p.value)==null?void 0:ie.onChange)==null||le.call(ie),B(W)},Te=W=>{me.value=W,n("update:inputValue",W),n("inputValueChange",W)},we=(W,ie)=>{if(e.multiple){if(N.value.includes(W)){const le=N.value.filter(Ce=>Ce!==W);Ie(le)}else if(Ye.value.includes(W))if(e.limit>0&&N.value.length>=e.limit){const le=ne.get(W);n("exceedLimit",le==null?void 0:le.value,ie)}else{const le=N.value.concat(W);Ie(le)}v.value||Te("")}else{if(W!==N.value[0]&&Ie([W]),v.value){const le=ne.get(W);le&&Te(le.label)}J(!1)}},Me=Lm(W=>{n("search",W)},e.searchDelay),xe=W=>{W!==Se.value&&(j.value||J(!0),Te(W),e.allowSearch&&Me(W))},re=W=>{const 
ie=ne.get(W),le=N.value.filter(Ce=>Ce!==W);Ie(le),n("remove",ie==null?void 0:ie.value)},g=W=>{W==null||W.stopPropagation();const ie=N.value.filter(le=>{var Ce;return(Ce=ne.get(le))==null?void 0:Ce.disabled});Ie(ie),Te(""),n("clear",W)},f=W=>{n("dropdownScroll",W)},k=W=>{n("dropdownReachBottom",W)},{validOptions:q,optionInfoMap:ne,validOptionInfos:be,enabledOptionKeys:Ye,handleKeyDown:Ze}=Op({multiple:c,options:s,extraOptions:he,inputValue:Se,filterOption:l,showExtraOptions:m,component:b,valueKey:u,fieldNames:S,loading:E,popupVisible:j,valueKeys:N,dropdownRef:P,optionRefs:A,virtualListRef:T,defaultActiveFirstOption:L,onSelect:we,onPopupVisibleChange:J}),Sn=C(()=>{var W;const ie=[];for(const le of O.value){const Ce=ne.get(le.key);Ce&&ie.push(Mm(En({},Ce),{value:le.key,label:(W=Ce==null?void 0:Ce.label)!=null?W:String(Be(le.value)?le.value[u==null?void 0:u.value]:le.value),closable:!(Ce!=null&&Ce.disabled),tagProps:Ce==null?void 0:Ce.tagProps}))}return ie}),Z=W=>{if(it(t.option)){const ie=t.option;return()=>ie({data:W.raw})}return it(W.render)?W.render:()=>W.label},R=W=>{if(ru(W)){let ie;return z(nr,{key:W.key,label:W.label},Rm(ie=W.options.map(le=>R(le)))?ie:{default:()=>[ie]})}return $r(W,{inputValue:Se.value,filterOption:l==null?void 0:l.value})?z(er,{ref:ie=>{ie!=null&&ie.$el&&(A.value[W.key]=ie.$el)},key:W.key,value:W.value,label:W.label,disabled:W.disabled,internal:!0},{default:Z(W)}):null},X=()=>z(ip,{ref:P,loading:e.loading,empty:be.value.length===0,virtualList:!!e.virtualListProps,scrollbar:e.scrollbar,onScroll:f,onReachBottom:k},{default:()=>{var W,ie;return[...(ie=(W=t.default)==null?void 0:W.call(t))!=null?ie:[],...q.value.map(R)]},"virtual-list":()=>z(Om,Fe(e.virtualListProps,{ref:T,data:q.value}),{item:({item:W})=>R(W)}),empty:t.empty,header:t.header,footer:t.footer}),Ue=({data:W})=>{var ie,le,Ce,st;if((t.label||it(e.formatLabel))&&W){const Tt=ne.get(W.value);if(Tt!=null&&Tt.raw)return(Ce=(ie=t.label)==null?void 0:ie.call(t,{data:Tt.raw}))!=null?Ce:(le=e.formatLabel)==null?void 0:le.call(e,Tt.raw)}return(st=W==null?void 0:W.label)!=null?st:""};return()=>z(hr,Fe({trigger:"click",position:"bl",popupOffset:4,animationName:"slide-dynamic-origin",hideEmpty:!0,preventFocus:!0,autoFitPopupWidth:!0,autoFitTransformOrigin:!0,disabled:w.value,popupVisible:j.value,unmountOnClose:e.unmountOnClose,clickToClose:!(e.allowSearch||e.allowCreate),popupContainer:e.popupContainer,onPopupVisibleChange:J},e.triggerProps),{default:()=>{var W,ie;return[(ie=(W=t.trigger)==null?void 0:W.call(t))!=null?ie:z(fs,Fe({class:y,modelValue:Sn.value,inputValue:Se.value,multiple:e.multiple,disabled:w.value,error:h.value,loading:e.loading,allowClear:e.allowClear,allowCreate:e.allowCreate,allowSearch:!!e.allowSearch,opened:j.value,maxTagCount:e.maxTagCount,placeholder:e.placeholder,bordered:e.bordered,size:$.value,onInputValueChange:xe,onRemove:re,onClear:g,onKeydown:Ze},r),{label:Ue,prefix:t.prefix,"arrow-icon":t["arrow-icon"],"loading-icon":t["loading-icon"],"search-icon":t["search-icon"]})]},content:X})}});const YC=Object.assign(So,{Option:er,OptGroup:nr,install:(e,t)=>{Ve(e,t);const 
n=je(t);e.component(n+So.name,So),e.component(n+er.name,er),e.component(n+nr.name,nr)}}),Dm=te({name:"IconLeft",props:{size:{type:[Number,String]},strokeWidth:{type:Number,default:4},strokeLinecap:{type:String,default:"butt",validator:e=>["butt","round","square"].includes(e)},strokeLinejoin:{type:String,default:"miter",validator:e=>["arcs","bevel","miter","miter-clip","round"].includes(e)},rotate:Number,spin:Boolean},emits:{click:e=>!0},setup(e,{emit:t}){const n=oe("icon"),r=C(()=>[n,`${n}-left`,{[`${n}-spin`]:e.spin}]),o=C(()=>{const a={};return e.size&&(a.fontSize=de(e.size)?`${e.size}px`:e.size),e.rotate&&(a.transform=`rotate(${e.rotate}deg)`),a});return{cls:r,innerStyle:o,onClick:a=>{t("click",a)}}}}),Fm=["stroke-width","stroke-linecap","stroke-linejoin"],jm=fe("path",{d:"M32 8.4 16.444 23.956 32 39.513"},null,-1),Vm=[jm];function xm(e,t,n,r,o,i){return x(),ee("svg",{viewBox:"0 0 48 48",fill:"none",xmlns:"http://www.w3.org/2000/svg",stroke:"currentColor",class:K(e.cls),style:$e(e.innerStyle),"stroke-width":e.strokeWidth,"stroke-linecap":e.strokeLinecap,"stroke-linejoin":e.strokeLinejoin,onClick:t[0]||(t[0]=(...a)=>e.onClick&&e.onClick(...a))},Vm,14,Fm)}var Eo=ce(Dm,[["render",xm]]);const XC=Object.assign(Eo,{install:(e,t)=>{var n;const r=(n=t==null?void 0:t.iconPrefix)!=null?n:"";e.component(r+Eo.name,Eo)}}),zm=te({name:"IconUp",props:{size:{type:[Number,String]},strokeWidth:{type:Number,default:4},strokeLinecap:{type:String,default:"butt",validator:e=>["butt","round","square"].includes(e)},strokeLinejoin:{type:String,default:"miter",validator:e=>["arcs","bevel","miter","miter-clip","round"].includes(e)},rotate:Number,spin:Boolean},emits:{click:e=>!0},setup(e,{emit:t}){const n=oe("icon"),r=C(()=>[n,`${n}-up`,{[`${n}-spin`]:e.spin}]),o=C(()=>{const a={};return e.size&&(a.fontSize=de(e.size)?`${e.size}px`:e.size),e.rotate&&(a.transform=`rotate(${e.rotate}deg)`),a});return{cls:r,innerStyle:o,onClick:a=>{t("click",a)}}}}),Um=["stroke-width","stroke-linecap","stroke-linejoin"],Wm=fe("path",{d:"M39.6 30.557 24.043 15 8.487 30.557"},null,-1),Hm=[Wm];function qm(e,t,n,r,o,i){return x(),ee("svg",{viewBox:"0 0 48 48",fill:"none",xmlns:"http://www.w3.org/2000/svg",stroke:"currentColor",class:K(e.cls),style:$e(e.innerStyle),"stroke-width":e.strokeWidth,"stroke-linecap":e.strokeLinecap,"stroke-linejoin":e.strokeLinejoin,onClick:t[0]||(t[0]=(...a)=>e.onClick&&e.onClick(...a))},Hm,14,Um)}var wo=ce(zm,[["render",qm]]);const Gm=Object.assign(wo,{install:(e,t)=>{var n;const r=(n=t==null?void 0:t.iconPrefix)!=null?n:"";e.component(r+wo.name,wo)}});function Km(e,t,n){return C(()=>!!(e[n]||t[n]))}const Ym=te({name:"ConfigProvider",props:{prefixCls:{type:String,default:"arco"},locale:{type:Object},size:{type:String},global:{type:Boolean,default:!1},updateAtScroll:{type:Boolean,default:!1}},setup(e,{slots:t}){const{prefixCls:n,locale:r,size:o,updateAtScroll:i}=Re(e),a=ze({slots:t,prefixCls:n,locale:r,size:o,updateAtScroll:i});if(e.global){const s=Wt();s&&s.appContext.app.provide(xt,a)}else $t(xt,a)}});function Xm(e,t,n,r,o,i){return se(e.$slots,"default")}var ko=ce(Ym,[["render",Xm]]);const JC=Object.assign(ko,{install:(e,t)=>{Ve(e,t);const 
n=je(t);e.component(n+ko.name,ko)}}),Jm=te({name:"IconLink",props:{size:{type:[Number,String]},strokeWidth:{type:Number,default:4},strokeLinecap:{type:String,default:"butt",validator:e=>["butt","round","square"].includes(e)},strokeLinejoin:{type:String,default:"miter",validator:e=>["arcs","bevel","miter","miter-clip","round"].includes(e)},rotate:Number,spin:Boolean},emits:{click:e=>!0},setup(e,{emit:t}){const n=oe("icon"),r=C(()=>[n,`${n}-link`,{[`${n}-spin`]:e.spin}]),o=C(()=>{const a={};return e.size&&(a.fontSize=de(e.size)?`${e.size}px`:e.size),e.rotate&&(a.transform=`rotate(${e.rotate}deg)`),a});return{cls:r,innerStyle:o,onClick:a=>{t("click",a)}}}}),Zm=["stroke-width","stroke-linecap","stroke-linejoin"],Qm=fe("path",{d:"m14.1 25.414-4.95 4.95a6 6 0 0 0 8.486 8.485l8.485-8.485a6 6 0 0 0 0-8.485m7.779.707 4.95-4.95a6 6 0 1 0-8.486-8.485l-8.485 8.485a6 6 0 0 0 0 8.485"},null,-1),ev=[Qm];function tv(e,t,n,r,o,i){return x(),ee("svg",{viewBox:"0 0 48 48",fill:"none",xmlns:"http://www.w3.org/2000/svg",stroke:"currentColor",class:K(e.cls),style:$e(e.innerStyle),"stroke-width":e.strokeWidth,"stroke-linecap":e.strokeLinecap,"stroke-linejoin":e.strokeLinejoin,onClick:t[0]||(t[0]=(...a)=>e.onClick&&e.onClick(...a))},ev,14,Zm)}var $o=ce(Jm,[["render",tv]]);const nv=Object.assign($o,{install:(e,t)=>{var n;const r=(n=t==null?void 0:t.iconPrefix)!=null?n:"";e.component(r+$o.name,$o)}}),rv=te({name:"Link",components:{IconLink:nv,IconLoading:It},props:{href:String,status:{type:String,default:"normal"},hoverable:{type:Boolean,default:!0},icon:Boolean,loading:Boolean,disabled:Boolean},emits:{click:e=>!0},setup(e,{slots:t,emit:n}){const r=oe("link"),o=Km(e,t,"icon"),i=s=>{if(e.disabled||e.loading){s.preventDefault();return}n("click",s)};return{cls:C(()=>[r,`${r}-status-${e.status}`,{[`${r}-disabled`]:e.disabled,[`${r}-loading`]:e.loading,[`${r}-hoverless`]:!e.hoverable,[`${r}-with-icon`]:e.loading||o.value}]),prefixCls:r,showIcon:o,handleClick:i}}}),ov=["href"];function iv(e,t,n,r,o,i){const a=ue("icon-loading"),s=ue("icon-link");return x(),ee("a",{href:e.disabled?void 0:e.href,class:K(e.cls),onClick:t[0]||(t[0]=(...l)=>e.handleClick&&e.handleClick(...l))},[e.loading||e.showIcon?(x(),ee("span",{key:0,class:K(`${e.prefixCls}-icon`)},[e.loading?(x(),ve(a,{key:0})):se(e.$slots,"icon",{key:1},()=>[z(s)])],2)):pe("v-if",!0),se(e.$slots,"default")],10,ov)}var Oo=ce(rv,[["render",iv]]);const ZC=Object.assign(Oo,{install:(e,t)=>{Ve(e,t);const n=je(t);e.component(n+Oo.name,Oo)}}),ys=(e,t)=>{if(!e||!t)return;const n=t.split(".");if(n.length===0)return;let r=e;for(let o=0;o{if(!e||!t)return;const r=t.split(".");if(r.length===0)return;let o=e;for(let i=0;it in e?av(e,t,{enumerable:!0,configurable:!0,writable:!0,value:n}):e[t]=n,dv=(e,t)=>{for(var n in t||(t={}))uv.call(t,n)&&Cs(e,n,t[n]);if(_s)for(var n of _s(t))cv.call(t,n)&&Cs(e,n,t[n]);return e},fv=(e,t)=>sv(e,lv(t));const pr=["xxl","xl","lg","md","sm","xs"],Gn={xs:"(max-width: 575px)",sm:"(min-width: 576px)",md:"(min-width: 768px)",lg:"(min-width: 992px)",xl:"(min-width: 1200px)",xxl:"(min-width: 1600px)"};let Kt=[],hv=-1,Kn={};const Ss={matchHandlers:{},dispatch(e,t){return Kn=e,Kt.length<1?!1:(Kt.forEach(n=>{n.func(Kn,t)}),!0)},subscribe(e){Kt.length===0&&this.register();const t=(++hv).toString();return Kt.push({token:t,func:e}),e(Kn,null),t},unsubscribe(e){Kt=Kt.filter(t=>t.token!==e),Kt.length===0&&this.unregister()},unregister(){Object.keys(Gn).forEach(e=>{const t=Gn[e];if(!t)return;const 
n=this.matchHandlers[t];n&&n.mql&&n.listener&&(n.mql.removeEventListener?n.mql.removeEventListener("change",n.listener):n.mql.removeListener(n.listener))})},register(){Object.keys(Gn).forEach(e=>{const t=Gn[e];if(!t)return;const n=({matches:o})=>{this.dispatch(fv(dv({},Kn),{[e]:o}),e)},r=window.matchMedia(t);r.addEventListener?r.addEventListener("change",n):r.addListener(n),this.matchHandlers[t]={mql:r,listener:n},n(r)})}};function Es(e){return Be(e)}function yi(e,t,n=!1){const r=H({xs:!0,sm:!0,md:!0,lg:!0,xl:!0,xxl:!0}),o=C(()=>{let a=t;if(Es(e.value))for(let s=0;s{i=Ss.subscribe(a=>{Es(e.value)&&(r.value=a)})}),Sr(()=>{i&&Ss.unsubscribe(i)}),o}var Lo=te({name:"Divider",props:{direction:{type:String,default:"horizontal"},orientation:{type:String,default:"center"},type:{type:String},size:{type:Number},margin:{type:[Number,String]}},setup(e,{slots:t}){const n=oe("divider"),r=C(()=>e.direction==="horizontal"),o=C(()=>{const i={};if(e.size&&(i[r.value?"border-left-width":"border-bottom-width"]=de(e.size)?`${e.size}px`:e.size),e.type&&(i[r.value?"border-left-style":"border-bottom-style"]=e.type),e.margin){const a=de(e.margin)?`${e.margin}px`:e.margin;i.margin=r.value?`${a} 0`:`0 ${a}`}return i});return()=>{var i;const a=(i=t.default)==null?void 0:i.call(t),s=[n,`${n}-${e.direction}`,{[`${n}-with-text`]:a}];return z("div",{role:"separator",class:s,style:o.value},[a&&e.direction==="horizontal"&&z("span",{class:[`${n}-text`,`${n}-text-${e.orientation}`]},[a])])}}});const QC=Object.assign(Lo,{install:(e,t)=>{Ve(e,t);const n=je(t);e.component(n+Lo.name,Lo)}}),pv=te({name:"Form",props:{model:{type:Object,required:!0},layout:{type:String,default:"horizontal"},size:{type:String},labelColProps:{type:Object,default:()=>({span:5,offset:0})},wrapperColProps:{type:Object,default:()=>({span:19,offset:0})},labelColStyle:Object,wrapperColStyle:Object,labelAlign:{type:String,default:"right"},disabled:{type:Boolean,default:void 0},rules:{type:Object},autoLabelWidth:{type:Boolean,default:!1}},emits:{submit:(e,t)=>!0,submitSuccess:(e,t)=>!0,submitFailed:(e,t)=>!0},setup(e,{emit:t}){const n=oe("form"),{model:r,layout:o,disabled:i,labelAlign:a,labelColProps:s,wrapperColProps:l,labelColStyle:u,wrapperColStyle:c,size:d,rules:m}=Re(e),{mergedSize:_}=Mt(d),S=C(()=>e.layout==="horizontal"&&e.autoLabelWidth),E=[],L=[],y=ze({}),$=C(()=>Math.max(...Object.values(y))),w=O=>{O&&O.field&&E.push(O)},h=O=>{O&&O.field&&E.splice(E.indexOf(O),1)},p=O=>{E.forEach(N=>{O[N.field]&&N.setField(O[N.field])})},b=(O,N)=>{N&&y[N]!==O&&(y[N]=O)},v=O=>{O&&delete y[O]},P=O=>{const N=O?[].concat(O):[];E.forEach(D=>{(N.length===0||N.includes(D.field))&&D.resetField()})},A=O=>{const N=O?[].concat(O):[];E.forEach(D=>{(N.length===0||N.includes(D.field))&&D.clearValidate()})},T=O=>{const N=[];return E.forEach(D=>{N.push(D.validate())}),Promise.all(N).then(D=>{const V={};let G=!1;return D.forEach(B=>{B&&(G=!0,V[B.field]=B)}),it(O)&&O(G?V:void 0),G?V:void 0})},j=(O,N)=>{const D=[];for(const V of E)(We(O)&&O.includes(V.field)||O===V.field)&&D.push(V.validate());return Promise.all(D).then(V=>{const G={};let B=!1;return V.forEach(I=>{I&&(B=!0,G[I.field]=I)}),it(N)&&N(B?G:void 0),B?G:void 0})},J=O=>{const N=[];E.forEach(D=>{N.push(D.validate())}),Promise.all(N).then(D=>{const V={};let G=!1;D.forEach(B=>{B&&(G=!0,V[B.field]=B)}),G?t("submitFailed",{values:r.value,errors:V},O):t("submitSuccess",r.value,O),t("submit",{values:r.value,errors:G?V:void 0},O)})};return 
$t(Zi,ze({layout:o,disabled:i,labelAlign:a,labelColProps:s,wrapperColProps:l,labelColStyle:u,wrapperColStyle:c,model:r,size:_,rules:m,fields:E,touchedFields:L,addField:w,removeField:h,validateField:j,setLabelWidth:b,removeLabelWidth:v,maxLabelWidth:$,autoLabelWidth:S})),{cls:C(()=>[n,`${n}-layout-${e.layout}`,`${n}-size-${_.value}`,{[`${n}-auto-label-width`]:e.autoLabelWidth}]),handleSubmit:J,innerValidate:T,innerValidateField:j,innerResetFields:P,innerClearValidate:A,innerSetFields:p}},methods:{validate(e){return this.innerValidate(e)},validateField(e,t){return this.innerValidateField(e,t)},resetFields(e){return this.innerResetFields(e)},clearValidate(e){return this.innerClearValidate(e)},setFields(e){return this.innerSetFields(e)}}});function mv(e,t,n,r,o,i){return x(),ee("form",{class:K(e.cls),onSubmit:t[0]||(t[0]=cn((...a)=>e.handleSubmit&&e.handleSubmit(...a),["prevent"]))},[se(e.$slots,"default")],34)}var To=ce(pv,[["render",mv]]),yn=Object.prototype.toString;function Or(e){return yn.call(e)==="[object Array]"}function Nt(e){return yn.call(e)==="[object Object]"}function bi(e){return yn.call(e)==="[object String]"}function vv(e){return yn.call(e)==="[object Number]"&&e===e}function gv(e){return yn.call(e)==="[object Boolean]"}function _i(e){return yn.call(e)==="[object Function]"}function yv(e){return Nt(e)&&Object.keys(e).length===0}function Yt(e){return e==null||e===""}function uu(e){return Or(e)&&!e.length}var ta=function(e,t){if(typeof e!="object"||typeof t!="object")return e===t;if(_i(e)&&_i(t))return e===t||e.toString()===t.toString();if(Object.keys(e).length!==Object.keys(t).length)return!1;for(var n in e){var r=ta(e[n],t[n]);if(!r)return!1}return!0},bv=function(e,t){var n={};return Object.keys(e).forEach(function(r){var o=e[r],i=t&&t[r];n[r]=Nt(o)?Object.assign(Object.assign({},o),i):i||o}),n},_v=function(e,t){for(var n=t.split("."),r=e,o=0;o=o,this.getValidateMsg("string.minLength",{minLength:o})):this},t.prototype.length=function(o){return this.obj?this.validate(this.obj.length===o,this.getValidateMsg("string.length",{length:o})):this},t.prototype.match=function(o){var i=o instanceof RegExp;return i&&(o.lastIndex=0),this.validate(this.obj===void 0||i&&o.test(this.obj),this.getValidateMsg("string.match",{pattern:o}))},n.uppercase.get=function(){return this.obj?this.validate(this.obj.toUpperCase()===this.obj,this.getValidateMsg("string.uppercase")):this},n.lowercase.get=function(){return this.obj?this.validate(this.obj.toLowerCase()===this.obj,this.getValidateMsg("string.lowercase")):this},Object.defineProperties(t.prototype,n),t}(bt),Ev=function(e){function t(r,o){e.call(this,r,Object.assign(Object.assign({},o),{type:"number"})),this.validate(o&&o.strict?vv(this.obj):!0,this.getValidateMsg("type.number"))}e&&(t.__proto__=e),t.prototype=Object.create(e&&e.prototype),t.prototype.constructor=t;var n={positive:{configurable:!0},negative:{configurable:!0}};return t.prototype.min=function(o){return Yt(this.obj)?this:this.validate(this.obj>=o,this.getValidateMsg("number.min",{min:o}))},t.prototype.max=function(o){return Yt(this.obj)?this:this.validate(this.obj<=o,this.getValidateMsg("number.max",{max:o}))},t.prototype.equal=function(o){return Yt(this.obj)?this:this.validate(this.obj===o,this.getValidateMsg("number.equal",{equal:o}))},t.prototype.range=function(o,i){return Yt(this.obj)?this:this.validate(this.obj>=o&&this.obj<=i,this.getValidateMsg("number.range",{min:o,max:i}))},n.positive.get=function(){return 
Yt(this.obj)?this:this.validate(this.obj>0,this.getValidateMsg("number.positive"))},n.negative.get=function(){return Yt(this.obj)?this:this.validate(this.obj<0,this.getValidateMsg("number.negative"))},Object.defineProperties(t.prototype,n),t}(bt),wv=function(e){function t(r,o){e.call(this,r,Object.assign(Object.assign({},o),{type:"array"})),this.validate(o&&o.strict?Or(this.obj):!0,this.getValidateMsg("type.array",{value:this.obj,type:this.type}))}e&&(t.__proto__=e),t.prototype=Object.create(e&&e.prototype),t.prototype.constructor=t;var n={empty:{configurable:!0}};return t.prototype.length=function(o){return this.obj?this.validate(this.obj.length===o,this.getValidateMsg("array.length",{value:this.obj,length:o})):this},t.prototype.minLength=function(o){return this.obj?this.validate(this.obj.length>=o,this.getValidateMsg("array.minLength",{value:this.obj,minLength:o})):this},t.prototype.maxLength=function(o){return this.obj?this.validate(this.obj.length<=o,this.getValidateMsg("array.maxLength",{value:this.obj,maxLength:o})):this},t.prototype.includes=function(o){var i=this;return this.obj?this.validate(o.every(function(a){return i.obj.indexOf(a)!==-1}),this.getValidateMsg("array.includes",{value:this.obj,includes:o})):this},t.prototype.deepEqual=function(o){return this.obj?this.validate(ta(this.obj,o),this.getValidateMsg("array.deepEqual",{value:this.obj,deepEqual:o})):this},n.empty.get=function(){return this.validate(uu(this.obj),this.getValidateMsg("array.empty",{value:this.obj}))},Object.defineProperties(t.prototype,n),t}(bt),kv=function(e){function t(r,o){e.call(this,r,Object.assign(Object.assign({},o),{type:"object"})),this.validate(o&&o.strict?Nt(this.obj):!0,this.getValidateMsg("type.object"))}e&&(t.__proto__=e),t.prototype=Object.create(e&&e.prototype),t.prototype.constructor=t;var n={empty:{configurable:!0}};return t.prototype.deepEqual=function(o){return this.obj?this.validate(ta(this.obj,o),this.getValidateMsg("object.deepEqual",{deepEqual:o})):this},t.prototype.hasKeys=function(o){var i=this;return this.obj?this.validate(o.every(function(a){return i.obj[a]}),this.getValidateMsg("object.hasKeys",{keys:o})):this},n.empty.get=function(){return this.validate(yv(this.obj),this.getValidateMsg("object.empty"))},Object.defineProperties(t.prototype,n),t}(bt),$v=function(e){function t(r,o){e.call(this,r,Object.assign(Object.assign({},o),{type:"boolean"})),this.validate(o&&o.strict?gv(this.obj):!0,this.getValidateMsg("type.boolean"))}e&&(t.__proto__=e),t.prototype=Object.create(e&&e.prototype),t.prototype.constructor=t;var n={true:{configurable:!0},false:{configurable:!0}};return n.true.get=function(){return this.validate(this.obj===!0,this.getValidateMsg("boolean.true"))},n.false.get=function(){return this.validate(this.obj===!1,this.getValidateMsg("boolean.false"))},Object.defineProperties(t.prototype,n),t}(bt),Ov=/^(([^<>()\[\]\\.,;:\s@"]+(\.[^<>()\[\]\\.,;:\s@"]+)*)|(".+"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\])|(([a-zA-Z\-0-9]+\.)+[a-zA-Z]{2,}))$/,Lv=new RegExp("^(?!mailto:)(?:(?:http|https|ftp)://)(?:\\S+(?::\\S*)?@)?(?:(?:(?:[1-9]\\d?|1\\d\\d|2[01]\\d|22[0-3])(?:\\.(?:1?\\d{1,2}|2[0-4]\\d|25[0-5])){2}(?:\\.(?:[0-9]\\d?|1\\d\\d|2[0-4]\\d|25[0-4]))|(?:(?:[a-z\\u00a1-\\uffff0-9]+-?)*[a-z\\u00a1-\\uffff0-9]+)(?:\\.(?:[a-z\\u00a1-\\uffff0-9]+-?)*[a-z\\u00a1-\\uffff0-9]+)*(?:\\.(?:[a-z\\u00a1-\\uffff]{2,})))|localhost)(?::\\d{2,5})?(?:(/|\\?|#)[^\\s]*)?$","i"),Tv=/^(2(5[0-5]{1}|[0-4]\d{1})|[0-1]?\d{1,2})(\.(2(5[0-5]{1}|[0-4]\d{1})|[0-1]?\d{1,2})){3}$/,Av=function(e){function 
t(r,o){e.call(this,r,Object.assign(Object.assign({},o),{type:"type"}))}e&&(t.__proto__=e),t.prototype=Object.create(e&&e.prototype),t.prototype.constructor=t;var n={email:{configurable:!0},url:{configurable:!0},ip:{configurable:!0}};return n.email.get=function(){return this.type="email",this.validate(this.obj===void 0||Ov.test(this.obj),this.getValidateMsg("type.email"))},n.url.get=function(){return this.type="url",this.validate(this.obj===void 0||Lv.test(this.obj),this.getValidateMsg("type.url"))},n.ip.get=function(){return this.type="ip",this.validate(this.obj===void 0||Tv.test(this.obj),this.getValidateMsg("type.ip"))},Object.defineProperties(t.prototype,n),t}(bt),Nv=function(e){function t(r,o){e.call(this,r,Object.assign(Object.assign({},o),{type:"custom"}))}e&&(t.__proto__=e),t.prototype=Object.create(e&&e.prototype),t.prototype.constructor=t;var n={validate:{configurable:!0}};return n.validate.get=function(){var r=this;return function(o,i){var a;if(o)return a=o(r.obj,r.addError.bind(r)),a&&a.then?(i&&a.then(function(){i&&i(r.error)},function(s){console.error(s)}),[a,r]):(i&&i(r.error),r.error)}},Object.defineProperties(t.prototype,n),t}(bt),Pv=function(t,n){this.string=new Sv(t,n),this.number=new Ev(t,n),this.array=new wv(t,n),this.object=new kv(t,n),this.boolean=new $v(t,n),this.type=new Av(t,n),this.custom=new Nv(t,n)},cu=function(t,n){n===void 0&&(n={}),this.schema=t,this.options=n};cu.prototype.validate=function(t,n){var r=this;if(!Nt(t))return;var o=[],i=null;function a(s,l){i||(i={}),(!i[s]||l.requiredError)&&(i[s]=l)}this.schema&&Object.keys(this.schema).forEach(function(s){if(Or(r.schema[s]))for(var l=function(d){var m=r.schema[s][d],_=m.type,S=m.message;if(!_&&!m.validator)throw"You must specify a type to field "+s+"!";var E=new Pv(t[s],Object.assign(Object.assign({},r.options),{message:S,field:s})),L=E.type[_]||null;if(!L)if(m.validator){L=E.custom.validate(m.validator),Object.prototype.toString.call(L)==="[object Array]"&&L[0].then?o.push({function:L[0],_this:L[1],key:s}):L&&a(s,L);return}else L=E[_];if(Object.keys(m).forEach(function(y){m.required&&(L=L.isRequired),y!=="message"&&L[y]&&m[y]&&typeof L[y]=="object"&&(L=L[y]),L[y]&&m[y]!==void 0&&typeof L[y]=="function"&&(L=L[y](m[y]))}),L.collect(function(y){y&&a(s,y)}),i)return"break"},u=0;u0?Promise.all(o.map(function(s){return s.function})).then(function(){o.forEach(function(s){s._this.error&&a(s.key,s._this.error)}),n&&n(i)}):n&&n(i)};const du=Symbol("RowContextInjectionKey"),Iv=te({name:"Row",props:{gutter:{type:[Number,Object,Array],default:0},justify:{type:String,default:"start"},align:{type:String,default:"start"},div:{type:Boolean},wrap:{type:Boolean,default:!0}},setup(e){const{gutter:t,align:n,justify:r,div:o,wrap:i}=Re(e),a=oe("row"),s=C(()=>({[`${a}`]:!o.value,[`${a}-nowrap`]:!i.value,[`${a}-align-${n.value}`]:n.value,[`${a}-justify-${r.value}`]:r.value})),l=C(()=>Array.isArray(t.value)?t.value[0]:t.value),u=C(()=>Array.isArray(t.value)?t.value[1]:0),c=yi(l,0),d=yi(u,0),m=C(()=>{const S={};if((c.value||d.value)&&!o.value){const E=-c.value/2,L=-d.value/2;E&&(S.marginLeft=`${E}px`,S.marginRight=`${E}px`),L&&(S.marginTop=`${L}px`,S.marginBottom=`${L}px`)}return S}),_=C(()=>[c.value,d.value]);return $t(du,ze({gutter:_,div:o})),{classNames:s,styles:m}}});function Mv(e,t,n,r,o,i){return x(),ee("div",{class:K(e.classNames),style:$e(e.styles)},[se(e.$slots,"default")],6)}var Rv=ce(Iv,[["render",Mv]]);function Bv(e){return C(()=>{const{val:n,key:r,xs:o,sm:i,md:a,lg:s,xl:l,xxl:u}=e.value;if(!o&&!i&&!a&&!s&&!l&&!u)return 
n;const c={};return pr.forEach(d=>{const m=e.value[d];de(m)?c[d]=m:Be(m)&&de(m[r])&&(c[d]=m[r])}),c})}var Dv=Object.defineProperty,ws=Object.getOwnPropertySymbols,Fv=Object.prototype.hasOwnProperty,jv=Object.prototype.propertyIsEnumerable,ks=(e,t,n)=>t in e?Dv(e,t,{enumerable:!0,configurable:!0,writable:!0,value:n}):e[t]=n,Ao=(e,t)=>{for(var n in t||(t={}))Fv.call(t,n)&&ks(e,n,t[n]);if(ws)for(var n of ws(t))jv.call(t,n)&&ks(e,n,t[n]);return e};function Vv(e){if(Vt(e)&&(["initial","auto","none"].includes(e)||/^\d+$/.test(e))||de(e))return e;if(Vt(e)&&/^\d+(px|em|rem|%)$/.test(e))return`0 0 ${e}`}const xv=te({name:"Col",props:{span:{type:Number,default:24},offset:{type:Number},order:{type:Number},xs:{type:[Number,Object]},sm:{type:[Number,Object]},md:{type:[Number,Object]},lg:{type:[Number,Object]},xl:{type:[Number,Object]},xxl:{type:[Number,Object]},flex:{type:[Number,String]}},setup(e){const t=oe("col"),n=et(du,{}),r=C(()=>Vv(e.flex)),o=C(()=>{const{div:d}=n,{span:m,offset:_,order:S,xs:E,sm:L,md:y,lg:$,xl:w,xxl:h}=e,p={[`${t}`]:!d,[`${t}-order-${S}`]:S,[`${t}-${m}`]:!d&&!E&&!L&&!y&&!$&&!w&&!h,[`${t}-offset-${_}`]:_&&_>0},b={xs:E,sm:L,md:y,lg:$,xl:w,xxl:h};return Object.keys(b).forEach(v=>{const P=b[v];P&&de(P)?p[`${t}-${v}-${P}`]=!0:P&&Be(P)&&(p[`${t}-${v}-${P.span}`]=P.span,p[`${t}-${v}-offset-${P.offset}`]=P.offset,p[`${t}-${v}-order-${P.order}`]=P.order)}),p}),i=C(()=>r.value?t:o.value),a=C(()=>{const{gutter:d,div:m}=n,_={};if(Array.isArray(d)&&!m){const S=d[0]&&d[0]/2||0,E=d[1]&&d[1]/2||0;S&&(_.paddingLeft=`${S}px`,_.paddingRight=`${S}px`),E&&(_.paddingTop=`${E}px`,_.paddingBottom=`${E}px`)}return _}),s=C(()=>r.value?{flex:r.value}:{}),l=C(()=>xn(e,pr)),u=Bv(C(()=>Ao({val:e.span,key:"span"},l.value))),c=yi(u,24,!0);return{visible:C(()=>!!c.value),classNames:i,styles:C(()=>Ao(Ao({},a.value),s.value))}}});function zv(e,t,n,r,o,i){return e.visible?(x(),ee("div",{key:0,class:K(e.classNames),style:$e(e.styles)},[se(e.$slots,"default")],6)):pe("v-if",!0)}var Uv=ce(xv,[["render",zv]]),Wv=Object.defineProperty,$s=Object.getOwnPropertySymbols,Hv=Object.prototype.hasOwnProperty,qv=Object.prototype.propertyIsEnumerable,Os=(e,t,n)=>t in e?Wv(e,t,{enumerable:!0,configurable:!0,writable:!0,value:n}):e[t]=n,Ls=(e,t)=>{for(var n in t||(t={}))Hv.call(t,n)&&Os(e,n,t[n]);if($s)for(var n of $s(t))qv.call(t,n)&&Os(e,n,t[n]);return e};const Gv=te({name:"Tooltip",components:{Trigger:hr},props:{popupVisible:{type:Boolean,default:void 0},defaultPopupVisible:{type:Boolean,default:!1},content:String,position:{type:String,default:"top"},mini:{type:Boolean,default:!1},backgroundColor:{type:String},contentClass:{type:[String,Array,Object]},contentStyle:{type:Object},arrowClass:{type:[String,Array,Object]},arrowStyle:{type:Object},popupContainer:{type:[String,Object]}},emits:{"update:popupVisible":e=>!0,popupVisibleChange:e=>!0},setup(e,{emit:t}){const n=oe("tooltip"),r=H(e.defaultPopupVisible),o=C(()=>{var c;return(c=e.popupVisible)!=null?c:r.value}),i=c=>{r.value=c,t("update:popupVisible",c),t("popupVisibleChange",c)},a=C(()=>[`${n}-content`,e.contentClass,{[`${n}-mini`]:e.mini}]),s=C(()=>{if(e.backgroundColor||e.contentStyle)return Ls({backgroundColor:e.backgroundColor},e.contentStyle)}),l=C(()=>[`${n}-popup-arrow`,e.arrowClass]),u=C(()=>{if(e.backgroundColor||e.arrowStyle)return Ls({backgroundColor:e.backgroundColor},e.arrowStyle)});return{prefixCls:n,computedPopupVisible:o,contentCls:a,computedContentStyle:s,arrowCls:l,computedArrowStyle:u,handlePopupVisibleChange:i}}});function Kv(e,t,n,r,o,i){const 
a=ue("Trigger");return x(),ve(a,{class:K(e.prefixCls),trigger:"hover",position:e.position,"popup-visible":e.computedPopupVisible,"popup-offset":10,"show-arrow":"","content-class":e.contentCls,"content-style":e.computedContentStyle,"arrow-class":e.arrowCls,"arrow-style":e.computedArrowStyle,"popup-container":e.popupContainer,"animation-name":"zoom-in-fade-out","auto-fit-transform-origin":"",role:"tooltip",onPopupVisibleChange:e.handlePopupVisibleChange},{content:ke(()=>[se(e.$slots,"content",{},()=>[nt(Xe(e.content),1)])]),default:ke(()=>[se(e.$slots,"default")]),_:3},8,["class","position","popup-visible","content-class","content-style","arrow-class","arrow-style","popup-container","onPopupVisibleChange"])}var No=ce(Gv,[["render",Kv]]);const fu=Object.assign(No,{install:(e,t)=>{Ve(e,t);const n=je(t);e.component(n+No.name,No)}}),Yv=te({name:"IconQuestionCircle",props:{size:{type:[Number,String]},strokeWidth:{type:Number,default:4},strokeLinecap:{type:String,default:"butt",validator:e=>["butt","round","square"].includes(e)},strokeLinejoin:{type:String,default:"miter",validator:e=>["arcs","bevel","miter","miter-clip","round"].includes(e)},rotate:Number,spin:Boolean},emits:{click:e=>!0},setup(e,{emit:t}){const n=oe("icon"),r=C(()=>[n,`${n}-question-circle`,{[`${n}-spin`]:e.spin}]),o=C(()=>{const a={};return e.size&&(a.fontSize=de(e.size)?`${e.size}px`:e.size),e.rotate&&(a.transform=`rotate(${e.rotate}deg)`),a});return{cls:r,innerStyle:o,onClick:a=>{t("click",a)}}}}),Xv=["stroke-width","stroke-linecap","stroke-linejoin"],Jv=fe("path",{d:"M42 24c0 9.941-8.059 18-18 18S6 33.941 6 24 14.059 6 24 6s18 8.059 18 18Z"},null,-1),Zv=fe("path",{d:"M24.006 31v4.008m0-6.008L24 28c0-3 3-4 4.78-6.402C30.558 19.195 28.288 15 23.987 15c-4.014 0-5.382 2.548-5.388 4.514v.465"},null,-1),Qv=[Jv,Zv];function eg(e,t,n,r,o,i){return x(),ee("svg",{viewBox:"0 0 48 48",fill:"none",xmlns:"http://www.w3.org/2000/svg",stroke:"currentColor",class:K(e.cls),style:$e(e.innerStyle),"stroke-width":e.strokeWidth,"stroke-linecap":e.strokeLinecap,"stroke-linejoin":e.strokeLinejoin,onClick:t[0]||(t[0]=(...a)=>e.onClick&&e.onClick(...a))},Qv,14,Xv)}var Po=ce(Yv,[["render",eg]]);const tg=Object.assign(Po,{install:(e,t)=>{var n;const r=(n=t==null?void 0:t.iconPrefix)!=null?n:"";e.component(r+Po.name,Po)}}),ng=te({name:"FormItemLabel",components:{ResizeObserver:Mn,Tooltip:fu,IconQuestionCircle:tg},props:{required:{type:Boolean,default:!1},showColon:{type:Boolean,default:!1},component:{type:String,default:"label"},asteriskPosition:{type:String,default:"start"},tooltip:{type:String},attrs:Object},setup(){const e=oe("form-item-label"),t=et(Zi,void 0),n=Wt(),r=H(),o=()=>{r.value&&de(r.value.offsetWidth)&&(t==null||t.setLabelWidth(r.value.offsetWidth,n==null?void 0:n.uid))};return Ke(()=>{r.value&&de(r.value.offsetWidth)&&(t==null||t.setLabelWidth(r.value.offsetWidth,n==null?void 0:n.uid))}),Ht(()=>{t==null||t.removeLabelWidth(n==null?void 0:n.uid)}),{prefixCls:e,labelRef:r,handleResize:o}}}),rg=fe("svg",{fill:"currentColor",viewBox:"0 0 1024 1024",width:"1em",height:"1em"},[fe("path",{d:"M583.338667 17.066667c18.773333 0 34.133333 15.36 34.133333 34.133333v349.013333l313.344-101.888a34.133333 34.133333 0 0 1 43.008 22.016l42.154667 129.706667a34.133333 34.133333 0 0 1-21.845334 43.178667l-315.733333 102.4 208.896 287.744a34.133333 34.133333 0 0 1-7.509333 47.786666l-110.421334 80.213334a34.133333 34.133333 0 0 1-47.786666-7.509334L505.685333 706.218667 288.426667 1005.226667a34.133333 34.133333 0 0 1-47.786667 
7.509333l-110.421333-80.213333a34.133333 34.133333 0 0 1-7.509334-47.786667l214.186667-295.253333L29.013333 489.813333a34.133333 34.133333 0 0 1-22.016-43.008l42.154667-129.877333a34.133333 34.133333 0 0 1 43.008-22.016l320.512 104.106667L412.672 51.2c0-18.773333 15.36-34.133333 34.133333-34.133333h136.533334z"})],-1),og=[rg],ig=fe("svg",{fill:"currentColor",viewBox:"0 0 1024 1024",width:"1em",height:"1em"},[fe("path",{d:"M583.338667 17.066667c18.773333 0 34.133333 15.36 34.133333 34.133333v349.013333l313.344-101.888a34.133333 34.133333 0 0 1 43.008 22.016l42.154667 129.706667a34.133333 34.133333 0 0 1-21.845334 43.178667l-315.733333 102.4 208.896 287.744a34.133333 34.133333 0 0 1-7.509333 47.786666l-110.421334 80.213334a34.133333 34.133333 0 0 1-47.786666-7.509334L505.685333 706.218667 288.426667 1005.226667a34.133333 34.133333 0 0 1-47.786667 7.509333l-110.421333-80.213333a34.133333 34.133333 0 0 1-7.509334-47.786667l214.186667-295.253333L29.013333 489.813333a34.133333 34.133333 0 0 1-22.016-43.008l42.154667-129.877333a34.133333 34.133333 0 0 1 43.008-22.016l320.512 104.106667L412.672 51.2c0-18.773333 15.36-34.133333 34.133333-34.133333h136.533334z"})],-1),ag=[ig];function sg(e,t,n,r,o,i){const a=ue("icon-question-circle"),s=ue("Tooltip"),l=ue("ResizeObserver");return x(),ve(l,{onResize:e.handleResize},{default:ke(()=>[(x(),ve(sn(e.component),Fe({ref:"labelRef",class:e.prefixCls},e.attrs),{default:ke(()=>[e.required&&e.asteriskPosition==="start"?(x(),ee("strong",{key:0,class:K(`${e.prefixCls}-required-symbol`)},og,2)):pe("v-if",!0),se(e.$slots,"default"),e.tooltip?(x(),ve(s,{key:1,content:e.tooltip},{default:ke(()=>[z(a,{class:K(`${e.prefixCls}-tooltip`)},null,8,["class"])]),_:1},8,["content"])):pe("v-if",!0),e.required&&e.asteriskPosition==="end"?(x(),ee("strong",{key:2,class:K(`${e.prefixCls}-required-symbol`)},ag,2)):pe("v-if",!0),nt(" "+Xe(e.showColon?":":""),1)]),_:3},16,["class"]))]),_:3},8,["onResize"])}var lg=ce(ng,[["render",sg]]);const ug=te({name:"FormItemMessage",props:{error:Array,help:String},setup(){return{prefixCls:oe("form-item-message")}}});function cg(e,t,n,r,o,i){return e.help||e.$slots.help?(x(),ve(Pn,{key:0,name:"form-blink",appear:""},{default:ke(()=>[fe("div",{class:K([e.prefixCls,`${e.prefixCls}-help`])},[se(e.$slots,"help",{},()=>[nt(Xe(e.help),1)])],2)]),_:3})):(x(!0),ee(rt,{key:1},Fn(e.error,a=>(x(),ve(Pn,{key:a,name:"form-blink",appear:""},{default:ke(()=>[fe("div",{role:"alert",class:K([e.prefixCls])},Xe(a),3)]),_:2},1024))),128))}var dg=ce(ug,[["render",cg]]);const Ts=["success","warning","error","validating"],fg=e=>{let t="";for(const n of Object.keys(e)){const r=e[n];r&&(!t||Ts.indexOf(r)>Ts.indexOf(t))&&(t=e[n])}return t},hg=e=>{const t=[];for(const n of Object.keys(e)){const r=e[n];r&&t.push(r)}return t};var pg=Object.defineProperty,mr=Object.getOwnPropertySymbols,hu=Object.prototype.hasOwnProperty,pu=Object.prototype.propertyIsEnumerable,As=(e,t,n)=>t in e?pg(e,t,{enumerable:!0,configurable:!0,writable:!0,value:n}):e[t]=n,Ns=(e,t)=>{for(var n in t||(t={}))hu.call(t,n)&&As(e,n,t[n]);if(mr)for(var n of mr(t))pu.call(t,n)&&As(e,n,t[n]);return e},mg=(e,t)=>{var n={};for(var r in e)hu.call(e,r)&&t.indexOf(r)<0&&(n[r]=e[r]);if(e!=null&&mr)for(var r of mr(e))t.indexOf(r)<0&&pu.call(e,r)&&(n[r]=e[r]);return n};const 
vg=te({name:"FormItem",components:{ArcoRow:Rv,ArcoCol:Uv,FormItemLabel:lg,FormItemMessage:dg},props:{field:{type:String,default:""},label:String,tooltip:{type:String},showColon:{type:Boolean,default:!1},noStyle:{type:Boolean,default:!1},disabled:{type:Boolean,default:void 0},help:String,extra:String,required:{type:Boolean,default:!1},asteriskPosition:{type:String,default:"start"},rules:{type:[Object,Array]},validateStatus:{type:String},validateTrigger:{type:[String,Array],default:"change"},labelColProps:Object,wrapperColProps:Object,hideLabel:{type:Boolean,default:!1},hideAsterisk:{type:Boolean,default:!1},labelColStyle:Object,wrapperColStyle:Object,rowProps:Object,rowClass:[String,Array,Object],contentClass:[String,Array,Object],contentFlex:{type:Boolean,default:!0},mergeProps:{type:[Boolean,Function],default:!0},labelColFlex:{type:[Number,String]},feedback:{type:Boolean,default:!1},labelComponent:{type:String,default:"label"},labelAttrs:Object},setup(e){const t=oe("form-item"),{field:n}=Re(e),r=et(Zi,{}),{autoLabelWidth:o,layout:i}=Re(r),a=C(()=>{var B;const I=Ns({},(B=e.labelColProps)!=null?B:r.labelColProps);return e.labelColFlex?I.flex=e.labelColFlex:r.autoLabelWidth&&(I.flex=`${r.maxLabelWidth}px`),I}),s=C(()=>{var B;const I=Ns({},(B=e.wrapperColProps)!=null?B:r.wrapperColProps);return(e.labelColFlex||r.autoLabelWidth)&&(I.flex="auto"),I}),l=C(()=>{var B;return(B=e.labelColStyle)!=null?B:r.labelColStyle}),u=C(()=>{var B;return(B=e.wrapperColStyle)!=null?B:r.wrapperColStyle}),c=ys(r.model,e.field),d=ze({}),m=ze({}),_=C(()=>fg(d)),S=C(()=>hg(m)),E=H(!1),L=C(()=>ys(r.model,e.field)),y=C(()=>{var B;return!!((B=e.disabled)!=null?B:r!=null&&r.disabled)}),$=C(()=>{var B;return(B=e.validateStatus)!=null?B:_.value}),w=C(()=>$.value==="error"),h=C(()=>{var B,I,Y;const Q=[].concat((Y=(I=e.rules)!=null?I:(B=r==null?void 0:r.rules)==null?void 0:B[e.field])!=null?Y:[]),he=Q.some(me=>me.required);return e.required&&!he?[{required:!0}].concat(Q):Q}),p=C(()=>h.value.some(B=>B.required)),b=e.noStyle?et(pi,void 0):void 0,v=(B,{status:I,message:Y})=>{d[B]=I,m[B]=Y,e.noStyle&&(b==null||b.updateValidateState(B,{status:I,message:Y}))},P=C(()=>e.feedback&&$.value?$.value:void 0),A=()=>{if(E.value)return Promise.resolve();const B=h.value;if(!n.value||B.length===0)return _.value&&J(),Promise.resolve();const I=n.value,Y=L.value;v(I,{status:"",message:""});const Q=new cu({[I]:B.map(he=>{var me=mg(he,[]);return!me.type&&!me.validator&&(me.type="string"),me})},{ignoreEmptyString:!0});return new Promise(he=>{Q.validate({[I]:Y},me=>{var Se;const Ae=!!(me!=null&&me[I]);v(I,{status:Ae?"error":"",message:(Se=me==null?void 0:me[I].message)!=null?Se:""});const Ie=Ae?{label:e.label,field:n.value,value:me[I].value,type:me[I].type,isRequiredError:!!me[I].requiredError,message:me[I].message}:void 0;he(Ie)})})},T=C(()=>[].concat(e.validateTrigger)),j=C(()=>T.value.reduce((B,I)=>{switch(I){case"change":return B.onChange=()=>{A()},B;case"input":return B.onInput=()=>{Je(()=>{A()})},B;case"focus":return B.onFocus=()=>{A()},B;case"blur":return B.onBlur=()=>{A()},B;default:return B}},{}));$t(pi,ze({eventHandlers:j,size:r&&cr(r,"size"),disabled:y,error:w,feedback:P,updateValidateState:v}));const J=()=>{n.value&&v(n.value,{status:"",message:""})},N=ze({field:n,disabled:y,error:w,validate:A,clearValidate:J,resetField:()=>{J(),E.value=!0,r!=null&&r.model&&n.value&&bs(r.model,n.value,c),Je(()=>{E.value=!1})},setField:B=>{var I,Y;n.value&&(E.value=!0,"value"in 
B&&(r!=null&&r.model)&&n.value&&bs(r.model,n.value,B.value),(B.status||B.message)&&v(n.value,{status:(I=B.status)!=null?I:"",message:(Y=B.message)!=null?Y:""}),Je(()=>{E.value=!1}))}});Ke(()=>{var B;N.field&&((B=r.addField)==null||B.call(r,N))}),Ht(()=>{var B;N.field&&((B=r.removeField)==null||B.call(r,N))});const D=C(()=>[t,`${t}-layout-${r.layout}`,{[`${t}-error`]:w.value,[`${t}-status-${$.value}`]:!!$.value},e.rowClass]),V=C(()=>[`${t}-label-col`,{[`${t}-label-col-left`]:r.labelAlign==="left",[`${t}-label-col-flex`]:r.autoLabelWidth||e.labelColFlex}]),G=C(()=>[`${t}-wrapper-col`,{[`${t}-wrapper-col-flex`]:!s.value}]);return{prefixCls:t,cls:D,isRequired:p,isError:w,finalMessage:S,mergedLabelCol:a,mergedWrapperCol:s,labelColCls:V,autoLabelWidth:o,layout:i,mergedLabelStyle:l,wrapperColCls:G,mergedWrapperStyle:u}}});function gg(e,t,n,r,o,i){var a;const s=ue("FormItemLabel"),l=ue("ArcoCol"),u=ue("FormItemMessage"),c=ue("ArcoRow");return e.noStyle?se(e.$slots,"default",{key:0}):(x(),ve(c,Fe({key:1,class:[e.cls,{[`${e.prefixCls}-has-help`]:!!((a=e.$slots.help)!=null?a:e.help)}],wrap:!(e.labelColFlex||e.autoLabelWidth),div:e.layout!=="horizontal"||e.hideLabel},e.rowProps),{default:ke(()=>[e.hideLabel?pe("v-if",!0):(x(),ve(l,Fe({key:0,class:e.labelColCls,style:e.mergedLabelStyle},e.mergedLabelCol),{default:ke(()=>[z(s,{required:e.hideAsterisk?!1:e.isRequired,"show-colon":e.showColon,"asterisk-position":e.asteriskPosition,component:e.labelComponent,attrs:e.labelAttrs,tooltip:e.tooltip},{default:ke(()=>[e.$slots.label||e.label?se(e.$slots,"label",{key:0},()=>[nt(Xe(e.label),1)]):pe("v-if",!0)]),_:3},8,["required","show-colon","asterisk-position","component","attrs","tooltip"])]),_:3},16,["class","style"])),z(l,Fe({class:e.wrapperColCls,style:e.mergedWrapperStyle},e.mergedWrapperCol),{default:ke(()=>[fe("div",{class:K(`${e.prefixCls}-content-wrapper`)},[fe("div",{class:K([`${e.prefixCls}-content`,{[`${e.prefixCls}-content-flex`]:e.contentFlex},e.contentClass])},[se(e.$slots,"default")],2)],2),e.isError||e.$slots.help||e.help?(x(),ve(u,{key:0,error:e.finalMessage,help:e.help},Pl({_:2},[e.$slots.help?{name:"help",fn:ke(()=>[se(e.$slots,"help")])}:void 0]),1032,["error","help"])):pe("v-if",!0),e.$slots.extra||e.extra?(x(),ee("div",{key:1,class:K(`${e.prefixCls}-extra`)},[se(e.$slots,"extra",{},()=>[nt(Xe(e.extra),1)])],2)):pe("v-if",!0)]),_:3},16,["class","style"])]),_:3},16,["class","wrap","div"]))}var Io=ce(vg,[["render",gg]]);const eS=Object.assign(To,{Item:Io,install:(e,t)=>{Ve(e,t);const n=je(t);e.component(n+To.name,To),e.component(n+Io.name,Io)}});function na(e,t){return t===void 0&&(t=15),+parseFloat(Number(e).toPrecision(t))}function wt(e){var t=e.toString().split(/[eE]/),n=(t[0].split(".")[1]||"").length-+(t[1]||0);return n>0?n:0}function Bn(e){if(e.toString().indexOf("e")===-1)return Number(e.toString().replace(".",""));var t=wt(e);return t>0?na(Number(e)*Math.pow(10,t)):Number(e)}function Ci(e){vu&&(e>Number.MAX_SAFE_INTEGER||e["butt","round","square"].includes(e)},strokeLinejoin:{type:String,default:"miter",validator:e=>["arcs","bevel","miter","miter-clip","round"].includes(e)},rotate:Number,spin:Boolean},emits:{click:e=>!0},setup(e,{emit:t}){const n=oe("icon"),r=C(()=>[n,`${n}-plus`,{[`${n}-spin`]:e.spin}]),o=C(()=>{const a={};return e.size&&(a.fontSize=de(e.size)?`${e.size}px`:e.size),e.rotate&&(a.transform=`rotate(${e.rotate}deg)`),a});return{cls:r,innerStyle:o,onClick:a=>{t("click",a)}}}}),Eg=["stroke-width","stroke-linecap","stroke-linejoin"],wg=fe("path",{d:"M5 24h38M24 
5v38"},null,-1),kg=[wg];function $g(e,t,n,r,o,i){return x(),ee("svg",{viewBox:"0 0 48 48",fill:"none",xmlns:"http://www.w3.org/2000/svg",stroke:"currentColor",class:K(e.cls),style:$e(e.innerStyle),"stroke-width":e.strokeWidth,"stroke-linecap":e.strokeLinecap,"stroke-linejoin":e.strokeLinejoin,onClick:t[0]||(t[0]=(...a)=>e.onClick&&e.onClick(...a))},kg,14,Eg)}var Mo=ce(Sg,[["render",$g]]);const Og=Object.assign(Mo,{install:(e,t)=>{var n;const r=(n=t==null?void 0:t.iconPrefix)!=null?n:"";e.component(r+Mo.name,Mo)}}),Lg=te({name:"IconMinus",props:{size:{type:[Number,String]},strokeWidth:{type:Number,default:4},strokeLinecap:{type:String,default:"butt",validator:e=>["butt","round","square"].includes(e)},strokeLinejoin:{type:String,default:"miter",validator:e=>["arcs","bevel","miter","miter-clip","round"].includes(e)},rotate:Number,spin:Boolean},emits:{click:e=>!0},setup(e,{emit:t}){const n=oe("icon"),r=C(()=>[n,`${n}-minus`,{[`${n}-spin`]:e.spin}]),o=C(()=>{const a={};return e.size&&(a.fontSize=de(e.size)?`${e.size}px`:e.size),e.rotate&&(a.transform=`rotate(${e.rotate}deg)`),a});return{cls:r,innerStyle:o,onClick:a=>{t("click",a)}}}}),Tg=["stroke-width","stroke-linecap","stroke-linejoin"],Ag=fe("path",{d:"M5 24h38"},null,-1),Ng=[Ag];function Pg(e,t,n,r,o,i){return x(),ee("svg",{viewBox:"0 0 48 48",fill:"none",xmlns:"http://www.w3.org/2000/svg",stroke:"currentColor",class:K(e.cls),style:$e(e.innerStyle),"stroke-width":e.strokeWidth,"stroke-linecap":e.strokeLinecap,"stroke-linejoin":e.strokeLinejoin,onClick:t[0]||(t[0]=(...a)=>e.onClick&&e.onClick(...a))},Ng,14,Tg)}var Ro=ce(Lg,[["render",Pg]]);const Ig=Object.assign(Ro,{install:(e,t)=>{var n;const r=(n=t==null?void 0:t.iconPrefix)!=null?n:"";e.component(r+Ro.name,Ro)}}),Mg=150;Qt.enableBoundaryChecking(!1);var Bo=te({name:"InputNumber",props:{modelValue:Number,defaultValue:Number,mode:{type:String,default:"embed"},precision:Number,step:{type:Number,default:1},disabled:{type:Boolean,default:!1},error:{type:Boolean,default:!1},max:{type:Number,default:1/0},min:{type:Number,default:-1/0},formatter:{type:Function},parser:{type:Function},placeholder:String,hideButton:{type:Boolean,default:!1},size:{type:String},allowClear:{type:Boolean,default:!1},modelEvent:{type:String,default:"change"},readOnly:{type:Boolean,default:!1}},emits:{"update:modelValue":e=>!0,change:(e,t)=>!0,focus:e=>!0,blur:e=>!0,clear:e=>!0,input:(e,t,n)=>!0},setup(e,{emit:t,slots:n}){var r;const{size:o,disabled:i}=Re(e),a=oe("input-number"),s=H(),{mergedSize:l,mergedDisabled:u,eventHandlers:c}=yt({size:o,disabled:i}),{mergedSize:d}=Mt(l),m=C(()=>{if(de(e.precision)){const I=`${e.step}`.split(".")[1],Y=I&&I.length||0;return Math.max(Y,e.precision)}}),_=I=>{var Y,Q;if(!de(I))return"";const he=m.value?I.toFixed(m.value):String(I);return(Q=(Y=e.formatter)==null?void 0:Y.call(e,he))!=null?Q:he},S=H(_((r=e.modelValue)!=null?r:e.defaultValue)),E=C(()=>{var I,Y;if(!S.value)return;const Q=Number((Y=(I=e.parser)==null?void 0:I.call(e,S.value))!=null?Y:S.value);return Number.isNaN(Q)?void 0:Q}),L=H(de(E.value)&&E.value<=e.min),y=H(de(E.value)&&E.value>=e.max);let $=0;const w=()=>{$&&(window.clearTimeout($),$=0)},h=I=>{if(!vt(I))return de(e.min)&&Ie.max&&(I=e.max),de(m.value)?Qt.round(I,m.value):I},p=I=>{let Y=!1,Q=!1;de(I)&&(I<=e.min&&(Y=!0),I>=e.max&&(Q=!0)),y.value!==Q&&(y.value=Q),L.value!==Y&&(L.value=Y)},b=()=>{const I=h(E.value),Y=_(I);(I!==E.value||S.value!==Y)&&(S.value=Y),t("update:modelValue",I)};Ne(()=>e.min,I=>{const 
Y=de(E.value)&&E.value<=I;L.value!==Y&&(L.value=Y),de(E.value)&&E.valuee.max,I=>{const Y=de(E.value)&&E.value>=I;y.value!==Y&&(y.value=Y),de(E.value)&&E.value>I&&b()});const v=(I,Y)=>{if(u.value||I==="plus"&&y.value||I==="minus"&&L.value)return;let Q;de(E.value)?Q=h(Qt[I](E.value,e.step)):Q=e.min===-1/0?0:e.min,S.value=_(Q),p(Q),t("update:modelValue",Q),t("change",Q,Y)},P=(I,Y,Q=!1)=>{var he;I.preventDefault(),(he=s.value)==null||he.focus(),v(Y,I),Q&&($=window.setTimeout(()=>I.target.dispatchEvent(I),Mg))},A=(I,Y)=>{var Q,he,me,Se;I=I.trim().replace(/。/g,"."),I=(he=(Q=e.parser)==null?void 0:Q.call(e,I))!=null?he:I,(de(Number(I))||/^(\.|-)$/.test(I))&&(S.value=(Se=(me=e.formatter)==null?void 0:me.call(e,I))!=null?Se:I,p(E.value),e.modelEvent==="input"&&t("update:modelValue",E.value),t("input",E.value,S.value,Y))},T=I=>{t("focus",I)},j=(I,Y)=>{const Q=h(E.value),he=_(Q);(Q!==E.value||S.value!==he)&&(S.value=he,p(Q)),Je(()=>{de(e.modelValue)&&e.modelValue!==Q&&(S.value=_(e.modelValue),p(e.modelValue))}),t("update:modelValue",Q),t("change",Q,Y)},J=I=>{t("blur",I)},U=I=>{var Y,Q;S.value="",t("update:modelValue",void 0),t("change",void 0,I),(Q=(Y=c.value)==null?void 0:Y.onChange)==null||Q.call(Y,I),t("clear",I)},O=ou(new Map([[an.ARROW_UP,I=>{I.preventDefault(),!e.readOnly&&v("plus",I)}],[an.ARROW_DOWN,I=>{I.preventDefault(),!e.readOnly&&v("minus",I)}]]));Ne(()=>e.modelValue,I=>{I!==E.value&&(S.value=_(I),p(I))});const N=()=>{var I;return e.readOnly?null:z(rt,null,[(I=n.suffix)==null?void 0:I.call(n),z("div",{class:`${a}-step`},[z("button",{class:[`${a}-step-button`,{[`${a}-step-button-disabled`]:u.value||y.value}],type:"button",tabindex:"-1",disabled:u.value||y.value,onMousedown:Y=>P(Y,"plus",!0),onMouseup:w,onMouseleave:w},[z(Gm,null,null)]),z("button",{class:[`${a}-step-button`,{[`${a}-step-button-disabled`]:u.value||L.value}],type:"button",tabindex:"-1",disabled:u.value||L.value,onMousedown:Y=>P(Y,"minus",!0),onMouseup:w,onMouseleave:w},[z(su,null,null)])])])},D=C(()=>[a,`${a}-mode-${e.mode}`,`${a}-size-${d.value}`,{[`${a}-readonly`]:e.readOnly}]),V=()=>z(vi,{size:d.value,tabindex:"-1",class:`${a}-step-button`,disabled:u.value||L.value,onMousedown:I=>P(I,"minus",!0),onMouseup:w,onMouseleave:w},{icon:()=>z(Ig,null,null)}),G=()=>z(vi,{size:d.value,tabindex:"-1",class:`${a}-step-button`,disabled:u.value||y.value,onMousedown:I=>P(I,"plus",!0),onMouseup:w,onMouseleave:w},{icon:()=>z(Og,null,null)});return{inputRef:s,render:()=>{const I=e.mode==="embed"?{prepend:n.prepend,prefix:n.prefix,suffix:e.hideButton?n.suffix:N,append:n.append}:{prepend:V,prefix:n.prefix,suffix:n.suffix,append:G};return z(lh,{key:`__arco__${e.mode}`,ref:s,class:D.value,type:"text",allowClear:e.allowClear,size:d.value,modelValue:S.value,placeholder:e.placeholder,disabled:u.value,readonly:e.readOnly,error:e.error,inputAttrs:{role:"spinbutton","aria-valuemax":e.max,"aria-valuemin":e.min,"aria-valuenow":S.value},onInput:A,onFocus:T,onBlur:J,onClear:U,onChange:j,onKeydown:O},I)}}},methods:{focus(){var e;(e=this.inputRef)==null||e.focus()},blur(){var e;(e=this.inputRef)==null||e.blur()}},render(){return this.render()}});const Rg=Object.assign(Bo,{install:(e,t)=>{Ve(e,t);const n=je(t);e.component(n+Bo.name,Bo)}}),Bg=["border-width","box-sizing","font-family","font-weight","font-size","font-variant","letter-spacing","line-height","padding-top","padding-bottom","padding-left","padding-right","text-indent","text-rendering","text-transform","white-space","overflow-wrap","width"],Dg=e=>{const t={};return 
Bg.forEach(n=>{t[n]=e.getPropertyValue(n)}),t},Fg=te({name:"Textarea",components:{ResizeObserver:Gl,IconHover:gt,IconClose:qt},inheritAttrs:!1,props:{modelValue:String,defaultValue:{type:String,default:""},placeholder:String,disabled:{type:Boolean,default:!1},error:{type:Boolean,default:!1},maxLength:{type:[Number,Object],default:0},showWordLimit:{type:Boolean,default:!1},allowClear:{type:Boolean,default:!1},autoSize:{type:[Boolean,Object],default:!1},wordLength:{type:Function},wordSlice:{type:Function}},emits:{"update:modelValue":e=>!0,input:(e,t)=>!0,change:(e,t)=>!0,clear:e=>!0,focus:e=>!0,blur:e=>!0},setup(e,{emit:t,attrs:n}){const{disabled:r,error:o,modelValue:i}=Re(e),a=oe("textarea"),{mergedDisabled:s,mergedError:l,eventHandlers:u}=yt({disabled:r,error:o}),c=H(),d=H(),m=H(),_=H(),S=H(e.defaultValue),E=C(()=>{var g;return(g=i.value)!=null?g:S.value}),[L,y]=Xl(c);Ne(i,g=>{(vt(g)||jn(g))&&(S.value="")});const $=C(()=>Be(e.maxLength)&&!!e.maxLength.errorOnly),w=C(()=>Be(e.maxLength)?e.maxLength.length:e.maxLength),h=g=>{var f;return it(e.wordLength)?e.wordLength(g):(f=g.length)!=null?f:0},p=C(()=>h(E.value)),b=C(()=>l.value||!!(w.value&&$.value&&p.value>w.value)),v=H(!1),P=H(!1),A=C(()=>e.allowClear&&!s.value&&E.value),T=H(!1),j=H(""),J=()=>{L(),Je(()=>{c.value&&E.value!==c.value.value&&(c.value.value=E.value,y())})},U=(g,f=!0)=>{var k,q;w.value&&!$.value&&h(g)>w.value&&(g=(q=(k=e.wordSlice)==null?void 0:k.call(e,g,w.value))!=null?q:g.slice(0,w.value)),S.value=g,f&&t("update:modelValue",g),J()};let O=E.value;const N=(g,f)=>{var k,q;g!==O&&(O=g,t("change",g,f),(q=(k=u.value)==null?void 0:k.onChange)==null||q.call(k,f))},D=g=>{var f,k;P.value=!0,O=E.value,t("focus",g),(k=(f=u.value)==null?void 0:f.onFocus)==null||k.call(f,g)},V=g=>{var f,k;P.value=!1,t("blur",g),(k=(f=u.value)==null?void 0:f.onBlur)==null||k.call(f,g),N(E.value,g)},G=g=>{var f,k;const{value:q}=g.target;if(g.type==="compositionend"){if(T.value=!1,j.value="",w.value&&!$.value&&E.value.length>=w.value&&h(q)>w.value){J();return}t("input",q,g),U(q),(k=(f=u.value)==null?void 0:f.onInput)==null||k.call(f,g)}else T.value=!0},B=g=>{var f,k;const{value:q}=g.target;if(T.value)j.value=q;else{if(w.value&&!$.value&&E.value.length>=w.value&&h(q)>w.value&&g.inputType==="insertText"){J();return}t("input",q,g),U(q),(k=(f=u.value)==null?void 0:f.onInput)==null||k.call(f,g)}},I=g=>{U(""),N("",g),t("clear",g)};Ne(i,g=>{g!==E.value&&U(g??"",!1)});const Y=g=>Vn(n,zt),Q=g=>xn(n,zt),he=C(()=>[`${a}-wrapper`,{[`${a}-focus`]:P.value,[`${a}-disabled`]:s.value,[`${a}-error`]:b.value,[`${a}-scroll`]:v.value}]);let me;const Se=H(0),Ae=H(0),Ie=C(()=>!Be(e.autoSize)||!e.autoSize.minRows?0:e.autoSize.minRows*Se.value+Ae.value),Te=C(()=>!Be(e.autoSize)||!e.autoSize.maxRows?0:e.autoSize.maxRows*Se.value+Ae.value),we=()=>{const g=Dg(me);Se.value=Number.parseInt(g["line-height"]||0,10),Ae.value=Number.parseInt(g["border-width"]||0,10)*2+Number.parseInt(g["padding-top"]||0,10)+Number.parseInt(g["padding-bottom"]||0,10),_.value=g,Je(()=>{var f;const k=(f=m.value)==null?void 0:f.offsetHeight;let q=k??0,ne="hidden";Ie.value&&qTe.value&&(q=Te.value,ne="auto"),d.value={height:`${q}px`,resize:"none",overflow:ne}})};Ke(()=>{c.value&&(me=window.getComputedStyle(c.value),e.autoSize&&we()),re()});const Me=()=>{e.autoSize&&m.value&&we(),re()},xe=g=>{c.value&&g.target!==c.value&&(g.preventDefault(),c.value.focus())},re=()=>{c.value&&(c.value.scrollHeight>c.value.offsetHeight?v.value||(v.value=!0):v.value&&(v.value=!1))};return 
Ne(E,()=>{e.autoSize&&m.value&&we(),re()}),{prefixCls:a,wrapperCls:he,textareaRef:c,textareaStyle:d,mirrorRef:m,mirrorStyle:_,computedValue:E,showClearBtn:A,valueLength:p,computedMaxLength:w,mergedDisabled:s,getWrapperAttrs:Y,getTextareaAttrs:Q,handleInput:B,handleFocus:D,handleBlur:V,handleComposition:G,handleClear:I,handleResize:Me,handleMousedown:xe}},methods:{focus(){var e;(e=this.$refs.textareaRef)==null||e.focus()},blur(){var e;(e=this.$refs.textareaRef)==null||e.blur()}}}),jg=["disabled","value","placeholder"];function Vg(e,t,n,r,o,i){const a=ue("resize-observer"),s=ue("icon-close"),l=ue("icon-hover");return x(),ee("div",Fe(e.getWrapperAttrs(e.$attrs),{class:e.wrapperCls,onMousedown:t[7]||(t[7]=(...u)=>e.handleMousedown&&e.handleMousedown(...u))}),[e.autoSize?(x(),ee("div",{key:0,ref:"mirrorRef",class:K(`${e.prefixCls}-mirror`),style:$e(e.mirrorStyle)},Xe(`${e.computedValue} -`),7)):pe("v-if",!0),z(a,{onResize:e.handleResize},{default:ke(()=>[fe("textarea",Fe({ref:"textareaRef"},e.getTextareaAttrs(e.$attrs),{disabled:e.mergedDisabled,class:e.prefixCls,style:e.textareaStyle,value:e.computedValue,placeholder:e.placeholder,onInput:t[0]||(t[0]=(...u)=>e.handleInput&&e.handleInput(...u)),onFocus:t[1]||(t[1]=(...u)=>e.handleFocus&&e.handleFocus(...u)),onBlur:t[2]||(t[2]=(...u)=>e.handleBlur&&e.handleBlur(...u)),onCompositionstart:t[3]||(t[3]=(...u)=>e.handleComposition&&e.handleComposition(...u)),onCompositionupdate:t[4]||(t[4]=(...u)=>e.handleComposition&&e.handleComposition(...u)),onCompositionend:t[5]||(t[5]=(...u)=>e.handleComposition&&e.handleComposition(...u))}),null,16,jg)]),_:1},8,["onResize"]),se(e.$slots,"suffix"),e.computedMaxLength&&e.showWordLimit?(x(),ee("div",{key:1,class:K(`${e.prefixCls}-word-limit`)},Xe(e.valueLength)+"/"+Xe(e.computedMaxLength),3)):pe("v-if",!0),e.showClearBtn?(x(),ee("div",{key:2,class:K(`${e.prefixCls}-clear-btn`),onClick:t[6]||(t[6]=(...u)=>e.handleClear&&e.handleClear(...u))},[z(l,null,{default:ke(()=>[z(s)]),_:1})],2)):pe("v-if",!0)],16)}var Do=ce(Fg,[["render",Vg]]);const tS=Object.assign(Do,{install:(e,t)=>{Ve(e,t);const n=je(t);e.component(n+Do.name,Do)}}),xg=te({name:"Message",components:{AIconHover:gt,IconInfoCircleFill:Yl,IconCheckCircleFill:Gi,IconExclamationCircleFill:Ki,IconCloseCircleFill:Yi,IconClose:qt,IconLoading:It},props:{type:{type:String,default:"info"},closable:{type:Boolean,default:!1},showIcon:{type:Boolean,default:!0},duration:{type:Number,default:3e3},resetOnUpdate:{type:Boolean,default:!1},resetOnHover:{type:Boolean,default:!1}},emits:["close"],setup(e,{emit:t}){const n=oe("message");let r=0;const o=()=>{t("close")},i=()=>{e.duration>0&&(r=window.setTimeout(o,e.duration))},a=()=>{r&&(window.clearTimeout(r),r=0)};return Ke(()=>{i()}),vn(()=>{e.resetOnUpdate&&(a(),i())}),Sr(()=>{a()}),{handleMouseEnter:()=>{e.resetOnHover&&a()},handleMouseLeave:()=>{e.resetOnHover&&i()},prefixCls:n,handleClose:o}}});function zg(e,t,n,r,o,i){const a=ue("icon-info-circle-fill"),s=ue("icon-check-circle-fill"),l=ue("icon-exclamation-circle-fill"),u=ue("icon-close-circle-fill"),c=ue("icon-loading"),d=ue("icon-close"),m=ue("a-icon-hover");return 
x(),ee("li",{role:"alert",class:K([e.prefixCls,`${e.prefixCls}-${e.type}`,{[`${e.prefixCls}-closable`]:e.closable}]),onMouseenter:t[1]||(t[1]=(..._)=>e.handleMouseEnter&&e.handleMouseEnter(..._)),onMouseleave:t[2]||(t[2]=(..._)=>e.handleMouseLeave&&e.handleMouseLeave(..._))},[e.showIcon&&!(e.type==="normal"&&!e.$slots.icon)?(x(),ee("span",{key:0,class:K(`${e.prefixCls}-icon`)},[se(e.$slots,"icon",{},()=>[e.type==="info"?(x(),ve(a,{key:0})):e.type==="success"?(x(),ve(s,{key:1})):e.type==="warning"?(x(),ve(l,{key:2})):e.type==="error"?(x(),ve(u,{key:3})):e.type==="loading"?(x(),ve(c,{key:4})):pe("v-if",!0)])],2)):pe("v-if",!0),fe("span",{class:K(`${e.prefixCls}-content`)},[se(e.$slots,"default")],2),e.closable?(x(),ee("span",{key:1,class:K(`${e.prefixCls}-close-btn`),onClick:t[0]||(t[0]=(..._)=>e.handleClose&&e.handleClose(..._))},[z(m,null,{default:ke(()=>[z(d)]),_:1})],2)):pe("v-if",!0)],34)}var Ug=ce(xg,[["render",zg]]);function Wg(e){return typeof e=="function"||Object.prototype.toString.call(e)==="[object Object]"&&!Ml(e)}var Hg=te({name:"MessageList",props:{messages:{type:Array,default:()=>[]},position:{type:String,default:"top"}},emits:["close","afterClose"],setup(e,t){const n=oe("message-list"),{zIndex:r}=Ql("message",{runOnMounted:!0});return()=>{let o;return z(Il,{class:[n,`${n}-${e.position}`],name:"fade-message",tag:"ul",style:{zIndex:r.value},onAfterLeave:()=>t.emit("afterClose")},Wg(o=e.messages.map(i=>{const a={default:Ia(i.content),icon:Ia(i.icon)};return z(Ug,{key:i.id,type:i.type,duration:i.duration,closable:i.closable,resetOnUpdate:i.resetOnUpdate,resetOnHover:i.resetOnHover,onClose:()=>t.emit("close",i.id)},a)}))?o:{default:()=>[o]})}}}),qg=Object.defineProperty,Gg=Object.defineProperties,Kg=Object.getOwnPropertyDescriptors,Ps=Object.getOwnPropertySymbols,Yg=Object.prototype.hasOwnProperty,Xg=Object.prototype.propertyIsEnumerable,Is=(e,t,n)=>t in e?qg(e,t,{enumerable:!0,configurable:!0,writable:!0,value:n}):e[t]=n,vr=(e,t)=>{for(var n in t||(t={}))Yg.call(t,n)&&Is(e,n,t[n]);if(Ps)for(var n of Ps(t))Xg.call(t,n)&&Is(e,n,t[n]);return e},gu=(e,t)=>Gg(e,Kg(t));class Jg{constructor(t,n){this.messageCount=0,this.add=i=>{var a;this.messageCount++;const s=(a=i.id)!=null?a:`__arco_message_${this.messageCount}`;if(this.messageIds.has(s))return this.update(s,i);const l=ze(vr({id:s},i));return this.messages.value.push(l),this.messageIds.add(s),{close:()=>this.remove(s)}},this.update=(i,a)=>{for(let s=0;sthis.remove(i)}},this.remove=i=>{for(let a=0;a{this.messages.value.splice(0)},this.destroy=()=>{this.messages.value.length===0&&this.container&&(Oa(null,this.container),document.body.removeChild(this.container),this.container=null,un[this.position]=void 0)};const{position:r="top"}=t;this.container=Fd("message"),this.messageIds=new Set,this.messages=H([]),this.position=r;const o=z(Hg,{messages:this.messages.value,position:r,onClose:this.remove,onAfterClose:this.destroy});(n??Ms._context)&&(o.appContext=n??Ms._context),Oa(o,this.container),document.body.appendChild(this.container)}}const un={},yu=[...gf,"loading","normal"],rr=yu.reduce((e,t)=>(e[t]=(n,r)=>{Vt(n)&&(n={content:n});const o=vr({type:t},n),{position:i="top"}=o;return un[i]||(un[i]=new Jg(o,r)),un[i].add(o)},e),{});rr.clear=e=>{var t;e?(t=un[e])==null||t.clear():Object.values(un).forEach(n=>n==null?void 0:n.clear())};const Ms=gu(vr({},rr),{install:e=>{const t={clear:rr.clear};for(const n of 
yu)t[n]=(r,o=e._context)=>rr[n](r,o);e.config.globalProperties.$message=t},_context:null}),Zg=te({name:"IconCheck",props:{size:{type:[Number,String]},strokeWidth:{type:Number,default:4},strokeLinecap:{type:String,default:"butt",validator:e=>["butt","round","square"].includes(e)},strokeLinejoin:{type:String,default:"miter",validator:e=>["arcs","bevel","miter","miter-clip","round"].includes(e)},rotate:Number,spin:Boolean},emits:{click:e=>!0},setup(e,{emit:t}){const n=oe("icon"),r=C(()=>[n,`${n}-check`,{[`${n}-spin`]:e.spin}]),o=C(()=>{const a={};return e.size&&(a.fontSize=de(e.size)?`${e.size}px`:e.size),e.rotate&&(a.transform=`rotate(${e.rotate}deg)`),a});return{cls:r,innerStyle:o,onClick:a=>{t("click",a)}}}}),Qg=["stroke-width","stroke-linecap","stroke-linejoin"],ey=fe("path",{d:"M41.678 11.05 19.05 33.678 6.322 20.95"},null,-1),ty=[ey];function ny(e,t,n,r,o,i){return x(),ee("svg",{viewBox:"0 0 48 48",fill:"none",xmlns:"http://www.w3.org/2000/svg",stroke:"currentColor",class:K(e.cls),style:$e(e.innerStyle),"stroke-width":e.strokeWidth,"stroke-linecap":e.strokeLinecap,"stroke-linejoin":e.strokeLinejoin,onClick:t[0]||(t[0]=(...a)=>e.onClick&&e.onClick(...a))},ty,14,Qg)}var Fo=ce(Zg,[["render",ny]]);const ry=Object.assign(Fo,{install:(e,t)=>{var n;const r=(n=t==null?void 0:t.iconPrefix)!=null?n:"";e.component(r+Fo.name,Fo)}}),oy=te({name:"SliderButton",components:{Tooltip:fu},inheritAttrs:!1,props:{direction:{type:String,default:"horizontal"},disabled:{type:Boolean,default:!1},min:{type:Number,required:!0},max:{type:Number,required:!0},formatTooltip:{type:Function},value:[String,Number],tooltipPosition:{type:String},showTooltip:{type:Boolean,default:!0}},emits:["movestart","moving","moveend"],setup(e,{emit:t}){const n=oe("slider-btn"),r=H(!1),o=d=>{e.disabled||(d.preventDefault(),r.value=!0,Ft(window,"mousemove",i),Ft(window,"mouseup",a),Ft(window,"contextmenu",a),t("movestart"))},i=d=>{t("moving",d.clientX,d.clientY)},a=()=>{r.value=!1,In(window,"mousemove",i),In(window,"mouseup",a),t("moveend")},s=C(()=>[n]),l=C(()=>{var d;return((d=e.tooltipPosition)!=null?d:e.direction==="vertical")?"right":"top"}),u=C(()=>{var d,m;return(m=(d=e.formatTooltip)==null?void 0:d.call(e,e.value))!=null?m:`${e.value}`}),c=C(()=>e.showTooltip?r.value?!0:void 0:!1);return{prefixCls:n,cls:s,tooltipContent:u,mergedTooltipPosition:l,popupVisible:c,handleMouseDown:o}}}),iy=["aria-disabled","aria-valuemax","aria-valuemin","aria-valuenow","aria-valuetext"];function ay(e,t,n,r,o,i){const a=ue("tooltip");return x(),ve(a,{"popup-visible":e.popupVisible,position:e.mergedTooltipPosition,content:e.tooltipContent},{default:ke(()=>[fe("div",Fe(e.$attrs,{tabindex:"0",role:"slider","aria-disabled":e.disabled,"aria-valuemax":e.max,"aria-valuemin":e.min,"aria-valuenow":e.value,"aria-valuetext":e.tooltipContent,class:e.cls,onMousedown:t[0]||(t[0]=(...s)=>e.handleMouseDown&&e.handleMouseDown(...s)),onClick:t[1]||(t[1]=cn(()=>{},["stop"]))}),null,16,iy)]),_:1},8,["popup-visible","position","content"])}var sy=ce(oy,[["render",ay]]);const Dt=(e,[t,n])=>{const r=Math.max((e-t)/(n-t),0);return`${Qt.round(r*100,2)}%`},Ar=(e,t)=>t==="vertical"?{bottom:e}:{left:e},ly=te({name:"SliderDots",props:{data:{type:Array,required:!0},min:{type:Number,required:!0},max:{type:Number,required:!0},direction:{type:String,default:"horizontal"}},setup(e){return{prefixCls:oe("slider"),getStyle:r=>Ar(Dt(r,[e.min,e.max]),e.direction)}}});function uy(e,t,n,r,o,i){return 
x(),ee("div",{class:K(`${e.prefixCls}-dots`)},[(x(!0),ee(rt,null,Fn(e.data,(a,s)=>(x(),ee("div",{key:s,class:K(`${e.prefixCls}-dot-wrapper`),style:$e(e.getStyle(a.key))},[fe("div",{class:K([`${e.prefixCls}-dot`,{[`${e.prefixCls}-dot-active`]:a.isActive}])},null,2)],6))),128))],2)}var cy=ce(ly,[["render",uy]]);const dy=te({name:"SliderMarks",props:{data:{type:Array,required:!0},min:{type:Number,required:!0},max:{type:Number,required:!0},direction:{type:String,default:"horizontal"}},setup(e){return{prefixCls:oe("slider"),getStyle:r=>Ar(Dt(r,[e.min,e.max]),e.direction)}}});function fy(e,t,n,r,o,i){return x(),ee("div",{class:K(`${e.prefixCls}-marks`)},[(x(!0),ee(rt,null,Fn(e.data,(a,s)=>(x(),ee("div",{key:s,"aria-hidden":"true",class:K(`${e.prefixCls}-mark`),style:$e(e.getStyle(a.key))},Xe(a.content),7))),128))],2)}var hy=ce(dy,[["render",fy]]);const py=te({name:"SliderTicks",props:{value:{type:Array,required:!0},step:{type:Number,required:!0},min:{type:Number,required:!0},max:{type:Number,required:!0},direction:{type:String,default:"horizontal"}},setup(e){const t=oe("slider"),n=C(()=>{const o=[],i=Math.floor((e.max-e.min)/e.step);for(let a=0;a<=i;a++){const s=Qt.plus(a*e.step,e.min);s<=e.min||s>=e.max||o.push({key:s,isActive:s>=e.value[0]&&s<=e.value[1]})}return o});return{prefixCls:t,steps:n,getStyle:o=>Ar(Dt(o,[e.min,e.max]),e.direction)}}});function my(e,t,n,r,o,i){return x(),ee("div",{class:K(`${e.prefixCls}-ticks`)},[(x(!0),ee(rt,null,Fn(e.steps,(a,s)=>(x(),ee("div",{key:s,class:K([`${e.prefixCls}-tick`,{[`${e.prefixCls}-tick-active`]:a.isActive}]),style:$e(e.getStyle(a.key))},null,6))),128))],2)}var vy=ce(py,[["render",my]]);const gy=te({name:"SliderInput",components:{InputNumber:Rg},props:{modelValue:{type:Array,required:!0},min:{type:Number},max:{type:Number},step:{type:Number},disabled:{type:Boolean},range:{type:Boolean}},emits:["startChange","endChange"],setup(e,{emit:t}){return{prefixCls:oe("slider")}}});function yy(e,t,n,r,o,i){const a=ue("input-number");return x(),ee("div",{class:K(`${e.prefixCls}-input`)},[e.range?(x(),ee(rt,{key:0},[z(a,{min:e.min,max:e.max,step:e.step,disabled:e.disabled,"model-value":e.modelValue[0],"hide-button":"",onChange:t[0]||(t[0]=s=>e.$emit("startChange",s))},null,8,["min","max","step","disabled","model-value"]),fe("div",{class:K(`${e.prefixCls}-input-hyphens`)},null,2)],64)):pe("v-if",!0),z(a,{min:e.min,max:e.max,step:e.step,disabled:e.disabled,"model-value":e.modelValue[1],"hide-button":"",onChange:t[1]||(t[1]=s=>e.$emit("endChange",s))},null,8,["min","max","step","disabled","model-value"])],2)}var by=ce(gy,[["render",yy]]);const _y=te({name:"Slider",components:{SliderButton:sy,SliderDots:cy,SliderMarks:hy,SliderTicks:vy,SliderInput:by},props:{modelValue:{type:[Number,Array],default:void 0},defaultValue:{type:[Number,Array],default:0},step:{type:Number,default:1},min:{type:Number,default:0},marks:{type:Object},max:{type:Number,default:100},direction:{type:String,default:"horizontal"},disabled:{type:Boolean,default:!1},showTicks:{type:Boolean,default:!1},showInput:{type:Boolean,default:!1},range:{type:Boolean,default:!1},formatTooltip:{type:Function},showTooltip:{type:Boolean,default:!0}},emits:{"update:modelValue":e=>!0,change:e=>!0},setup(e,{emit:t}){const n=oe("slider"),{mergedDisabled:r,eventHandlers:o}=yt({disabled:cr(e,"disabled")}),i=H(null),a=H(),s=H(We(e.defaultValue)?e.defaultValue[0]:0),l=H(We(e.defaultValue)?e.defaultValue[1]:e.defaultValue),u=()=>{var 
A,T;e.range?(t("update:modelValue",[s.value,l.value]),t("change",[s.value,l.value])):(t("update:modelValue",l.value),t("change",l.value)),(T=(A=o.value)==null?void 0:A.onChange)==null||T.call(A)},c=A=>{A=A??e.min,s.value=A,u()},d=A=>{A=A??e.min,l.value=A,u()},m=C(()=>{var A,T,j;return e.range?We(e.modelValue)?e.modelValue:[s.value,(A=e.modelValue)!=null?A:l.value]:vt(e.modelValue)?[s.value,l.value]:We(e.modelValue)?[(T=e.min)!=null?T:0,e.modelValue[1]]:[(j=e.min)!=null?j:0,e.modelValue]}),_=C(()=>Object.keys(e.marks||{}).map(A=>{var T;const j=Number(A);return{key:j,content:(T=e.marks)==null?void 0:T[j],isActive:j>=m.value[0]&&j<=m.value[1]}})),S=A=>Ar(Dt(A,[e.min,e.max]),e.direction),E=H(!1),L=()=>{E.value=!0,i.value&&(a.value=i.value.getBoundingClientRect())};function y(A,T){if(!a.value)return 0;const{left:j,top:J,width:U,height:O}=a.value,N=e.direction==="horizontal"?U:O,D=N*e.step/(e.max-e.min);let V=e.direction==="horizontal"?A-j:J+O-T;V<0&&(V=0),V>N&&(V=N);const G=Math.round(V/D);return Qt.plus(e.min,Qt.times(G,e.step))}const $=(A,T)=>{l.value=y(A,T),u()},w=A=>{if(r.value)return;const{clientX:T,clientY:j}=A;i.value&&(a.value=i.value.getBoundingClientRect()),l.value=y(T,j),u()};function h([A,T]){return A>T&&([A,T]=[T,A]),e.direction==="vertical"?{bottom:Dt(A,[e.min,e.max]),top:Dt(e.max+e.min-T,[e.min,e.max])}:{left:Dt(A,[e.min,e.max]),right:Dt(e.max+e.min-T,[e.min,e.max])}}const p=(A,T)=>{s.value=y(A,T),u()},b=()=>{E.value=!1},v=C(()=>[n,{[`${n}-vertical`]:e.direction==="vertical",[`${n}-with-marks`]:!!e.marks}]),P=C(()=>[`${n}-track`,{[`${n}-track-disabled`]:r.value,[`${n}-track-vertical`]:e.direction==="vertical"}]);return{prefixCls:n,cls:v,trackCls:P,trackRef:i,computedValue:m,mergedDisabled:r,markList:_,getBtnStyle:S,getBarStyle:h,handleClick:w,handleMoveStart:L,handleEndMoving:$,handleMoveEnd:b,handleStartMoving:p,handleStartChange:c,handleEndChange:d}}});function Cy(e,t,n,r,o,i){const a=ue("slider-ticks"),s=ue("slider-dots"),l=ue("slider-marks"),u=ue("slider-button"),c=ue("slider-input");return 
x(),ee("div",{class:K(e.cls)},[fe("div",{ref:"trackRef",class:K(e.trackCls),onClick:t[0]||(t[0]=(...d)=>e.handleClick&&e.handleClick(...d))},[fe("div",{class:K(`${e.prefixCls}-bar`),style:$e(e.getBarStyle(e.computedValue))},null,6),e.showTicks?(x(),ve(a,{key:0,value:e.computedValue,step:e.step,min:e.min,max:e.max,direction:e.direction},null,8,["value","step","min","max","direction"])):pe("v-if",!0),e.marks?(x(),ve(s,{key:1,data:e.markList,min:e.min,max:e.max,direction:e.direction},null,8,["data","min","max","direction"])):pe("v-if",!0),e.marks?(x(),ve(l,{key:2,data:e.markList,min:e.min,max:e.max,direction:e.direction},null,8,["data","min","max","direction"])):pe("v-if",!0),e.range?(x(),ve(u,{key:3,style:$e(e.getBtnStyle(e.computedValue[0])),value:e.computedValue[0],direction:e.direction,disabled:e.mergedDisabled,min:e.min,max:e.max,"format-tooltip":e.formatTooltip,"show-tooltip":e.showTooltip,onMovestart:e.handleMoveStart,onMoving:e.handleStartMoving,onMoveend:e.handleMoveEnd},null,8,["style","value","direction","disabled","min","max","format-tooltip","show-tooltip","onMovestart","onMoving","onMoveend"])):pe("v-if",!0),z(u,{style:$e(e.getBtnStyle(e.computedValue[1])),value:e.computedValue[1],direction:e.direction,disabled:e.mergedDisabled,min:e.min,max:e.max,"format-tooltip":e.formatTooltip,"show-tooltip":e.showTooltip,onMovestart:e.handleMoveStart,onMoving:e.handleEndMoving,onMoveend:e.handleMoveEnd},null,8,["style","value","direction","disabled","min","max","format-tooltip","show-tooltip","onMovestart","onMoving","onMoveend"])],2),e.showInput?(x(),ve(c,{key:0,"model-value":e.computedValue,min:e.min,max:e.max,step:e.step,range:e.range,disabled:e.disabled,onStartChange:e.handleStartChange,onEndChange:e.handleEndChange},null,8,["model-value","min","max","step","range","disabled","onStartChange","onEndChange"])):pe("v-if",!0)],2)}var jo=ce(_y,[["render",Cy]]);const nS=Object.assign(jo,{install:(e,t)=>{Ve(e,t);const n=je(t);e.component(n+jo.name,jo)}});var Vo=te({name:"Space",props:{align:{type:String},direction:{type:String,default:"horizontal"},size:{type:[Number,String,Array],default:"small"},wrap:{type:Boolean},fill:{type:Boolean}},setup(e,{slots:t}){const n=oe("space"),r=C(()=>{var s;return(s=e.align)!=null?s:e.direction==="horizontal"?"center":""}),o=C(()=>[n,{[`${n}-${e.direction}`]:e.direction,[`${n}-align-${r.value}`]:r.value,[`${n}-wrap`]:e.wrap,[`${n}-fill`]:e.fill}]);function i(s){if(de(s))return s;switch(s){case"mini":return 4;case"small":return 8;case"medium":return 16;case"large":return 24;default:return 8}}const a=s=>{const l={},u=`${i(We(e.size)?e.size[0]:e.size)}px`,c=`${i(We(e.size)?e.size[1]:e.size)}px`;return s?e.wrap?{marginBottom:c}:{}:(e.direction==="horizontal"&&(l.marginRight=u),(e.direction==="vertical"||e.wrap)&&(l.marginBottom=c),l)};return()=>{var s;const l=Tn((s=t.default)==null?void 0:s.call(t),!0).filter(u=>u.type!==Jc);return z("div",{class:o.value},[l.map((u,c)=>{var d,m;const _=t.split&&c>0;return z(rt,{key:(d=u.key)!=null?d:`item-${c}`},[_&&z("div",{class:`${n}-item-split`,style:a(!1)},[(m=t.split)==null?void 0:m.call(t)]),z("div",{class:`${n}-item`,style:a(c===l.length-1)},[u])])})])}}});const rS=Object.assign(Vo,{install:(e,t)=>{Ve(e,t);const n=je(t);e.component(n+Vo.name,Vo)}}),bu=Symbol("ArcoSteps"),Sy=te({name:"Steps",props:{type:{type:String,default:"default"},direction:{type:String,default:"horizontal"},labelPlacement:{type:String,default:"horizontal"},current:{type:Number,default:void 
0},defaultCurrent:{type:Number,default:1},status:{type:String,default:"process"},lineLess:{type:Boolean,default:!1},small:{type:Boolean,default:!1},changeable:{type:Boolean,default:!1}},emits:{"update:current":e=>!0,change:(e,t)=>!0},setup(e,{emit:t,slots:n}){const{type:r,lineLess:o}=Re(e),i=oe("steps"),a=H(e.defaultCurrent),s=C(()=>{var y;return(y=e.current)!=null?y:a.value}),l=C(()=>["navigation","arrow"].includes(e.type)?"horizontal":e.direction),u=C(()=>e.type==="dot"?l.value==="vertical"?"horizontal":"vertical":e.type==="navigation"?"horizontal":e.labelPlacement),c=y=>ys.value?"wait":e.status,d=(y,$)=>{e.changeable&&(a.value=y,t("update:current",y),t("change",y,$))},m=ze(new Map),_=C(()=>Array.from(m.values()).filter(y=>y.status==="error").map(y=>y.step)),S=(y,$)=>{m.set(y,$)},E=y=>{m.delete(y)},L=C(()=>[i,`${i}-${l.value}`,`${i}-label-${u.value}`,`${i}-mode-${r.value}`,{[`${i}-changeable`]:e.changeable,[`${i}-size-small`]:e.small&&e.type!=="dot",[`${i}-line-less`]:o.value}]);return $t(bu,ze({type:r,direction:l,labelPlacement:u,lineLess:o,current:s,errorSteps:_,getStatus:c,addItem:S,removeItem:E,onClick:d,parentCls:i})),{cls:L}}});function Ey(e,t,n,r,o,i){return x(),ee("div",{class:K(e.cls)},[se(e.$slots,"default")],2)}var xo=ce(Sy,[["render",Ey]]);const wy=te({name:"Step",components:{IconCheck:ry,IconClose:qt},props:{title:String,description:String,status:{type:String},disabled:{type:Boolean,default:!1}},setup(e){const t=oe("steps-item"),n=Wt(),r=oe("steps-icon"),o=et(bu,void 0),i=C(()=>{var S;return(S=o==null?void 0:o.type)!=null?S:"default"}),a=H(),{computedIndex:s}=iu({itemRef:a,selector:`.${t}`,parentClassName:o==null?void 0:o.parentCls}),l=C(()=>s.value+1),u=C(()=>{var S,E;return(E=(S=e.status)!=null?S:o==null?void 0:o.getStatus(l.value))!=null?E:"process"}),c=C(()=>{var S;return(S=o==null?void 0:o.errorSteps.includes(l.value+1))!=null?S:!1});n&&(o==null||o.addItem(n.uid,ze({step:l,status:u}))),Ht(()=>{n&&(o==null||o.removeItem(n.uid))});const d=C(()=>!(o!=null&&o.lineLess)&&((o==null?void 0:o.labelPlacement)==="vertical"||(o==null?void 0:o.direction)==="vertical")),m=S=>{e.disabled||o==null||o.onClick(l.value,S)},_=C(()=>[t,`${t}-${u.value}`,{[`${t}-active`]:l.value===(o==null?void 0:o.current),[`${t}-next-error`]:c.value,[`${t}-disabled`]:e.disabled}]);return{prefixCls:t,iconCls:r,cls:_,itemRef:a,showTail:d,stepNumber:l,computedStatus:u,type:i,handleClick:m}}});function ky(e,t,n,r,o,i){const a=ue("icon-check"),s=ue("icon-close");return x(),ee("div",{ref:"itemRef",class:K(e.cls),onClick:t[0]||(t[0]=(...l)=>e.handleClick&&e.handleClick(...l))},[e.showTail?(x(),ee("div",{key:0,class:K(`${e.prefixCls}-tail`)},null,2)):pe("v-if",!0),e.type!=="arrow"?(x(),ee("div",{key:1,class:K(`${e.prefixCls}-node`)},[se(e.$slots,"node",{step:e.stepNumber,status:e.computedStatus},()=>[e.type!=="dot"?(x(),ee("div",{key:0,class:K(e.iconCls)},[se(e.$slots,"icon",{step:e.stepNumber,status:e.computedStatus},()=>[e.computedStatus==="finish"?(x(),ve(a,{key:0})):e.computedStatus==="error"?(x(),ve(s,{key:1})):(x(),ee(rt,{key:2},[nt(Xe(e.stepNumber),1)],2112))])],2)):pe("v-if",!0)])],2)):pe("v-if",!0),fe("div",{class:K(`${e.prefixCls}-content`)},[fe("div",{class:K(`${e.prefixCls}-title`)},[se(e.$slots,"default",{},()=>[nt(Xe(e.title),1)])],2),e.description||e.$slots.description?(x(),ee("div",{key:0,class:K(`${e.prefixCls}-description`)},[se(e.$slots,"description",{},()=>[nt(Xe(e.description),1)])],2)):pe("v-if",!0)],2)],2)}var zo=ce(wy,[["render",ky]]);const 
oS=Object.assign(xo,{Step:zo,install:(e,t)=>{Ve(e,t);const n=je(t);e.component(n+xo.name,xo),e.component(n+zo.name,zo)}}),$y=te({name:"IconCopy",props:{size:{type:[Number,String]},strokeWidth:{type:Number,default:4},strokeLinecap:{type:String,default:"butt",validator:e=>["butt","round","square"].includes(e)},strokeLinejoin:{type:String,default:"miter",validator:e=>["arcs","bevel","miter","miter-clip","round"].includes(e)},rotate:Number,spin:Boolean},emits:{click:e=>!0},setup(e,{emit:t}){const n=oe("icon"),r=C(()=>[n,`${n}-copy`,{[`${n}-spin`]:e.spin}]),o=C(()=>{const a={};return e.size&&(a.fontSize=de(e.size)?`${e.size}px`:e.size),e.rotate&&(a.transform=`rotate(${e.rotate}deg)`),a});return{cls:r,innerStyle:o,onClick:a=>{t("click",a)}}}}),Oy=["stroke-width","stroke-linecap","stroke-linejoin"],Ly=fe("path",{d:"M20 6h18a2 2 0 0 1 2 2v22M8 16v24c0 1.105.891 2 1.996 2h20.007A1.99 1.99 0 0 0 32 40.008V15.997A1.997 1.997 0 0 0 30 14H10a2 2 0 0 0-2 2Z"},null,-1),Ty=[Ly];function Ay(e,t,n,r,o,i){return x(),ee("svg",{viewBox:"0 0 48 48",fill:"none",xmlns:"http://www.w3.org/2000/svg",stroke:"currentColor",class:K(e.cls),style:$e(e.innerStyle),"stroke-width":e.strokeWidth,"stroke-linecap":e.strokeLinecap,"stroke-linejoin":e.strokeLinejoin,onClick:t[0]||(t[0]=(...a)=>e.onClick&&e.onClick(...a))},Ty,14,Oy)}var Uo=ce($y,[["render",Ay]]);const iS=Object.assign(Uo,{install:(e,t)=>{var n;const r=(n=t==null?void 0:t.iconPrefix)!=null?n:"";e.component(r+Uo.name,Uo)}});var Ot={},_u={exports:{}},Pe={};/*! - * shared v9.2.2 - * (c) 2022 kazuya kawaguchi - * Released under the MIT License. - */Object.defineProperty(Pe,"__esModule",{value:!0});const Ny=typeof window<"u";let Py,Iy;const My=/\{([0-9a-zA-Z]+)\}/g;function Ry(e,...t){return t.length===1&&ra(t[0])&&(t=t[0]),(!t||!t.hasOwnProperty)&&(t={}),e.replace(My,(n,r)=>t.hasOwnProperty(r)?t[r]:"")}const By=typeof Symbol=="function"&&typeof Symbol.toStringTag=="symbol",Dy=e=>By?Symbol(e):e,Fy=(e,t,n)=>Cu({l:e,k:t,s:n}),Cu=e=>JSON.stringify(e).replace(/\u2028/g,"\\u2028").replace(/\u2029/g,"\\u2029").replace(/\u0027/g,"\\u0027"),jy=e=>typeof e=="number"&&isFinite(e),Vy=e=>Nr(e)==="[object Date]",xy=e=>Nr(e)==="[object RegExp]",zy=e=>ia(e)&&Object.keys(e).length===0;function Uy(e,t){typeof console<"u"&&(console.warn("[intlify] "+e),t&&console.warn(t.stack))}const Wy=Object.assign;let Rs;const Hy=()=>Rs||(Rs=typeof globalThis<"u"?globalThis:typeof self<"u"?self:typeof window<"u"?window:typeof di<"u"?di:{});function qy(e){return e.replace(//g,">").replace(/"/g,""").replace(/'/g,"'")}const Gy=Object.prototype.hasOwnProperty;function Ky(e,t){return Gy.call(e,t)}const Su=Array.isArray,Si=e=>typeof e=="function",Yy=e=>typeof e=="string",Xy=e=>typeof e=="boolean",Jy=e=>typeof e=="symbol",ra=e=>e!==null&&typeof e=="object",Zy=e=>ra(e)&&Si(e.then)&&Si(e.catch),oa=Object.prototype.toString,Nr=e=>oa.call(e),ia=e=>Nr(e)==="[object Object]",Qy=e=>e==null?"":Su(e)||ia(e)&&e.toString===oa?JSON.stringify(e,null,2):String(e),Bs=2;function eb(e,t=0,n=e.length){const r=e.split(/\r?\n/);let o=0;const i=[];for(let a=0;a=t){for(let s=a-Bs;s<=a+Bs||n>o;s++){if(s<0||s>=r.length)continue;const l=s+1;i.push(`${l}${" ".repeat(3-String(l).length)}| ${r[s]}`);const u=r[s].length;if(s===a){const c=t-(o-u)+1,d=Math.max(1,n>o?u-c:n-t);i.push(" | "+" ".repeat(c)+"^".repeat(d))}else if(s>a){if(n>o){const c=Math.max(Math.min(n-o,u),1);i.push(" | "+"^".repeat(c))}o+=u+1}}break}return i.join(` -`)}function tb(){const e=new Map;return{events:e,on(n,r){const 
o=e.get(n);o&&o.push(r)||e.set(n,[r])},off(n,r){const o=e.get(n);o&&o.splice(o.indexOf(r)>>>0,1)},emit(n,r){(e.get(n)||[]).slice().map(o=>o(r)),(e.get("*")||[]).slice().map(o=>o(n,r))}}}Pe.assign=Wy;Pe.createEmitter=tb;Pe.escapeHtml=qy;Pe.format=Ry;Pe.friendlyJSONstringify=Cu;Pe.generateCodeFrame=eb;Pe.generateFormatCacheKey=Fy;Pe.getGlobalThis=Hy;Pe.hasOwn=Ky;Pe.inBrowser=Ny;Pe.isArray=Su;Pe.isBoolean=Xy;Pe.isDate=Vy;Pe.isEmptyObject=zy;Pe.isFunction=Si;Pe.isNumber=jy;Pe.isObject=ra;Pe.isPlainObject=ia;Pe.isPromise=Zy;Pe.isRegExp=xy;Pe.isString=Yy;Pe.isSymbol=Jy;Pe.makeSymbol=Dy;Pe.mark=Py;Pe.measure=Iy;Pe.objectToString=oa;Pe.toDisplayString=Qy;Pe.toTypeString=Nr;Pe.warn=Uy;_u.exports=Pe;var aa=_u.exports,Eu={exports:{}},ye={},wu={exports:{}},ht={},Pr={},sa={},Ir={},la={},Ds="ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/".split("");la.encode=function(e){if(0<=e&&e>1;return t?-n:n}Ir.encode=function(t){var n="",r,o=nb(t);do r=o&Ou,o>>>=ua,o>0&&(r|=Lu),n+=ku.encode(r);while(o>0);return n};Ir.decode=function(t,n,r){var o=t.length,i=0,a=0,s,l;do{if(n>=o)throw new Error("Expected more digits in base 64 VLQ value.");if(l=ku.decode(t.charCodeAt(n++)),l===-1)throw new Error("Invalid base64 digit: "+t.charAt(n-1));s=!!(l&Lu),l&=Ou,i=i+(l<=0;j--)A=P[j],A==="."?P.splice(j,1):A===".."?T++:T>0&&(A===""?(P.splice(j+1,T),T=0):(P.splice(j,2),T--));return p=P.join("/"),p===""&&(p=v?"/":"."),b?(b.path=p,i(b)):p}e.normalize=a;function s(h,p){h===""&&(h="."),p===""&&(p=".");var b=o(p),v=o(h);if(v&&(h=v.path||"/"),b&&!b.scheme)return v&&(b.scheme=v.scheme),i(b);if(b||p.match(r))return p;if(v&&!v.host&&!v.path)return v.host=p,i(v);var P=p.charAt(0)==="/"?p:a(h.replace(/\/+$/,"")+"/"+p);return v?(v.path=P,i(v)):P}e.join=s,e.isAbsolute=function(h){return h.charAt(0)==="/"||n.test(h)};function l(h,p){h===""&&(h="."),h=h.replace(/\/$/,"");for(var b=0;p.indexOf(h+"/")!==0;){var v=h.lastIndexOf("/");if(v<0||(h=h.slice(0,v),h.match(/^([^\/]+:\/)?\/*$/)))return p;++b}return Array(b+1).join("../")+p.substr(h.length+1)}e.relative=l;var u=function(){var h=Object.create(null);return!("__proto__"in h)}();function c(h){return h}function d(h){return _(h)?"$"+h:h}e.toSetString=u?c:d;function m(h){return _(h)?h.slice(1):h}e.fromSetString=u?c:m;function _(h){if(!h)return!1;var p=h.length;if(p<9||h.charCodeAt(p-1)!==95||h.charCodeAt(p-2)!==95||h.charCodeAt(p-3)!==111||h.charCodeAt(p-4)!==116||h.charCodeAt(p-5)!==111||h.charCodeAt(p-6)!==114||h.charCodeAt(p-7)!==112||h.charCodeAt(p-8)!==95||h.charCodeAt(p-9)!==95)return!1;for(var b=p-10;b>=0;b--)if(h.charCodeAt(b)!==36)return!1;return!0}function S(h,p,b){var v=L(h.source,p.source);return v!==0||(v=h.originalLine-p.originalLine,v!==0)||(v=h.originalColumn-p.originalColumn,v!==0||b)||(v=h.generatedColumn-p.generatedColumn,v!==0)||(v=h.generatedLine-p.generatedLine,v!==0)?v:L(h.name,p.name)}e.compareByOriginalPositions=S;function E(h,p,b){var v=h.generatedLine-p.generatedLine;return v!==0||(v=h.generatedColumn-p.generatedColumn,v!==0||b)||(v=L(h.source,p.source),v!==0)||(v=h.originalLine-p.originalLine,v!==0)||(v=h.originalColumn-p.originalColumn,v!==0)?v:L(h.name,p.name)}e.compareByGeneratedPositionsDeflated=E;function L(h,p){return h===p?0:h===null?1:p===null?-1:h>p?1:-1}function y(h,p){var b=h.generatedLine-p.generatedLine;return 
b!==0||(b=h.generatedColumn-p.generatedColumn,b!==0)||(b=L(h.source,p.source),b!==0)||(b=h.originalLine-p.originalLine,b!==0)||(b=h.originalColumn-p.originalColumn,b!==0)?b:L(h.name,p.name)}e.compareByGeneratedPositionsInflated=y;function $(h){return JSON.parse(h.replace(/^\)]}'[^\n]*\n/,""))}e.parseSourceMapInput=$;function w(h,p,b){if(p=p||"",h&&(h[h.length-1]!=="/"&&p[0]!=="/"&&(h+="/"),p=h+p),b){var v=o(b);if(!v)throw new Error("sourceMapURL could not be parsed");if(v.path){var P=v.path.lastIndexOf("/");P>=0&&(v.path=v.path.substring(0,P+1))}p=s(i(v),p)}return a(p)}e.computeSourceURL=w})(bn);var ca={},da=bn,fa=Object.prototype.hasOwnProperty,Zt=typeof Map<"u";function Pt(){this._array=[],this._set=Zt?new Map:Object.create(null)}Pt.fromArray=function(t,n){for(var r=new Pt,o=0,i=t.length;o=0)return n}else{var r=da.toSetString(t);if(fa.call(this._set,r))return this._set[r]}throw new Error('"'+t+'" is not in the set.')};Pt.prototype.at=function(t){if(t>=0&&tn||r==n&&i>=o||Au.compareByGeneratedPositionsInflated(e,t)<=0}function Mr(){this._array=[],this._sorted=!0,this._last={generatedLine:-1,generatedColumn:0}}Mr.prototype.unsortedForEach=function(t,n){this._array.forEach(t,n)};Mr.prototype.add=function(t){ob(this._last,t)?(this._last=t,this._array.push(t)):(this._sorted=!1,this._array.push(t))};Mr.prototype.toArray=function(){return this._sorted||(this._array.sort(Au.compareByGeneratedPositionsInflated),this._sorted=!0),this._array};Tu.MappingList=Mr;var wn=Ir,He=bn,gr=ca.ArraySet,ib=Tu.MappingList;function ft(e){e||(e={}),this._file=He.getArg(e,"file",null),this._sourceRoot=He.getArg(e,"sourceRoot",null),this._skipValidation=He.getArg(e,"skipValidation",!1),this._sources=new gr,this._names=new gr,this._mappings=new ib,this._sourcesContents=null}ft.prototype._version=3;ft.fromSourceMap=function(t){var n=t.sourceRoot,r=new ft({file:t.file,sourceRoot:n});return t.eachMapping(function(o){var i={generated:{line:o.generatedLine,column:o.generatedColumn}};o.source!=null&&(i.source=o.source,n!=null&&(i.source=He.relative(n,i.source)),i.original={line:o.originalLine,column:o.originalColumn},o.name!=null&&(i.name=o.name)),r.addMapping(i)}),t.sources.forEach(function(o){var i=o;n!==null&&(i=He.relative(n,o)),r._sources.has(i)||r._sources.add(i);var a=t.sourceContentFor(o);a!=null&&r.setSourceContent(o,a)}),r};ft.prototype.addMapping=function(t){var n=He.getArg(t,"generated"),r=He.getArg(t,"original",null),o=He.getArg(t,"source",null),i=He.getArg(t,"name",null);this._skipValidation||this._validateMapping(n,r,o,i),o!=null&&(o=String(o),this._sources.has(o)||this._sources.add(o)),i!=null&&(i=String(i),this._names.has(i)||this._names.add(i)),this._mappings.add({generatedLine:n.line,generatedColumn:n.column,originalLine:r!=null&&r.line,originalColumn:r!=null&&r.column,source:o,name:i})};ft.prototype.setSourceContent=function(t,n){var r=t;this._sourceRoot!=null&&(r=He.relative(this._sourceRoot,r)),n!=null?(this._sourcesContents||(this._sourcesContents=Object.create(null)),this._sourcesContents[He.toSetString(r)]=n):this._sourcesContents&&(delete this._sourcesContents[He.toSetString(r)],Object.keys(this._sourcesContents).length===0&&(this._sourcesContents=null))};ft.prototype.applySourceMap=function(t,n,r){var o=n;if(n==null){if(t.file==null)throw new Error(`SourceMapGenerator.prototype.applySourceMap requires either an explicit source file, or the source map's "file" property. 
Both were omitted.`);o=t.file}var i=this._sourceRoot;i!=null&&(o=He.relative(i,o));var a=new gr,s=new gr;this._mappings.unsortedForEach(function(l){if(l.source===o&&l.originalLine!=null){var u=t.originalPositionFor({line:l.originalLine,column:l.originalColumn});u.source!=null&&(l.source=u.source,r!=null&&(l.source=He.join(r,l.source)),i!=null&&(l.source=He.relative(i,l.source)),l.originalLine=u.line,l.originalColumn=u.column,u.name!=null&&(l.name=u.name))}var c=l.source;c!=null&&!a.has(c)&&a.add(c);var d=l.name;d!=null&&!s.has(d)&&s.add(d)},this),this._sources=a,this._names=s,t.sources.forEach(function(l){var u=t.sourceContentFor(l);u!=null&&(r!=null&&(l=He.join(r,l)),i!=null&&(l=He.relative(i,l)),this.setSourceContent(l,u))},this)};ft.prototype._validateMapping=function(t,n,r,o){if(n&&typeof n.line!="number"&&typeof n.column!="number")throw new Error("original.line and original.column are not numbers -- you probably meant to omit the original mapping entirely and only map the generated position. If so, pass null for the original mapping instead of an object with empty or null values.");if(!(t&&"line"in t&&"column"in t&&t.line>0&&t.column>=0&&!n&&!r&&!o)){if(t&&"line"in t&&"column"in t&&n&&"line"in n&&"column"in n&&t.line>0&&t.column>=0&&n.line>0&&n.column>=0&&r)return;throw new Error("Invalid mapping: "+JSON.stringify({generated:t,source:r,original:n,name:o}))}};ft.prototype._serializeMappings=function(){for(var t=0,n=1,r=0,o=0,i=0,a=0,s="",l,u,c,d,m=this._mappings.toArray(),_=0,S=m.length;_0){if(!He.compareByGeneratedPositionsInflated(u,m[_-1]))continue;l+=","}l+=wn.encode(u.generatedColumn-t),t=u.generatedColumn,u.source!=null&&(d=this._sources.indexOf(u.source),l+=wn.encode(d-a),a=d,l+=wn.encode(u.originalLine-1-o),o=u.originalLine-1,l+=wn.encode(u.originalColumn-r),r=u.originalColumn,u.name!=null&&(c=this._names.indexOf(u.name),l+=wn.encode(c-i),i=c)),s+=l}return s};ft.prototype._generateSourcesContent=function(t,n){return t.map(function(r){if(!this._sourcesContents)return null;n!=null&&(r=He.relative(n,r));var o=He.toSetString(r);return Object.prototype.hasOwnProperty.call(this._sourcesContents,o)?this._sourcesContents[o]:null},this)};ft.prototype.toJSON=function(){var t={version:this._version,sources:this._sources.toArray(),names:this._names.toArray(),mappings:this._serializeMappings()};return this._file!=null&&(t.file=this._file),this._sourceRoot!=null&&(t.sourceRoot=this._sourceRoot),this._sourcesContents&&(t.sourcesContent=this._generateSourcesContent(t.sources,t.sourceRoot)),t};ft.prototype.toString=function(){return JSON.stringify(this.toJSON())};sa.SourceMapGenerator=ft;var Rr={},Nu={};(function(e){e.GREATEST_LOWER_BOUND=1,e.LEAST_UPPER_BOUND=2;function t(n,r,o,i,a,s){var l=Math.floor((r-n)/2)+n,u=a(o,i[l],!0);return u===0?l:u>0?r-l>1?t(l,r,o,i,a,s):s==e.LEAST_UPPER_BOUND?r1?t(n,l,o,i,a,s):s==e.LEAST_UPPER_BOUND?l:n<0?-1:n}e.search=function(r,o,i,a){if(o.length===0)return-1;var s=t(-1,o.length,r,o,i,a||e.GREATEST_LOWER_BOUND);if(s<0)return-1;for(;s-1>=0&&i(o[s],o[s-1],!0)===0;)--s;return s}})(Nu);var Pu={};function Wo(e,t,n){var r=e[t];e[t]=e[n],e[n]=r}function ab(e,t){return Math.round(e+Math.random()*(t-e))}function Ei(e,t,n,r){if(n=0){var a=this._originalMappings[i];if(t.column===void 0)for(var s=a.originalLine;a&&a.originalLine===s;)o.push({line:ae.getArg(a,"generatedLine",null),column:ae.getArg(a,"generatedColumn",null),lastColumn:ae.getArg(a,"lastGeneratedColumn",null)}),a=this._originalMappings[++i];else for(var 
l=a.originalColumn;a&&a.originalLine===n&&a.originalColumn==l;)o.push({line:ae.getArg(a,"generatedLine",null),column:ae.getArg(a,"generatedColumn",null),lastColumn:ae.getArg(a,"lastGeneratedColumn",null)}),a=this._originalMappings[++i]}return o};Rr.SourceMapConsumer=De;function tt(e,t){var n=e;typeof e=="string"&&(n=ae.parseSourceMapInput(e));var r=ae.getArg(n,"version"),o=ae.getArg(n,"sources"),i=ae.getArg(n,"names",[]),a=ae.getArg(n,"sourceRoot",null),s=ae.getArg(n,"sourcesContent",null),l=ae.getArg(n,"mappings"),u=ae.getArg(n,"file",null);if(r!=this._version)throw new Error("Unsupported version: "+r);a&&(a=ae.normalize(a)),o=o.map(String).map(ae.normalize).map(function(c){return a&&ae.isAbsolute(a)&&ae.isAbsolute(c)?ae.relative(a,c):c}),this._names=fn.fromArray(i.map(String),!0),this._sources=fn.fromArray(o,!0),this._absoluteSources=this._sources.toArray().map(function(c){return ae.computeSourceURL(a,c,t)}),this.sourceRoot=a,this.sourcesContent=s,this._mappings=l,this._sourceMapURL=t,this.file=u}tt.prototype=Object.create(De.prototype);tt.prototype.consumer=De;tt.prototype._findSourceIndex=function(e){var t=e;if(this.sourceRoot!=null&&(t=ae.relative(this.sourceRoot,t)),this._sources.has(t))return this._sources.indexOf(t);var n;for(n=0;n1&&(E.source=s+y[1],s+=y[1],E.originalLine=i+y[2],i=E.originalLine,E.originalLine+=1,E.originalColumn=a+y[3],a=E.originalColumn,y.length>4&&(E.name=l+y[4],l+=y[4])),S.push(E),typeof E.originalLine=="number"&&_.push(E)}Dn(S,ae.compareByGeneratedPositionsDeflated),this.__generatedMappings=S,Dn(_,ae.compareByOriginalPositions),this.__originalMappings=_};tt.prototype._findMapping=function(t,n,r,o,i,a){if(t[r]<=0)throw new TypeError("Line must be greater than or equal to 1, got "+t[r]);if(t[o]<0)throw new TypeError("Column must be greater than or equal to 0, got "+t[o]);return ha.search(t,n,i,a)};tt.prototype.computeColumnSpans=function(){for(var t=0;t=0){var o=this._generatedMappings[r];if(o.generatedLine===n.generatedLine){var i=ae.getArg(o,"source",null);i!==null&&(i=this._sources.at(i),i=ae.computeSourceURL(this.sourceRoot,i,this._sourceMapURL));var a=ae.getArg(o,"name",null);return a!==null&&(a=this._names.at(a)),{source:i,line:ae.getArg(o,"originalLine",null),column:ae.getArg(o,"originalColumn",null),name:a}}}return{source:null,line:null,column:null,name:null}};tt.prototype.hasContentsOfAllSources=function(){return this.sourcesContent?this.sourcesContent.length>=this._sources.size()&&!this.sourcesContent.some(function(t){return t==null}):!1};tt.prototype.sourceContentFor=function(t,n){if(!this.sourcesContent)return null;var r=this._findSourceIndex(t);if(r>=0)return this.sourcesContent[r];var o=t;this.sourceRoot!=null&&(o=ae.relative(this.sourceRoot,o));var i;if(this.sourceRoot!=null&&(i=ae.urlParse(this.sourceRoot))){var a=o.replace(/^file:\/\//,"");if(i.scheme=="file"&&this._sources.has(a))return this.sourcesContent[this._sources.indexOf(a)];if((!i.path||i.path=="/")&&this._sources.has("/"+o))return this.sourcesContent[this._sources.indexOf("/"+o)]}if(n)return null;throw new Error('"'+o+'" is not in the SourceMap.')};tt.prototype.generatedPositionFor=function(t){var n=ae.getArg(t,"source");if(n=this._findSourceIndex(n),n<0)return{line:null,column:null,lastColumn:null};var r={source:n,originalLine:ae.getArg(t,"line"),originalColumn:ae.getArg(t,"column")},o=this._findMapping(r,this._originalMappings,"originalLine","originalColumn",ae.compareByOriginalPositions,ae.getArg(t,"bias",De.GREATEST_LOWER_BOUND));if(o>=0){var 
i=this._originalMappings[o];if(i.source===r.source)return{line:ae.getArg(i,"generatedLine",null),column:ae.getArg(i,"generatedColumn",null),lastColumn:ae.getArg(i,"lastGeneratedColumn",null)}}return{line:null,column:null,lastColumn:null}};Rr.BasicSourceMapConsumer=tt;function _t(e,t){var n=e;typeof e=="string"&&(n=ae.parseSourceMapInput(e));var r=ae.getArg(n,"version"),o=ae.getArg(n,"sections");if(r!=this._version)throw new Error("Unsupported version: "+r);this._sources=new fn,this._names=new fn;var i={line:-1,column:0};this._sections=o.map(function(a){if(a.url)throw new Error("Support for url field in sections not implemented.");var s=ae.getArg(a,"offset"),l=ae.getArg(s,"line"),u=ae.getArg(s,"column");if(l=0;n--)this.prepend(t[n]);else if(t[_n]||typeof t=="string")this.children.unshift(t);else throw new TypeError("Expected a SourceNode, string, or an array of SourceNodes and strings. Got "+t);return this};ut.prototype.walk=function(t){for(var n,r=0,o=this.children.length;r0){for(n=[],r=0;rt[v]===pb&&t[v+1]===ot,s=v=>t[v]===ot,l=v=>t[v]===vb,u=v=>t[v]===mb,c=v=>a(v)||s(v)||l(v)||u(v),d=()=>n,m=()=>r,_=()=>o,S=()=>i,E=v=>a(v)||l(v)||u(v)?ot:t[v],L=()=>E(n),y=()=>E(n+i);function $(){return i=0,c(n)&&(r++,o=0),a(n)&&n++,n++,o++,t[n]}function w(){return a(n+i)&&i++,i++,t[n+i]}function h(){n=0,r=1,o=1,i=0}function p(v=0){i=v}function b(){const v=n+i;for(;v!==n;)$();i=0}return{index:d,line:m,column:_,peekOffset:S,charAt:E,currentChar:L,currentPeek:y,next:$,peek:w,reset:h,resetPeek:p,skipToPeek:b}}const Bt=void 0,Fs="'",yb="tokenizer";function bb(e,t={}){const n=t.location!==!1,r=gb(e),o=()=>r.index(),i=()=>Bu(r.line(),r.column(),r.index()),a=i(),s=o(),l={currentType:14,offset:s,startLoc:a,endLoc:a,lastType:14,lastOffset:s,lastStartLoc:a,lastEndLoc:a,braceNest:0,inLinked:!1,text:""},u=()=>l,{onError:c}=t;function d(g,f,k,...q){const ne=u();if(f.column+=k,f.offset+=k,c){const be=_r(ne.startLoc,f),Ye=pa(g,be,{domain:yb,args:q});c(Ye)}}function m(g,f,k){g.endLoc=i(),g.currentType=f;const q={type:f};return n&&(q.loc=_r(g.startLoc,g.endLoc)),k!=null&&(q.value=k),q}const _=g=>m(g,14);function S(g,f){return g.currentChar()===f?(g.next(),f):(d(_e.EXPECTED_TOKEN,i(),0,f),"")}function E(g){let f="";for(;g.currentPeek()===At||g.currentPeek()===ot;)f+=g.currentPeek(),g.peek();return f}function L(g){const f=E(g);return g.skipToPeek(),f}function y(g){if(g===Bt)return!1;const f=g.charCodeAt(0);return f>=97&&f<=122||f>=65&&f<=90||f===95}function $(g){if(g===Bt)return!1;const f=g.charCodeAt(0);return f>=48&&f<=57}function w(g,f){const{currentType:k}=f;if(k!==2)return!1;E(g);const q=y(g.currentPeek());return g.resetPeek(),q}function h(g,f){const{currentType:k}=f;if(k!==2)return!1;E(g);const q=g.currentPeek()==="-"?g.peek():g.currentPeek(),ne=$(q);return g.resetPeek(),ne}function p(g,f){const{currentType:k}=f;if(k!==2)return!1;E(g);const q=g.currentPeek()===Fs;return g.resetPeek(),q}function b(g,f){const{currentType:k}=f;if(k!==8)return!1;E(g);const q=g.currentPeek()===".";return g.resetPeek(),q}function v(g,f){const{currentType:k}=f;if(k!==9)return!1;E(g);const q=y(g.currentPeek());return g.resetPeek(),q}function P(g,f){const{currentType:k}=f;if(!(k===8||k===12))return!1;E(g);const q=g.currentPeek()===":";return g.resetPeek(),q}function A(g,f){const{currentType:k}=f;if(k!==10)return!1;const q=()=>{const be=g.currentPeek();return be==="{"?y(g.peek()):be==="@"||be==="%"||be==="|"||be===":"||be==="."||be===At||!be?!1:be===ot?(g.peek(),q()):y(be)},ne=q();return g.resetPeek(),ne}function T(g){E(g);const 
f=g.currentPeek()==="|";return g.resetPeek(),f}function j(g){const f=E(g),k=g.currentPeek()==="%"&&g.peek()==="{";return g.resetPeek(),{isModulo:k,hasSpace:f.length>0}}function J(g,f=!0){const k=(ne=!1,be="",Ye=!1)=>{const Ze=g.currentPeek();return Ze==="{"?be==="%"?!1:ne:Ze==="@"||!Ze?be==="%"?!0:ne:Ze==="%"?(g.peek(),k(ne,"%",!0)):Ze==="|"?be==="%"||Ye?!0:!(be===At||be===ot):Ze===At?(g.peek(),k(!0,At,Ye)):Ze===ot?(g.peek(),k(!0,ot,Ye)):!0},q=k();return f&&g.resetPeek(),q}function U(g,f){const k=g.currentChar();return k===Bt?Bt:f(k)?(g.next(),k):null}function O(g){return U(g,k=>{const q=k.charCodeAt(0);return q>=97&&q<=122||q>=65&&q<=90||q>=48&&q<=57||q===95||q===36})}function N(g){return U(g,k=>{const q=k.charCodeAt(0);return q>=48&&q<=57})}function D(g){return U(g,k=>{const q=k.charCodeAt(0);return q>=48&&q<=57||q>=65&&q<=70||q>=97&&q<=102})}function V(g){let f="",k="";for(;f=N(g);)k+=f;return k}function G(g){L(g);const f=g.currentChar();return f!=="%"&&d(_e.EXPECTED_TOKEN,i(),0,f),g.next(),"%"}function B(g){let f="";for(;;){const k=g.currentChar();if(k==="{"||k==="}"||k==="@"||k==="|"||!k)break;if(k==="%")if(J(g))f+=k,g.next();else break;else if(k===At||k===ot)if(J(g))f+=k,g.next();else{if(T(g))break;f+=k,g.next()}else f+=k,g.next()}return f}function I(g){L(g);let f="",k="";for(;f=O(g);)k+=f;return g.currentChar()===Bt&&d(_e.UNTERMINATED_CLOSING_BRACE,i(),0),k}function Y(g){L(g);let f="";return g.currentChar()==="-"?(g.next(),f+=`-${V(g)}`):f+=V(g),g.currentChar()===Bt&&d(_e.UNTERMINATED_CLOSING_BRACE,i(),0),f}function Q(g){L(g),S(g,"'");let f="",k="";const q=be=>be!==Fs&&be!==ot;for(;f=U(g,q);)f==="\\"?k+=he(g):k+=f;const ne=g.currentChar();return ne===ot||ne===Bt?(d(_e.UNTERMINATED_SINGLE_QUOTE_IN_PLACEHOLDER,i(),0),ne===ot&&(g.next(),S(g,"'")),k):(S(g,"'"),k)}function he(g){const f=g.currentChar();switch(f){case"\\":case"'":return g.next(),`\\${f}`;case"u":return me(g,f,4);case"U":return me(g,f,6);default:return d(_e.UNKNOWN_ESCAPE_SEQUENCE,i(),0,f),""}}function me(g,f,k){S(g,f);let q="";for(let ne=0;nene!=="{"&&ne!=="}"&&ne!==At&&ne!==ot;for(;f=U(g,q);)k+=f;return k}function Ae(g){let f="",k="";for(;f=O(g);)k+=f;return k}function Ie(g){const f=(k=!1,q)=>{const ne=g.currentChar();return ne==="{"||ne==="%"||ne==="@"||ne==="|"||!ne||ne===At?q:ne===ot?(q+=ne,g.next(),f(k,q)):(q+=ne,g.next(),f(!0,q))};return f(!1,"")}function Te(g){L(g);const f=S(g,"|");return L(g),f}function we(g,f){let k=null;switch(g.currentChar()){case"{":return f.braceNest>=1&&d(_e.NOT_ALLOW_NEST_PLACEHOLDER,i(),0),g.next(),k=m(f,2,"{"),L(g),f.braceNest++,k;case"}":return f.braceNest>0&&f.currentType===2&&d(_e.EMPTY_PLACEHOLDER,i(),0),g.next(),k=m(f,3,"}"),f.braceNest--,f.braceNest>0&&L(g),f.inLinked&&f.braceNest===0&&(f.inLinked=!1),k;case"@":return f.braceNest>0&&d(_e.UNTERMINATED_CLOSING_BRACE,i(),0),k=Me(g,f)||_(f),f.braceNest=0,k;default:let ne=!0,be=!0,Ye=!0;if(T(g))return f.braceNest>0&&d(_e.UNTERMINATED_CLOSING_BRACE,i(),0),k=m(f,1,Te(g)),f.braceNest=0,f.inLinked=!1,k;if(f.braceNest>0&&(f.currentType===5||f.currentType===6||f.currentType===7))return d(_e.UNTERMINATED_CLOSING_BRACE,i(),0),f.braceNest=0,xe(g,f);if(ne=w(g,f))return k=m(f,5,I(g)),L(g),k;if(be=h(g,f))return k=m(f,6,Y(g)),L(g),k;if(Ye=p(g,f))return k=m(f,7,Q(g)),L(g),k;if(!ne&&!be&&!Ye)return k=m(f,13,Se(g)),d(_e.INVALID_TOKEN_IN_PLACEHOLDER,i(),0,k.value),L(g),k;break}return k}function Me(g,f){const{currentType:k}=f;let q=null;const 
ne=g.currentChar();switch((k===8||k===9||k===12||k===10)&&(ne===ot||ne===At)&&d(_e.INVALID_LINKED_FORMAT,i(),0),ne){case"@":return g.next(),q=m(f,8,"@"),f.inLinked=!0,q;case".":return L(g),g.next(),m(f,9,".");case":":return L(g),g.next(),m(f,10,":");default:return T(g)?(q=m(f,1,Te(g)),f.braceNest=0,f.inLinked=!1,q):b(g,f)||P(g,f)?(L(g),Me(g,f)):v(g,f)?(L(g),m(f,12,Ae(g))):A(g,f)?(L(g),ne==="{"?we(g,f)||q:m(f,11,Ie(g))):(k===8&&d(_e.INVALID_LINKED_FORMAT,i(),0),f.braceNest=0,f.inLinked=!1,xe(g,f))}}function xe(g,f){let k={type:14};if(f.braceNest>0)return we(g,f)||_(f);if(f.inLinked)return Me(g,f)||_(f);switch(g.currentChar()){case"{":return we(g,f)||_(f);case"}":return d(_e.UNBALANCED_CLOSING_BRACE,i(),0),g.next(),m(f,3,"}");case"@":return Me(g,f)||_(f);default:if(T(g))return k=m(f,1,Te(g)),f.braceNest=0,f.inLinked=!1,k;const{isModulo:ne,hasSpace:be}=j(g);if(ne)return be?m(f,0,B(g)):m(f,4,G(g));if(J(g))return m(f,0,B(g));break}return k}function re(){const{currentType:g,offset:f,startLoc:k,endLoc:q}=l;return l.lastType=g,l.lastOffset=f,l.lastStartLoc=k,l.lastEndLoc=q,l.offset=o(),l.startLoc=i(),r.currentChar()===Bt?m(l,14):xe(r,l)}return{nextToken:re,currentOffset:o,currentPosition:i,context:u}}const Du="parser",_b=/(?:\\\\|\\'|\\u([0-9a-fA-F]{4})|\\U([0-9a-fA-F]{6}))/g;function Cb(e,t,n){switch(e){case"\\\\":return"\\";case"\\'":return"'";default:{const r=parseInt(t||n,16);return r<=55295||r>=57344?String.fromCodePoint(r):"�"}}}function Fu(e={}){const t=e.location!==!1,{onError:n}=e;function r(y,$,w,h,...p){const b=y.currentPosition();if(b.offset+=h,b.column+=h,n){const v=_r(w,b),P=pa($,v,{domain:Du,args:p});n(P)}}function o(y,$,w){const h={type:y,start:$,end:$};return t&&(h.loc={start:w,end:w}),h}function i(y,$,w,h){y.end=$,h&&(y.type=h),t&&y.loc&&(y.loc.end=w)}function a(y,$){const w=y.context(),h=o(3,w.offset,w.startLoc);return h.value=$,i(h,y.currentOffset(),y.currentPosition()),h}function s(y,$){const w=y.context(),{lastOffset:h,lastStartLoc:p}=w,b=o(5,h,p);return b.index=parseInt($,10),y.nextToken(),i(b,y.currentOffset(),y.currentPosition()),b}function l(y,$){const w=y.context(),{lastOffset:h,lastStartLoc:p}=w,b=o(4,h,p);return b.key=$,y.nextToken(),i(b,y.currentOffset(),y.currentPosition()),b}function u(y,$){const w=y.context(),{lastOffset:h,lastStartLoc:p}=w,b=o(9,h,p);return b.value=$.replace(_b,Cb),y.nextToken(),i(b,y.currentOffset(),y.currentPosition()),b}function c(y){const $=y.nextToken(),w=y.context(),{lastOffset:h,lastStartLoc:p}=w,b=o(8,h,p);return $.type!==12?(r(y,_e.UNEXPECTED_EMPTY_LINKED_MODIFIER,w.lastStartLoc,0),b.value="",i(b,h,p),{nextConsumeToken:$,node:b}):($.value==null&&r(y,_e.UNEXPECTED_LEXICAL_ANALYSIS,w.lastStartLoc,0,Ct($)),b.value=$.value||"",i(b,y.currentOffset(),y.currentPosition()),{node:b})}function d(y,$){const w=y.context(),h=o(7,w.offset,w.startLoc);return h.value=$,i(h,y.currentOffset(),y.currentPosition()),h}function m(y){const $=y.context(),w=o(6,$.offset,$.startLoc);let h=y.nextToken();if(h.type===9){const p=c(y);w.modifier=p.node,h=p.nextConsumeToken||y.nextToken()}switch(h.type!==10&&r(y,_e.UNEXPECTED_LEXICAL_ANALYSIS,$.lastStartLoc,0,Ct(h)),h=y.nextToken(),h.type===2&&(h=y.nextToken()),h.type){case 11:h.value==null&&r(y,_e.UNEXPECTED_LEXICAL_ANALYSIS,$.lastStartLoc,0,Ct(h)),w.key=d(y,h.value||"");break;case 5:h.value==null&&r(y,_e.UNEXPECTED_LEXICAL_ANALYSIS,$.lastStartLoc,0,Ct(h)),w.key=l(y,h.value||"");break;case 6:h.value==null&&r(y,_e.UNEXPECTED_LEXICAL_ANALYSIS,$.lastStartLoc,0,Ct(h)),w.key=s(y,h.value||"");break;case 
7:h.value==null&&r(y,_e.UNEXPECTED_LEXICAL_ANALYSIS,$.lastStartLoc,0,Ct(h)),w.key=u(y,h.value||"");break;default:r(y,_e.UNEXPECTED_EMPTY_LINKED_KEY,$.lastStartLoc,0);const p=y.context(),b=o(7,p.offset,p.startLoc);return b.value="",i(b,p.offset,p.startLoc),w.key=b,i(w,p.offset,p.startLoc),{nextConsumeToken:h,node:w}}return i(w,y.currentOffset(),y.currentPosition()),{node:w}}function _(y){const $=y.context(),w=$.currentType===1?y.currentOffset():$.offset,h=$.currentType===1?$.endLoc:$.startLoc,p=o(2,w,h);p.items=[];let b=null;do{const A=b||y.nextToken();switch(b=null,A.type){case 0:A.value==null&&r(y,_e.UNEXPECTED_LEXICAL_ANALYSIS,$.lastStartLoc,0,Ct(A)),p.items.push(a(y,A.value||""));break;case 6:A.value==null&&r(y,_e.UNEXPECTED_LEXICAL_ANALYSIS,$.lastStartLoc,0,Ct(A)),p.items.push(s(y,A.value||""));break;case 5:A.value==null&&r(y,_e.UNEXPECTED_LEXICAL_ANALYSIS,$.lastStartLoc,0,Ct(A)),p.items.push(l(y,A.value||""));break;case 7:A.value==null&&r(y,_e.UNEXPECTED_LEXICAL_ANALYSIS,$.lastStartLoc,0,Ct(A)),p.items.push(u(y,A.value||""));break;case 8:const T=m(y);p.items.push(T.node),b=T.nextConsumeToken||null;break}}while($.currentType!==14&&$.currentType!==1);const v=$.currentType===1?$.lastOffset:y.currentOffset(),P=$.currentType===1?$.lastEndLoc:y.currentPosition();return i(p,v,P),p}function S(y,$,w,h){const p=y.context();let b=h.items.length===0;const v=o(1,$,w);v.cases=[],v.cases.push(h);do{const P=_(y);b||(b=P.items.length===0),v.cases.push(P)}while(p.currentType!==14);return b&&r(y,_e.MUST_HAVE_MESSAGES_IN_PLURAL,w,0),i(v,y.currentOffset(),y.currentPosition()),v}function E(y){const $=y.context(),{offset:w,startLoc:h}=$,p=_(y);return $.currentType===14?p:S(y,w,h,p)}function L(y){const $=bb(y,br.assign({},e)),w=$.context(),h=o(0,w.offset,w.startLoc);return t&&h.loc&&(h.loc.source=y),h.body=E($),w.currentType!==14&&r($,_e.UNEXPECTED_LEXICAL_ANALYSIS,w.lastStartLoc,0,y[w.offset]||""),i(h,$.currentOffset(),$.currentPosition()),h}return{parse:L}}function Ct(e){if(e.type===14)return"EOF";const t=(e.value||"").replace(/\r?\n/gu,"\\n");return t.length>10?t.slice(0,9)+"…":t}function Sb(e,t={}){const n={ast:e,helpers:new Set};return{context:()=>n,helper:i=>(n.helpers.add(i),i)}}function js(e,t){for(let n=0;na;function l(L,y){a.code+=L,a.map&&(y&&y.loc&&y.loc!==Ru&&E(y.loc.start,Ab(y)),Nb(a,L))}function u(L,y=!0){const $=y?o:"";l(i?$+" ".repeat(L):$)}function c(L=!0){const y=++a.indentLevel;L&&u(y)}function d(L=!0){const y=--a.indentLevel;L&&u(y)}function m(){u(a.indentLevel)}const _=L=>`_${L}`,S=()=>a.needIndent;function E(L,y){a.map.addMapping({name:y,source:a.filename,original:{line:L.line,column:L.column-1},generated:{line:a.line,column:a.column-1}})}return n&&(a.map=new db.SourceMapGenerator,a.map.setSourceContent(r,a.source)),{context:s,push:l,indent:c,deindent:d,newline:m,helper:_,needIndent:S}}function kb(e,t){const{helper:n}=e;e.push(`${n("linked")}(`),hn(e,t.key),t.modifier?(e.push(", "),hn(e,t.modifier),e.push(", _type")):e.push(", undefined, _type"),e.push(")")}function $b(e,t){const{helper:n,needIndent:r}=e;e.push(`${n("normalize")}([`),e.indent(r());const o=t.items.length;for(let i=0;i1){e.push(`${n("plural")}([`),e.indent(r());const o=t.cases.length;for(let i=0;i{const n=br.isString(t.mode)?t.mode:"normal",r=br.isString(t.filename)?t.filename:"message.intl",o=!!t.sourceMap,i=t.breakLineCode!=null?t.breakLineCode:n==="arrow"?";":` -`,a=t.needIndent?t.needIndent:n!=="arrow",s=e.helpers||[],l=wb(e,{mode:n,filename:r,sourceMap:o,breakLineCode:i,needIndent:a});l.push(n==="normal"?"function 
__msg__ (ctx) {":"(ctx) => {"),l.indent(a),s.length>0&&(l.push(`const { ${s.map(d=>`${d}: _${d}`).join(", ")} } = ctx`),l.newline()),l.push("return "),hn(l,e),l.deindent(a),l.push("}");const{code:u,map:c}=l.context();return{ast:e,code:u,map:c?c.toJSON():void 0}};function Ab(e){switch(e.type){case 3:case 9:case 8:case 7:return e.value;case 5:return e.index.toString();case 4:return e.key;default:return}}function Nb(e,t,n=t.length){let r=0,o=-1;for(let i=0;i{a===void 0?a=s:a+=s},m[1]=()=>{a!==void 0&&(t.push(a),a=void 0)},m[2]=()=>{m[0](),o++},m[3]=()=>{if(o>0)o--,r=4,m[0]();else{if(o=0,a===void 0||(a=Vb(a),a===!1))return!1;m[1]()}};function _(){const S=e[n+1];if(r===5&&S==="'"||r===6&&S==='"')return n++,s="\\"+S,m[0](),!0}for(;r!==null;)if(n++,i=e[n],!(i==="\\"&&_())){if(l=jb(i),d=Gt[r],u=d[l]||d.l||8,u===8||(r=u[0],u[1]!==void 0&&(c=m[u[1]],c&&(s=i,c()===!1))))return;if(r===7)return t}}const Vs=new Map;function zu(e,t){return F.isObject(e)?e[t]:null}function xb(e,t){if(!F.isObject(e))return null;let n=Vs.get(t);if(n||(n=xu(t),n&&Vs.set(t,n)),!n)return null;const r=n.length;let o=e,i=0;for(;ie,Ub=e=>"",Uu="text",Wb=e=>e.length===0?"":e.join(""),Hb=F.toDisplayString;function xs(e,t){return e=Math.abs(e),t===2?e?e>1?1:0:1:e?Math.min(e,2):0}function qb(e){const t=F.isNumber(e.pluralIndex)?e.pluralIndex:-1;return e.named&&(F.isNumber(e.named.count)||F.isNumber(e.named.n))?F.isNumber(e.named.count)?e.named.count:F.isNumber(e.named.n)?e.named.n:t:t}function Gb(e,t){t.count||(t.count=e),t.n||(t.n=e)}function Wu(e={}){const t=e.locale,n=qb(e),r=F.isObject(e.pluralRules)&&F.isString(t)&&F.isFunction(e.pluralRules[t])?e.pluralRules[t]:xs,o=F.isObject(e.pluralRules)&&F.isString(t)&&F.isFunction(e.pluralRules[t])?xs:void 0,i=y=>y[r(n,y.length,o)],a=e.list||[],s=y=>a[y],l=e.named||{};F.isNumber(e.pluralIndex)&&Gb(n,l);const u=y=>l[y];function c(y){const $=F.isFunction(e.messages)?e.messages(y):F.isObject(e.messages)?e.messages[y]:!1;return $||(e.parent?e.parent.message(y):Ub)}const d=y=>e.modifiers?e.modifiers[y]:zb,m=F.isPlainObject(e.processor)&&F.isFunction(e.processor.normalize)?e.processor.normalize:Wb,_=F.isPlainObject(e.processor)&&F.isFunction(e.processor.interpolate)?e.processor.interpolate:Hb,S=F.isPlainObject(e.processor)&&F.isString(e.processor.type)?e.processor.type:Uu,L={list:s,named:u,plural:i,linked:(y,...$)=>{const[w,h]=$;let p="text",b="";$.length===1?F.isObject(w)?(b=w.modifier||b,p=w.type||p):F.isString(w)&&(b=w||b):$.length===2&&(F.isString(w)&&(b=w||b),F.isString(h)&&(p=h||p));let v=c(y)(L);return p==="vnode"&&F.isArray(v)&&b&&(v=v[0]),b?d(b)(v,p):v},message:c,type:S,interpolate:_,normalize:m};return L}let mn=null;function Kb(e){mn=e}function Yb(){return mn}function Xb(e,t,n){mn&&mn.emit(Vu.IntlifyDevToolsHooks.I18nInit,{timestamp:Date.now(),i18n:e,version:t,meta:n})}const Jb=Zb(Vu.IntlifyDevToolsHooks.FunctionTranslate);function Zb(e){return t=>mn&&mn.emit(e,t)}const Xt={NOT_FOUND_KEY:1,FALLBACK_TO_TRANSLATE:2,CANNOT_FORMAT_NUMBER:3,FALLBACK_TO_NUMBER_FORMAT:4,CANNOT_FORMAT_DATE:5,FALLBACK_TO_DATE_FORMAT:6,__EXTEND_POINT__:7},Qb={[Xt.NOT_FOUND_KEY]:"Not found '{key}' key in '{locale}' locale messages.",[Xt.FALLBACK_TO_TRANSLATE]:"Fall back to translate '{key}' key with '{target}' locale.",[Xt.CANNOT_FORMAT_NUMBER]:"Cannot format a number value due to not supported Intl.NumberFormat.",[Xt.FALLBACK_TO_NUMBER_FORMAT]:"Fall back to number format '{key}' key with '{target}' locale.",[Xt.CANNOT_FORMAT_DATE]:"Cannot format a date value due to not supported 
Intl.DateTimeFormat.",[Xt.FALLBACK_TO_DATE_FORMAT]:"Fall back to datetime format '{key}' key with '{target}' locale."};function e_(e,...t){return F.format(Qb[e],...t)}function Hu(e,t,n){return[...new Set([n,...F.isArray(t)?t:F.isObject(t)?Object.keys(t):F.isString(t)?[t]:[n]])]}function t_(e,t,n){const r=F.isString(n)?n:ga,o=e;o.__localeChainCache||(o.__localeChainCache=new Map);let i=o.__localeChainCache.get(r);if(!i){i=[];let a=[n];for(;F.isArray(a);)a=zs(i,a,t);const s=F.isArray(t)||!F.isPlainObject(t)?t:t.default?t.default:null;a=F.isString(s)?[s]:s,F.isArray(a)&&zs(i,a,!1),o.__localeChainCache.set(r,i)}return i}function zs(e,t,n){let r=!0;for(let o=0;o`${e.charAt(0).toLocaleUpperCase()}${e.substr(1)}`;function i_(){return{upper:(e,t)=>t==="text"&&F.isString(e)?e.toUpperCase():t==="vnode"&&F.isObject(e)&&"__v_isVNode"in e?e.children.toUpperCase():e,lower:(e,t)=>t==="text"&&F.isString(e)?e.toLowerCase():t==="vnode"&&F.isObject(e)&&"__v_isVNode"in e?e.children.toLowerCase():e,capitalize:(e,t)=>t==="text"&&F.isString(e)?Us(e):t==="vnode"&&F.isObject(e)&&"__v_isVNode"in e?Us(e.children):e}}let Gu;function a_(e){Gu=e}let Ku;function s_(e){Ku=e}let Yu;function l_(e){Yu=e}let Xu=null;const u_=e=>{Xu=e},c_=()=>Xu;let Ju=null;const d_=e=>{Ju=e},f_=()=>Ju;let Ws=0;function h_(e={}){const t=F.isString(e.version)?e.version:qu,n=F.isString(e.locale)?e.locale:ga,r=F.isArray(e.fallbackLocale)||F.isPlainObject(e.fallbackLocale)||F.isString(e.fallbackLocale)||e.fallbackLocale===!1?e.fallbackLocale:n,o=F.isPlainObject(e.messages)?e.messages:{[n]:{}},i=F.isPlainObject(e.datetimeFormats)?e.datetimeFormats:{[n]:{}},a=F.isPlainObject(e.numberFormats)?e.numberFormats:{[n]:{}},s=F.assign({},e.modifiers||{},i_()),l=e.pluralRules||{},u=F.isFunction(e.missing)?e.missing:null,c=F.isBoolean(e.missingWarn)||F.isRegExp(e.missingWarn)?e.missingWarn:!0,d=F.isBoolean(e.fallbackWarn)||F.isRegExp(e.fallbackWarn)?e.fallbackWarn:!0,m=!!e.fallbackFormat,_=!!e.unresolving,S=F.isFunction(e.postTranslation)?e.postTranslation:null,E=F.isPlainObject(e.processor)?e.processor:null,L=F.isBoolean(e.warnHtmlMessage)?e.warnHtmlMessage:!0,y=!!e.escapeParameter,$=F.isFunction(e.messageCompiler)?e.messageCompiler:Gu,w=F.isFunction(e.messageResolver)?e.messageResolver:Ku||zu,h=F.isFunction(e.localeFallbacker)?e.localeFallbacker:Yu||Hu,p=F.isObject(e.fallbackContext)?e.fallbackContext:void 0,b=F.isFunction(e.onWarn)?e.onWarn:F.warn,v=e,P=F.isObject(v.__datetimeFormatters)?v.__datetimeFormatters:new Map,A=F.isObject(v.__numberFormatters)?v.__numberFormatters:new Map,T=F.isObject(v.__meta)?v.__meta:{};Ws++;const j={version:t,cid:Ws,locale:n,fallbackLocale:r,messages:o,modifiers:s,pluralRules:l,missing:u,missingWarn:c,fallbackWarn:d,fallbackFormat:m,unresolving:_,postTranslation:S,processor:E,warnHtmlMessage:L,escapeParameter:y,messageCompiler:$,messageResolver:w,localeFallbacker:h,fallbackContext:p,onWarn:b,__meta:T};return j.datetimeFormats=i,j.numberFormats=a,j.__datetimeFormatters=P,j.__numberFormatters=A,j}function p_(e,t){return e instanceof RegExp?e.test(t):e}function m_(e,t){return e instanceof RegExp?e.test(t):e}function Dr(e,t,n,r,o){const{missing:i,onWarn:a}=e;if(i!==null){const s=i(e,n,t,o);return F.isString(s)?s:t}else return t}function v_(e,t,n){const r=e;r.__localeChainCache=new Map,e.localeFallbacker(e,n,t)}const g_=e=>e;let wi=Object.create(null);function y_(){wi=Object.create(null)}function b_(e,t={}){{const r=(t.onCacheKey||g_)(e),o=wi[r];if(o)return o;let i=!1;const 
a=t.onError||pn.defaultOnError;t.onError=u=>{i=!0,a(u)};const{code:s}=pn.baseCompile(e,t),l=new Function(`return ${s}`)();return i?l:wi[r]=l}}let Zu=pn.CompileErrorCodes.__EXTEND_POINT__;const Ho=()=>++Zu,St={INVALID_ARGUMENT:Zu,INVALID_DATE_ARGUMENT:Ho(),INVALID_ISO_DATE_ARGUMENT:Ho(),__EXTEND_POINT__:Ho()};function Jt(e){return pn.createCompileError(e,null,void 0)}St.INVALID_ARGUMENT+"",St.INVALID_DATE_ARGUMENT+"",St.INVALID_ISO_DATE_ARGUMENT+"";const Hs=()=>"",jt=e=>F.isFunction(e);function __(e,...t){const{fallbackFormat:n,postTranslation:r,unresolving:o,messageCompiler:i,fallbackLocale:a,messages:s}=e,[l,u]=tc(...t),c=F.isBoolean(u.missingWarn)?u.missingWarn:e.missingWarn,d=F.isBoolean(u.fallbackWarn)?u.fallbackWarn:e.fallbackWarn,m=F.isBoolean(u.escapeParameter)?u.escapeParameter:e.escapeParameter,_=!!u.resolvedMessage,S=F.isString(u.default)||F.isBoolean(u.default)?F.isBoolean(u.default)?i?l:()=>l:u.default:n?i?l:()=>l:"",E=n||S!=="",L=F.isString(u.locale)?u.locale:e.locale;m&&C_(u);let[y,$,w]=_?[l,L,s[L]||{}]:Qu(e,l,L,a,d,c),h=y,p=l;if(!_&&!(F.isString(h)||jt(h))&&E&&(h=S,p=h),!_&&(!(F.isString(h)||jt(h))||!F.isString($)))return o?Br:l;let b=!1;const v=()=>{b=!0},P=jt(h)?h:ec(e,l,$,h,p,v);if(b)return h;const A=w_(e,$,w,u),T=Wu(A),j=S_(e,P,T);return r?r(j,l):j}function C_(e){F.isArray(e.list)?e.list=e.list.map(t=>F.isString(t)?F.escapeHtml(t):t):F.isObject(e.named)&&Object.keys(e.named).forEach(t=>{F.isString(e.named[t])&&(e.named[t]=F.escapeHtml(e.named[t]))})}function Qu(e,t,n,r,o,i){const{messages:a,onWarn:s,messageResolver:l,localeFallbacker:u}=e,c=u(e,r,n);let d={},m,_=null;const S="translate";for(let E=0;Er;return u.locale=n,u.key=t,u}const l=a(r,E_(e,n,o,r,s,i));return l.locale=n,l.key=t,l.source=r,l}function S_(e,t,n){return t(n)}function tc(...e){const[t,n,r]=e,o={};if(!F.isString(t)&&!F.isNumber(t)&&!jt(t))throw Jt(St.INVALID_ARGUMENT);const i=F.isNumber(t)?String(t):(jt(t),t);return F.isNumber(n)?o.plural=n:F.isString(n)?o.default=n:F.isPlainObject(n)&&!F.isEmptyObject(n)?o.named=n:F.isArray(n)&&(o.list=n),F.isNumber(r)?o.plural=r:F.isString(r)?o.default=r:F.isPlainObject(r)&&F.assign(o,r),[i,o]}function E_(e,t,n,r,o,i){return{warnHtmlMessage:o,onError:a=>{throw i&&i(a),a},onCacheKey:a=>F.generateFormatCacheKey(t,n,a)}}function w_(e,t,n,r){const{modifiers:o,pluralRules:i,messageResolver:a,fallbackLocale:s,fallbackWarn:l,missingWarn:u,fallbackContext:c}=e,m={locale:t,modifiers:o,pluralRules:i,messages:_=>{let S=a(n,_);if(S==null&&c){const[,,E]=Qu(c,_,t,s,l,u);S=a(E,_)}if(F.isString(S)){let E=!1;const y=ec(e,_,t,S,_,()=>{E=!0});return E?Hs:y}else return jt(S)?S:Hs}};return e.processor&&(m.processor=e.processor),r.list&&(m.list=r.list),r.named&&(m.named=r.named),F.isNumber(r.plural)&&(m.pluralIndex=r.plural),m}function k_(e,...t){const{datetimeFormats:n,unresolving:r,fallbackLocale:o,onWarn:i,localeFallbacker:a}=e,{__datetimeFormatters:s}=e,[l,u,c,d]=rc(...t),m=F.isBoolean(c.missingWarn)?c.missingWarn:e.missingWarn;F.isBoolean(c.fallbackWarn)?c.fallbackWarn:e.fallbackWarn;const _=!!c.part,S=F.isString(c.locale)?c.locale:e.locale,E=a(e,o,S);if(!F.isString(l)||l==="")return new Intl.DateTimeFormat(S,d).format(u);let L={},y,$=null;const w="datetime format";for(let b=0;b{nc.includes(l)?a[l]=n[l]:i[l]=n[l]}),F.isString(r)?i.locale=r:F.isPlainObject(r)&&(a=r),F.isPlainObject(o)&&(a=o),[i.key||"",s,i,a]}function $_(e,t,n){const r=e;for(const o in n){const i=`${t}__${o}`;r.__datetimeFormatters.has(i)&&r.__datetimeFormatters.delete(i)}}function 
O_(e,...t){const{numberFormats:n,unresolving:r,fallbackLocale:o,onWarn:i,localeFallbacker:a}=e,{__numberFormatters:s}=e,[l,u,c,d]=ic(...t),m=F.isBoolean(c.missingWarn)?c.missingWarn:e.missingWarn;F.isBoolean(c.fallbackWarn)?c.fallbackWarn:e.fallbackWarn;const _=!!c.part,S=F.isString(c.locale)?c.locale:e.locale,E=a(e,o,S);if(!F.isString(l)||l==="")return new Intl.NumberFormat(S,d).format(u);let L={},y,$=null;const w="number format";for(let b=0;b{oc.includes(l)?a[l]=n[l]:i[l]=n[l]}),F.isString(r)?i.locale=r:F.isPlainObject(r)&&(a=r),F.isPlainObject(o)&&(a=o),[i.key||"",s,i,a]}function L_(e,t,n){const r=e;for(const o in n){const i=`${t}__${o}`;r.__numberFormatters.has(i)&&r.__numberFormatters.delete(i)}}ye.CompileErrorCodes=pn.CompileErrorCodes;ye.createCompileError=pn.createCompileError;ye.CoreErrorCodes=St;ye.CoreWarnCodes=Xt;ye.DATETIME_FORMAT_OPTIONS_KEYS=nc;ye.DEFAULT_LOCALE=ga;ye.DEFAULT_MESSAGE_DATA_TYPE=Uu;ye.MISSING_RESOLVE_VALUE=o_;ye.NOT_REOSLVED=Br;ye.NUMBER_FORMAT_OPTIONS_KEYS=oc;ye.VERSION=qu;ye.clearCompileCache=y_;ye.clearDateTimeFormat=$_;ye.clearNumberFormat=L_;ye.compileToFunction=b_;ye.createCoreContext=h_;ye.createCoreError=Jt;ye.createMessageContext=Wu;ye.datetime=k_;ye.fallbackWithLocaleChain=t_;ye.fallbackWithSimple=Hu;ye.getAdditionalMeta=c_;ye.getDevToolsHook=Yb;ye.getFallbackContext=f_;ye.getWarnMessage=e_;ye.handleMissing=Dr;ye.initI18nDevTools=Xb;ye.isMessageFunction=jt;ye.isTranslateFallbackWarn=p_;ye.isTranslateMissingWarn=m_;ye.number=O_;ye.parse=xu;ye.parseDateTimeArgs=rc;ye.parseNumberArgs=ic;ye.parseTranslateArgs=tc;ye.registerLocaleFallbacker=l_;ye.registerMessageCompiler=a_;ye.registerMessageResolver=s_;ye.resolveValue=xb;ye.resolveWithKeyValue=zu;ye.setAdditionalMeta=u_;ye.setDevToolsHook=Kb;ye.setFallbackContext=d_;ye.translate=__;ye.translateDevTools=Jb;ye.updateFallbackLocale=v_;Eu.exports=ye;var T_=Eu.exports;const A_=td(Zc);/*! - * vue-i18n v9.2.2 - * (c) 2022 kazuya kawaguchi - * Released under the MIT License. - */Object.defineProperty(Ot,"__esModule",{value:!0});var M=aa,ge=T_,Ee=A_;const ac="9.2.2";let sc=ge.CoreWarnCodes.__EXTEND_POINT__;const tn=()=>++sc,Qe={FALLBACK_TO_ROOT:sc,NOT_SUPPORTED_PRESERVE:tn(),NOT_SUPPORTED_FORMATTER:tn(),NOT_SUPPORTED_PRESERVE_DIRECTIVE:tn(),NOT_SUPPORTED_GET_CHOICE_INDEX:tn(),COMPONENT_NAME_LEGACY_COMPATIBLE:tn(),NOT_FOUND_PARENT_SCOPE:tn()},N_={[Qe.FALLBACK_TO_ROOT]:"Fall back to {type} '{key}' with root locale.",[Qe.NOT_SUPPORTED_PRESERVE]:"Not supported 'preserve'.",[Qe.NOT_SUPPORTED_FORMATTER]:"Not supported 'formatter'.",[Qe.NOT_SUPPORTED_PRESERVE_DIRECTIVE]:"Not supported 'preserveDirectiveContent'.",[Qe.NOT_SUPPORTED_GET_CHOICE_INDEX]:"Not supported 'getChoiceIndex'.",[Qe.COMPONENT_NAME_LEGACY_COMPATIBLE]:"Component name legacy compatible: '{name}' -> 'i18n'",[Qe.NOT_FOUND_PARENT_SCOPE]:"Not found parent scope. 
use the global scope."};function mt(e,...t){return M.format(N_[e],...t)}let lc=ge.CompileErrorCodes.__EXTEND_POINT__;const at=()=>++lc,Oe={UNEXPECTED_RETURN_TYPE:lc,INVALID_ARGUMENT:at(),MUST_BE_CALL_SETUP_TOP:at(),NOT_INSLALLED:at(),NOT_AVAILABLE_IN_LEGACY_MODE:at(),REQUIRED_VALUE:at(),INVALID_VALUE:at(),CANNOT_SETUP_VUE_DEVTOOLS_PLUGIN:at(),NOT_INSLALLED_WITH_PROVIDE:at(),UNEXPECTED_ERROR:at(),NOT_COMPATIBLE_LEGACY_VUE_I18N:at(),BRIDGE_SUPPORT_VUE_2_ONLY:at(),MUST_DEFINE_I18N_OPTION_IN_ALLOW_COMPOSITION:at(),NOT_AVAILABLE_COMPOSITION_IN_LEGACY:at(),__EXTEND_POINT__:at()};function Ge(e,...t){return ge.createCompileError(e,null,{messages:P_,args:t})}const P_={[Oe.UNEXPECTED_RETURN_TYPE]:"Unexpected return type in composer",[Oe.INVALID_ARGUMENT]:"Invalid argument",[Oe.MUST_BE_CALL_SETUP_TOP]:"Must be called at the top of a `setup` function",[Oe.NOT_INSLALLED]:"Need to install with `app.use` function",[Oe.UNEXPECTED_ERROR]:"Unexpected error",[Oe.NOT_AVAILABLE_IN_LEGACY_MODE]:"Not available in legacy mode",[Oe.REQUIRED_VALUE]:"Required in value: {0}",[Oe.INVALID_VALUE]:"Invalid value",[Oe.CANNOT_SETUP_VUE_DEVTOOLS_PLUGIN]:"Cannot setup vue-devtools plugin",[Oe.NOT_INSLALLED_WITH_PROVIDE]:"Need to install with `provide` function",[Oe.NOT_COMPATIBLE_LEGACY_VUE_I18N]:"Not compatible legacy VueI18n.",[Oe.BRIDGE_SUPPORT_VUE_2_ONLY]:"vue-i18n-bridge support Vue 2.x only",[Oe.MUST_DEFINE_I18N_OPTION_IN_ALLOW_COMPOSITION]:"Must define ‘i18n’ option or custom block in Composition API with using local scope in Legacy API mode",[Oe.NOT_AVAILABLE_COMPOSITION_IN_LEGACY]:"Not available Compostion API in Legacy API mode. Please make sure that the legacy API mode is working properly"},ki=M.makeSymbol("__transrateVNode"),$i=M.makeSymbol("__datetimeParts"),Oi=M.makeSymbol("__numberParts"),Li=M.makeSymbol("__enableEmitter"),Ti=M.makeSymbol("__disableEmitter"),uc=M.makeSymbol("__setPluralRules");M.makeSymbol("__intlifyMeta");const cc=M.makeSymbol("__injectWithOption"),I_="__VUE_I18N_BRIDGE__";function Ai(e){if(!M.isObject(e))return e;for(const t in e)if(M.hasOwn(e,t))if(!t.includes("."))M.isObject(e[t])&&Ai(e[t]);else{const n=t.split("."),r=n.length-1;let o=e;for(let i=0;i{if("locale"in s&&"resource"in s){const{locale:l,resource:u}=s;l?(a[l]=a[l]||{},Nn(u,a[l])):Nn(u,a)}else M.isString(s)&&Nn(JSON.parse(s),a)}),o==null&&i)for(const s in a)M.hasOwn(a,s)&&Ai(a[s]);return a}const Yn=e=>!M.isObject(e)||M.isArray(e);function Nn(e,t){if(Yn(e)||Yn(t))throw Ge(Oe.INVALID_VALUE);for(const n in e)M.hasOwn(e,n)&&(Yn(e[n])||Yn(t[n])?t[n]=e[n]:Nn(e[n],t[n]))}function dc(e){return e.type}function fc(e,t,n){let r=M.isObject(t.messages)?t.messages:{};"__i18nGlobal"in n&&(r=Fr(e.locale.value,{messages:r,__i18n:n.__i18nGlobal}));const o=Object.keys(r);o.length&&o.forEach(i=>{e.mergeLocaleMessage(i,r[i])});{if(M.isObject(t.datetimeFormats)){const i=Object.keys(t.datetimeFormats);i.length&&i.forEach(a=>{e.mergeDateTimeFormat(a,t.datetimeFormats[a])})}if(M.isObject(t.numberFormats)){const i=Object.keys(t.numberFormats);i.length&&i.forEach(a=>{e.mergeNumberFormat(a,t.numberFormats[a])})}}}function qs(e){return Ee.createVNode(Ee.Text,null,e,0)}const Gs="__INTLIFY_META__";let Ks=0;function Ys(e){return(t,n,r,o)=>e(n,r,Ee.getCurrentInstance()||void 0,o)}const M_=()=>{const e=Ee.getCurrentInstance();let t=null;return e&&(t=dc(e)[Gs])?{[Gs]:t}:null};function ya(e={},t){const{__root:n}=e,r=n===void 0;let o=M.isBoolean(e.inheritLocale)?e.inheritLocale:!0;const 
i=Ee.ref(n&&o?n.locale.value:M.isString(e.locale)?e.locale:ge.DEFAULT_LOCALE),a=Ee.ref(n&&o?n.fallbackLocale.value:M.isString(e.fallbackLocale)||M.isArray(e.fallbackLocale)||M.isPlainObject(e.fallbackLocale)||e.fallbackLocale===!1?e.fallbackLocale:i.value),s=Ee.ref(Fr(i.value,e)),l=Ee.ref(M.isPlainObject(e.datetimeFormats)?e.datetimeFormats:{[i.value]:{}}),u=Ee.ref(M.isPlainObject(e.numberFormats)?e.numberFormats:{[i.value]:{}});let c=n?n.missingWarn:M.isBoolean(e.missingWarn)||M.isRegExp(e.missingWarn)?e.missingWarn:!0,d=n?n.fallbackWarn:M.isBoolean(e.fallbackWarn)||M.isRegExp(e.fallbackWarn)?e.fallbackWarn:!0,m=n?n.fallbackRoot:M.isBoolean(e.fallbackRoot)?e.fallbackRoot:!0,_=!!e.fallbackFormat,S=M.isFunction(e.missing)?e.missing:null,E=M.isFunction(e.missing)?Ys(e.missing):null,L=M.isFunction(e.postTranslation)?e.postTranslation:null,y=n?n.warnHtmlMessage:M.isBoolean(e.warnHtmlMessage)?e.warnHtmlMessage:!0,$=!!e.escapeParameter;const w=n?n.modifiers:M.isPlainObject(e.modifiers)?e.modifiers:{};let h=e.pluralRules||n&&n.pluralRules,p;p=(()=>{r&&ge.setFallbackContext(null);const R={version:ac,locale:i.value,fallbackLocale:a.value,messages:s.value,modifiers:w,pluralRules:h,missing:E===null?void 0:E,missingWarn:c,fallbackWarn:d,fallbackFormat:_,unresolving:!0,postTranslation:L===null?void 0:L,warnHtmlMessage:y,escapeParameter:$,messageResolver:e.messageResolver,__meta:{framework:"vue"}};R.datetimeFormats=l.value,R.numberFormats=u.value,R.__datetimeFormatters=M.isPlainObject(p)?p.__datetimeFormatters:void 0,R.__numberFormatters=M.isPlainObject(p)?p.__numberFormatters:void 0,R.__v_emitter=M.isPlainObject(p)?p.__v_emitter:void 0;const X=ge.createCoreContext(R);return r&&ge.setFallbackContext(X),X})(),ge.updateFallbackLocale(p,i.value,a.value);function v(){return[i.value,a.value,s.value,l.value,u.value]}const P=Ee.computed({get:()=>i.value,set:R=>{i.value=R,p.locale=i.value}}),A=Ee.computed({get:()=>a.value,set:R=>{a.value=R,p.fallbackLocale=a.value,ge.updateFallbackLocale(p,i.value,R)}}),T=Ee.computed(()=>s.value),j=Ee.computed(()=>l.value),J=Ee.computed(()=>u.value);function U(){return M.isFunction(L)?L:null}function O(R){L=R,p.postTranslation=R}function N(){return S}function D(R){R!==null&&(E=Ys(R)),S=R,p.missing=E}function V(R,X){return R!=="translate"||!X.resolvedMessage}const G=(R,X,Ue,W,ie,le)=>{v();let Ce;try{ge.setAdditionalMeta(M_()),r||(p.fallbackContext=n?ge.getFallbackContext():void 0),Ce=R(p)}finally{ge.setAdditionalMeta(null),r||(p.fallbackContext=void 0)}if(M.isNumber(Ce)&&Ce===ge.NOT_REOSLVED){const[st,Tt]=X();if(n&&M.isString(st)&&V(Ue,Tt)){m&&(ge.isTranslateFallbackWarn(d,st)||ge.isTranslateMissingWarn(c,st))&&M.warn(mt(Qe.FALLBACK_TO_ROOT,{key:st,type:Ue}));{const{__v_emitter:$a}=p;$a&&m&&$a.emit("fallback",{type:Ue,key:st,to:"global",groupId:`${Ue}:${st}`})}}return n&&m?W(n):ie(st)}else{if(le(Ce))return Ce;throw Ge(Oe.UNEXPECTED_RETURN_TYPE)}};function B(...R){return G(X=>Reflect.apply(ge.translate,null,[X,...R]),()=>ge.parseTranslateArgs(...R),"translate",X=>Reflect.apply(X.t,X,[...R]),X=>X,X=>M.isString(X))}function I(...R){const[X,Ue,W]=R;if(W&&!M.isObject(W))throw Ge(Oe.INVALID_ARGUMENT);return B(X,Ue,M.assign({resolvedMessage:!0},W||{}))}function Y(...R){return G(X=>Reflect.apply(ge.datetime,null,[X,...R]),()=>ge.parseDateTimeArgs(...R),"datetime format",X=>Reflect.apply(X.d,X,[...R]),()=>ge.MISSING_RESOLVE_VALUE,X=>M.isString(X))}function Q(...R){return G(X=>Reflect.apply(ge.number,null,[X,...R]),()=>ge.parseNumberArgs(...R),"number 
format",X=>Reflect.apply(X.n,X,[...R]),()=>ge.MISSING_RESOLVE_VALUE,X=>M.isString(X))}function he(R){return R.map(X=>M.isString(X)||M.isNumber(X)||M.isBoolean(X)?qs(String(X)):X)}const Se={normalize:he,interpolate:R=>R,type:"vnode"};function Ae(...R){return G(X=>{let Ue;const W=X;try{W.processor=Se,Ue=Reflect.apply(ge.translate,null,[W,...R])}finally{W.processor=null}return Ue},()=>ge.parseTranslateArgs(...R),"translate",X=>X[ki](...R),X=>[qs(X)],X=>M.isArray(X))}function Ie(...R){return G(X=>Reflect.apply(ge.number,null,[X,...R]),()=>ge.parseNumberArgs(...R),"number format",X=>X[Oi](...R),()=>[],X=>M.isString(X)||M.isArray(X))}function Te(...R){return G(X=>Reflect.apply(ge.datetime,null,[X,...R]),()=>ge.parseDateTimeArgs(...R),"datetime format",X=>X[$i](...R),()=>[],X=>M.isString(X)||M.isArray(X))}function we(R){h=R,p.pluralRules=h}function Me(R,X){const Ue=M.isString(X)?X:i.value,W=g(Ue);return p.messageResolver(W,R)!==null}function xe(R){let X=null;const Ue=ge.fallbackWithLocaleChain(p,a.value,i.value);for(let W=0;W{o&&(i.value=R,p.locale=R,ge.updateFallbackLocale(p,i.value,a.value))}),Ee.watch(n.fallbackLocale,R=>{o&&(a.value=R,p.fallbackLocale=R,ge.updateFallbackLocale(p,i.value,a.value))}));const Z={id:Ks,locale:P,fallbackLocale:A,get inheritLocale(){return o},set inheritLocale(R){o=R,R&&n&&(i.value=n.locale.value,a.value=n.fallbackLocale.value,ge.updateFallbackLocale(p,i.value,a.value))},get availableLocales(){return Object.keys(s.value).sort()},messages:T,get modifiers(){return w},get pluralRules(){return h||{}},get isGlobal(){return r},get missingWarn(){return c},set missingWarn(R){c=R,p.missingWarn=c},get fallbackWarn(){return d},set fallbackWarn(R){d=R,p.fallbackWarn=d},get fallbackRoot(){return m},set fallbackRoot(R){m=R},get fallbackFormat(){return _},set fallbackFormat(R){_=R,p.fallbackFormat=_},get warnHtmlMessage(){return y},set warnHtmlMessage(R){y=R,p.warnHtmlMessage=R},get escapeParameter(){return $},set escapeParameter(R){$=R,p.escapeParameter=R},t:B,getLocaleMessage:g,setLocaleMessage:f,mergeLocaleMessage:k,getPostTranslationHandler:U,setPostTranslationHandler:O,getMissingHandler:N,setMissingHandler:D,[uc]:we};return Z.datetimeFormats=j,Z.numberFormats=J,Z.rt=I,Z.te=Me,Z.tm=re,Z.d=Y,Z.n=Q,Z.getDateTimeFormat=q,Z.setDateTimeFormat=ne,Z.mergeDateTimeFormat=be,Z.getNumberFormat=Ye,Z.setNumberFormat=Ze,Z.mergeNumberFormat=Sn,Z[cc]=e.__injectWithOption,Z[ki]=Ae,Z[$i]=Te,Z[Oi]=Ie,Z[Li]=R=>{p.__v_emitter=R},Z[Ti]=()=>{p.__v_emitter=void 0},Z}function R_(e){const t=M.isString(e.locale)?e.locale:ge.DEFAULT_LOCALE,n=M.isString(e.fallbackLocale)||M.isArray(e.fallbackLocale)||M.isPlainObject(e.fallbackLocale)||e.fallbackLocale===!1?e.fallbackLocale:t,r=M.isFunction(e.missing)?e.missing:void 0,o=M.isBoolean(e.silentTranslationWarn)||M.isRegExp(e.silentTranslationWarn)?!e.silentTranslationWarn:!0,i=M.isBoolean(e.silentFallbackWarn)||M.isRegExp(e.silentFallbackWarn)?!e.silentFallbackWarn:!0,a=M.isBoolean(e.fallbackRoot)?e.fallbackRoot:!0,s=!!e.formatFallbackMessages,l=M.isPlainObject(e.modifiers)?e.modifiers:{},u=e.pluralizationRules,c=M.isFunction(e.postTranslation)?e.postTranslation:void 0,d=M.isString(e.warnHtmlInMessage)?e.warnHtmlInMessage!=="off":!0,m=!!e.escapeParameterHtml,_=M.isBoolean(e.sync)?e.sync:!0;e.formatter&&M.warn(mt(Qe.NOT_SUPPORTED_FORMATTER)),e.preserveDirectiveContent&&M.warn(mt(Qe.NOT_SUPPORTED_PRESERVE_DIRECTIVE));let S=e.messages;if(M.isPlainObject(e.sharedMessages)){const p=e.sharedMessages;S=Object.keys(p).reduce((v,P)=>{const A=v[P]||(v[P]={});return 
M.assign(A,p[P]),v},S||{})}const{__i18n:E,__root:L,__injectWithOption:y}=e,$=e.datetimeFormats,w=e.numberFormats,h=e.flatJson;return{locale:t,fallbackLocale:n,messages:S,flatJson:h,datetimeFormats:$,numberFormats:w,missing:r,missingWarn:o,fallbackWarn:i,fallbackRoot:a,fallbackFormat:s,modifiers:l,pluralRules:u,postTranslation:c,warnHtmlMessage:d,escapeParameter:m,messageResolver:e.messageResolver,inheritLocale:_,__i18n:E,__root:L,__injectWithOption:y}}function Ni(e={},t){{const n=ya(R_(e)),r={id:n.id,get locale(){return n.locale.value},set locale(o){n.locale.value=o},get fallbackLocale(){return n.fallbackLocale.value},set fallbackLocale(o){n.fallbackLocale.value=o},get messages(){return n.messages.value},get datetimeFormats(){return n.datetimeFormats.value},get numberFormats(){return n.numberFormats.value},get availableLocales(){return n.availableLocales},get formatter(){return M.warn(mt(Qe.NOT_SUPPORTED_FORMATTER)),{interpolate(){return[]}}},set formatter(o){M.warn(mt(Qe.NOT_SUPPORTED_FORMATTER))},get missing(){return n.getMissingHandler()},set missing(o){n.setMissingHandler(o)},get silentTranslationWarn(){return M.isBoolean(n.missingWarn)?!n.missingWarn:n.missingWarn},set silentTranslationWarn(o){n.missingWarn=M.isBoolean(o)?!o:o},get silentFallbackWarn(){return M.isBoolean(n.fallbackWarn)?!n.fallbackWarn:n.fallbackWarn},set silentFallbackWarn(o){n.fallbackWarn=M.isBoolean(o)?!o:o},get modifiers(){return n.modifiers},get formatFallbackMessages(){return n.fallbackFormat},set formatFallbackMessages(o){n.fallbackFormat=o},get postTranslation(){return n.getPostTranslationHandler()},set postTranslation(o){n.setPostTranslationHandler(o)},get sync(){return n.inheritLocale},set sync(o){n.inheritLocale=o},get warnHtmlInMessage(){return n.warnHtmlMessage?"warn":"off"},set warnHtmlInMessage(o){n.warnHtmlMessage=o!=="off"},get escapeParameterHtml(){return n.escapeParameter},set escapeParameterHtml(o){n.escapeParameter=o},get preserveDirectiveContent(){return M.warn(mt(Qe.NOT_SUPPORTED_PRESERVE_DIRECTIVE)),!0},set preserveDirectiveContent(o){M.warn(mt(Qe.NOT_SUPPORTED_PRESERVE_DIRECTIVE))},get pluralizationRules(){return n.pluralRules||{}},__composer:n,t(...o){const[i,a,s]=o,l={};let u=null,c=null;if(!M.isString(i))throw Ge(Oe.INVALID_ARGUMENT);const d=i;return M.isString(a)?l.locale=a:M.isArray(a)?u=a:M.isPlainObject(a)&&(c=a),M.isArray(s)?u=s:M.isPlainObject(s)&&(c=s),Reflect.apply(n.t,n,[d,u||c||{},l])},rt(...o){return Reflect.apply(n.rt,n,[...o])},tc(...o){const[i,a,s]=o,l={plural:1};let u=null,c=null;if(!M.isString(i))throw Ge(Oe.INVALID_ARGUMENT);const d=i;return M.isString(a)?l.locale=a:M.isNumber(a)?l.plural=a:M.isArray(a)?u=a:M.isPlainObject(a)&&(c=a),M.isString(s)?l.locale=s:M.isArray(s)?u=s:M.isPlainObject(s)&&(c=s),Reflect.apply(n.t,n,[d,u||c||{},l])},te(o,i){return n.te(o,i)},tm(o){return n.tm(o)},getLocaleMessage(o){return n.getLocaleMessage(o)},setLocaleMessage(o,i){n.setLocaleMessage(o,i)},mergeLocaleMessage(o,i){n.mergeLocaleMessage(o,i)},d(...o){return Reflect.apply(n.d,n,[...o])},getDateTimeFormat(o){return n.getDateTimeFormat(o)},setDateTimeFormat(o,i){n.setDateTimeFormat(o,i)},mergeDateTimeFormat(o,i){n.mergeDateTimeFormat(o,i)},n(...o){return Reflect.apply(n.n,n,[...o])},getNumberFormat(o){return n.getNumberFormat(o)},setNumberFormat(o,i){n.setNumberFormat(o,i)},mergeNumberFormat(o,i){n.mergeNumberFormat(o,i)},getChoiceIndex(o,i){return M.warn(mt(Qe.NOT_SUPPORTED_GET_CHOICE_INDEX)),-1},__onComponentInstanceCreated(o){const{componentInstanceCreatedListener:i}=e;i&&i(o,r)}};return 
r.__enableEmitter=o=>{const i=n;i[Li]&&i[Li](o)},r.__disableEmitter=()=>{const o=n;o[Ti]&&o[Ti]()},r}}const ba={tag:{type:[String,Object]},locale:{type:String},scope:{type:String,validator:e=>e==="parent"||e==="global",default:"parent"},i18n:{type:Object}};function B_({slots:e},t){return t.length===1&&t[0]==="default"?(e.default?e.default():[]).reduce((r,o)=>r=[...r,...M.isArray(o.children)?o.children:[o]],[]):t.reduce((n,r)=>{const o=e[r];return o&&(n[r]=o()),n},{})}function hc(e){return Ee.Fragment}const or={name:"i18n-t",props:M.assign({keypath:{type:String,required:!0},plural:{type:[Number,String],validator:e=>M.isNumber(e)||!isNaN(e)}},ba),setup(e,t){const{slots:n,attrs:r}=t,o=e.i18n||jr({useScope:e.scope,__useComponent:!0});return()=>{const i=Object.keys(n).filter(d=>d!=="_"),a={};e.locale&&(a.locale=e.locale),e.plural!==void 0&&(a.plural=M.isString(e.plural)?+e.plural:e.plural);const s=B_(t,i),l=o[ki](e.keypath,s,a),u=M.assign({},r),c=M.isString(e.tag)||M.isObject(e.tag)?e.tag:hc();return Ee.h(c,u,l)}}};function D_(e){return M.isArray(e)&&!M.isString(e[0])}function pc(e,t,n,r){const{slots:o,attrs:i}=t;return()=>{const a={part:!0};let s={};e.locale&&(a.locale=e.locale),M.isString(e.format)?a.key=e.format:M.isObject(e.format)&&(M.isString(e.format.key)&&(a.key=e.format.key),s=Object.keys(e.format).reduce((m,_)=>n.includes(_)?M.assign({},m,{[_]:e.format[_]}):m,{}));const l=r(e.value,a,s);let u=[a.key];M.isArray(l)?u=l.map((m,_)=>{const S=o[m.type],E=S?S({[m.type]:m.value,index:_,parts:l}):[m.value];return D_(E)&&(E[0].key=`${m.type}-${_}`),E}):M.isString(l)&&(u=[l]);const c=M.assign({},i),d=M.isString(e.tag)||M.isObject(e.tag)?e.tag:hc();return Ee.h(d,c,u)}}const Pi={name:"i18n-n",props:M.assign({value:{type:Number,required:!0},format:{type:[String,Object]}},ba),setup(e,t){const n=e.i18n||jr({useScope:"parent",__useComponent:!0});return pc(e,t,ge.NUMBER_FORMAT_OPTIONS_KEYS,(...r)=>n[Oi](...r))}},Ii={name:"i18n-d",props:M.assign({value:{type:[Number,Date],required:!0},format:{type:[String,Object]}},ba),setup(e,t){const n=e.i18n||jr({useScope:"parent",__useComponent:!0});return pc(e,t,ge.DATETIME_FORMAT_OPTIONS_KEYS,(...r)=>n[$i](...r))}};function F_(e,t){const n=e;if(e.mode==="composition")return n.__getInstance(t)||e.global;{const r=n.__getInstance(t);return r!=null?r.__composer:e.global.__composer}}function mc(e){const t=a=>{const{instance:s,modifiers:l,value:u}=a;if(!s||!s.$)throw Ge(Oe.UNEXPECTED_ERROR);const c=F_(e,s.$);l.preserve&&M.warn(mt(Qe.NOT_SUPPORTED_PRESERVE));const d=Xs(u);return[Reflect.apply(c.t,c,[...Js(d)]),c]};return{created:(a,s)=>{const[l,u]=t(s);M.inBrowser&&e.global===u&&(a.__i18nWatcher=Ee.watch(u.locale,()=>{s.instance&&s.instance.$forceUpdate()})),a.__composer=u,a.textContent=l},unmounted:a=>{M.inBrowser&&a.__i18nWatcher&&(a.__i18nWatcher(),a.__i18nWatcher=void 0,delete a.__i18nWatcher),a.__composer&&(a.__composer=void 0,delete a.__composer)},beforeUpdate:(a,{value:s})=>{if(a.__composer){const l=a.__composer,u=Xs(s);a.textContent=Reflect.apply(l.t,l,[...Js(u)])}},getSSRProps:a=>{const[s]=t(a);return{textContent:s}}}}function Xs(e){if(M.isString(e))return{path:e};if(M.isPlainObject(e)){if(!("path"in e))throw Ge(Oe.REQUIRED_VALUE,"path");return e}else throw Ge(Oe.INVALID_VALUE)}function Js(e){const{path:t,locale:n,args:r,choice:o,plural:i}=e,a={},s=r||{};return M.isString(n)&&(a.locale=n),M.isNumber(o)&&(a.plural=o),M.isNumber(i)&&(a.plural=i),[t,s,a]}function j_(e,t,...n){const 
r=M.isPlainObject(n[0])?n[0]:{},o=!!r.useI18nComponentName,i=M.isBoolean(r.globalInstall)?r.globalInstall:!0;i&&o&&M.warn(mt(Qe.COMPONENT_NAME_LEGACY_COMPATIBLE,{name:or.name})),i&&(e.component(o?"i18n":or.name,or),e.component(Pi.name,Pi),e.component(Ii.name,Ii)),e.directive("t",mc(t))}function V_(e,t,n){return{beforeCreate(){const r=Ee.getCurrentInstance();if(!r)throw Ge(Oe.UNEXPECTED_ERROR);const o=this.$options;if(o.i18n){const i=o.i18n;o.__i18n&&(i.__i18n=o.__i18n),i.__root=t,this===this.$root?this.$i18n=Zs(e,i):(i.__injectWithOption=!0,this.$i18n=Ni(i))}else o.__i18n?this===this.$root?this.$i18n=Zs(e,o):this.$i18n=Ni({__i18n:o.__i18n,__injectWithOption:!0,__root:t}):this.$i18n=e;o.__i18nGlobal&&fc(t,o,o),e.__onComponentInstanceCreated(this.$i18n),n.__setInstance(r,this.$i18n),this.$t=(...i)=>this.$i18n.t(...i),this.$rt=(...i)=>this.$i18n.rt(...i),this.$tc=(...i)=>this.$i18n.tc(...i),this.$te=(i,a)=>this.$i18n.te(i,a),this.$d=(...i)=>this.$i18n.d(...i),this.$n=(...i)=>this.$i18n.n(...i),this.$tm=i=>this.$i18n.tm(i)},mounted(){},unmounted(){const r=Ee.getCurrentInstance();if(!r)throw Ge(Oe.UNEXPECTED_ERROR);delete this.$t,delete this.$rt,delete this.$tc,delete this.$te,delete this.$d,delete this.$n,delete this.$tm,n.__deleteInstance(r),delete this.$i18n}}}function Zs(e,t){e.locale=t.locale||e.locale,e.fallbackLocale=t.fallbackLocale||e.fallbackLocale,e.missing=t.missing||e.missing,e.silentTranslationWarn=t.silentTranslationWarn||e.silentFallbackWarn,e.silentFallbackWarn=t.silentFallbackWarn||e.silentFallbackWarn,e.formatFallbackMessages=t.formatFallbackMessages||e.formatFallbackMessages,e.postTranslation=t.postTranslation||e.postTranslation,e.warnHtmlInMessage=t.warnHtmlInMessage||e.warnHtmlInMessage,e.escapeParameterHtml=t.escapeParameterHtml||e.escapeParameterHtml,e.sync=t.sync||e.sync,e.__composer[uc](t.pluralizationRules||e.pluralizationRules);const n=Fr(e.locale,{messages:t.messages,__i18n:t.__i18n});return Object.keys(n).forEach(r=>e.mergeLocaleMessage(r,n[r])),t.datetimeFormats&&Object.keys(t.datetimeFormats).forEach(r=>e.mergeDateTimeFormat(r,t.datetimeFormats[r])),t.numberFormats&&Object.keys(t.numberFormats).forEach(r=>e.mergeNumberFormat(r,t.numberFormats[r])),e}const vc=M.makeSymbol("global-vue-i18n");function x_(e={},t){const n=M.isBoolean(e.legacy)?e.legacy:!0,r=M.isBoolean(e.globalInjection)?e.globalInjection:!0,o=n?!!e.allowComposition:!0,i=new Map,[a,s]=U_(e,n),l=M.makeSymbol("vue-i18n");function u(m){return i.get(m)||null}function c(m,_){i.set(m,_)}function d(m){i.delete(m)}{const m={get mode(){return n?"legacy":"composition"},get allowComposition(){return o},async install(_,...S){_.__VUE_I18N_SYMBOL__=l,_.provide(_.__VUE_I18N_SYMBOL__,m),!n&&r&&Z_(_,m.global),j_(_,m,...S),n&&_.mixin(V_(s,s.__composer,m));const E=_.unmount;_.unmount=()=>{m.dispose(),E()}},get global(){return s},dispose(){a.stop()},__instances:i,__getInstance:u,__setInstance:c,__deleteInstance:d};return m}}function jr(e={}){const t=Ee.getCurrentInstance();if(t==null)throw Ge(Oe.MUST_BE_CALL_SETUP_TOP);if(!t.isCE&&t.appContext.app!=null&&!t.appContext.app.__VUE_I18N_SYMBOL__)throw Ge(Oe.NOT_INSLALLED);const n=W_(t),r=q_(n),o=dc(t),i=H_(e,o);if(n.mode==="legacy"&&!e.__useComponent){if(!n.allowComposition)throw Ge(Oe.NOT_AVAILABLE_IN_LEGACY_MODE);return Y_(t,i,r,e)}if(i==="global")return fc(r,e,o),r;if(i==="parent"){let l=G_(n,t,e.__useComponent);return l==null&&(M.warn(mt(Qe.NOT_FOUND_PARENT_SCOPE)),l=r),l}const a=n;let s=a.__getInstance(t);if(s==null){const l=M.assign({},e);"__i18n"in 
o&&(l.__i18n=o.__i18n),r&&(l.__root=r),s=ya(l),K_(a,t),a.__setInstance(t,s)}return s}const z_=e=>{if(!(I_ in e))throw Ge(Oe.NOT_COMPATIBLE_LEGACY_VUE_I18N);return e};function U_(e,t,n){const r=Ee.effectScope();{const o=t?r.run(()=>Ni(e)):r.run(()=>ya(e));if(o==null)throw Ge(Oe.UNEXPECTED_ERROR);return[r,o]}}function W_(e){{const t=Ee.inject(e.isCE?vc:e.appContext.app.__VUE_I18N_SYMBOL__);if(!t)throw Ge(e.isCE?Oe.NOT_INSLALLED_WITH_PROVIDE:Oe.UNEXPECTED_ERROR);return t}}function H_(e,t){return M.isEmptyObject(e)?"__i18n"in t?"local":"global":e.useScope?e.useScope:"local"}function q_(e){return e.mode==="composition"?e.global:e.global.__composer}function G_(e,t,n=!1){let r=null;const o=t.root;let i=t.parent;for(;i!=null;){const a=e;if(e.mode==="composition")r=a.__getInstance(i);else{const s=a.__getInstance(i);s!=null&&(r=s.__composer,n&&r&&!r[cc]&&(r=null))}if(r!=null||o===i)break;i=i.parent}return r}function K_(e,t,n){Ee.onMounted(()=>{},t),Ee.onUnmounted(()=>{e.__deleteInstance(t)},t)}function Y_(e,t,n,r={}){const o=t==="local",i=Ee.shallowRef(null);if(o&&e.proxy&&!(e.proxy.$options.i18n||e.proxy.$options.__i18n))throw Ge(Oe.MUST_DEFINE_I18N_OPTION_IN_ALLOW_COMPOSITION);const a=M.isBoolean(r.inheritLocale)?r.inheritLocale:!0,s=Ee.ref(o&&a?n.locale.value:M.isString(r.locale)?r.locale:ge.DEFAULT_LOCALE),l=Ee.ref(o&&a?n.fallbackLocale.value:M.isString(r.fallbackLocale)||M.isArray(r.fallbackLocale)||M.isPlainObject(r.fallbackLocale)||r.fallbackLocale===!1?r.fallbackLocale:s.value),u=Ee.ref(Fr(s.value,r)),c=Ee.ref(M.isPlainObject(r.datetimeFormats)?r.datetimeFormats:{[s.value]:{}}),d=Ee.ref(M.isPlainObject(r.numberFormats)?r.numberFormats:{[s.value]:{}}),m=o?n.missingWarn:M.isBoolean(r.missingWarn)||M.isRegExp(r.missingWarn)?r.missingWarn:!0,_=o?n.fallbackWarn:M.isBoolean(r.fallbackWarn)||M.isRegExp(r.fallbackWarn)?r.fallbackWarn:!0,S=o?n.fallbackRoot:M.isBoolean(r.fallbackRoot)?r.fallbackRoot:!0,E=!!r.fallbackFormat,L=M.isFunction(r.missing)?r.missing:null,y=M.isFunction(r.postTranslation)?r.postTranslation:null,$=o?n.warnHtmlMessage:M.isBoolean(r.warnHtmlMessage)?r.warnHtmlMessage:!0,w=!!r.escapeParameter,h=o?n.modifiers:M.isPlainObject(r.modifiers)?r.modifiers:{},p=r.pluralRules||o&&n.pluralRules;function b(){return[s.value,l.value,u.value,c.value,d.value]}const v=Ee.computed({get:()=>i.value?i.value.locale.value:s.value,set:f=>{i.value&&(i.value.locale.value=f),s.value=f}}),P=Ee.computed({get:()=>i.value?i.value.fallbackLocale.value:l.value,set:f=>{i.value&&(i.value.fallbackLocale.value=f),l.value=f}}),A=Ee.computed(()=>i.value?i.value.messages.value:u.value),T=Ee.computed(()=>c.value),j=Ee.computed(()=>d.value);function J(){return i.value?i.value.getPostTranslationHandler():y}function U(f){i.value&&i.value.setPostTranslationHandler(f)}function O(){return i.value?i.value.getMissingHandler():L}function N(f){i.value&&i.value.setMissingHandler(f)}function D(f){return b(),f()}function V(...f){return i.value?D(()=>Reflect.apply(i.value.t,null,[...f])):D(()=>"")}function G(...f){return i.value?Reflect.apply(i.value.rt,null,[...f]):""}function B(...f){return i.value?D(()=>Reflect.apply(i.value.d,null,[...f])):D(()=>"")}function I(...f){return i.value?D(()=>Reflect.apply(i.value.n,null,[...f])):D(()=>"")}function Y(f){return i.value?i.value.tm(f):{}}function Q(f,k){return i.value?i.value.te(f,k):!1}function he(f){return i.value?i.value.getLocaleMessage(f):{}}function me(f,k){i.value&&(i.value.setLocaleMessage(f,k),u.value[f]=k)}function Se(f,k){i.value&&i.value.mergeLocaleMessage(f,k)}function 
Ae(f){return i.value?i.value.getDateTimeFormat(f):{}}function Ie(f,k){i.value&&(i.value.setDateTimeFormat(f,k),c.value[f]=k)}function Te(f,k){i.value&&i.value.mergeDateTimeFormat(f,k)}function we(f){return i.value?i.value.getNumberFormat(f):{}}function Me(f,k){i.value&&(i.value.setNumberFormat(f,k),d.value[f]=k)}function xe(f,k){i.value&&i.value.mergeNumberFormat(f,k)}const re={get id(){return i.value?i.value.id:-1},locale:v,fallbackLocale:P,messages:A,datetimeFormats:T,numberFormats:j,get inheritLocale(){return i.value?i.value.inheritLocale:a},set inheritLocale(f){i.value&&(i.value.inheritLocale=f)},get availableLocales(){return i.value?i.value.availableLocales:Object.keys(u.value)},get modifiers(){return i.value?i.value.modifiers:h},get pluralRules(){return i.value?i.value.pluralRules:p},get isGlobal(){return i.value?i.value.isGlobal:!1},get missingWarn(){return i.value?i.value.missingWarn:m},set missingWarn(f){i.value&&(i.value.missingWarn=f)},get fallbackWarn(){return i.value?i.value.fallbackWarn:_},set fallbackWarn(f){i.value&&(i.value.missingWarn=f)},get fallbackRoot(){return i.value?i.value.fallbackRoot:S},set fallbackRoot(f){i.value&&(i.value.fallbackRoot=f)},get fallbackFormat(){return i.value?i.value.fallbackFormat:E},set fallbackFormat(f){i.value&&(i.value.fallbackFormat=f)},get warnHtmlMessage(){return i.value?i.value.warnHtmlMessage:$},set warnHtmlMessage(f){i.value&&(i.value.warnHtmlMessage=f)},get escapeParameter(){return i.value?i.value.escapeParameter:w},set escapeParameter(f){i.value&&(i.value.escapeParameter=f)},t:V,getPostTranslationHandler:J,setPostTranslationHandler:U,getMissingHandler:O,setMissingHandler:N,rt:G,d:B,n:I,tm:Y,te:Q,getLocaleMessage:he,setLocaleMessage:me,mergeLocaleMessage:Se,getDateTimeFormat:Ae,setDateTimeFormat:Ie,mergeDateTimeFormat:Te,getNumberFormat:we,setNumberFormat:Me,mergeNumberFormat:xe};function g(f){f.locale.value=s.value,f.fallbackLocale.value=l.value,Object.keys(u.value).forEach(k=>{f.mergeLocaleMessage(k,u.value[k])}),Object.keys(c.value).forEach(k=>{f.mergeDateTimeFormat(k,c.value[k])}),Object.keys(d.value).forEach(k=>{f.mergeNumberFormat(k,d.value[k])}),f.escapeParameter=w,f.fallbackFormat=E,f.fallbackRoot=S,f.fallbackWarn=_,f.missingWarn=m,f.warnHtmlMessage=$}return Ee.onBeforeMount(()=>{if(e.proxy==null||e.proxy.$i18n==null)throw Ge(Oe.NOT_AVAILABLE_COMPOSITION_IN_LEGACY);const f=i.value=e.proxy.$i18n.__composer;t==="global"?(s.value=f.locale.value,l.value=f.fallbackLocale.value,u.value=f.messages.value,c.value=f.datetimeFormats.value,d.value=f.numberFormats.value):o&&g(f)}),re}const X_=["locale","fallbackLocale","availableLocales"],J_=["t","rt","d","n","tm"];function Z_(e,t){const n=Object.create(null);X_.forEach(r=>{const o=Object.getOwnPropertyDescriptor(t,r);if(!o)throw Ge(Oe.UNEXPECTED_ERROR);const i=Ee.isRef(o.value)?{get(){return o.value.value},set(a){o.value.value=a}}:{get(){return o.get&&o.get()}};Object.defineProperty(n,r,i)}),e.config.globalProperties.$i18n=n,J_.forEach(r=>{const o=Object.getOwnPropertyDescriptor(t,r);if(!o||!o.value)throw Ge(Oe.UNEXPECTED_ERROR);Object.defineProperty(e.config.globalProperties,`$${r}`,o)})}ge.registerMessageCompiler(ge.compileToFunction);ge.registerMessageResolver(ge.resolveValue);ge.registerLocaleFallbacker(ge.fallbackWithLocaleChain);{const e=M.getGlobalThis();e.__INTLIFY__=!0,ge.setDevToolsHook(e.__INTLIFY_DEVTOOLS_GLOBAL_HOOK__)}Ot.DatetimeFormat=Ii;Ot.I18nInjectionKey=vc;Ot.NumberFormat=Pi;Ot.Translation=or;Ot.VERSION=ac;Ot.castToVueI18n=z_;var 
aS=Ot.createI18n=x_;Ot.useI18n=jr;Ot.vTDirective=mc;const Xn={formatYear:"YYYY",formatMonth:"MMM YYYY",today:"Today",view:{month:"Month",year:"Year",week:"Week",day:"Day"},month:{long:{January:"January",February:"February",March:"March",April:"April",May:"May",June:"June",July:"July",August:"August",September:"September",October:"October",November:"November",December:"December"},short:{January:"Jan",February:"Feb",March:"Mar",April:"Apr",May:"May",June:"Jun",July:"Jul",August:"Aug",September:"Sept",October:"Oct",November:"Nov",December:"Dec"}},week:{long:{self:"Week",monday:"Monday",tuesday:"Tuesday",wednesday:"Wednesday",thursday:"Thursday",friday:"Friday",saturday:"Saturday",sunday:"Sunday"},short:{self:"Week",monday:"Mon",tuesday:"Tue",wednesday:"Wed",thursday:"Thu",friday:"Fri",saturday:"Sat",sunday:"Sun"}}},sS={locale:"en-US",empty:{description:"No Data"},drawer:{okText:"Ok",cancelText:"Cancel"},popconfirm:{okText:"Ok",cancelText:"Cancel"},modal:{okText:"Ok",cancelText:"Cancel"},pagination:{goto:"Goto",page:"Page",countPerPage:" / Page",total:"Total: {0}"},table:{okText:"Ok",resetText:"Reset"},upload:{start:"Start",cancel:"Cancel",delete:"Delete",retry:"Click to retry",buttonText:"Upload",preview:"Preview",drag:"Click or drag file to this area to upload",dragHover:"Release to upload",error:"Upload Error"},calendar:Xn,datePicker:{view:Xn.view,month:Xn.month,week:Xn.week,placeholder:{date:"Please select date",week:"Please select week",month:"Please select month",year:"Please select year",quarter:"Please select quarter",time:"Please select time"},rangePlaceholder:{date:["Start date","End date"],week:["Start week","End week"],month:["Start month","End month"],year:["Start year","End year"],quarter:["Start quarter","End quarter"],time:["Start time","End time"]},selectTime:"Select time",today:"Today",now:"Now",ok:"Ok"},image:{loading:"loading"},imagePreview:{fullScreen:"Full Screen",rotateRight:"Rotate Right",rotateLeft:"Rotate Left",zoomIn:"Zoom In",zoomOut:"Zoom Out",originalSize:"Original Size"},typography:{copied:"Copied",copy:"Copy",expand:"Expand",collapse:"Collapse",edit:"Edit"}};var Q_=!1,Qs;const e0=typeof window<"u";e0&&((Qs=window==null?void 0:window.navigator)!=null&&Qs.userAgent)&&/iP(ad|hone|od)/.test(window.navigator.userAgent);function lS(e,t=!0){Wt()?Ke(e):t?e():Je(e)}function uS(e){Wt()&&Sr(e)}const kt=Object.create(null);kt.open="0";kt.close="1";kt.ping="2";kt.pong="3";kt.message="4";kt.upgrade="5";kt.noop="6";const ir=Object.create(null);Object.keys(kt).forEach(e=>{ir[kt[e]]=e});const t0={type:"error",data:"parser error"},n0=typeof Blob=="function"||typeof Blob<"u"&&Object.prototype.toString.call(Blob)==="[object BlobConstructor]",r0=typeof ArrayBuffer=="function",o0=e=>typeof ArrayBuffer.isView=="function"?ArrayBuffer.isView(e):e&&e.buffer instanceof ArrayBuffer,gc=({type:e,data:t},n,r)=>n0&&t instanceof Blob?n?r(t):el(t,r):r0&&(t instanceof ArrayBuffer||o0(t))?n?r(t):el(new Blob([t]),r):r(kt[e]+(t||"")),el=(e,t)=>{const n=new FileReader;return n.onload=function(){const r=n.result.split(",")[1];t("b"+(r||""))},n.readAsDataURL(e)},tl="ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/",Ln=typeof Uint8Array>"u"?[]:new Uint8Array(256);for(let e=0;e{let t=e.length*.75,n=e.length,r,o=0,i,a,s,l;e[e.length-1]==="="&&(t--,e[e.length-2]==="="&&t--);const u=new ArrayBuffer(t),c=new Uint8Array(u);for(r=0;r>4,c[o++]=(a&15)<<4|s>>2,c[o++]=(s&3)<<6|l&63;return u},a0=typeof ArrayBuffer=="function",yc=(e,t)=>{if(typeof e!="string")return{type:"message",data:bc(e,t)};const 
n=e.charAt(0);return n==="b"?{type:"message",data:s0(e.substring(1),t)}:ir[n]?e.length>1?{type:ir[n],data:e.substring(1)}:{type:ir[n]}:t0},s0=(e,t)=>{if(a0){const n=i0(e);return bc(n,t)}else return{base64:!0,data:e}},bc=(e,t)=>{switch(t){case"blob":return e instanceof ArrayBuffer?new Blob([e]):e;case"arraybuffer":default:return e}},_c=String.fromCharCode(30),l0=(e,t)=>{const n=e.length,r=new Array(n);let o=0;e.forEach((i,a)=>{gc(i,!1,s=>{r[a]=s,++o===n&&t(r.join(_c))})})},u0=(e,t)=>{const n=e.split(_c),r=[];for(let o=0;otypeof self<"u"?self:typeof window<"u"?window:Function("return this")())();function Sc(e,...t){return t.reduce((n,r)=>(e.hasOwnProperty(r)&&(n[r]=e[r]),n),{})}const d0=dt.setTimeout,f0=dt.clearTimeout;function Vr(e,t){t.useNativeTimers?(e.setTimeoutFn=d0.bind(dt),e.clearTimeoutFn=f0.bind(dt)):(e.setTimeoutFn=dt.setTimeout.bind(dt),e.clearTimeoutFn=dt.clearTimeout.bind(dt))}const h0=1.33;function p0(e){return typeof e=="string"?m0(e):Math.ceil((e.byteLength||e.size)*h0)}function m0(e){let t=0,n=0;for(let r=0,o=e.length;r=57344?n+=3:(r++,n+=4);return n}class v0 extends Error{constructor(t,n,r){super(t),this.description=n,this.context=r,this.type="TransportError"}}class Ec extends qe{constructor(t){super(),this.writable=!1,Vr(this,t),this.opts=t,this.query=t.query,this.socket=t.socket}onError(t,n,r){return super.emitReserved("error",new v0(t,n,r)),this}open(){return this.readyState="opening",this.doOpen(),this}close(){return(this.readyState==="opening"||this.readyState==="open")&&(this.doClose(),this.onClose()),this}send(t){this.readyState==="open"&&this.write(t)}onOpen(){this.readyState="open",this.writable=!0,super.emitReserved("open")}onData(t){const n=yc(t,this.socket.binaryType);this.onPacket(n)}onPacket(t){super.emitReserved("packet",t)}onClose(t){this.readyState="closed",super.emitReserved("close",t)}pause(t){}}const wc="0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz-_".split(""),Mi=64,g0={};let nl=0,Jn=0,rl;function ol(e){let t="";do t=wc[e%Mi]+t,e=Math.floor(e/Mi);while(e>0);return t}function kc(){const e=ol(+new Date);return e!==rl?(nl=0,rl=e):e+"."+ol(nl++)}for(;Jn{this.readyState="paused",t()};if(this.polling||!this.writable){let r=0;this.polling&&(r++,this.once("pollComplete",function(){--r||n()})),this.writable||(r++,this.once("drain",function(){--r||n()}))}else n()}poll(){this.polling=!0,this.doPoll(),this.emitReserved("poll")}onData(t){const n=r=>{if(this.readyState==="opening"&&r.type==="open"&&this.onOpen(),r.type==="close")return this.onClose({description:"transport closed by the server"}),!1;this.onPacket(r)};u0(t,this.socket.binaryType).forEach(n),this.readyState!=="closed"&&(this.polling=!1,this.emitReserved("pollComplete"),this.readyState==="open"&&this.poll())}doClose(){const t=()=>{this.write([{type:"close"}])};this.readyState==="open"?t():this.once("open",t)}write(t){this.writable=!1,l0(t,n=>{this.doWrite(n,()=>{this.writable=!0,this.emitReserved("drain")})})}uri(){let t=this.query||{};const n=this.opts.secure?"https":"http";let r="";this.opts.timestampRequests!==!1&&(t[this.opts.timestampParam]=kc()),!this.supportsBinary&&!t.sid&&(t.b64=1),this.opts.port&&(n==="https"&&Number(this.opts.port)!==443||n==="http"&&Number(this.opts.port)!==80)&&(r=":"+this.opts.port);const o=$c(t),i=this.opts.hostname.indexOf(":")!==-1;return n+"://"+(i?"["+this.opts.hostname+"]":this.opts.hostname)+r+this.opts.path+(o.length?"?"+o:"")}request(t={}){return Object.assign(t,{xd:this.xd,xs:this.xs},this.opts),new Et(this.uri(),t)}doWrite(t,n){const 
r=this.request({method:"POST",data:t});r.on("success",n),r.on("error",(o,i)=>{this.onError("xhr post error",o,i)})}doPoll(){const t=this.request();t.on("data",this.onData.bind(this)),t.on("error",(n,r)=>{this.onError("xhr poll error",n,r)}),this.pollXhr=t}}class Et extends qe{constructor(t,n){super(),Vr(this,n),this.opts=n,this.method=n.method||"GET",this.uri=t,this.async=n.async!==!1,this.data=n.data!==void 0?n.data:null,this.create()}create(){const t=Sc(this.opts,"agent","pfx","key","passphrase","cert","ca","ciphers","rejectUnauthorized","autoUnref");t.xdomain=!!this.opts.xd,t.xscheme=!!this.opts.xs;const n=this.xhr=new Lc(t);try{n.open(this.method,this.uri,this.async);try{if(this.opts.extraHeaders){n.setDisableHeaderCheck&&n.setDisableHeaderCheck(!0);for(let r in this.opts.extraHeaders)this.opts.extraHeaders.hasOwnProperty(r)&&n.setRequestHeader(r,this.opts.extraHeaders[r])}}catch{}if(this.method==="POST")try{n.setRequestHeader("Content-type","text/plain;charset=UTF-8")}catch{}try{n.setRequestHeader("Accept","*/*")}catch{}"withCredentials"in n&&(n.withCredentials=this.opts.withCredentials),this.opts.requestTimeout&&(n.timeout=this.opts.requestTimeout),n.onreadystatechange=()=>{n.readyState===4&&(n.status===200||n.status===1223?this.onLoad():this.setTimeoutFn(()=>{this.onError(typeof n.status=="number"?n.status:0)},0))},n.send(this.data)}catch(r){this.setTimeoutFn(()=>{this.onError(r)},0);return}typeof document<"u"&&(this.index=Et.requestsCount++,Et.requests[this.index]=this)}onError(t){this.emitReserved("error",t,this.xhr),this.cleanup(!0)}cleanup(t){if(!(typeof this.xhr>"u"||this.xhr===null)){if(this.xhr.onreadystatechange=_0,t)try{this.xhr.abort()}catch{}typeof document<"u"&&delete Et.requests[this.index],this.xhr=null}}onLoad(){const t=this.xhr.responseText;t!==null&&(this.emitReserved("data",t),this.emitReserved("success"),this.cleanup())}abort(){this.cleanup()}}Et.requestsCount=0;Et.requests={};if(typeof document<"u"){if(typeof attachEvent=="function")attachEvent("onunload",il);else if(typeof addEventListener=="function"){const e="onpagehide"in dt?"pagehide":"unload";addEventListener(e,il,!1)}}function il(){for(let e in Et.requests)Et.requests.hasOwnProperty(e)&&Et.requests[e].abort()}const Tc=(()=>typeof Promise=="function"&&typeof Promise.resolve=="function"?t=>Promise.resolve().then(t):(t,n)=>n(t,0))(),Zn=dt.WebSocket||dt.MozWebSocket,al=!0,E0="arraybuffer",sl=typeof navigator<"u"&&typeof navigator.product=="string"&&navigator.product.toLowerCase()==="reactnative";class w0 extends Ec{constructor(t){super(t),this.supportsBinary=!t.forceBase64}get name(){return"websocket"}doOpen(){if(!this.check())return;const t=this.uri(),n=this.opts.protocols,r=sl?{}:Sc(this.opts,"agent","perMessageDeflate","pfx","key","passphrase","cert","ca","ciphers","rejectUnauthorized","localAddress","protocolVersion","origin","maxPayload","family","checkServerIdentity");this.opts.extraHeaders&&(r.headers=this.opts.extraHeaders);try{this.ws=al&&!sl?n?new Zn(t,n):new Zn(t):new Zn(t,n,r)}catch(o){return this.emitReserved("error",o)}this.ws.binaryType=this.socket.binaryType||E0,this.addEventListeners()}addEventListeners(){this.ws.onopen=()=>{this.opts.autoUnref&&this.ws._socket.unref(),this.onOpen()},this.ws.onclose=t=>this.onClose({description:"websocket connection closed",context:t}),this.ws.onmessage=t=>this.onData(t.data),this.ws.onerror=t=>this.onError("websocket error",t)}write(t){this.writable=!1;for(let n=0;n{const 
a={};try{al&&this.ws.send(i)}catch{}o&&Tc(()=>{this.writable=!0,this.emitReserved("drain")},this.setTimeoutFn)})}}doClose(){typeof this.ws<"u"&&(this.ws.close(),this.ws=null)}uri(){let t=this.query||{};const n=this.opts.secure?"wss":"ws";let r="";this.opts.port&&(n==="wss"&&Number(this.opts.port)!==443||n==="ws"&&Number(this.opts.port)!==80)&&(r=":"+this.opts.port),this.opts.timestampRequests&&(t[this.opts.timestampParam]=kc()),this.supportsBinary||(t.b64=1);const o=$c(t),i=this.opts.hostname.indexOf(":")!==-1;return n+"://"+(i?"["+this.opts.hostname+"]":this.opts.hostname)+r+this.opts.path+(o.length?"?"+o:"")}check(){return!!Zn}}const k0={websocket:w0,polling:S0},$0=/^(?:(?![^:@\/?#]+:[^:@\/]*@)(http|https|ws|wss):\/\/)?((?:(([^:@\/?#]*)(?::([^:@\/?#]*))?)?@)?((?:[a-f0-9]{0,4}:){2,7}[a-f0-9]{0,4}|[^:\/?#]*)(?::(\d*))?)(((\/(?:[^?#](?![^?#\/]*\.[^?#\/.]+(?:[?#]|$)))*\/?)?([^?#\/]*))(?:\?([^#]*))?(?:#(.*))?)/,O0=["source","protocol","authority","userInfo","user","password","host","port","relative","path","directory","file","query","anchor"];function Ri(e){const t=e,n=e.indexOf("["),r=e.indexOf("]");n!=-1&&r!=-1&&(e=e.substring(0,n)+e.substring(n,r).replace(/:/g,";")+e.substring(r,e.length));let o=$0.exec(e||""),i={},a=14;for(;a--;)i[O0[a]]=o[a]||"";return n!=-1&&r!=-1&&(i.source=t,i.host=i.host.substring(1,i.host.length-1).replace(/;/g,":"),i.authority=i.authority.replace("[","").replace("]","").replace(/;/g,":"),i.ipv6uri=!0),i.pathNames=L0(i,i.path),i.queryKey=T0(i,i.query),i}function L0(e,t){const n=/\/{2,9}/g,r=t.replace(n,"/").split("/");return(t.slice(0,1)=="/"||t.length===0)&&r.splice(0,1),t.slice(-1)=="/"&&r.splice(r.length-1,1),r}function T0(e,t){const n={};return t.replace(/(?:^|&)([^&=]*)=?([^&]*)/g,function(r,o,i){o&&(n[o]=i)}),n}let Ac=class on extends qe{constructor(t,n={}){super(),this.writeBuffer=[],t&&typeof t=="object"&&(n=t,t=null),t?(t=Ri(t),n.hostname=t.host,n.secure=t.protocol==="https"||t.protocol==="wss",n.port=t.port,t.query&&(n.query=t.query)):n.host&&(n.hostname=Ri(n.host).host),Vr(this,n),this.secure=n.secure!=null?n.secure:typeof location<"u"&&location.protocol==="https:",n.hostname&&!n.port&&(n.port=this.secure?"443":"80"),this.hostname=n.hostname||(typeof location<"u"?location.hostname:"localhost"),this.port=n.port||(typeof location<"u"&&location.port?location.port:this.secure?"443":"80"),this.transports=n.transports||["polling","websocket"],this.writeBuffer=[],this.prevBufferLen=0,this.opts=Object.assign({path:"/engine.io",agent:!1,withCredentials:!1,upgrade:!0,timestampParam:"t",rememberUpgrade:!1,addTrailingSlash:!0,rejectUnauthorized:!0,perMessageDeflate:{threshold:1024},transportOptions:{},closeOnBeforeunload:!0},n),this.opts.path=this.opts.path.replace(/\/$/,"")+(this.opts.addTrailingSlash?"/":""),typeof this.opts.query=="string"&&(this.opts.query=y0(this.opts.query)),this.id=null,this.upgrades=null,this.pingInterval=null,this.pingTimeout=null,this.pingTimeoutTimer=null,typeof addEventListener=="function"&&(this.opts.closeOnBeforeunload&&(this.beforeunloadEventListener=()=>{this.transport&&(this.transport.removeAllListeners(),this.transport.close())},addEventListener("beforeunload",this.beforeunloadEventListener,!1)),this.hostname!=="localhost"&&(this.offlineEventListener=()=>{this.onClose("transport close",{description:"network connection lost"})},addEventListener("offline",this.offlineEventListener,!1))),this.open()}createTransport(t){const n=Object.assign({},this.opts.query);n.EIO=Cc,n.transport=t,this.id&&(n.sid=this.id);const 
r=Object.assign({},this.opts.transportOptions[t],this.opts,{query:n,socket:this,hostname:this.hostname,secure:this.secure,port:this.port});return new k0[t](r)}open(){let t;if(this.opts.rememberUpgrade&&on.priorWebsocketSuccess&&this.transports.indexOf("websocket")!==-1)t="websocket";else if(this.transports.length===0){this.setTimeoutFn(()=>{this.emitReserved("error","No transports available")},0);return}else t=this.transports[0];this.readyState="opening";try{t=this.createTransport(t)}catch{this.transports.shift(),this.open();return}t.open(),this.setTransport(t)}setTransport(t){this.transport&&this.transport.removeAllListeners(),this.transport=t,t.on("drain",this.onDrain.bind(this)).on("packet",this.onPacket.bind(this)).on("error",this.onError.bind(this)).on("close",n=>this.onClose("transport close",n))}probe(t){let n=this.createTransport(t),r=!1;on.priorWebsocketSuccess=!1;const o=()=>{r||(n.send([{type:"ping",data:"probe"}]),n.once("packet",d=>{if(!r)if(d.type==="pong"&&d.data==="probe"){if(this.upgrading=!0,this.emitReserved("upgrading",n),!n)return;on.priorWebsocketSuccess=n.name==="websocket",this.transport.pause(()=>{r||this.readyState!=="closed"&&(c(),this.setTransport(n),n.send([{type:"upgrade"}]),this.emitReserved("upgrade",n),n=null,this.upgrading=!1,this.flush())})}else{const m=new Error("probe error");m.transport=n.name,this.emitReserved("upgradeError",m)}}))};function i(){r||(r=!0,c(),n.close(),n=null)}const a=d=>{const m=new Error("probe error: "+d);m.transport=n.name,i(),this.emitReserved("upgradeError",m)};function s(){a("transport closed")}function l(){a("socket closed")}function u(d){n&&d.name!==n.name&&i()}const c=()=>{n.removeListener("open",o),n.removeListener("error",a),n.removeListener("close",s),this.off("close",l),this.off("upgrading",u)};n.once("open",o),n.once("error",a),n.once("close",s),this.once("close",l),this.once("upgrading",u),n.open()}onOpen(){if(this.readyState="open",on.priorWebsocketSuccess=this.transport.name==="websocket",this.emitReserved("open"),this.flush(),this.readyState==="open"&&this.opts.upgrade){let t=0;const n=this.upgrades.length;for(;t{this.onClose("ping timeout")},this.pingInterval+this.pingTimeout),this.opts.autoUnref&&this.pingTimeoutTimer.unref()}onDrain(){this.writeBuffer.splice(0,this.prevBufferLen),this.prevBufferLen=0,this.writeBuffer.length===0?this.emitReserved("drain"):this.flush()}flush(){if(this.readyState!=="closed"&&this.transport.writable&&!this.upgrading&&this.writeBuffer.length){const t=this.getWritablePackets();this.transport.send(t),this.prevBufferLen=t.length,this.emitReserved("flush")}}getWritablePackets(){if(!(this.maxPayload&&this.transport.name==="polling"&&this.writeBuffer.length>1))return this.writeBuffer;let n=1;for(let r=0;r0&&n>this.maxPayload)return this.writeBuffer.slice(0,r);n+=2}return this.writeBuffer}write(t,n,r){return this.sendPacket("message",t,n,r),this}send(t,n,r){return this.sendPacket("message",t,n,r),this}sendPacket(t,n,r,o){if(typeof n=="function"&&(o=n,n=void 0),typeof r=="function"&&(o=r,r=null),this.readyState==="closing"||this.readyState==="closed")return;r=r||{},r.compress=r.compress!==!1;const i={type:t,data:n,options:r};this.emitReserved("packetCreate",i),this.writeBuffer.push(i),o&&this.once("flush",o),this.flush()}close(){const t=()=>{this.onClose("forced 
close"),this.transport.close()},n=()=>{this.off("upgrade",n),this.off("upgradeError",n),t()},r=()=>{this.once("upgrade",n),this.once("upgradeError",n)};return(this.readyState==="opening"||this.readyState==="open")&&(this.readyState="closing",this.writeBuffer.length?this.once("drain",()=>{this.upgrading?r():t()}):this.upgrading?r():t()),this}onError(t){on.priorWebsocketSuccess=!1,this.emitReserved("error",t),this.onClose("transport error",t)}onClose(t,n){(this.readyState==="opening"||this.readyState==="open"||this.readyState==="closing")&&(this.clearTimeoutFn(this.pingTimeoutTimer),this.transport.removeAllListeners("close"),this.transport.close(),this.transport.removeAllListeners(),typeof removeEventListener=="function"&&(removeEventListener("beforeunload",this.beforeunloadEventListener,!1),removeEventListener("offline",this.offlineEventListener,!1)),this.readyState="closed",this.id=null,this.emitReserved("close",t,n),this.writeBuffer=[],this.prevBufferLen=0)}filterUpgrades(t){const n=[];let r=0;const o=t.length;for(;rtypeof ArrayBuffer.isView=="function"?ArrayBuffer.isView(e):e.buffer instanceof ArrayBuffer,Nc=Object.prototype.toString,I0=typeof Blob=="function"||typeof Blob<"u"&&Nc.call(Blob)==="[object BlobConstructor]",M0=typeof File=="function"||typeof File<"u"&&Nc.call(File)==="[object FileConstructor]";function _a(e){return N0&&(e instanceof ArrayBuffer||P0(e))||I0&&e instanceof Blob||M0&&e instanceof File}function ar(e,t){if(!e||typeof e!="object")return!1;if(Array.isArray(e)){for(let n=0,r=e.length;n=0&&e.num0;case Le.ACK:case Le.BINARY_ACK:return Array.isArray(n)}}destroy(){this.reconstructor&&(this.reconstructor.finishedReconstruction(),this.reconstructor=null)}}class j0{constructor(t){this.packet=t,this.buffers=[],this.reconPack=t}takeBinaryData(t){if(this.buffers.push(t),this.buffers.length===this.reconPack.attachments){const n=B0(this.reconPack,this.buffers);return this.finishedReconstruction(),n}return null}finishedReconstruction(){this.reconPack=null,this.buffers=[]}}const V0=Object.freeze(Object.defineProperty({__proto__:null,Decoder:Ca,Encoder:F0,get PacketType(){return Le},protocol:D0},Symbol.toStringTag,{value:"Module"}));function pt(e,t,n){return e.on(t,n),function(){e.off(t,n)}}const x0=Object.freeze({connect:1,connect_error:1,disconnect:1,disconnecting:1,newListener:1,removeListener:1});class Pc extends qe{constructor(t,n,r){super(),this.connected=!1,this.recovered=!1,this.receiveBuffer=[],this.sendBuffer=[],this._queue=[],this._queueSeq=0,this.ids=0,this.acks={},this.flags={},this.io=t,this.nsp=n,r&&r.auth&&(this.auth=r.auth),this._opts=Object.assign({},r),this.io._autoConnect&&this.open()}get disconnected(){return!this.connected}subEvents(){if(this.subs)return;const t=this.io;this.subs=[pt(t,"open",this.onopen.bind(this)),pt(t,"packet",this.onpacket.bind(this)),pt(t,"error",this.onerror.bind(this)),pt(t,"close",this.onclose.bind(this))]}get active(){return!!this.subs}connect(){return this.connected?this:(this.subEvents(),this.io._reconnecting||this.io.open(),this.io._readyState==="open"&&this.onopen(),this)}open(){return this.connect()}send(...t){return t.unshift("message"),this.emit.apply(this,t),this}emit(t,...n){if(x0.hasOwnProperty(t))throw new Error('"'+t.toString()+'" is a reserved event name');if(n.unshift(t),this._opts.retries&&!this.flags.fromQueue&&!this.flags.volatile)return this._addToQueue(n),this;const r={type:Le.EVENT,data:n};if(r.options={},r.options.compress=this.flags.compress!==!1,typeof n[n.length-1]=="function"){const 
a=this.ids++,s=n.pop();this._registerAckCallback(a,s),r.id=a}const o=this.io.engine&&this.io.engine.transport&&this.io.engine.transport.writable;return this.flags.volatile&&(!o||!this.connected)||(this.connected?(this.notifyOutgoingListeners(r),this.packet(r)):this.sendBuffer.push(r)),this.flags={},this}_registerAckCallback(t,n){var r;const o=(r=this.flags.timeout)!==null&&r!==void 0?r:this._opts.ackTimeout;if(o===void 0){this.acks[t]=n;return}const i=this.io.setTimeoutFn(()=>{delete this.acks[t];for(let a=0;a{this.io.clearTimeoutFn(i),n.apply(this,[null,...a])}}emitWithAck(t,...n){const r=this.flags.timeout!==void 0||this._opts.ackTimeout!==void 0;return new Promise((o,i)=>{n.push((a,s)=>r?a?i(a):o(s):o(a)),this.emit(t,...n)})}_addToQueue(t){let n;typeof t[t.length-1]=="function"&&(n=t.pop());const r={id:this._queueSeq++,tryCount:0,pending:!1,args:t,flags:Object.assign({fromQueue:!0},this.flags)};t.push((o,...i)=>r!==this._queue[0]?void 0:(o!==null?r.tryCount>this._opts.retries&&(this._queue.shift(),n&&n(o)):(this._queue.shift(),n&&n(null,...i)),r.pending=!1,this._drainQueue())),this._queue.push(r),this._drainQueue()}_drainQueue(t=!1){if(!this.connected||this._queue.length===0)return;const n=this._queue[0];n.pending&&!t||(n.pending=!0,n.tryCount++,this.flags=n.flags,this.emit.apply(this,n.args))}packet(t){t.nsp=this.nsp,this.io._packet(t)}onopen(){typeof this.auth=="function"?this.auth(t=>{this._sendConnectPacket(t)}):this._sendConnectPacket(this.auth)}_sendConnectPacket(t){this.packet({type:Le.CONNECT,data:this._pid?Object.assign({pid:this._pid,offset:this._lastOffset},t):t})}onerror(t){this.connected||this.emitReserved("connect_error",t)}onclose(t,n){this.connected=!1,delete this.id,this.emitReserved("disconnect",t,n)}onpacket(t){if(t.nsp===this.nsp)switch(t.type){case Le.CONNECT:t.data&&t.data.sid?this.onconnect(t.data.sid,t.data.pid):this.emitReserved("connect_error",new Error("It seems you are trying to reach a Socket.IO server in v2.x with a v3.x client, but they are not compatible (more information here: https://socket.io/docs/v3/migrating-from-2-x-to-3-0/)"));break;case Le.EVENT:case Le.BINARY_EVENT:this.onevent(t);break;case Le.ACK:case Le.BINARY_ACK:this.onack(t);break;case Le.DISCONNECT:this.ondisconnect();break;case Le.CONNECT_ERROR:this.destroy();const r=new Error(t.data.message);r.data=t.data.data,this.emitReserved("connect_error",r);break}}onevent(t){const n=t.data||[];t.id!=null&&n.push(this.ack(t.id)),this.connected?this.emitEvent(n):this.receiveBuffer.push(Object.freeze(n))}emitEvent(t){if(this._anyListeners&&this._anyListeners.length){const n=this._anyListeners.slice();for(const r of n)r.apply(this,t)}super.emit.apply(this,t),this._pid&&t.length&&typeof t[t.length-1]=="string"&&(this._lastOffset=t[t.length-1])}ack(t){const n=this;let r=!1;return function(...o){r||(r=!0,n.packet({type:Le.ACK,id:t,data:o}))}}onack(t){const n=this.acks[t.id];typeof n=="function"&&(n.apply(this,t.data),delete this.acks[t.id])}onconnect(t,n){this.id=t,this.recovered=n&&this._pid===n,this._pid=n,this.connected=!0,this.emitBuffered(),this.emitReserved("connect"),this._drainQueue(!0)}emitBuffered(){this.receiveBuffer.forEach(t=>this.emitEvent(t)),this.receiveBuffer=[],this.sendBuffer.forEach(t=>{this.notifyOutgoingListeners(t),this.packet(t)}),this.sendBuffer=[]}ondisconnect(){this.destroy(),this.onclose("io server disconnect")}destroy(){this.subs&&(this.subs.forEach(t=>t()),this.subs=void 0),this.io._destroy(this)}disconnect(){return 
this.connected&&this.packet({type:Le.DISCONNECT}),this.destroy(),this.connected&&this.onclose("io client disconnect"),this}close(){return this.disconnect()}compress(t){return this.flags.compress=t,this}get volatile(){return this.flags.volatile=!0,this}timeout(t){return this.flags.timeout=t,this}onAny(t){return this._anyListeners=this._anyListeners||[],this._anyListeners.push(t),this}prependAny(t){return this._anyListeners=this._anyListeners||[],this._anyListeners.unshift(t),this}offAny(t){if(!this._anyListeners)return this;if(t){const n=this._anyListeners;for(let r=0;r0&&e.jitter<=1?e.jitter:0,this.attempts=0}Cn.prototype.duration=function(){var e=this.ms*Math.pow(this.factor,this.attempts++);if(this.jitter){var t=Math.random(),n=Math.floor(t*this.jitter*e);e=Math.floor(t*10)&1?e+n:e-n}return Math.min(e,this.max)|0};Cn.prototype.reset=function(){this.attempts=0};Cn.prototype.setMin=function(e){this.ms=e};Cn.prototype.setMax=function(e){this.max=e};Cn.prototype.setJitter=function(e){this.jitter=e};class Fi extends qe{constructor(t,n){var r;super(),this.nsps={},this.subs=[],t&&typeof t=="object"&&(n=t,t=void 0),n=n||{},n.path=n.path||"/socket.io",this.opts=n,Vr(this,n),this.reconnection(n.reconnection!==!1),this.reconnectionAttempts(n.reconnectionAttempts||1/0),this.reconnectionDelay(n.reconnectionDelay||1e3),this.reconnectionDelayMax(n.reconnectionDelayMax||5e3),this.randomizationFactor((r=n.randomizationFactor)!==null&&r!==void 0?r:.5),this.backoff=new Cn({min:this.reconnectionDelay(),max:this.reconnectionDelayMax(),jitter:this.randomizationFactor()}),this.timeout(n.timeout==null?2e4:n.timeout),this._readyState="closed",this.uri=t;const o=n.parser||V0;this.encoder=new o.Encoder,this.decoder=new o.Decoder,this._autoConnect=n.autoConnect!==!1,this._autoConnect&&this.open()}reconnection(t){return arguments.length?(this._reconnection=!!t,this):this._reconnection}reconnectionAttempts(t){return t===void 0?this._reconnectionAttempts:(this._reconnectionAttempts=t,this)}reconnectionDelay(t){var n;return t===void 0?this._reconnectionDelay:(this._reconnectionDelay=t,(n=this.backoff)===null||n===void 0||n.setMin(t),this)}randomizationFactor(t){var n;return t===void 0?this._randomizationFactor:(this._randomizationFactor=t,(n=this.backoff)===null||n===void 0||n.setJitter(t),this)}reconnectionDelayMax(t){var n;return t===void 0?this._reconnectionDelayMax:(this._reconnectionDelayMax=t,(n=this.backoff)===null||n===void 0||n.setMax(t),this)}timeout(t){return arguments.length?(this._timeout=t,this):this._timeout}maybeReconnectOnOpen(){!this._reconnecting&&this._reconnection&&this.backoff.attempts===0&&this.reconnect()}open(t){if(~this._readyState.indexOf("open"))return this;this.engine=new Ac(this.uri,this.opts);const n=this.engine,r=this;this._readyState="opening",this.skipReconnect=!1;const o=pt(n,"open",function(){r.onopen(),t&&t()}),i=pt(n,"error",a=>{r.cleanup(),r._readyState="closed",this.emitReserved("error",a),t?t(a):r.maybeReconnectOnOpen()});if(this._timeout!==!1){const a=this._timeout;a===0&&o();const s=this.setTimeoutFn(()=>{o(),n.close(),n.emit("error",new Error("timeout"))},a);this.opts.autoUnref&&s.unref(),this.subs.push(function(){clearTimeout(s)})}return this.subs.push(o),this.subs.push(i),this}connect(t){return this.open(t)}onopen(){this.cleanup(),this._readyState="open",this.emitReserved("open");const 
t=this.engine;this.subs.push(pt(t,"ping",this.onping.bind(this)),pt(t,"data",this.ondata.bind(this)),pt(t,"error",this.onerror.bind(this)),pt(t,"close",this.onclose.bind(this)),pt(this.decoder,"decoded",this.ondecoded.bind(this)))}onping(){this.emitReserved("ping")}ondata(t){try{this.decoder.add(t)}catch(n){this.onclose("parse error",n)}}ondecoded(t){Tc(()=>{this.emitReserved("packet",t)},this.setTimeoutFn)}onerror(t){this.emitReserved("error",t)}socket(t,n){let r=this.nsps[t];return r?this._autoConnect&&!r.active&&r.connect():(r=new Pc(this,t,n),this.nsps[t]=r),r}_destroy(t){const n=Object.keys(this.nsps);for(const r of n)if(this.nsps[r].active)return;this._close()}_packet(t){const n=this.encoder.encode(t);for(let r=0;rt()),this.subs.length=0,this.decoder.destroy()}_close(){this.skipReconnect=!0,this._reconnecting=!1,this.onclose("forced close"),this.engine&&this.engine.close()}disconnect(){return this._close()}onclose(t,n){this.cleanup(),this.backoff.reset(),this._readyState="closed",this.emitReserved("close",t,n),this._reconnection&&!this.skipReconnect&&this.reconnect()}reconnect(){if(this._reconnecting||this.skipReconnect)return this;const t=this;if(this.backoff.attempts>=this._reconnectionAttempts)this.backoff.reset(),this.emitReserved("reconnect_failed"),this._reconnecting=!1;else{const n=this.backoff.duration();this._reconnecting=!0;const r=this.setTimeoutFn(()=>{t.skipReconnect||(this.emitReserved("reconnect_attempt",t.backoff.attempts),!t.skipReconnect&&t.open(o=>{o?(t._reconnecting=!1,t.reconnect(),this.emitReserved("reconnect_error",o)):t.onreconnect()}))},n);this.opts.autoUnref&&r.unref(),this.subs.push(function(){clearTimeout(r)})}}onreconnect(){const t=this.backoff.attempts;this._reconnecting=!1,this.backoff.reset(),this.emitReserved("reconnect",t)}}const kn={};function qo(e,t){typeof e=="object"&&(t=e,e=void 0),t=t||{};const n=A0(e,t.path||"/socket.io"),r=n.source,o=n.id,i=n.path,a=kn[o]&&i in kn[o].nsps,s=t.forceNew||t["force new connection"]||t.multiplex===!1||a;let l;return s?l=new Fi(r,t):(kn[o]||(kn[o]=new Fi(r,t)),l=kn[o]),n.query&&!t.query&&(t.query=n.queryKey),l.socket(n.path,t)}Object.assign(qo,{Manager:Fi,Socket:Pc,io:qo,connect:qo});var Sa={exports:{}},Ic=function(t,n){return function(){for(var o=new Array(arguments.length),i=0;i"u"}function U0(e){return e!==null&&!ji(e)&&e.constructor!==null&&!ji(e.constructor)&&typeof e.constructor.isBuffer=="function"&&e.constructor.isBuffer(e)}function W0(e){return en.call(e)==="[object ArrayBuffer]"}function H0(e){return typeof FormData<"u"&&e instanceof FormData}function q0(e){var t;return typeof ArrayBuffer<"u"&&ArrayBuffer.isView?t=ArrayBuffer.isView(e):t=e&&e.buffer&&e.buffer instanceof ArrayBuffer,t}function G0(e){return typeof e=="string"}function K0(e){return typeof e=="number"}function Mc(e){return e!==null&&typeof e=="object"}function sr(e){if(en.call(e)!=="[object Object]")return!1;var t=Object.getPrototypeOf(e);return t===null||t===Object.prototype}function Y0(e){return en.call(e)==="[object Date]"}function X0(e){return en.call(e)==="[object File]"}function J0(e){return en.call(e)==="[object Blob]"}function Rc(e){return en.call(e)==="[object Function]"}function Z0(e){return Mc(e)&&Rc(e.pipe)}function Q0(e){return typeof URLSearchParams<"u"&&e instanceof URLSearchParams}function eC(e){return e.trim?e.trim():e.replace(/^\s+|\s+$/g,"")}function tC(){return typeof navigator<"u"&&(navigator.product==="ReactNative"||navigator.product==="NativeScript"||navigator.product==="NS")?!1:typeof window<"u"&&typeof 
document<"u"}function wa(e,t){if(!(e===null||typeof e>"u"))if(typeof e!="object"&&(e=[e]),Ea(e))for(var n=0,r=e.length;n"u"||(nn.isArray(l)?u=u+"[]":l=[l],nn.forEach(l,function(d){nn.isDate(d)?d=d.toISOString():nn.isObject(d)&&(d=JSON.stringify(d)),i.push(ll(u)+"="+ll(d))}))}),o=i.join("&")}if(o){var a=t.indexOf("#");a!==-1&&(t=t.slice(0,a)),t+=(t.indexOf("?")===-1?"?":"&")+o}return t},oC=ct;function xr(){this.handlers=[]}xr.prototype.use=function(t,n,r){return this.handlers.push({fulfilled:t,rejected:n,synchronous:r?r.synchronous:!1,runWhen:r?r.runWhen:null}),this.handlers.length-1};xr.prototype.eject=function(t){this.handlers[t]&&(this.handlers[t]=null)};xr.prototype.forEach=function(t){oC.forEach(this.handlers,function(r){r!==null&&t(r)})};var iC=xr,aC=ct,sC=function(t,n){aC.forEach(t,function(o,i){i!==n&&i.toUpperCase()===n.toUpperCase()&&(t[n]=o,delete t[i])})},Dc=function(t,n,r,o,i){return t.config=n,r&&(t.code=r),t.request=o,t.response=i,t.isAxiosError=!0,t.toJSON=function(){return{message:this.message,name:this.name,description:this.description,number:this.number,fileName:this.fileName,lineNumber:this.lineNumber,columnNumber:this.columnNumber,stack:this.stack,config:this.config,code:this.code,status:this.response&&this.response.status?this.response.status:null}},t},Go,ul;function Fc(){if(ul)return Go;ul=1;var e=Dc;return Go=function(n,r,o,i,a){var s=new Error(n);return e(s,r,o,i,a)},Go}var Ko,cl;function lC(){if(cl)return Ko;cl=1;var e=Fc();return Ko=function(n,r,o){var i=o.config.validateStatus;!o.status||!i||i(o.status)?n(o):r(e("Request failed with status code "+o.status,o.config,null,o.request,o))},Ko}var Yo,dl;function uC(){if(dl)return Yo;dl=1;var e=ct;return Yo=e.isStandardBrowserEnv()?function(){return{write:function(r,o,i,a,s,l){var u=[];u.push(r+"="+encodeURIComponent(o)),e.isNumber(i)&&u.push("expires="+new Date(i).toGMTString()),e.isString(a)&&u.push("path="+a),e.isString(s)&&u.push("domain="+s),l===!0&&u.push("secure"),document.cookie=u.join("; ")},read:function(r){var o=document.cookie.match(new RegExp("(^|;\\s*)("+r+")=([^;]*)"));return o?decodeURIComponent(o[3]):null},remove:function(r){this.write(r,"",Date.now()-864e5)}}}():function(){return{write:function(){},read:function(){return null},remove:function(){}}}(),Yo}var Xo,fl;function cC(){return fl||(fl=1,Xo=function(t){return/^([a-z][a-z\d\+\-\.]*:)?\/\//i.test(t)}),Xo}var Jo,hl;function dC(){return hl||(hl=1,Jo=function(t,n){return n?t.replace(/\/+$/,"")+"/"+n.replace(/^\/+/,""):t}),Jo}var Zo,pl;function fC(){if(pl)return Zo;pl=1;var e=cC(),t=dC();return Zo=function(r,o){return r&&!e(o)?t(r,o):o},Zo}var Qo,ml;function hC(){if(ml)return Qo;ml=1;var e=ct,t=["age","authorization","content-length","content-type","etag","expires","from","host","if-modified-since","if-unmodified-since","last-modified","location","max-forwards","proxy-authorization","referer","retry-after","user-agent"];return Qo=function(r){var o={},i,a,s;return r&&e.forEach(r.split(` -`),function(u){if(s=u.indexOf(":"),i=e.trim(u.substr(0,s)).toLowerCase(),a=e.trim(u.substr(s+1)),i){if(o[i]&&t.indexOf(i)>=0)return;i==="set-cookie"?o[i]=(o[i]?o[i]:[]).concat([a]):o[i]=o[i]?o[i]+", "+a:a}}),o},Qo}var ei,vl;function pC(){if(vl)return ei;vl=1;var e=ct;return ei=e.isStandardBrowserEnv()?function(){var n=/(msie|trident)/i.test(navigator.userAgent),r=document.createElement("a"),o;function i(a){var s=a;return 
n&&(r.setAttribute("href",s),s=r.href),r.setAttribute("href",s),{href:r.href,protocol:r.protocol?r.protocol.replace(/:$/,""):"",host:r.host,search:r.search?r.search.replace(/^\?/,""):"",hash:r.hash?r.hash.replace(/^#/,""):"",hostname:r.hostname,port:r.port,pathname:r.pathname.charAt(0)==="/"?r.pathname:"/"+r.pathname}}return o=i(window.location.href),function(s){var l=e.isString(s)?i(s):s;return l.protocol===o.protocol&&l.host===o.host}}():function(){return function(){return!0}}(),ei}var ti,gl;function zr(){if(gl)return ti;gl=1;function e(t){this.message=t}return e.prototype.toString=function(){return"Cancel"+(this.message?": "+this.message:"")},e.prototype.__CANCEL__=!0,ti=e,ti}var ni,yl;function bl(){if(yl)return ni;yl=1;var e=ct,t=lC(),n=uC(),r=Bc,o=fC(),i=hC(),a=pC(),s=Fc(),l=Ur(),u=zr();return ni=function(d){return new Promise(function(_,S){var E=d.data,L=d.headers,y=d.responseType,$;function w(){d.cancelToken&&d.cancelToken.unsubscribe($),d.signal&&d.signal.removeEventListener("abort",$)}e.isFormData(E)&&delete L["Content-Type"];var h=new XMLHttpRequest;if(d.auth){var p=d.auth.username||"",b=d.auth.password?unescape(encodeURIComponent(d.auth.password)):"";L.Authorization="Basic "+btoa(p+":"+b)}var v=o(d.baseURL,d.url);h.open(d.method.toUpperCase(),r(v,d.params,d.paramsSerializer),!0),h.timeout=d.timeout;function P(){if(h){var T="getAllResponseHeaders"in h?i(h.getAllResponseHeaders()):null,j=!y||y==="text"||y==="json"?h.responseText:h.response,J={data:j,status:h.status,statusText:h.statusText,headers:T,config:d,request:h};t(function(O){_(O),w()},function(O){S(O),w()},J),h=null}}if("onloadend"in h?h.onloadend=P:h.onreadystatechange=function(){!h||h.readyState!==4||h.status===0&&!(h.responseURL&&h.responseURL.indexOf("file:")===0)||setTimeout(P)},h.onabort=function(){h&&(S(s("Request aborted",d,"ECONNABORTED",h)),h=null)},h.onerror=function(){S(s("Network Error",d,null,h)),h=null},h.ontimeout=function(){var j=d.timeout?"timeout of "+d.timeout+"ms exceeded":"timeout exceeded",J=d.transitional||l.transitional;d.timeoutErrorMessage&&(j=d.timeoutErrorMessage),S(s(j,d,J.clarifyTimeoutError?"ETIMEDOUT":"ECONNABORTED",h)),h=null},e.isStandardBrowserEnv()){var A=(d.withCredentials||a(v))&&d.xsrfCookieName?n.read(d.xsrfCookieName):void 0;A&&(L[d.xsrfHeaderName]=A)}"setRequestHeader"in h&&e.forEach(L,function(j,J){typeof E>"u"&&J.toLowerCase()==="content-type"?delete L[J]:h.setRequestHeader(J,j)}),e.isUndefined(d.withCredentials)||(h.withCredentials=!!d.withCredentials),y&&y!=="json"&&(h.responseType=d.responseType),typeof d.onDownloadProgress=="function"&&h.addEventListener("progress",d.onDownloadProgress),typeof d.onUploadProgress=="function"&&h.upload&&h.upload.addEventListener("progress",d.onUploadProgress),(d.cancelToken||d.signal)&&($=function(T){h&&(S(!T||T&&T.type?new u("canceled"):T),h.abort(),h=null)},d.cancelToken&&d.cancelToken.subscribe($),d.signal&&(d.signal.aborted?$():d.signal.addEventListener("abort",$))),E||(E=null),h.send(E)})},ni}var ri,_l;function Ur(){if(_l)return ri;_l=1;var e=ct,t=sC,n=Dc,r={"Content-Type":"application/x-www-form-urlencoded"};function o(l,u){!e.isUndefined(l)&&e.isUndefined(l["Content-Type"])&&(l["Content-Type"]=u)}function i(){var l;return(typeof XMLHttpRequest<"u"||typeof process<"u"&&Object.prototype.toString.call(process)==="[object process]")&&(l=bl()),l}function a(l,u,c){if(e.isString(l))try{return(u||JSON.parse)(l),e.trim(l)}catch(d){if(d.name!=="SyntaxError")throw d}return(c||JSON.stringify)(l)}var 
s={transitional:{silentJSONParsing:!0,forcedJSONParsing:!0,clarifyTimeoutError:!1},adapter:i(),transformRequest:[function(u,c){return t(c,"Accept"),t(c,"Content-Type"),e.isFormData(u)||e.isArrayBuffer(u)||e.isBuffer(u)||e.isStream(u)||e.isFile(u)||e.isBlob(u)?u:e.isArrayBufferView(u)?u.buffer:e.isURLSearchParams(u)?(o(c,"application/x-www-form-urlencoded;charset=utf-8"),u.toString()):e.isObject(u)||c&&c["Content-Type"]==="application/json"?(o(c,"application/json"),a(u)):u}],transformResponse:[function(u){var c=this.transitional||s.transitional,d=c&&c.silentJSONParsing,m=c&&c.forcedJSONParsing,_=!d&&this.responseType==="json";if(_||m&&e.isString(u)&&u.length)try{return JSON.parse(u)}catch(S){if(_)throw S.name==="SyntaxError"?n(S,this,"E_JSON_PARSE"):S}return u}],timeout:0,xsrfCookieName:"XSRF-TOKEN",xsrfHeaderName:"X-XSRF-TOKEN",maxContentLength:-1,maxBodyLength:-1,validateStatus:function(u){return u>=200&&u<300},headers:{common:{Accept:"application/json, text/plain, */*"}}};return e.forEach(["delete","get","head"],function(u){s.headers[u]={}}),e.forEach(["post","put","patch"],function(u){s.headers[u]=e.merge(r)}),ri=s,ri}var mC=ct,vC=Ur(),gC=function(t,n,r){var o=this||vC;return mC.forEach(r,function(a){t=a.call(o,t,n)}),t},oi,Cl;function jc(){return Cl||(Cl=1,oi=function(t){return!!(t&&t.__CANCEL__)}),oi}var Sl=ct,ii=gC,yC=jc(),bC=Ur(),_C=zr();function ai(e){if(e.cancelToken&&e.cancelToken.throwIfRequested(),e.signal&&e.signal.aborted)throw new _C("canceled")}var CC=function(t){ai(t),t.headers=t.headers||{},t.data=ii.call(t,t.data,t.headers,t.transformRequest),t.headers=Sl.merge(t.headers.common||{},t.headers[t.method]||{},t.headers),Sl.forEach(["delete","get","head","post","put","patch","common"],function(o){delete t.headers[o]});var n=t.adapter||bC.adapter;return n(t).then(function(o){return ai(t),o.data=ii.call(t,o.data,o.headers,t.transformResponse),o},function(o){return yC(o)||(ai(t),o&&o.response&&(o.response.data=ii.call(t,o.response.data,o.response.headers,t.transformResponse))),Promise.reject(o)})},lt=ct,Vc=function(t,n){n=n||{};var r={};function o(c,d){return lt.isPlainObject(c)&<.isPlainObject(d)?lt.merge(c,d):lt.isPlainObject(d)?lt.merge({},d):lt.isArray(d)?d.slice():d}function i(c){if(lt.isUndefined(n[c])){if(!lt.isUndefined(t[c]))return o(void 0,t[c])}else return o(t[c],n[c])}function a(c){if(!lt.isUndefined(n[c]))return o(void 0,n[c])}function s(c){if(lt.isUndefined(n[c])){if(!lt.isUndefined(t[c]))return o(void 0,t[c])}else return o(void 0,n[c])}function l(c){if(c in n)return o(t[c],n[c]);if(c in t)return o(void 0,t[c])}var u={url:a,method:a,data:a,baseURL:s,transformRequest:s,transformResponse:s,paramsSerializer:s,timeout:s,timeoutMessage:s,withCredentials:s,adapter:s,responseType:s,xsrfCookieName:s,xsrfHeaderName:s,onUploadProgress:s,onDownloadProgress:s,decompress:s,maxContentLength:s,maxBodyLength:s,transport:s,httpAgent:s,httpsAgent:s,cancelToken:s,socketPath:s,responseEncoding:s,validateStatus:l};return lt.forEach(Object.keys(t).concat(Object.keys(n)),function(d){var m=u[d]||i,_=m(d);lt.isUndefined(_)&&m!==l||(r[d]=_)}),r},si,El;function xc(){return El||(El=1,si={version:"0.24.0"}),si}var SC=xc().version,ka={};["object","boolean","number","function","string","symbol"].forEach(function(e,t){ka[e]=function(r){return typeof r===e||"a"+(t<1?"n ":" ")+e}});var wl={};ka.transitional=function(t,n,r){function o(i,a){return"[Axios v"+SC+"] Transitional option '"+i+"'"+a+(r?". "+r:"")}return function(i,a,s){if(t===!1)throw new Error(o(a," has been removed"+(n?" 
in "+n:"")));return n&&!wl[a]&&(wl[a]=!0,console.warn(o(a," has been deprecated since v"+n+" and will be removed in the near future"))),t?t(i,a,s):!0}};function EC(e,t,n){if(typeof e!="object")throw new TypeError("options must be an object");for(var r=Object.keys(e),o=r.length;o-- >0;){var i=r[o],a=t[i];if(a){var s=e[i],l=s===void 0||a(s,i,e);if(l!==!0)throw new TypeError("option "+i+" must be "+l);continue}if(n!==!0)throw Error("Unknown option "+i)}}var wC={assertOptions:EC,validators:ka},zc=ct,kC=Bc,kl=iC,$l=CC,Wr=Vc,Uc=wC,rn=Uc.validators;function zn(e){this.defaults=e,this.interceptors={request:new kl,response:new kl}}zn.prototype.request=function(t){typeof t=="string"?(t=arguments[1]||{},t.url=arguments[0]):t=t||{},t=Wr(this.defaults,t),t.method?t.method=t.method.toLowerCase():this.defaults.method?t.method=this.defaults.method.toLowerCase():t.method="get";var n=t.transitional;n!==void 0&&Uc.assertOptions(n,{silentJSONParsing:rn.transitional(rn.boolean),forcedJSONParsing:rn.transitional(rn.boolean),clarifyTimeoutError:rn.transitional(rn.boolean)},!1);var r=[],o=!0;this.interceptors.request.forEach(function(m){typeof m.runWhen=="function"&&m.runWhen(t)===!1||(o=o&&m.synchronous,r.unshift(m.fulfilled,m.rejected))});var i=[];this.interceptors.response.forEach(function(m){i.push(m.fulfilled,m.rejected)});var a;if(!o){var s=[$l,void 0];for(Array.prototype.unshift.apply(s,r),s=s.concat(i),a=Promise.resolve(t);s.length;)a=a.then(s.shift(),s.shift());return a}for(var l=t;r.length;){var u=r.shift(),c=r.shift();try{l=u(l)}catch(d){c(d);break}}try{a=$l(l)}catch(d){return Promise.reject(d)}for(;i.length;)a=a.then(i.shift(),i.shift());return a};zn.prototype.getUri=function(t){return t=Wr(this.defaults,t),kC(t.url,t.params,t.paramsSerializer).replace(/^\?/,"")};zc.forEach(["delete","get","head","options"],function(t){zn.prototype[t]=function(n,r){return this.request(Wr(r||{},{method:t,url:n,data:(r||{}).data}))}});zc.forEach(["post","put","patch"],function(t){zn.prototype[t]=function(n,r,o){return this.request(Wr(o||{},{method:t,url:n,data:r}))}});var $C=zn,li,Ol;function OC(){if(Ol)return li;Ol=1;var e=zr();function t(n){if(typeof n!="function")throw new TypeError("executor must be a function.");var r;this.promise=new Promise(function(a){r=a});var o=this;this.promise.then(function(i){if(o._listeners){var a,s=o._listeners.length;for(a=0;a{t.contains(ur(o))||n(o)};return{mousemove:r,touchstart:r}}else if(e==="clickoutside"){let r=!1;const o=a=>{r=!t.contains(ur(a))},i=a=>{r&&(t.contains(ur(a))||n(a))};return{mousedown:o,mouseup:i,touchstart:o,touchend:i}}return console.error(`[evtd/create-trap-handler]: name \`${e}\` is invalid. 
This could be a bug of evtd.`),{}}function Hc(e,t,n){const r=RC[e];let o=r.get(t);o===void 0&&r.set(t,o=new WeakMap);let i=o.get(n);return i===void 0&&o.set(n,i=BC(e,t,n)),i}function DC(e,t,n,r){if(e==="mousemoveoutside"||e==="clickoutside"){const o=Hc(e,t,n);return Object.keys(o).forEach(i=>{VC(i,document,o[i],r)}),!0}return!1}function FC(e,t,n,r){if(e==="mousemoveoutside"||e==="clickoutside"){const o=Hc(e,t,n);return Object.keys(o).forEach(i=>{xC(i,document,o[i],r)}),!0}return!1}function jC(){if(typeof window>"u")return{on:()=>{},off:()=>{}};const e=new WeakMap,t=new WeakMap;function n(){e.set(this,!0)}function r(){e.set(this,!0),t.set(this,!0)}function o(v,P,A){const T=v[P];return v[P]=function(){return A.apply(v,arguments),T.apply(v,arguments)},v}function i(v,P){v[P]=Event.prototype[P]}const a=new WeakMap,s=Object.getOwnPropertyDescriptor(Event.prototype,"currentTarget");function l(){var v;return(v=a.get(this))!==null&&v!==void 0?v:null}function u(v,P){s!==void 0&&Object.defineProperty(v,"currentTarget",{configurable:!0,enumerable:!0,get:P??s.get})}const c={bubble:{},capture:{}},d={};function m(){const v=function(P){const{type:A,eventPhase:T,bubbles:j}=P,J=ur(P);if(T===2)return;const U=T===1?"capture":"bubble";let O=J;const N=[];for(;O===null&&(O=window),N.push(O),O!==window;)O=O.parentNode||null;const D=c.capture[A],V=c.bubble[A];if(o(P,"stopPropagation",n),o(P,"stopImmediatePropagation",r),u(P,l),U==="capture"){if(D===void 0)return;for(let G=N.length-1;G>=0&&!e.has(P);--G){const B=N[G],I=D.get(B);if(I!==void 0){a.set(P,B);for(const Y of I){if(t.has(P))break;Y(P)}}if(G===0&&!j&&V!==void 0){const Y=V.get(B);if(Y!==void 0)for(const Q of Y){if(t.has(P))break;Q(P)}}}}else if(U==="bubble"){if(V===void 0)return;for(let G=0;GJ(P))};return v.displayName="evtdUnifiedWindowEventHandler",v}const S=m(),E=_();function L(v,P){const A=c[v];return A[P]===void 0&&(A[P]=new Map,window.addEventListener(P,S,v==="capture")),A[P]}function y(v){return d[v]===void 0&&(d[v]=new Set,window.addEventListener(v,E)),d[v]}function $(v,P){let A=v.get(P);return A===void 0&&v.set(P,A=new Set),A}function w(v,P,A,T){const j=c[P][A];if(j!==void 0){const J=j.get(v);if(J!==void 0&&J.has(T))return!0}return!1}function h(v,P){const A=d[v];return!!(A!==void 0&&A.has(P))}function p(v,P,A,T){let j;if(typeof T=="object"&&T.once===!0?j=D=>{b(v,P,j,T),A(D)}:j=A,DC(v,P,j,T))return;const U=T===!0||typeof T=="object"&&T.capture===!0?"capture":"bubble",O=L(U,v),N=$(O,P);if(N.has(j)||N.add(j),P===window){const D=y(v);D.has(j)||D.add(j)}}function b(v,P,A,T){if(FC(v,P,A,T))return;const J=T===!0||typeof T=="object"&&T.capture===!0,U=J?"capture":"bubble",O=L(U,v),N=$(O,P);if(P===window&&!w(P,J?"bubble":"capture",v,A)&&h(v,A)){const V=d[v];V.delete(A),V.size===0&&(window.removeEventListener(v,E),d[v]=void 0)}N.has(A)&&N.delete(A),N.size===0&&O.delete(P),O.size===0&&(window.removeEventListener(v,S,U==="capture"),c[U][v]=void 0)}return{on:p,off:b}}const{on:VC,off:xC}=jC();/*! 
- * pinia v2.1.4 - * (c) 2023 Eduardo San Martin Morote - * @license MIT - */const zC=Symbol();var Nl;(function(e){e.direct="direct",e.patchObject="patch object",e.patchFunction="patch function"})(Nl||(Nl={}));function dS(){const e=Qc(!0),t=e.run(()=>H({}));let n=[],r=[];const o=ed({install(i){o._a=i,i.provide(zC,o),i.config.globalProperties.$pinia=o,r.forEach(a=>n.push(a)),r=[]},use(i){return!this._a&&!Q_?r.push(i):n.push(i),this},_p:n,_a:null,_e:e,_s:new Map,state:t});return o}export{qC as A,vi as B,JC as C,QC as D,oS as E,Io as F,HC as G,fu as H,lh as I,ZC as L,Ms as M,er as O,Wh as S,tS as T,ce as _,sS as a,cS as b,aS as c,uS as d,xC as e,dS as f,rS as g,ap as h,eS as i,oe as j,de as k,od as l,qt as m,iS as n,VC as o,KC as p,Xf as q,rh as r,YC as s,lS as t,XC as u,GC as v,dm as w,zo as x,ry as y,nS as z}; diff --git a/spaces/sub314xxl/MetaGPT/tests/metagpt/test_schema.py b/spaces/sub314xxl/MetaGPT/tests/metagpt/test_schema.py deleted file mode 100644 index 12666e0d39bf4d369f24e21524fb67971d39e518..0000000000000000000000000000000000000000 --- a/spaces/sub314xxl/MetaGPT/tests/metagpt/test_schema.py +++ /dev/null @@ -1,21 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/5/20 10:40 -@Author : alexanderwu -@File : test_schema.py -""" -from metagpt.schema import AIMessage, Message, SystemMessage, UserMessage - - -def test_messages(): - test_content = 'test_message' - msgs = [ - UserMessage(test_content), - SystemMessage(test_content), - AIMessage(test_content), - Message(test_content, role='QA') - ] - text = str(msgs) - roles = ['user', 'system', 'assistant', 'QA'] - assert all([i in text for i in roles]) diff --git a/spaces/sub314xxl/MusicGen/audiocraft/quantization/vq.py b/spaces/sub314xxl/MusicGen/audiocraft/quantization/vq.py deleted file mode 100644 index f67c3a0cd30d4b8993a36c587f00dc8a451d926f..0000000000000000000000000000000000000000 --- a/spaces/sub314xxl/MusicGen/audiocraft/quantization/vq.py +++ /dev/null @@ -1,116 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import math -import typing as tp - -import torch - -from .base import BaseQuantizer, QuantizedResult -from .core_vq import ResidualVectorQuantization - - -class ResidualVectorQuantizer(BaseQuantizer): - """Residual Vector Quantizer. - - Args: - dimension (int): Dimension of the codebooks. - n_q (int): Number of residual vector quantizers used. - q_dropout (bool): Random quantizer drop out at train time. - bins (int): Codebook size. - decay (float): Decay for exponential moving average over the codebooks. - kmeans_init (bool): Whether to use kmeans to initialize the codebooks. - kmeans_iters (int): Number of iterations used for kmeans initialization. - threshold_ema_dead_code (int): Threshold for dead code expiration. Replace any codes - that have an exponential moving average cluster size less than the specified threshold with - randomly selected vector from the current batch. - orthogonal_reg_weight (float): Orthogonal regularization weights. - orthogonal_reg_active_codes_only (bool): Apply orthogonal regularization only on active codes. - orthogonal_reg_max_codes (optional int): Maximum number of codes to consider. - for orthogonal regulariation. 
- """ - def __init__( - self, - dimension: int = 256, - n_q: int = 8, - q_dropout: bool = False, - bins: int = 1024, - decay: float = 0.99, - kmeans_init: bool = True, - kmeans_iters: int = 10, - threshold_ema_dead_code: int = 2, - orthogonal_reg_weight: float = 0.0, - orthogonal_reg_active_codes_only: bool = False, - orthogonal_reg_max_codes: tp.Optional[int] = None, - ): - super().__init__() - self.max_n_q = n_q - self.n_q = n_q - self.q_dropout = q_dropout - self.dimension = dimension - self.bins = bins - self.decay = decay - self.kmeans_init = kmeans_init - self.kmeans_iters = kmeans_iters - self.threshold_ema_dead_code = threshold_ema_dead_code - self.orthogonal_reg_weight = orthogonal_reg_weight - self.orthogonal_reg_active_codes_only = orthogonal_reg_active_codes_only - self.orthogonal_reg_max_codes = orthogonal_reg_max_codes - self.vq = ResidualVectorQuantization( - dim=self.dimension, - codebook_size=self.bins, - num_quantizers=self.n_q, - decay=self.decay, - kmeans_init=self.kmeans_init, - kmeans_iters=self.kmeans_iters, - threshold_ema_dead_code=self.threshold_ema_dead_code, - orthogonal_reg_weight=self.orthogonal_reg_weight, - orthogonal_reg_active_codes_only=self.orthogonal_reg_active_codes_only, - orthogonal_reg_max_codes=self.orthogonal_reg_max_codes, - channels_last=False - ) - - def forward(self, x: torch.Tensor, frame_rate: int): - n_q = self.n_q - if self.training and self.q_dropout: - n_q = int(torch.randint(1, self.n_q + 1, (1,)).item()) - bw_per_q = math.log2(self.bins) * frame_rate / 1000 - quantized, codes, commit_loss = self.vq(x, n_q=n_q) - codes = codes.transpose(0, 1) - # codes is [B, K, T], with T frames, K nb of codebooks. - bw = torch.tensor(n_q * bw_per_q).to(x) - return QuantizedResult(quantized, codes, bw, penalty=torch.mean(commit_loss)) - - def encode(self, x: torch.Tensor) -> torch.Tensor: - """Encode a given input tensor with the specified frame rate at the given bandwidth. - The RVQ encode method sets the appropriate number of quantizer to use - and returns indices for each quantizer. - """ - n_q = self.n_q - codes = self.vq.encode(x, n_q=n_q) - codes = codes.transpose(0, 1) - # codes is [B, K, T], with T frames, K nb of codebooks. - return codes - - def decode(self, codes: torch.Tensor) -> torch.Tensor: - """Decode the given codes to the quantized representation. - """ - # codes is [B, K, T], with T frames, K nb of codebooks, vq.decode expects [K, B, T]. 
- codes = codes.transpose(0, 1) - quantized = self.vq.decode(codes) - return quantized - - @property - def total_codebooks(self): - return self.max_n_q - - @property - def num_codebooks(self): - return self.n_q - - def set_num_codebooks(self, n: int): - assert n > 0 and n <= self.max_n_q - self.n_q = n diff --git a/spaces/sub314xxl/StyleGAN-XL/app.py b/spaces/sub314xxl/StyleGAN-XL/app.py deleted file mode 100644 index 586b290a50835e2383a6662583e40cc96fe9e8f3..0000000000000000000000000000000000000000 --- a/spaces/sub314xxl/StyleGAN-XL/app.py +++ /dev/null @@ -1,204 +0,0 @@ -#!/usr/bin/env python - -from __future__ import annotations - -import json - -import gradio as gr -import numpy as np - -from model import Model - -DESCRIPTION = '# [StyleGAN-XL](https://github.com/autonomousvision/stylegan_xl)' - - -def update_class_index(name: str) -> dict: - if 'imagenet' in name: - return gr.Slider.update(maximum=999, visible=True) - elif 'cifar' in name: - return gr.Slider.update(maximum=9, visible=True) - else: - return gr.Slider.update(visible=False) - - -def get_sample_image_url(name: str) -> str: - sample_image_dir = 'https://huggingface.co/spaces/hysts/StyleGAN-XL/resolve/main/samples' - return f'{sample_image_dir}/{name}.jpg' - - -def get_sample_image_markdown(name: str) -> str: - url = get_sample_image_url(name) - if name == 'imagenet': - size = 128 - class_index = '0-999' - seed = '0' - elif name == 'cifar10': - size = 32 - class_index = '0-9' - seed = '0-9' - elif name == 'ffhq': - size = 256 - class_index = 'N/A' - seed = '0-99' - elif name == 'pokemon': - size = 256 - class_index = 'N/A' - seed = '0-99' - else: - raise ValueError - - return f''' - - size: {size}x{size} - - class_index: {class_index} - - seed: {seed} - - truncation: 0.7 - ![sample images]({url})''' - - -def load_class_names(name: str) -> list[str]: - with open(f'labels/{name}_classes.json') as f: - names = json.load(f) - return names - - -def get_class_name_df(name: str) -> list: - names = load_class_names(name) - return list(map(list, enumerate(names))) # type: ignore - - -IMAGENET_NAMES = load_class_names('imagenet') -CIFAR10_NAMES = load_class_names('cifar10') - - -def update_class_name(model_name: str, index: int) -> dict: - if 'imagenet' in model_name: - if index < len(IMAGENET_NAMES): - value = IMAGENET_NAMES[index] - else: - value = '-' - return gr.Textbox.update(value=value, visible=True) - elif 'cifar' in model_name: - if index < len(CIFAR10_NAMES): - value = CIFAR10_NAMES[index] - else: - value = '-' - return gr.Textbox.update(value=value, visible=True) - else: - return gr.Textbox.update(visible=False) - - -model = Model() - -with gr.Blocks(css='style.css') as demo: - gr.Markdown(DESCRIPTION) - - with gr.Tabs(): - with gr.TabItem('App'): - with gr.Row(): - with gr.Column(): - with gr.Group(): - model_name = gr.Dropdown(model.MODEL_NAMES, - value=model.MODEL_NAMES[3], - label='Model') - seed = gr.Slider(0, - np.iinfo(np.uint32).max, - step=1, - value=0, - label='Seed') - psi = gr.Slider(0, - 2, - step=0.05, - value=0.7, - label='Truncation psi') - class_index = gr.Slider(0, - 999, - step=1, - value=83, - label='Class Index') - class_name = gr.Textbox( - value=IMAGENET_NAMES[class_index.value], - label='Class Label', - interactive=False) - tx = gr.Slider(-1, - 1, - step=0.05, - value=0, - label='Translate X') - ty = gr.Slider(-1, - 1, - step=0.05, - value=0, - label='Translate Y') - angle = gr.Slider(-180, - 180, - step=5, - value=0, - label='Angle') - run_button = gr.Button('Run') - with gr.Column(): - result = 
gr.Image(label='Result', elem_id='result') - - with gr.TabItem('Sample Images'): - with gr.Row(): - model_name2 = gr.Dropdown([ - 'imagenet', - 'cifar10', - 'ffhq', - 'pokemon', - ], - value='imagenet', - label='Model') - with gr.Row(): - text = get_sample_image_markdown(model_name2.value) - sample_images = gr.Markdown(text) - - with gr.TabItem('Class Names'): - with gr.Row(): - dataset_name = gr.Dropdown([ - 'imagenet', - 'cifar10', - ], - value='imagenet', - label='Dataset') - with gr.Row(): - df = get_class_name_df('imagenet') - class_names = gr.Dataframe(df, - col_count=2, - headers=['Class Index', 'Label'], - interactive=False) - - model_name.change(fn=model.set_model, inputs=model_name, outputs=None) - model_name.change(fn=update_class_index, - inputs=model_name, - outputs=class_index) - model_name.change(fn=update_class_name, - inputs=[ - model_name, - class_index, - ], - outputs=class_name) - class_index.change(fn=update_class_name, - inputs=[ - model_name, - class_index, - ], - outputs=class_name) - run_button.click(fn=model.set_model_and_generate_image, - inputs=[ - model_name, - seed, - psi, - class_index, - tx, - ty, - angle, - ], - outputs=result) - model_name2.change(fn=get_sample_image_markdown, - inputs=model_name2, - outputs=sample_images) - dataset_name.change(fn=get_class_name_df, - inputs=dataset_name, - outputs=class_names) - -demo.queue(max_size=10).launch() diff --git a/spaces/supertori/files/stable-diffusion-webui/modules/ngrok.py b/spaces/supertori/files/stable-diffusion-webui/modules/ngrok.py deleted file mode 100644 index 3df2c06bf1f10d49b7e9397758bc4f3661a51ba7..0000000000000000000000000000000000000000 --- a/spaces/supertori/files/stable-diffusion-webui/modules/ngrok.py +++ /dev/null @@ -1,26 +0,0 @@ -from pyngrok import ngrok, conf, exception - -def connect(token, port, region): - account = None - if token is None: - token = 'None' - else: - if ':' in token: - # token = authtoken:username:password - account = token.split(':')[1] + ':' + token.split(':')[-1] - token = token.split(':')[0] - - config = conf.PyngrokConfig( - auth_token=token, region=region - ) - try: - if account is None: - public_url = ngrok.connect(port, pyngrok_config=config, bind_tls=True).public_url - else: - public_url = ngrok.connect(port, pyngrok_config=config, bind_tls=True, auth=account).public_url - except exception.PyngrokNgrokError: - print(f'Invalid ngrok authtoken, ngrok connection aborted.\n' - f'Your token: {token}, get the right one on https://dashboard.ngrok.com/get-started/your-authtoken') - else: - print(f'ngrok connected to localhost:{port}! URL: {public_url}\n' - 'You can use this link after the launch is complete.') diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Download Animasi Flash Swf 20 [PATCHED].md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Download Animasi Flash Swf 20 [PATCHED].md deleted file mode 100644 index e958db8244225f5c688707ae0caf8d9d2107fe97..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Download Animasi Flash Swf 20 [PATCHED].md +++ /dev/null @@ -1,6 +0,0 @@ -

                        Download Animasi Flash Swf 20


                        Download →→→ https://cinurl.com/2uEX3i



                        - -Download Ebook Workbook On. Macromedia Flash ... Macromedia Flash 8 document (Figure. 2). ... then displays the SWF file, so your website visitors can ... to adobe flash. Page 20/34 ... program mutimedia dan animasi yang. Page 29/34 ... 1fdad05405
                        -
                        -
                        -

                        diff --git a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Astah Professional 7 Crack _HOT_ 49.md b/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Astah Professional 7 Crack _HOT_ 49.md deleted file mode 100644 index 5cb888208c4b5816256d7863845544e5d82f030b..0000000000000000000000000000000000000000 --- a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Astah Professional 7 Crack _HOT_ 49.md +++ /dev/null @@ -1,9 +0,0 @@ - -


                        -

                        Astah Professional 7 Crack 49


                        DOWNLOAD ->>->>->> https://urluss.com/2uCGpr



                        -

                        domainripper portable 3.8.11.0 full crack dotnet forever free version home & business domains for windows, linux, freebsd, unix, osx, apple mac os, android & ios, iphone, ipad, ipod, blackberry, windows mobile (winmo) and html5 mobile browsers. it is easy to use and the best domain names buyer. domainripper is the ultimate domain name buying tool, and you'll get a better price for your domain name.

                        -

The program's interface is clean, the file names are displayed correctly, the results look professional, and the sample/next button makes it possible to test the application without fear of getting a nasty message about the expired time limit. The only drawback of the software is that there is no need to cover up a developed tv series before you can have easy access to it via torrent. However, there are still other, more user-friendly alternatives that would certainly be nice additions to your computer system.

                        -

While the program lacks some important features, such as the ability to download and upload files, it still deserves to be added to your hard drive.

                        -

                        899543212b
                        -
                        -
                        \ No newline at end of file diff --git a/spaces/svjack/ControlNet-Face-Chinese/SPIGA/spiga/demo/analyze/extract/__init__.py b/spaces/svjack/ControlNet-Face-Chinese/SPIGA/spiga/demo/analyze/extract/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/svjack/ControlNet-Face-Chinese/SPIGA/spiga/eval/benchmark/metrics/landmarks.py b/spaces/svjack/ControlNet-Face-Chinese/SPIGA/spiga/eval/benchmark/metrics/landmarks.py deleted file mode 100644 index 6201394d4108ff26f20b018aa781de5df846f671..0000000000000000000000000000000000000000 --- a/spaces/svjack/ControlNet-Face-Chinese/SPIGA/spiga/eval/benchmark/metrics/landmarks.py +++ /dev/null @@ -1,236 +0,0 @@ -import os -import numpy as np -import json -from collections import OrderedDict -from scipy.integrate import simps - -from spiga.data.loaders.dl_config import db_anns_path -from spiga.eval.benchmark.metrics.metrics import Metrics - - -class MetricsLandmarks(Metrics): - - def __init__(self, name='landmarks'): - super().__init__(name) - - self.db_info = None - self.nme_norm = "corners" - self.nme_thr = 8 - self.percentile = [90, 95, 99] - # Cumulative plot axis length - self.bins = 10000 - - def compute_error(self, data_anns, data_pred, database, select_ids=None): - - # Initialize global logs and variables of Computer Error function - self.init_ce(data_anns, data_pred, database) - self._update_lnd_param() - - # Order data and compute nme - self.error['nme_per_img'] = [] - self.error['ne_per_img'] = OrderedDict() - self.error['ne_per_ldm'] = OrderedDict() - for img_id, anns in enumerate(data_anns): - # Init variables per img - pred = data_pred[img_id] - - # Get select ids to compute - if select_ids is None: - selected_ldm = anns['ids'] - else: - selected_ldm = list(set(select_ids) & set(anns['ids'])) - - norm = self._get_img_norm(anns) - for ldm_id in selected_ldm: - # Compute Normalize Error - anns_ldm = self._get_lnd_from_id(anns, ldm_id) - pred_ldm = self._get_lnd_from_id(pred, ldm_id) - ne = self._dist_l2(anns_ldm, pred_ldm)/norm * 100 - self.error['ne_per_img'].setdefault(img_id, []).append(ne) - self.error['ne_per_ldm'].setdefault(ldm_id, []).append(ne) - - # NME per image - if self.database in ['merlrav']: - # LUVLI at MERLRAV divide by 68 despite the annotated landmarks in the image. 
- self.error['nme_per_img'].append(np.sum(self.error['ne_per_img'][img_id])/68) - else: - self.error['nme_per_img'].append(np.mean(self.error['ne_per_img'][img_id])) - - # Cumulative NME - self.error['cumulative_nme'] = self._cumulative_error(self.error['nme_per_img'], bins=self.bins) - - return self.error - - def metrics(self): - - # Initialize global logs and variables of Metrics function - self.init_metrics() - - # Basic metrics (NME/NMPE/AUC/FR) for full dataset - nme, nmpe, auc, fr, _, _ = self._basic_metrics() - - print('NME: %.3f' % nme) - self.metrics_log['nme'] = nme - for percent_id, percentile in enumerate(self.percentile): - print('NME_P%i: %.3f' % (percentile, nmpe[percent_id])) - self.metrics_log['nme_p%i' % percentile] = nmpe[percent_id] - self.metrics_log['nme_thr'] = self.nme_thr - self.metrics_log['nme_norm'] = self.nme_norm - print('AUC_%i: %.3f' % (self.nme_thr, auc)) - self.metrics_log['auc'] = auc - print('FR_%i: %.3f' % (self.nme_thr, fr)) - self.metrics_log['fr'] = fr - - # Subset basic metrics - subsets = self.db_info['test_subsets'] - if self.data_type == 'test' and len(subsets) > 0: - self.metrics_log['subset'] = OrderedDict() - for subset, img_filter in subsets.items(): - self.metrics_log['subset'][subset] = OrderedDict() - nme, nmpe, auc, fr, _, _ = self._basic_metrics(img_select=img_filter) - print('> Landmarks subset: %s' % subset.upper()) - print('NME: %.3f' % nme) - self.metrics_log['subset'][subset]['nme'] = nme - for percent_id, percentile in enumerate(self.percentile): - print('NME_P%i: %.3f' % (percentile, nmpe[percent_id])) - self.metrics_log['subset'][subset]['nme_p%i' % percentile] = nmpe[percent_id] - print('AUC_%i: %.3f' % (self.nme_thr, auc)) - self.metrics_log['subset'][subset]['auc'] = auc - print('FR_%i: %.3f' % (self.nme_thr, fr)) - self.metrics_log['subset'][subset]['fr'] = fr - - # NME/NPE per landmark - self.metrics_log['nme_per_ldm'] = OrderedDict() - for percentile in self.percentile: - self.metrics_log['npe%i_per_ldm' % percentile] = OrderedDict() - for k, v in self.error['ne_per_ldm'].items(): - self.metrics_log['nme_per_ldm'][k] = np.mean(v) - for percentile in self.percentile: - self.metrics_log['npe%i_per_ldm' % percentile][k] = np.percentile(v, percentile) - - return self.metrics_log - - def get_pimg_err(self, data_dict=None, img_select=None): - data = self.error['nme_per_img'] - if img_select is not None: - data = [data[img_id] for img_id in img_select] - name_dict = self.name + '/nme' - if data_dict is not None: - data_dict[name_dict] = data - else: - data_dict = data - return data_dict - - def _update_lnd_param(self): - db_info_file = db_anns_path.format(database=self.database, file_name='db_info') - if os.path.exists(db_info_file): - with open(db_info_file) as jsonfile: - self.db_info = json.load(jsonfile) - - norm_dict = self.db_info['norm'] - nme_norm, nme_thr = next(iter(norm_dict.items())) - print('Default landmarks configuration: \n %s: %i' % (nme_norm, nme_thr)) - answer = input("Change default config? (N/Y) >>> ") - if answer.lower() in ['yes', 'y']: - answer = input("Normalization options: %s >>> " % str(list(norm_dict.keys()))) - if answer in norm_dict.keys(): - nme_norm = answer - nme_thr = norm_dict[nme_norm] - else: - print("Option %s not available keep in default one: %s" % (answer, nme_norm)) - answer = input("Change threshold ->%s:%i ? 
(N/Y) >>> " % (nme_norm, nme_thr)) - if answer.lower() in ['yes', 'y']: - answer = input('NME threshold: >>> ') - nme_thr = float(answer) - else: - print("Keeping default threshold: %i" % nme_thr) - - self.nme_norm = nme_norm - self.nme_thr = nme_thr - - else: - raise ValueError('Database %s specifics not defined. Missing db_info.json' % self.database) - - def _dist_l2(self, pointA, pointB): - return float(((pointA - pointB) ** 2).sum() ** 0.5) - - def _get_lnd_from_id(self, anns, ids): - idx = anns['ids'].index(ids) - ref = np.array(anns['landmarks'][idx]) - return ref - - def _get_img_norm(self, anns): - if self.nme_norm == 'pupils': - print('WARNING: Pupils norm only implemented for 68 landmark configuration') - left_eye = [7, 138, 139, 8, 141, 142] - right_eye = [11, 144, 145, 12, 147, 148] - refA = np.zeros(2) - refB = np.zeros(2) - for i in range(len(left_eye)): - refA += self._get_lnd_from_id(anns, left_eye[i]) - refB += self._get_lnd_from_id(anns, right_eye[i]) - refA = refA/len(left_eye) # Left - refB = refB/len(right_eye) # Right - elif self.nme_norm == 'corners': - refA = self._get_lnd_from_id(anns, 12) # Left - refB = self._get_lnd_from_id(anns, 7) # Right - elif self.nme_norm == 'diagonal': - refA = anns['bbox'][0:2] - refB = refA + anns['bbox'][2:4] - elif self.nme_norm == 'height': - return anns['bbox'][3] - elif self.nme_norm == 'lnd_bbox': - lnd = np.array(anns['landmarks']) - lnd_max = np.max(lnd, axis=0) - lnd_min = np.min(lnd, axis=0) - lnd_wh = lnd_max - lnd_min - return (lnd_wh[0]*lnd_wh[1])**0.5 - elif self.nme_norm == 'bbox': - return (anns['bbox'][2] * anns['bbox'][3]) ** 0.5 - else: - raise ValueError('Normalization %s not implemented' % self.nme_norm) - - return self._dist_l2(refA, refB) - - def _cumulative_error(self, error, bins=10000): - num_imgs, base = np.histogram(error, bins=bins) - cumulative = [x / float(len(error)) for x in np.cumsum(num_imgs)] - base = base[:bins] - cumulative, base = self._filter_cumulative(cumulative, base) - return [cumulative, base] - - def _filter_cumulative(self, cumulative, base): - base = [x for x in base if (x < self.nme_thr)] - cumulative = cumulative[:len(base)] - return cumulative, base - - def _basic_metrics(self, img_select=None): - data = self.error['nme_per_img'] - if img_select is not None: - data = [data[img_id] for img_id in img_select] - [cumulative, base] = self._cumulative_error(data, bins=self.bins) - else: - [cumulative, base] = self.error['cumulative_nme'] - - # Normalize Mean Error across img - nme = np.mean(data) - # Normalize Mean Percentile Error across img - nmpe = [] - for percentile in self.percentile: - nmpe.append(np.percentile(data, percentile)) - - # Area Under Curve and Failure Rate - auc, fr = self._auc_fr_metrics(cumulative, base) - - return nme, nmpe, auc, fr, cumulative, base - - def _auc_fr_metrics(self, cumulative, base): - if not base: - auc = 0. - fr = 100. 
- else: - auc = (simps(cumulative, x=base) / self.nme_thr) * 100.0 - if base[-1] < self.nme_thr and cumulative[-1] == 1: - auc += ((self.nme_thr - base[-1]) / self.nme_thr) * 100 - fr = (1 - cumulative[-1]) * 100.0 - return auc, fr diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/ops/merge_cells.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/ops/merge_cells.py deleted file mode 100644 index 48ca8cc0a8aca8432835bd760c0403a3c35b34cf..0000000000000000000000000000000000000000 --- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/ops/merge_cells.py +++ /dev/null @@ -1,149 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from abc import abstractmethod - -import torch -import torch.nn as nn -import torch.nn.functional as F - -from ..cnn import ConvModule - - -class BaseMergeCell(nn.Module): - """The basic class for cells used in NAS-FPN and NAS-FCOS. - - BaseMergeCell takes 2 inputs. After applying convolution - on them, they are resized to the target size. Then, - they go through binary_op, which depends on the type of cell. - If with_out_conv is True, the result of output will go through - another convolution layer. - - Args: - in_channels (int): number of input channels in out_conv layer. - out_channels (int): number of output channels in out_conv layer. - with_out_conv (bool): Whether to use out_conv layer - out_conv_cfg (dict): Config dict for convolution layer, which should - contain "groups", "kernel_size", "padding", "bias" to build - out_conv layer. - out_norm_cfg (dict): Config dict for normalization layer in out_conv. - out_conv_order (tuple): The order of conv/norm/activation layers in - out_conv. - with_input1_conv (bool): Whether to use convolution on input1. - with_input2_conv (bool): Whether to use convolution on input2. - input_conv_cfg (dict): Config dict for building input1_conv layer and - input2_conv layer, which is expected to contain the type of - convolution. - Default: None, which means using conv2d. - input_norm_cfg (dict): Config dict for normalization layer in - input1_conv and input2_conv layer. Default: None. - upsample_mode (str): Interpolation method used to resize the output - of input1_conv and input2_conv to target size. Currently, we - support ['nearest', 'bilinear']. Default: 'nearest'. 
- """ - - def __init__(self, - fused_channels=256, - out_channels=256, - with_out_conv=True, - out_conv_cfg=dict( - groups=1, kernel_size=3, padding=1, bias=True), - out_norm_cfg=None, - out_conv_order=('act', 'conv', 'norm'), - with_input1_conv=False, - with_input2_conv=False, - input_conv_cfg=None, - input_norm_cfg=None, - upsample_mode='nearest'): - super(BaseMergeCell, self).__init__() - assert upsample_mode in ['nearest', 'bilinear'] - self.with_out_conv = with_out_conv - self.with_input1_conv = with_input1_conv - self.with_input2_conv = with_input2_conv - self.upsample_mode = upsample_mode - - if self.with_out_conv: - self.out_conv = ConvModule( - fused_channels, - out_channels, - **out_conv_cfg, - norm_cfg=out_norm_cfg, - order=out_conv_order) - - self.input1_conv = self._build_input_conv( - out_channels, input_conv_cfg, - input_norm_cfg) if with_input1_conv else nn.Sequential() - self.input2_conv = self._build_input_conv( - out_channels, input_conv_cfg, - input_norm_cfg) if with_input2_conv else nn.Sequential() - - def _build_input_conv(self, channel, conv_cfg, norm_cfg): - return ConvModule( - channel, - channel, - 3, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - bias=True) - - @abstractmethod - def _binary_op(self, x1, x2): - pass - - def _resize(self, x, size): - if x.shape[-2:] == size: - return x - elif x.shape[-2:] < size: - return F.interpolate(x, size=size, mode=self.upsample_mode) - else: - assert x.shape[-2] % size[-2] == 0 and x.shape[-1] % size[-1] == 0 - kernel_size = x.shape[-1] // size[-1] - x = F.max_pool2d(x, kernel_size=kernel_size, stride=kernel_size) - return x - - def forward(self, x1, x2, out_size=None): - assert x1.shape[:2] == x2.shape[:2] - assert out_size is None or len(out_size) == 2 - if out_size is None: # resize to larger one - out_size = max(x1.size()[2:], x2.size()[2:]) - - x1 = self.input1_conv(x1) - x2 = self.input2_conv(x2) - - x1 = self._resize(x1, out_size) - x2 = self._resize(x2, out_size) - - x = self._binary_op(x1, x2) - if self.with_out_conv: - x = self.out_conv(x) - return x - - -class SumCell(BaseMergeCell): - - def __init__(self, in_channels, out_channels, **kwargs): - super(SumCell, self).__init__(in_channels, out_channels, **kwargs) - - def _binary_op(self, x1, x2): - return x1 + x2 - - -class ConcatCell(BaseMergeCell): - - def __init__(self, in_channels, out_channels, **kwargs): - super(ConcatCell, self).__init__(in_channels * 2, out_channels, - **kwargs) - - def _binary_op(self, x1, x2): - ret = torch.cat([x1, x2], dim=1) - return ret - - -class GlobalPoolingCell(BaseMergeCell): - - def __init__(self, in_channels=None, out_channels=None, **kwargs): - super().__init__(in_channels, out_channels, **kwargs) - self.global_pool = nn.AdaptiveAvgPool2d((1, 1)) - - def _binary_op(self, x1, x2): - x2_att = self.global_pool(x2).sigmoid() - return x2 + x2_att * x1 diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/models/backbones/unet.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/models/backbones/unet.py deleted file mode 100644 index 82caa16a94c195c192a2a920fb7bc7e60f0f3ce3..0000000000000000000000000000000000000000 --- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/models/backbones/unet.py +++ /dev/null @@ -1,429 +0,0 @@ -import torch.nn as nn -import torch.utils.checkpoint as cp -from annotator.uniformer.mmcv.cnn import (UPSAMPLE_LAYERS, ConvModule, build_activation_layer, - build_norm_layer, constant_init, kaiming_init) -from annotator.uniformer.mmcv.runner import 
load_checkpoint -from annotator.uniformer.mmcv.utils.parrots_wrapper import _BatchNorm - -from annotator.uniformer.mmseg.utils import get_root_logger -from ..builder import BACKBONES -from ..utils import UpConvBlock - - -class BasicConvBlock(nn.Module): - """Basic convolutional block for UNet. - - This module consists of several plain convolutional layers. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - num_convs (int): Number of convolutional layers. Default: 2. - stride (int): Whether use stride convolution to downsample - the input feature map. If stride=2, it only uses stride convolution - in the first convolutional layer to downsample the input feature - map. Options are 1 or 2. Default: 1. - dilation (int): Whether use dilated convolution to expand the - receptive field. Set dilation rate of each convolutional layer and - the dilation rate of the first convolutional layer is always 1. - Default: 1. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - conv_cfg (dict | None): Config dict for convolution layer. - Default: None. - norm_cfg (dict | None): Config dict for normalization layer. - Default: dict(type='BN'). - act_cfg (dict | None): Config dict for activation layer in ConvModule. - Default: dict(type='ReLU'). - dcn (bool): Use deformable convolution in convolutional layer or not. - Default: None. - plugins (dict): plugins for convolutional layers. Default: None. - """ - - def __init__(self, - in_channels, - out_channels, - num_convs=2, - stride=1, - dilation=1, - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - dcn=None, - plugins=None): - super(BasicConvBlock, self).__init__() - assert dcn is None, 'Not implemented yet.' - assert plugins is None, 'Not implemented yet.' - - self.with_cp = with_cp - convs = [] - for i in range(num_convs): - convs.append( - ConvModule( - in_channels=in_channels if i == 0 else out_channels, - out_channels=out_channels, - kernel_size=3, - stride=stride if i == 0 else 1, - dilation=1 if i == 0 else dilation, - padding=1 if i == 0 else dilation, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - - self.convs = nn.Sequential(*convs) - - def forward(self, x): - """Forward function.""" - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(self.convs, x) - else: - out = self.convs(x) - return out - - -@UPSAMPLE_LAYERS.register_module() -class DeconvModule(nn.Module): - """Deconvolution upsample module in decoder for UNet (2X upsample). - - This module uses deconvolution to upsample feature map in the decoder - of UNet. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - norm_cfg (dict | None): Config dict for normalization layer. - Default: dict(type='BN'). - act_cfg (dict | None): Config dict for activation layer in ConvModule. - Default: dict(type='ReLU'). - kernel_size (int): Kernel size of the convolutional layer. Default: 4. 
- """ - - def __init__(self, - in_channels, - out_channels, - with_cp=False, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - *, - kernel_size=4, - scale_factor=2): - super(DeconvModule, self).__init__() - - assert (kernel_size - scale_factor >= 0) and\ - (kernel_size - scale_factor) % 2 == 0,\ - f'kernel_size should be greater than or equal to scale_factor '\ - f'and (kernel_size - scale_factor) should be even numbers, '\ - f'while the kernel size is {kernel_size} and scale_factor is '\ - f'{scale_factor}.' - - stride = scale_factor - padding = (kernel_size - scale_factor) // 2 - self.with_cp = with_cp - deconv = nn.ConvTranspose2d( - in_channels, - out_channels, - kernel_size=kernel_size, - stride=stride, - padding=padding) - - norm_name, norm = build_norm_layer(norm_cfg, out_channels) - activate = build_activation_layer(act_cfg) - self.deconv_upsamping = nn.Sequential(deconv, norm, activate) - - def forward(self, x): - """Forward function.""" - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(self.deconv_upsamping, x) - else: - out = self.deconv_upsamping(x) - return out - - -@UPSAMPLE_LAYERS.register_module() -class InterpConv(nn.Module): - """Interpolation upsample module in decoder for UNet. - - This module uses interpolation to upsample feature map in the decoder - of UNet. It consists of one interpolation upsample layer and one - convolutional layer. It can be one interpolation upsample layer followed - by one convolutional layer (conv_first=False) or one convolutional layer - followed by one interpolation upsample layer (conv_first=True). - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - norm_cfg (dict | None): Config dict for normalization layer. - Default: dict(type='BN'). - act_cfg (dict | None): Config dict for activation layer in ConvModule. - Default: dict(type='ReLU'). - conv_cfg (dict | None): Config dict for convolution layer. - Default: None. - conv_first (bool): Whether convolutional layer or interpolation - upsample layer first. Default: False. It means interpolation - upsample layer followed by one convolutional layer. - kernel_size (int): Kernel size of the convolutional layer. Default: 1. - stride (int): Stride of the convolutional layer. Default: 1. - padding (int): Padding of the convolutional layer. Default: 1. - upsample_cfg (dict): Interpolation config of the upsample layer. - Default: dict( - scale_factor=2, mode='bilinear', align_corners=False). 
- """ - - def __init__(self, - in_channels, - out_channels, - with_cp=False, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - *, - conv_cfg=None, - conv_first=False, - kernel_size=1, - stride=1, - padding=0, - upsample_cfg=dict( - scale_factor=2, mode='bilinear', align_corners=False)): - super(InterpConv, self).__init__() - - self.with_cp = with_cp - conv = ConvModule( - in_channels, - out_channels, - kernel_size=kernel_size, - stride=stride, - padding=padding, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - upsample = nn.Upsample(**upsample_cfg) - if conv_first: - self.interp_upsample = nn.Sequential(conv, upsample) - else: - self.interp_upsample = nn.Sequential(upsample, conv) - - def forward(self, x): - """Forward function.""" - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(self.interp_upsample, x) - else: - out = self.interp_upsample(x) - return out - - -@BACKBONES.register_module() -class UNet(nn.Module): - """UNet backbone. - U-Net: Convolutional Networks for Biomedical Image Segmentation. - https://arxiv.org/pdf/1505.04597.pdf - - Args: - in_channels (int): Number of input image channels. Default" 3. - base_channels (int): Number of base channels of each stage. - The output channels of the first stage. Default: 64. - num_stages (int): Number of stages in encoder, normally 5. Default: 5. - strides (Sequence[int 1 | 2]): Strides of each stage in encoder. - len(strides) is equal to num_stages. Normally the stride of the - first stage in encoder is 1. If strides[i]=2, it uses stride - convolution to downsample in the correspondence encoder stage. - Default: (1, 1, 1, 1, 1). - enc_num_convs (Sequence[int]): Number of convolutional layers in the - convolution block of the correspondence encoder stage. - Default: (2, 2, 2, 2, 2). - dec_num_convs (Sequence[int]): Number of convolutional layers in the - convolution block of the correspondence decoder stage. - Default: (2, 2, 2, 2). - downsamples (Sequence[int]): Whether use MaxPool to downsample the - feature map after the first stage of encoder - (stages: [1, num_stages)). If the correspondence encoder stage use - stride convolution (strides[i]=2), it will never use MaxPool to - downsample, even downsamples[i-1]=True. - Default: (True, True, True, True). - enc_dilations (Sequence[int]): Dilation rate of each stage in encoder. - Default: (1, 1, 1, 1, 1). - dec_dilations (Sequence[int]): Dilation rate of each stage in decoder. - Default: (1, 1, 1, 1). - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - conv_cfg (dict | None): Config dict for convolution layer. - Default: None. - norm_cfg (dict | None): Config dict for normalization layer. - Default: dict(type='BN'). - act_cfg (dict | None): Config dict for activation layer in ConvModule. - Default: dict(type='ReLU'). - upsample_cfg (dict): The upsample config of the upsample module in - decoder. Default: dict(type='InterpConv'). - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. Default: False. - dcn (bool): Use deformable convolution in convolutional layer or not. - Default: None. - plugins (dict): plugins for convolutional layers. Default: None. - - Notice: - The input image size should be divisible by the whole downsample rate - of the encoder. More detail of the whole downsample rate can be found - in UNet._check_input_divisible. 
- - """ - - def __init__(self, - in_channels=3, - base_channels=64, - num_stages=5, - strides=(1, 1, 1, 1, 1), - enc_num_convs=(2, 2, 2, 2, 2), - dec_num_convs=(2, 2, 2, 2), - downsamples=(True, True, True, True), - enc_dilations=(1, 1, 1, 1, 1), - dec_dilations=(1, 1, 1, 1), - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - upsample_cfg=dict(type='InterpConv'), - norm_eval=False, - dcn=None, - plugins=None): - super(UNet, self).__init__() - assert dcn is None, 'Not implemented yet.' - assert plugins is None, 'Not implemented yet.' - assert len(strides) == num_stages, \ - 'The length of strides should be equal to num_stages, '\ - f'while the strides is {strides}, the length of '\ - f'strides is {len(strides)}, and the num_stages is '\ - f'{num_stages}.' - assert len(enc_num_convs) == num_stages, \ - 'The length of enc_num_convs should be equal to num_stages, '\ - f'while the enc_num_convs is {enc_num_convs}, the length of '\ - f'enc_num_convs is {len(enc_num_convs)}, and the num_stages is '\ - f'{num_stages}.' - assert len(dec_num_convs) == (num_stages-1), \ - 'The length of dec_num_convs should be equal to (num_stages-1), '\ - f'while the dec_num_convs is {dec_num_convs}, the length of '\ - f'dec_num_convs is {len(dec_num_convs)}, and the num_stages is '\ - f'{num_stages}.' - assert len(downsamples) == (num_stages-1), \ - 'The length of downsamples should be equal to (num_stages-1), '\ - f'while the downsamples is {downsamples}, the length of '\ - f'downsamples is {len(downsamples)}, and the num_stages is '\ - f'{num_stages}.' - assert len(enc_dilations) == num_stages, \ - 'The length of enc_dilations should be equal to num_stages, '\ - f'while the enc_dilations is {enc_dilations}, the length of '\ - f'enc_dilations is {len(enc_dilations)}, and the num_stages is '\ - f'{num_stages}.' - assert len(dec_dilations) == (num_stages-1), \ - 'The length of dec_dilations should be equal to (num_stages-1), '\ - f'while the dec_dilations is {dec_dilations}, the length of '\ - f'dec_dilations is {len(dec_dilations)}, and the num_stages is '\ - f'{num_stages}.' 
- self.num_stages = num_stages - self.strides = strides - self.downsamples = downsamples - self.norm_eval = norm_eval - self.base_channels = base_channels - - self.encoder = nn.ModuleList() - self.decoder = nn.ModuleList() - - for i in range(num_stages): - enc_conv_block = [] - if i != 0: - if strides[i] == 1 and downsamples[i - 1]: - enc_conv_block.append(nn.MaxPool2d(kernel_size=2)) - upsample = (strides[i] != 1 or downsamples[i - 1]) - self.decoder.append( - UpConvBlock( - conv_block=BasicConvBlock, - in_channels=base_channels * 2**i, - skip_channels=base_channels * 2**(i - 1), - out_channels=base_channels * 2**(i - 1), - num_convs=dec_num_convs[i - 1], - stride=1, - dilation=dec_dilations[i - 1], - with_cp=with_cp, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - upsample_cfg=upsample_cfg if upsample else None, - dcn=None, - plugins=None)) - - enc_conv_block.append( - BasicConvBlock( - in_channels=in_channels, - out_channels=base_channels * 2**i, - num_convs=enc_num_convs[i], - stride=strides[i], - dilation=enc_dilations[i], - with_cp=with_cp, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - dcn=None, - plugins=None)) - self.encoder.append((nn.Sequential(*enc_conv_block))) - in_channels = base_channels * 2**i - - def forward(self, x): - self._check_input_divisible(x) - enc_outs = [] - for enc in self.encoder: - x = enc(x) - enc_outs.append(x) - dec_outs = [x] - for i in reversed(range(len(self.decoder))): - x = self.decoder[i](enc_outs[i], x) - dec_outs.append(x) - - return dec_outs - - def train(self, mode=True): - """Convert the model into training mode while keep normalization layer - freezed.""" - super(UNet, self).train(mode) - if mode and self.norm_eval: - for m in self.modules(): - # trick: eval have effect on BatchNorm only - if isinstance(m, _BatchNorm): - m.eval() - - def _check_input_divisible(self, x): - h, w = x.shape[-2:] - whole_downsample_rate = 1 - for i in range(1, self.num_stages): - if self.strides[i] == 2 or self.downsamples[i - 1]: - whole_downsample_rate *= 2 - assert (h % whole_downsample_rate == 0) \ - and (w % whole_downsample_rate == 0),\ - f'The input image size {(h, w)} should be divisible by the whole '\ - f'downsample rate {whole_downsample_rate}, when num_stages is '\ - f'{self.num_stages}, strides is {self.strides}, and downsamples '\ - f'is {self.downsamples}.' - - def init_weights(self, pretrained=None): - """Initialize the weights in backbone. - - Args: - pretrained (str, optional): Path to pre-trained weights. - Defaults to None. - """ - if isinstance(pretrained, str): - logger = get_root_logger() - load_checkpoint(self, pretrained, strict=False, logger=logger) - elif pretrained is None: - for m in self.modules(): - if isinstance(m, nn.Conv2d): - kaiming_init(m) - elif isinstance(m, (_BatchNorm, nn.GroupNorm)): - constant_init(m, 1) - else: - raise TypeError('pretrained must be a str or None') diff --git a/spaces/szk1ck/similarity_by_fasttext_api/Dockerfile b/spaces/szk1ck/similarity_by_fasttext_api/Dockerfile deleted file mode 100644 index 4a5a821629c9a08569f0e83004405a13032cd177..0000000000000000000000000000000000000000 --- a/spaces/szk1ck/similarity_by_fasttext_api/Dockerfile +++ /dev/null @@ -1,11 +0,0 @@ -FROM python:3.9 - -WORKDIR /code - -COPY ./requirements.txt /code/requirements.txt - -RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt - -COPY . . 
- -CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "7860"] diff --git a/spaces/szukevin/VISOR-GPT/train/tencentpretrain/utils/constants.py b/spaces/szukevin/VISOR-GPT/train/tencentpretrain/utils/constants.py deleted file mode 100644 index 28bbbd82e467d041237f785b8934f726cdd1b706..0000000000000000000000000000000000000000 --- a/spaces/szukevin/VISOR-GPT/train/tencentpretrain/utils/constants.py +++ /dev/null @@ -1,17 +0,0 @@ -import json -import os -tencentpretrain_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), "../../")) - -with open(os.path.join(tencentpretrain_dir, "models/special_tokens_map.json"), mode="r", encoding="utf-8") as f: - special_tokens_map = json.load(f) - -UNK_TOKEN = special_tokens_map["unk_token"] -CLS_TOKEN = special_tokens_map["cls_token"] -SEP_TOKEN = special_tokens_map["sep_token"] -MASK_TOKEN = special_tokens_map["mask_token"] -PAD_TOKEN = special_tokens_map["pad_token"] -try: - # e.g. , , ... , should have consecutive IDs. - SENTINEL_TOKEN = special_tokens_map["sentinel_token"] -except KeyError: - pass diff --git a/spaces/t110-ai-admin/InspectLens/video_llama/datasets/data_utils.py b/spaces/t110-ai-admin/InspectLens/video_llama/datasets/data_utils.py deleted file mode 100644 index 8fe6a567bae667f00ef0ee1d4d9075649107b471..0000000000000000000000000000000000000000 --- a/spaces/t110-ai-admin/InspectLens/video_llama/datasets/data_utils.py +++ /dev/null @@ -1,196 +0,0 @@ -""" - Copyright (c) 2022, salesforce.com, inc. - All rights reserved. - SPDX-License-Identifier: BSD-3-Clause - For full license text, see the LICENSE_Lavis file in the repo root or https://opensource.org/licenses/BSD-3-Clause -""" - -import gzip -import logging -import os -import random as rnd -import tarfile -import zipfile -import random -from typing import List -from tqdm import tqdm - -import decord -from decord import VideoReader -import webdataset as wds -import numpy as np -import torch -from torch.utils.data.dataset import IterableDataset - -from video_llama.common.registry import registry -from video_llama.datasets.datasets.base_dataset import ConcatDataset - - -decord.bridge.set_bridge("torch") -MAX_INT = registry.get("MAX_INT") - - -class ChainDataset(wds.DataPipeline): - r"""Dataset for chaining multiple :class:`DataPipeline` s. - - This class is useful to assemble different existing dataset streams. The - chaining operation is done on-the-fly, so concatenating large-scale - datasets with this class will be efficient. 
- - Args: - datasets (iterable of IterableDataset): datasets to be chained together - """ - def __init__(self, datasets: List[wds.DataPipeline]) -> None: - super().__init__() - self.datasets = datasets - self.prob = [] - self.names = [] - for dataset in self.datasets: - if hasattr(dataset, 'name'): - self.names.append(dataset.name) - else: - self.names.append('Unknown') - if hasattr(dataset, 'sample_ratio'): - self.prob.append(dataset.sample_ratio) - else: - self.prob.append(1) - logging.info("One of the datapipeline doesn't define ratio and set to 1 automatically.") - - def __iter__(self): - datastreams = [iter(dataset) for dataset in self.datasets] - while True: - select_datastream = random.choices(datastreams, weights=self.prob, k=1)[0] - yield next(select_datastream) - - -def apply_to_sample(f, sample): - if len(sample) == 0: - return {} - - def _apply(x): - if torch.is_tensor(x): - return f(x) - elif isinstance(x, dict): - return {key: _apply(value) for key, value in x.items()} - elif isinstance(x, list): - return [_apply(x) for x in x] - else: - return x - - return _apply(sample) - - -def move_to_cuda(sample): - def _move_to_cuda(tensor): - return tensor.cuda() - - return apply_to_sample(_move_to_cuda, sample) - - -def prepare_sample(samples, cuda_enabled=True): - if cuda_enabled: - samples = move_to_cuda(samples) - - # TODO fp16 support - - return samples - - -def reorg_datasets_by_split(datasets): - """ - Organizes datasets by split. - - Args: - datasets: dict of torch.utils.data.Dataset objects by name. - - Returns: - Dict of datasets by split {split_name: List[Datasets]}. - """ - # if len(datasets) == 1: - # return datasets[list(datasets.keys())[0]] - # else: - reorg_datasets = dict() - - # reorganize by split - for _, dataset in datasets.items(): - for split_name, dataset_split in dataset.items(): - if split_name not in reorg_datasets: - reorg_datasets[split_name] = [dataset_split] - else: - reorg_datasets[split_name].append(dataset_split) - - return reorg_datasets - - -def concat_datasets(datasets): - """ - Concatenates multiple datasets into a single dataset. - - It supports may-style datasets and DataPipeline from WebDataset. Currently, does not support - generic IterableDataset because it requires creating separate samplers. - - Now only supports conctenating training datasets and assuming validation and testing - have only a single dataset. This is because metrics should not be computed on the concatenated - datasets. - - Args: - datasets: dict of torch.utils.data.Dataset objects by split. - - Returns: - Dict of concatenated datasets by split, "train" is the concatenation of multiple datasets, - "val" and "test" remain the same. - - If the input training datasets contain both map-style and DataPipeline datasets, returns - a tuple, where the first element is a concatenated map-style dataset and the second - element is a chained DataPipeline dataset. 
- - """ - # concatenate datasets in the same split - for split_name in datasets: - if split_name != "train": - assert ( - len(datasets[split_name]) == 1 - ), "Do not support multiple {} datasets.".format(split_name) - datasets[split_name] = datasets[split_name][0] - else: - iterable_datasets, map_datasets = [], [] - for dataset in datasets[split_name]: - if isinstance(dataset, wds.DataPipeline): - logging.info( - "Dataset {} is IterableDataset, can't be concatenated.".format( - dataset - ) - ) - iterable_datasets.append(dataset) - elif isinstance(dataset, IterableDataset): - raise NotImplementedError( - "Do not support concatenation of generic IterableDataset." - ) - else: - map_datasets.append(dataset) - - # if len(iterable_datasets) > 0: - # concatenate map-style datasets and iterable-style datasets separately - if len(iterable_datasets) > 1: - chained_datasets = ( - ChainDataset(iterable_datasets) - ) - elif len(iterable_datasets) == 1: - chained_datasets = iterable_datasets[0] - else: - chained_datasets = None - - concat_datasets = ( - ConcatDataset(map_datasets) if len(map_datasets) > 0 else None - ) - - train_datasets = concat_datasets, chained_datasets - train_datasets = tuple([x for x in train_datasets if x is not None]) - train_datasets = ( - train_datasets[0] if len(train_datasets) == 1 else train_datasets - ) - - datasets[split_name] = train_datasets - - return datasets - diff --git a/spaces/taesiri/ChatGPT-ImageCaptioner/detic/data/datasets/lvis_v1.py b/spaces/taesiri/ChatGPT-ImageCaptioner/detic/data/datasets/lvis_v1.py deleted file mode 100644 index 4b9b279f17663def1c4913321efbb7490d591e90..0000000000000000000000000000000000000000 --- a/spaces/taesiri/ChatGPT-ImageCaptioner/detic/data/datasets/lvis_v1.py +++ /dev/null @@ -1,155 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-import logging -import os - -from fvcore.common.timer import Timer -from detectron2.structures import BoxMode -from fvcore.common.file_io import PathManager -from detectron2.data import DatasetCatalog, MetadataCatalog -from detectron2.data.datasets.lvis import get_lvis_instances_meta - -logger = logging.getLogger(__name__) - -__all__ = ["custom_load_lvis_json", "custom_register_lvis_instances"] - - -def custom_register_lvis_instances(name, metadata, json_file, image_root): - """ - """ - DatasetCatalog.register(name, lambda: custom_load_lvis_json( - json_file, image_root, name)) - MetadataCatalog.get(name).set( - json_file=json_file, image_root=image_root, - evaluator_type="lvis", **metadata - ) - - -def custom_load_lvis_json(json_file, image_root, dataset_name=None): - ''' - Modifications: - use `file_name` - convert neg_category_ids - add pos_category_ids - ''' - from lvis import LVIS - - json_file = PathManager.get_local_path(json_file) - - timer = Timer() - lvis_api = LVIS(json_file) - if timer.seconds() > 1: - logger.info("Loading {} takes {:.2f} seconds.".format( - json_file, timer.seconds())) - - catid2contid = {x['id']: i for i, x in enumerate( - sorted(lvis_api.dataset['categories'], key=lambda x: x['id']))} - if len(lvis_api.dataset['categories']) == 1203: - for x in lvis_api.dataset['categories']: - assert catid2contid[x['id']] == x['id'] - 1 - img_ids = sorted(lvis_api.imgs.keys()) - imgs = lvis_api.load_imgs(img_ids) - anns = [lvis_api.img_ann_map[img_id] for img_id in img_ids] - - ann_ids = [ann["id"] for anns_per_image in anns for ann in anns_per_image] - assert len(set(ann_ids)) == len(ann_ids), \ - "Annotation ids in '{}' are not unique".format(json_file) - - imgs_anns = list(zip(imgs, anns)) - logger.info("Loaded {} images in the LVIS v1 format from {}".format( - len(imgs_anns), json_file)) - - dataset_dicts = [] - - for (img_dict, anno_dict_list) in imgs_anns: - record = {} - if "file_name" in img_dict: - file_name = img_dict["file_name"] - if img_dict["file_name"].startswith("COCO"): - file_name = file_name[-16:] - record["file_name"] = os.path.join(image_root, file_name) - elif 'coco_url' in img_dict: - # e.g., http://images.cocodataset.org/train2017/000000391895.jpg - file_name = img_dict["coco_url"][30:] - record["file_name"] = os.path.join(image_root, file_name) - elif 'tar_index' in img_dict: - record['tar_index'] = img_dict['tar_index'] - - record["height"] = img_dict["height"] - record["width"] = img_dict["width"] - record["not_exhaustive_category_ids"] = img_dict.get( - "not_exhaustive_category_ids", []) - record["neg_category_ids"] = img_dict.get("neg_category_ids", []) - # NOTE: modified by Xingyi: convert to 0-based - record["neg_category_ids"] = [ - catid2contid[x] for x in record["neg_category_ids"]] - if 'pos_category_ids' in img_dict: - record['pos_category_ids'] = [ - catid2contid[x] for x in img_dict.get("pos_category_ids", [])] - if 'captions' in img_dict: - record['captions'] = img_dict['captions'] - if 'caption_features' in img_dict: - record['caption_features'] = img_dict['caption_features'] - image_id = record["image_id"] = img_dict["id"] - - objs = [] - for anno in anno_dict_list: - assert anno["image_id"] == image_id - if anno.get('iscrowd', 0) > 0: - continue - obj = {"bbox": anno["bbox"], "bbox_mode": BoxMode.XYWH_ABS} - obj["category_id"] = catid2contid[anno['category_id']] - if 'segmentation' in anno: - segm = anno["segmentation"] - valid_segm = [poly for poly in segm \ - if len(poly) % 2 == 0 and len(poly) >= 6] - # assert len(segm) == len( 
- # valid_segm - # ), "Annotation contains an invalid polygon with < 3 points" - if not len(segm) == len(valid_segm): - print('Annotation contains an invalid polygon with < 3 points') - assert len(segm) > 0 - obj["segmentation"] = segm - objs.append(obj) - record["annotations"] = objs - dataset_dicts.append(record) - - return dataset_dicts - -_CUSTOM_SPLITS_LVIS = { - "lvis_v1_train+coco": ("coco/", "lvis/lvis_v1_train+coco_mask.json"), - "lvis_v1_train_norare": ("coco/", "lvis/lvis_v1_train_norare.json"), -} - - -for key, (image_root, json_file) in _CUSTOM_SPLITS_LVIS.items(): - custom_register_lvis_instances( - key, - get_lvis_instances_meta(key), - os.path.join("datasets", json_file) if "://" not in json_file else json_file, - os.path.join("datasets", image_root), - ) - - -def get_lvis_22k_meta(): - from .lvis_22k_categories import CATEGORIES - cat_ids = [k["id"] for k in CATEGORIES] - assert min(cat_ids) == 1 and max(cat_ids) == len( - cat_ids - ), "Category ids are not in [1, #categories], as expected" - # Ensure that the category list is sorted by id - lvis_categories = sorted(CATEGORIES, key=lambda x: x["id"]) - thing_classes = [k["name"] for k in lvis_categories] - meta = {"thing_classes": thing_classes} - return meta - -_CUSTOM_SPLITS_LVIS_22K = { - "lvis_v1_train_22k": ("coco/", "lvis/lvis_v1_train_lvis-22k.json"), -} - -for key, (image_root, json_file) in _CUSTOM_SPLITS_LVIS_22K.items(): - custom_register_lvis_instances( - key, - get_lvis_22k_meta(), - os.path.join("datasets", json_file) if "://" not in json_file else json_file, - os.path.join("datasets", image_root), - ) \ No newline at end of file diff --git a/spaces/terfces0erbo/CollegeProjectV2/Docuworks 7.2 Serial Number REPACK.md b/spaces/terfces0erbo/CollegeProjectV2/Docuworks 7.2 Serial Number REPACK.md deleted file mode 100644 index 29adccbf52e065a60db4755f0a18eee4843d57f1..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Docuworks 7.2 Serial Number REPACK.md +++ /dev/null @@ -1,6 +0,0 @@ -

                        Docuworks 7.2 serial number


                        Download Zip ✫✫✫ https://bytlly.com/2uGjwx



                        - - 3cee63e6c2
                        -
                        -
                        -

                        diff --git a/spaces/terfces0erbo/CollegeProjectV2/Government Hcl Ltc Model 02102 Laptop Drivers For Windows 7 Full VERIFIED.md b/spaces/terfces0erbo/CollegeProjectV2/Government Hcl Ltc Model 02102 Laptop Drivers For Windows 7 Full VERIFIED.md deleted file mode 100644 index 752bcd98546e277e6a63c902365f1757bc8645a3..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Government Hcl Ltc Model 02102 Laptop Drivers For Windows 7 Full VERIFIED.md +++ /dev/null @@ -1,35 +0,0 @@ - -

                        How to Download and Install Government Hcl Ltc Model 02102 Laptop Drivers For Windows 7 Full

                        -

                        If you are looking for a way to download and install government Hcl Ltc model 02102 laptop drivers for Windows 7 full, you have come to the right place. In this article, we will show you how to find the correct drivers for your laptop model and how to install them on your Windows 7 operating system.

                        -

                        Government Hcl Ltc model 02102 is a laptop that was distributed by the Indian government to various educational institutions and government employees under the National Mission on Education through Information and Communication Technology (NMEICT) scheme. This laptop has a 14-inch screen, an Intel Core i3 processor, 2 GB of RAM, a 320 GB hard disk, and a DVD writer. It also comes with a pre-installed Windows 7 operating system.

                        -

                        Government Hcl Ltc Model 02102 Laptop Drivers For Windows 7 Full


                        Download Zip 🆓 https://bytlly.com/2uGiXl



                        -

                        However, some users may face issues with the performance or compatibility of their laptop due to outdated or missing drivers. Drivers are software components that enable the communication between the hardware devices and the operating system. Without proper drivers, your laptop may not function properly or may experience errors or crashes.

                        -

                        Therefore, it is important to keep your drivers updated and install the correct ones for your laptop model. Here are the steps to download and install government Hcl Ltc model 02102 laptop drivers for Windows 7 full.

                        -

                        Step 1: Identify your laptop model and hardware devices

                        -

                        The first step is to identify your laptop model and the hardware devices that are installed on it. You can do this by checking the label on the bottom of your laptop or by using a software tool like Speccy or CPU-Z. These tools can scan your laptop and provide you with detailed information about its specifications and components.

                        -

                        Alternatively, you can also use the Device Manager in Windows 7 to check the hardware devices on your laptop. To access the Device Manager, follow these steps:

                        -
                          -
                        • Click on the Start button and type "device manager" in the search box.
                        • -
                        • Click on the Device Manager option that appears in the results.
                        • -
                        • A window will open that shows a list of categories of hardware devices on your laptop.
                        • -
                        • Expand each category to see the specific devices under it.
                        • -
                        • Note down the name and model of each device that you want to update or install drivers for.
                        • -
                        -

                        Step 2: Download the drivers from the official website

                        -

The next step is to download the drivers from the official website of HCL. The website has a dedicated page for government laptops where you can find the drivers for various models and operating systems. To download the drivers from the website, follow these steps:

                        -
                          -
                        • Go to http://www.hclsupportservice.in/drvr-dwnld.jsp?pF=RGF0YUxhcHRvcA==&pC=TWluaUxhcHRvcA==.
                        • -
                        • Select your laptop model from the drop-down menu. In this case, select "Government Hcl Ltc Model 02102".
                        • -
                        • Select your operating system from the drop-down menu. In this case, select "Windows 7".
                        • -
                        • A list of drivers will appear below for various hardware devices such as audio, video, chipset, LAN, WLAN, Bluetooth, card reader, touchpad, webcam, etc.
                        • -
                        • Click on the download link for each driver that you need and save it to a folder on your laptop or a USB drive.
                        • -
                        -

                        Step 3: Install the drivers on your laptop

                        -

                        The final step is to install the drivers on your laptop. To install the drivers, follow these steps:

                        -
                          -
                        • Locate the folder or USB drive where you saved the downloaded drivers.
                        • -
                        • Double-click on each driver file to run it.
                        • -
• A wizard will guide you through the installation process. Follow the instructions on the screen and accept any terms and conditions or license agreements that appear. Repeat this for each downloaded driver, then restart your laptop so that the new drivers take effect.

                          -

                          d5da3c52bf
                          -
                          -
                          \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Cara Download IDM Full Crack Tanpa Registrasi Gratis Selamanya.md b/spaces/tialenAdioni/chat-gpt-api/logs/Cara Download IDM Full Crack Tanpa Registrasi Gratis Selamanya.md deleted file mode 100644 index 0e3d8bf162dc6d741cea31fe711ab22020af63e2..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Cara Download IDM Full Crack Tanpa Registrasi Gratis Selamanya.md +++ /dev/null @@ -1,25 +0,0 @@ -
                          -

                          Cara Download IDM Full Crack Tanpa Registrasi Gratis Selamanya

                          -

                          Internet Download Manager atau IDM adalah salah satu aplikasi yang sangat populer dan berguna untuk meningkatkan kecepatan download, mengelola file, dan mengintegrasikan browser di PC Anda. Aplikasi ini dapat meningkatkan kecepatan download hingga lima kali lipat dengan sistem multipart yang membagi file menjadi bagian-bagian kecil. Aplikasi ini juga mendukung firewall, redirect, proxy server, cookie, otorisasi, FTP dan HTTP protocol, MPEG video, dan MP3 audio. Namun, IDM adalah aplikasi berbayar yang membutuhkan lisensi atau serial number untuk menggunakannya secara legal. Harganya juga tidak terlalu mahal, namun banyak pengguna yang ingin mencoba aplikasi ini terlebih dahulu sebelum membelinya. Oleh karena itu, banyak pengguna mencari cara download IDM full crack tanpa registrasi gratis selamanya di internet. Apakah Anda salah satunya? Jika iya, maka Anda berada di tempat yang tepat. Dalam artikel ini, saya akan menunjukkan kepada Anda cara download IDM full crack tanpa registrasi gratis selamanya dan apa saja kelebihan dan kekurangan dari cara ini.

                          -

                          free download idm full crack tanpa registrasi


                          DOWNLOAD 🆓 https://urlcod.com/2uK96m



                          -

                          Apa itu IDM full crack?

                          -

                          IDM full crack adalah versi bajakan dari IDM yang telah dimodifikasi atau dihack untuk melewati proses aktivasi dan membuatnya berfungsi tanpa lisensi yang valid. Ini berarti bahwa Anda dapat menggunakan semua fitur dan fungsi dari IDM tanpa membayar apapun. Namun, ini juga berarti bahwa Anda melanggar syarat dan ketentuan dari Microsoft dan melanggar hukum dengan menggunakan salinan ilegal dari software ini.

                          -

                          Bagaimana cara download IDM full crack tanpa registrasi gratis selamanya?

                          -

                          Untuk download IDM full crack tanpa registrasi gratis selamanya, Anda perlu mengikuti langkah-langkah berikut:

                          -
                            -
                          1. Hapus atau uninstall versi IDM yang sudah ada di komputer Anda.
                          2. -
                          3. Download installer offline IDM dari sumber yang terpercaya. Anda dapat menemukan banyak situs web yang menawarkan IDM full crack tanpa registrasi gratis selamanya di internet, namun berhati-hatilah karena beberapa di antaranya mungkin mengandung virus atau malware yang dapat merusak komputer Anda. Salah satu situs web yang dapat Anda coba adalah YASIR252 , yang menyediakan IDM 2021 Professional Plus dengan crack untuk sistem 32-bit dan 64-bit.
                          4. -
                          5. Ekstrak file yang telah Anda download menggunakan WinRAR atau software lainnya.
                          6. -
                          7. Jalankan file setup.exe sebagai administrator dan ikuti instruksi untuk menginstall IDM di komputer Anda.
                          8. -
                          9. Setelah instalasi selesai, jangan buka aplikasi IDM.
                          10. -
                          11. Download aktivator IDM dari situs web yang sama atau sumber lain. Aktivator adalah alat yang dapat menghasilkan kunci lisensi palsu dan mengaktifkan IDM tanpa memerlukan koneksi internet. Salah satu aktivator yang dapat Anda gunakan adalah KMSpico, yang dapat mengaktifkan IDM 2019 dan 2021.
                          12. -
                          13. Ekstrak file aktivator menggunakan WinRAR atau software lainnya.
                          14. -
                          15. Jalankan file KMSpico.exe sebagai administrator dan tunggu sampai mendeteksi versi IDM yang telah Anda install.
                          16. -
                          17. Klik tombol merah untuk mengaktifkan IDM.
                          18. -
                          19. Selamat! Anda telah berhasil download dan install IDM full crack tanpa registrasi gratis selamanya.
                          20. -
                          -

                          Apa saja kelebihan dan kekurangan dari download IDM full crack tanpa registrasi gratis selamanya?

                          -

                          Download IDM full crack tanpa registrasi gratis selamanya mungkin terdengar seperti ide yang bagus pada awalnya, namun juga memiliki beberapa kelemahan yang harus Anda ketahui. Berikut adalah beberapa kelebihan dan kekurangan dari

                          -

                          ddb901b051
                          -
                          -
                          \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Charmilles Technologies CT Expert For PC Download The Ultimate Solution for 4-Axis Copper Electrode Processing.md b/spaces/tialenAdioni/chat-gpt-api/logs/Charmilles Technologies CT Expert For PC Download The Ultimate Solution for 4-Axis Copper Electrode Processing.md deleted file mode 100644 index 59a447749034f38b25cc5f6b6ba9eaca69edb9e7..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Charmilles Technologies CT Expert For PC Download The Ultimate Solution for 4-Axis Copper Electrode Processing.md +++ /dev/null @@ -1,115 +0,0 @@ -
                          -

                          Charmilles Technologies CT Expert For PC Download: A Guide to Wire EDM Programming

                          - -

                          If you are looking for a software that can help you program and optimize your wire EDM machines from Charmilles Technologies, you might be interested in CT Expert. CT Expert is a system that selects the best machining settings, suggests the best wire, automatically calculates all the offsets and creates a command program linking the various machining sequences. In this article, we will explain what CT Expert is, how to install it on your PC, and how to use it for your wire EDM projects.

                          - -

                          What is CT Expert?

                          - -

CT Expert is a software package provided by GF AgieCharmilles, the manufacturer of Charmilles wire EDM machines. It is designed to simplify and automate the programming and optimization of wire EDM machining, and can help you achieve faster cutting speeds, better surface quality, lower wire consumption, and higher accuracy.

                          -

                          Charmilles Technologies CT Expert For PC Download


                          DOWNLOAD >> https://urlcod.com/2uK76t



                          - -

                          CT Expert can be used for various models of Charmilles NC machines, such as Charmilles, Charmilles-Fanuc, Charmilles-Millennium, and Charmilles-Orange. It can also be integrated with ESPRIT, a CAM software that supports wire EDM machining.

                          - -

                          How to Install CT Expert on Your PC?

                          - -

                          To install CT Expert on your PC, you need to download it from the official website of GF AgieCharmilles or from a trusted source. You also need to have a license key to activate the software. You can contact your local GF AgieCharmilles representative or distributor to get the license key.

                          - -

                          Once you have downloaded the software and obtained the license key, you can follow these steps to install CT Expert on your PC:

                          - -
                            -
                          1. Run the setup file and follow the instructions on the screen.
                          2. -
                          3. Choose a destination folder and a program folder for the software. You should choose a different folder for each model of Charmilles NC machine you have. Otherwise, the new installation will overwrite the previous one.
                          4. -
                          5. Enter the license key when prompted.
                          6. -
                          7. Finish the installation and restart your PC.
                          8. -
                          - -

                          How to Use CT Expert for Wire EDM Programming?

                          - -

                          To use CT Expert for wire EDM programming, you need to have a CAD file of your part or a DXF file of your profile. You can import these files into ESPRIT or into CT Expert directly. Then you can follow these steps to create and optimize your wire EDM program:

                          - -
                            -
                          1. Select your machine type and model from the list of available machines.
                          2. -
                          3. Select your material type and thickness from the list of available materials.
                          4. -
                          5. Select your wire type and diameter from the list of available wires.
                          6. -
                          7. Select your desired cutting quality from the list of available qualities.
                          8. -
                          9. CT Expert will automatically calculate the best machining settings, such as power, frequency, tension, feed rate, etc., based on your selections.
                          10. -
                          11. CT Expert will also automatically calculate all the offsets and create a command program linking the various machining sequences, such as roughing, finishing, tapering, etc.
                          12. -
                          13. You can preview and edit your program in ESPRIT or in CT Expert directly.
                          14. -
                          15. You can save and export your program as an ISO file or as a specific machine file format.
                          16. -
                          17. You can transfer your program to your wire EDM machine via a network connection or a USB drive.
                          18. -
                          - -

                          Conclusion

                          - -

CT Expert is a software tool that can help you program and optimize your wire EDM machines from Charmilles Technologies. It can save you time and money by selecting the best machining settings, suggesting the best wire, automatically calculating all the offsets and creating a command program linking the various machining sequences. To use CT Expert, you need to install it on your PC with a license key and import your CAD or DXF files into ESPRIT or into CT Expert directly. Then you can create and edit your wire EDM program with ease and transfer it to your machine. If you want to learn more about CT Expert or download it for your PC, you can visit the official website of GF AgieCharmilles or contact your local representative or distributor.

                          -

                          What are the Benefits of Using CT Expert for Wire EDM Programming?

                          - -

                          Using CT Expert for wire EDM programming can bring you many benefits that can improve your productivity and profitability. Here are some of the benefits of using CT Expert:

                          - -
                            -
                          • You can save time and money: CT Expert can help you reduce your programming time and costs by automating and simplifying the process. You don't need to manually enter the machining parameters or calculate the offsets. You also don't need to test and adjust your program on the machine. CT Expert can do all these tasks for you with a few clicks.
                          • -
                          • You can improve your cutting quality and accuracy: CT Expert can help you achieve better cutting results by selecting the optimal machining settings and wire for your material and thickness. You can also get access to Charmilles cut data, which are based on extensive research and testing by GF AgieCharmilles. These data can ensure that your program is compatible with your machine and wire.
                          • -
                          • You can increase your flexibility and versatility: CT Expert can help you handle various types of wire EDM projects with ease. You can program different types of machining sequences, such as roughing, finishing, tapering, etc. You can also program different types of geometries, such as 2D, 3D, 4-axis, etc. You can also switch between different models of Charmilles NC machines without changing your program.
                          • -
                          - -

                          FAQs about CT Expert

                          - -

                          Here are some frequently asked questions about CT Expert and their answers:

                          - -
                            -
                          1. What are the system requirements for CT Expert?
                          2. -

                            To run CT Expert on your PC, you need to have a Windows operating system (XP, Vista, 7, 8, or 10), a Pentium processor or higher, at least 256 MB of RAM, at least 100 MB of free disk space, a CD-ROM drive or a USB port, and a network connection or a USB drive.

                            -


                            -
                          3. How to get CT Expert for your PC?
                          4. -

                            To get CT Expert for your PC, you need to download it from the official website of GF AgieCharmilles or from a trusted source. You also need to have a license key to activate the software. You can contact your local GF AgieCharmilles representative or distributor to get the license key.

                            -
                          5. How to update CT Expert on your PC?
                          6. -

                            To update CT Expert on your PC, you need to download the latest version of the software from the official website of GF AgieCharmilles or from a trusted source. You also need to have a valid license key to activate the software. You can contact your local GF AgieCharmilles representative or distributor to get the license key.

                            -
                          7. How to get technical support for CT Expert?
                          8. -

                            To get technical support for CT Expert, you can contact your local GF AgieCharmilles representative or distributor or visit the official website of GF AgieCharmilles. You can also check the user manual or the online help of CT Expert for more information.

                            -
                          - -


                          -

                          What are the Features of CT Expert for Wire EDM Programming?

                          - -

                          CT Expert for wire EDM programming has many features that can make your work easier and faster. Here are some of the features of CT Expert:

                          - -
                            -
                          • Wire selection: CT Expert can suggest the best wire for your material and thickness. You can choose from a variety of wires, such as brass, coated, stratified, etc. You can also see the characteristics and advantages of each wire.
                          • -
                          • Machining settings: CT Expert can select the best machining settings for your wire and quality. You can see the values of power, frequency, tension, feed rate, etc. You can also adjust these values manually if you want.
                          • -
                          • Offset calculation: CT Expert can automatically calculate all the offsets for your profile and geometry. You can see the values of corner compensation, taper angle, conicity, etc. You can also modify these values manually if you want.
                          • -
                          • Machining sequence: CT Expert can automatically create a command program linking the various machining sequences for your profile and geometry. You can see the order and duration of each sequence, such as roughing, finishing, tapering, etc. You can also add or delete sequences manually if you want.
                          • -
                          • Cut data: CT Expert can access and use Charmilles cut data for your machine and wire. These data are based on extensive research and testing by GF AgieCharmilles and can ensure that your program is compatible with your machine and wire.
                          • -
                          • Preview and edit: CT Expert can preview and edit your program in ESPRIT or in CT Expert directly. You can see a graphical representation of your profile and geometry with the machining parameters and offsets. You can also modify your program by changing the values or adding commands.
                          • -
                          • Save and export: CT Expert can save and export your program as an ISO file or as a specific machine file format. You can choose the name and location of your file. You can also print or email your program if you want.
                          • -
                          • Transfer: CT Expert can transfer your program to your wire EDM machine via a network connection or a USB drive. You can choose the destination folder and machine name of your program. You can also verify or simulate your program on your machine before cutting.
                          • -
                          - -


                          -

                          Conclusion

                          - -

                          In this article, we have explained what CT Expert is, how to install it on your PC, and how to use it for wire EDM programming. We have also discussed the benefits and features of using CT Expert for your wire EDM projects. We hope this article has helped you to understand the importance of using a safe and reliable software for your wire EDM needs. If you have any questions or feedback, please feel free to contact us. Thank you for reading and happy wire EDM!

                          679dcb208e
                          -
                          -
                          \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Dhoom 3 telugu movie 1080p torrent The fastest and easiest way to watch the film on your device.md b/spaces/tialenAdioni/chat-gpt-api/logs/Dhoom 3 telugu movie 1080p torrent The fastest and easiest way to watch the film on your device.md deleted file mode 100644 index 5907b831a04eb294242c7e457d5e3daeca84afdc..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Dhoom 3 telugu movie 1080p torrent The fastest and easiest way to watch the film on your device.md +++ /dev/null @@ -1,101 +0,0 @@ -
                          -

                          Dhoom 3 Telugu Movie 1080p Torrent: How to Download and Enjoy the Action Thriller

                          -

                          Dhoom 3 is a 2013 Indian action thriller film starring Aamir Khan, Abhishek Bachchan, Katrina Kaif, and Uday Chopra. It is the third installment of the Dhoom series, which follows the adventures of a pair of cops and a master thief. Dhoom 3 was a huge blockbuster hit, breaking several box office records and becoming one of the highest-grossing Indian films of all time.

                          -

                          If you are a fan of Dhoom 3 and want to watch it in Telugu language, you might be looking for a way to download it in high quality. In this article, we will show you how to find and download Dhoom 3 telugu movie 1080p torrent, and what are the benefits and risks of using torrents. We will also give you some tips and tricks to enjoy the movie in full HD.

                          -

                          Dhoom 3 telugu movie 1080p torrent


                          Download File > https://urlcod.com/2uK7De



                          -

                          What is Dhoom 3 Telugu Movie 1080p Torrent

                          -

                          A torrent is a file that contains information about other files that are distributed over a peer-to-peer network. A torrent file does not contain the actual content of the files, but only their names, sizes, locations, and checksums. To download the files, you need a torrent client, which is a software that connects to other users who have the same torrent file and downloads the files from them.

                          -

                          Dhoom 3 telugu movie 1080p torrent is a torrent file that contains information about the Telugu version of Dhoom 3 movie in 1080p resolution. This means that the movie has a high definition quality of 1920x1080 pixels, which is suitable for large screens and monitors. By downloading this torrent file, you can get access to the Telugu version of Dhoom 3 movie in HD quality.

                          -

                          How to Find and Download Dhoom 3 Telugu Movie 1080p Torrent

                          -

                          To find and download Dhoom 3 telugu movie 1080p torrent, you need to follow these steps:

                          -


                          -
                            -
                          1. Download and install a torrent client on your computer. Some of the popular torrent clients are uTorrent, BitTorrent, qBittorrent, etc.
                          2. -
                          3. Go to a torrent search engine or website that provides torrents for movies. Some of the popular torrent sites are YTS, The Pirate Bay, Kickass Torrents, etc.
                          4. -
                          5. Type "Dhoom 3 telugu movie 1080p" in the search box and press enter. You will see a list of results that match your query.
                          6. -
                          7. Select the torrent file that has the most seeders and leechers. Seeders are users who have the complete file and are sharing it with others. Leechers are users who are downloading the file but have not completed it yet. The more seeders and leechers a torrent file has, the faster and more reliable the download will be.
                          8. -
                          9. Click on the download button or magnet link to download the torrent file or open it directly with your torrent client.
                          10. -
                          11. Wait for your torrent client to connect to other users and start downloading the files. You can see the progress and speed of your download on your torrent client.
                          12. -
                          13. Once the download is complete, you can open the folder where the files are saved and play them with your preferred media player.
                          14. -
                          -

                          That's it! You have successfully downloaded Dhoom 3 telugu movie 1080p torrent and can watch it in HD quality.

                          -

                          What are the Benefits and Risks of Using Torrents

                          -

                          Using torrents can have some benefits and risks that you should be aware of before downloading any file. Here are some of them:

                          -
                            -
                          • Benefits: Torrents can help you download large files faster and easier than other methods. You can also find rare and exclusive content that may not be available elsewhere. You can also pause and resume your downloads at any time without losing your progress.
                          • -
                          • Risks: Torrents can expose you to legal issues if you download copyrighted content without permission. You can also get infected by viruses or malware that may harm your computer or data. You can also face bandwidth throttling or blocking by your ISP if they detect excessive torrent traffic on your network.
                          • -
                          -

                          To avoid these risks, you should always use a VPN when downloading torrents. A VPN is a service that encrypts your internet traffic and hides your IP address from others. This way, you can protect your privacy and security online, and bypass any restrictions or censorship by your ISP or government.

                          -
                          Tips and Tricks to Enjoy Dhoom 3 Telugu Movie in Full HD
                          -

                          Now that you have downloaded Dhoom 3 telugu movie 1080p torrent, you can enjoy watching it in full HD quality. Here are some tips and tricks to enhance your viewing experience:

                          -
                            -
                          • Use a good media player that supports HD playback and subtitles. Some of the popular media players are VLC, MPC-HC, KMPlayer, etc.
                          • -
                          • Adjust your screen brightness, contrast, and color settings to suit your preference and environment.
                          • -
                          • Use headphones or speakers to enjoy the sound effects and music of the movie.
                          • -
                          • Avoid any distractions or interruptions while watching the movie. Turn off your phone notifications, close any unnecessary programs or tabs on your computer, etc.
                          • -
                          • Watch the movie with your friends or family for more fun and excitement.
                          • -
                          -

                          We hope this article has helped you find and download Dhoom 3 telugu movie 1080p torrent, and enjoy watching it in full HD quality. If you have any feedback or suggestions, please let us know in the comments below. Thank you for reading!

                          -


                          679dcb208e
                          -
                          -
                          \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Jcop Plugin For Eclipse How to Create and Personalize Java Card Applets.md b/spaces/tialenAdioni/chat-gpt-api/logs/Jcop Plugin For Eclipse How to Create and Personalize Java Card Applets.md deleted file mode 100644 index b1c4179e7086a7c3c755edf2f0af4f569f5b1921..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Jcop Plugin For Eclipse How to Create and Personalize Java Card Applets.md +++ /dev/null @@ -1,169 +0,0 @@ -
                          -

                          What is Jcop Plugin For Eclipse and How to Use It?

                          - -

If you are a Java Card developer, you might have heard of Jcop Plugin For Eclipse. This is a tool that helps you create, test and deploy Java Card applets using the popular Eclipse IDE. In this article, we will explain what Jcop Plugin For Eclipse is, what its features and benefits are, and how to install and use it.

                          -

                          Jcop Plugin For Eclipse


                          Download ::: https://urlcod.com/2uK9a8



                          - -

                          What is Jcop Plugin For Eclipse?

                          - -

                          Jcop Plugin For Eclipse is a collection of Eclipse Plug-Ins and libraries that facilitate the development of Java Card applets. The Plug-Ins are integrated into Eclipse's Java Development Tooling (JDT), which means you can use the same environment and features that you are familiar with for Java development. Jcop Plugin For Eclipse also includes a command line tool called JCShell, which allows you to communicate with smart cards and execute scripts. An applet personalization tool called Quipper is also available, which helps you create and manage personalization data for your applets.

                          - -

                          What are the features and benefits of Jcop Plugin For Eclipse?

                          - -

                          Jcop Plugin For Eclipse offers many features and benefits for Java Card developers, such as:

                          - -
                            -
                          • Support for various Java Card platforms and versions, including JCOP® Pay, JCOP® ID 1, JCOP® ID 2, EdgeLock® SE050 and EdgeLock® SE051.
                          • -
                          • Support for various smart card readers and protocols, such as PC/SC, T=0, T=1 and USB.
                          • -
                          • Support for various applet formats and standards, such as CAP, IJC, GlobalPlatform and ISO 7816.
                          • -
                          • Support for debugging and testing applets on real or simulated smart cards.
                          • -
                          • Support for signing applets with digital certificates.
                          • -
                          • Support for generating documentation and reports for your applets.
                          • -
                          • Support for importing and exporting applets and personalization data.
                          • -
                          - -

                          By using Jcop Plugin For Eclipse, you can benefit from:

                          - -
                            -
                          • A user-friendly and intuitive graphical interface that simplifies your development workflow.
                          • -
                          • A seamless integration with Eclipse that allows you to use the same tools and features that you are used to for Java development.
                          • -
                          • A comprehensive and up-to-date documentation that guides you through the installation and usage of the tool.
                          • -
                          • A reliable and secure tool that is developed by NXP Semiconductors, a leading provider of smart card solutions.
                          • -
                          - -

                          How to install and use Jcop Plugin For Eclipse?

                          - -

                          To install Jcop Plugin For Eclipse, you need to have Eclipse IDE installed on your computer. You can download Eclipse from https://www.eclipse.org/downloads/. You also need to have Java Development Kit (JDK) installed on your computer. You can download JDK from https://www.oracle.com/java/technologies/javase-downloads.html.

                          - -

                          Once you have Eclipse and JDK installed, you can follow these steps to install Jcop Plugin For Eclipse:

                          - -
                            -
                          1. Download the Jcop Plugin For Eclipse jar files from https://www.nxp.com/design/training/secure-element-common-jcop-tools-part-1-eclipse-plug-in:TIP-SECURE-ELEMENT-COMMON-JCOP-TOOLS-PART-1. You will need these jar files:
                          2. -
                              -
                            • com.ibm.bluez.jcop.eclipse.demopack_1.0.2.jar
                            • -
                            • com.ibm.bluez.jcop.eclipse.perftest_1.0.2.jar
                            • -
                            • com.ibm.bluez.jcop.eclipse.signlite_1.0.2.jar
                            • -
                            • com.ibm.bluez.jcop.eclipse.targetpack.gemplus_1.0.0.jar
                            • -
                            • com.ibm.bluez.jcop.eclipse.targetpack_1.0.3.3.jar
                            • -
                            • com.ibm.bluez.jcop.eclipse_1.0.3.3.jar
                            • -
                            • com.ibm.bluez.jcop.eclipse_3.1.1.a.jar
                            • -
                            -
                          3. Copy the jar files to the plugins folder of your Eclipse installation directory.
                          4. -
                          5. Restart Eclipse to activate the Jcop Plugin For Eclipse.
                          6. -
                          - -

                          To use Jcop Plugin For Eclipse, you can follow these steps:

                          - -
                            -
                          1. Create a new Java Card project in Eclipse by selecting File > New > Project > Java Card Project.
                          2. -
                          3. Create a new Java Card applet in your project by selecting File > New > Class > Java Card Applet.
                          4. -
                          5. Specify the package name, applet name, applet ID and select a basic template for your applet.
                          6. -
7. Edit your applet code in the editor window using the JDT features (a minimal applet skeleton is sketched just after this list).
                          8. -
                          9. Build your project by selecting Project > Build Project. This will generate a CAP file in the bin folder of your project.
                          10. -
                          11. Select your CAP file in the Package Explorer view and right-click on it. Select JCOP Tools > Load Applet to load your applet to a smart card or simulator.
                          12. -
                          13. Select your CAP file in the Package Explorer view and right-click on it. Select JCOP Tools > Debug Applet to debug your applet on a smart card or simulator.
                          14. -
                          - -
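
For reference, here is a minimal applet skeleton of the kind a basic template typically produces. The package name, class name and instruction byte below are placeholders chosen for illustration, not values prescribed by the plug-in; use whatever package, applet name and applet ID (AID) you entered in the wizard.

```java
package com.example.hello; // hypothetical package name

import javacard.framework.APDU;
import javacard.framework.Applet;
import javacard.framework.ISO7816;
import javacard.framework.ISOException;

public class HelloApplet extends Applet {

    // Example instruction byte handled by this applet (placeholder value)
    private static final byte INS_HELLO = (byte) 0x10;

    private HelloApplet() {
        register(); // register this applet instance with the card runtime
    }

    public static void install(byte[] bArray, short bOffset, byte bLength) {
        new HelloApplet();
    }

    public void process(APDU apdu) {
        if (selectingApplet()) {
            return; // nothing special to do on SELECT
        }
        byte[] buf = apdu.getBuffer();
        switch (buf[ISO7816.OFFSET_INS]) {
            case INS_HELLO:
                // a real applet would build and send a response here
                return;
            default:
                ISOException.throwIt(ISO7816.SW_INS_NOT_SUPPORTED);
        }
    }
}
```

The card runtime calls `install` when the CAP file is loaded and instantiated, and `process` receives every APDU sent to the applet once it has been selected.
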

                          You can also use JCShell and Quipper tools to communicate with smart cards and personalize your applets. To access these tools, select Window > Show View > Other > JCOP Tools > JCShell or Quipper.
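
If you prefer to sanity-check a loaded applet from plain Java rather than a JCShell script, the standard javax.smartcardio API bundled with the JDK can send an ISO 7816-4 SELECT command through any PC/SC reader. This is only a generic sketch, not part of the JCOP tools themselves: the AID bytes below are placeholders and must be replaced with the applet ID you chose when creating the applet.

```java
import javax.smartcardio.Card;
import javax.smartcardio.CardChannel;
import javax.smartcardio.CardTerminal;
import javax.smartcardio.CommandAPDU;
import javax.smartcardio.ResponseAPDU;
import javax.smartcardio.TerminalFactory;

public class SelectAppletTest {
    public static void main(String[] args) throws Exception {
        // Use the first PC/SC reader found on the system
        TerminalFactory factory = TerminalFactory.getDefault();
        CardTerminal terminal = factory.terminals().list().get(0);

        Card card = terminal.connect("*"); // T=0 or T=1, whichever the card offers
        CardChannel channel = card.getBasicChannel();

        // Placeholder AID - replace with the applet ID you entered in the wizard
        byte[] aid = {(byte) 0xA0, 0x00, 0x00, 0x00, 0x01, 0x01};

        // ISO 7816-4 SELECT by name: CLA=00, INS=A4, P1=04, P2=00
        ResponseAPDU response = channel.transmit(new CommandAPDU(0x00, 0xA4, 0x04, 0x00, aid));
        System.out.printf("SELECT status word: %04X%n", response.getSW());

        card.disconnect(false);
    }
}
```

A status word of 9000 indicates the applet was selected successfully; anything else usually means the AID does not match what was loaded onto the card or simulator.
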

                          - -

                          Conclusion

                          - -

                          Jcop Plugin For Eclipse is a powerful tool that helps you develop Java Card applets using the Eclipse IDE. It supports various Java Card platforms, smart card readers, applet formats and standards. It also provides features for debugging, testing, signing, documenting and managing your applets. By using Jcop Plugin For Eclipse, you can simplify your development workflow and create high-quality applets that meet your requirements.

                          -

                          How to update and uninstall Jcop Plugin For Eclipse?

                          - -

                          If you want to update Jcop Plugin For Eclipse to a newer version, you can follow these steps:

                          -


                          - -
                            -
                          1. Download the latest Jcop Plugin For Eclipse jar files from https://www.nxp.com/design/training/secure-element-common-jcop-tools-part-1-eclipse-plug-in:TIP-SECURE-ELEMENT-COMMON-JCOP-TOOLS-PART-1.
                          2. Delete the old jar files from the plugins folder of your Eclipse installation directory.
                          3. Copy the new jar files to the plugins folder of your Eclipse installation directory.
                          4. Restart Eclipse to activate the updated Jcop Plugin For Eclipse.
                          - -

                          If you want to uninstall Jcop Plugin For Eclipse, you can follow these steps:

                          - -
                            -
                          1. Delete the Jcop Plugin For Eclipse jar files from the plugins folder of your Eclipse installation directory.
                          2. Restart Eclipse to deactivate the Jcop Plugin For Eclipse.
                          - -

                          What are some alternatives to Jcop Plugin For Eclipse?

                          - -

                          If you are looking for some alternatives to Jcop Plugin For Eclipse, you might want to consider these options:

                          - -
                            -
                          • Java Card Development Kit (JCDK): This is the official development kit for Java Card technology, provided by Oracle. It includes a converter, a verifier, an emulator, an API and a reference implementation. You can download JCDK from https://www.oracle.com/java/technologies/javacard-sdk-downloads.html.
                          • Java Card Development Environment (JCDE): This is a plugin for Eclipse that supports Java Card development. It includes a converter, a verifier, an emulator, an API and a debugger. You can download JCDE from https://sourceforge.net/projects/jcde/.
                          • JACoP: Despite the similar name, this is a plugin for ImageJ for colocalization analysis of fluorescence images, not a Java Card tool. It includes various methods and parameters for measuring colocalization. You can download JACoP from https://imagej.net/plugins/jacop.
                          -

                          How to use JCShell and Quipper with Jcop Plugin For Eclipse?

                          - -

                          JCShell and Quipper are two tools that are included in Jcop Plugin For Eclipse. They allow you to communicate with smart cards and personalize your applets. You can access these tools by selecting Window > Show View > Other > JCOP Tools > JCShell or Quipper.

                          - -

                          JCShell is a command line tool that allows you to send APDUs (Application Protocol Data Units) to smart cards and receive responses. You can use JCShell to perform various operations on smart cards, such as selecting applets, sending data, verifying PINs, managing keys, etc. You can also use JCShell to execute scripts that contain a sequence of commands and responses. JCShell supports various smart card readers and protocols, such as PC/SC, T=0, T=1 and USB.
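                          JCShell has its own command syntax, which is not reproduced here. Purely as an illustration of the same kind of APDU exchange, the sketch below uses the standard javax.smartcardio API that ships with the JDK to select an applet over PC/SC and print the status word; the AID bytes are placeholders, and it assumes a connected PC/SC reader with a card inserted.

```java
// Illustration of an APDU exchange over PC/SC using javax.smartcardio.
// This is not JCShell; it only shows the kind of traffic a JCShell script produces.
import javax.smartcardio.Card;
import javax.smartcardio.CardChannel;
import javax.smartcardio.CardTerminal;
import javax.smartcardio.CommandAPDU;
import javax.smartcardio.ResponseAPDU;
import javax.smartcardio.TerminalFactory;
import java.util.List;

public class ApduDemo {
    public static void main(String[] args) throws Exception {
        // Use the first PC/SC reader known to the system.
        List<CardTerminal> terminals = TerminalFactory.getDefault().terminals().list();
        CardTerminal terminal = terminals.get(0);

        Card card = terminal.connect("*"); // T=0 or T=1, whichever the card supports
        CardChannel channel = card.getBasicChannel();

        // SELECT by AID: CLA=00, INS=A4, P1=04, P2=00, data = AID (placeholder bytes).
        byte[] aid = {(byte) 0xA0, 0x00, 0x00, 0x00, 0x62, 0x03, 0x01};
        ResponseAPDU response = channel.transmit(new CommandAPDU(0x00, 0xA4, 0x04, 0x00, aid));

        System.out.printf("Status word: %04X%n", response.getSW()); // 0x9000 means success
        card.disconnect(false);
    }
}
```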

                          - -

                          Quipper is a graphical tool that allows you to create and manage personalization data for your applets. Personalization data are the values that are assigned to the applet variables during the installation process. You can use Quipper to define personalization data for your applets, such as AIDs (Application Identifiers), keys, PINs, parameters, etc. You can also use Quipper to export and import personalization data in various formats, such as XML, CSV, CAP or IJC.
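                          On the card side, the values that a personalization tool prepares typically reach the applet as applet-specific data in its install() method. The sketch below shows one common way to locate that data; the offsets follow the standard Java Card install() parameter layout (instance AID length, instance AID, control information length, control information, applet data length, applet data), while treating the applet data as an initial PIN is only a placeholder interpretation and not Quipper's actual data format.

```java
// Sketch of reading install-time personalization data in a Java Card applet.
// The "applet data as a PIN" interpretation is an assumption made for illustration.
import javacard.framework.APDU;
import javacard.framework.Applet;
import javacard.framework.OwnerPIN;

public class PersoApplet extends Applet {

    private OwnerPIN pin;

    private PersoApplet(byte[] bArray, short bOffset, byte bLength) {
        // Standard install() parameter layout:
        // [AID length][instance AID][control info length][control info][data length][applet data]
        short aidLen = (short) (bArray[bOffset] & 0x00FF);
        short ctrlOffset = (short) (bOffset + 1 + aidLen);
        short ctrlLen = (short) (bArray[ctrlOffset] & 0x00FF);
        short dataOffset = (short) (ctrlOffset + 1 + ctrlLen);
        short dataLen = (short) (bArray[dataOffset] & 0x00FF);

        // Use the applet data as an initial PIN value (placeholder example).
        pin = new OwnerPIN((byte) 3, (byte) 8); // 3 tries, up to 8 bytes
        if (dataLen > 0) {
            pin.update(bArray, (short) (dataOffset + 1), (byte) dataLen);
        }

        // Register under the instance AID supplied in the install parameters
        // (assumes the installer provided a non-empty instance AID).
        register(bArray, (short) (bOffset + 1), bArray[bOffset]);
    }

    public static void install(byte[] bArray, short bOffset, byte bLength) {
        new PersoApplet(bArray, bOffset, bLength);
    }

    public void process(APDU apdu) {
        // Command handling is omitted; this sketch only covers install-time data.
    }
}
```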

                          - -

                          How to get help and support for Jcop Plugin For Eclipse?

                          - -

                          If you need help and support for Jcop Plugin For Eclipse, you have several options:

                          • The documentation that accompanies Jcop Plugin For Eclipse.
                          • The community forum, where other users discuss questions and answers.
                          • The support team, which you can contact through the official website.

                          Conclusion

                          - -

                          Jcop Plugin For Eclipse is a powerful tool that helps you develop Java Card applets using the Eclipse IDE. It supports various Java Card platforms, smart card readers, applet formats and standards. It also provides features for debugging, testing, signing, documenting and managing your applets. By using Jcop Plugin For Eclipse, you can simplify your development workflow and create high-quality applets that meet your requirements. You can also use JCShell and Quipper tools to communicate with smart cards and personalize your applets. If you need help and support for Jcop Plugin For Eclipse, you can refer to the documentation, the community forum or the support team. Jcop Plugin For Eclipse is a tool that you should definitely try if you are a Java Card developer.

                          679dcb208e
                          -
                          -
                          \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Cocktail Full Movie Tamil Download Movies.md b/spaces/tioseFevbu/cartoon-converter/scripts/Cocktail Full Movie Tamil Download Movies.md deleted file mode 100644 index 1ec30207ab7e173f3ea0dbb7e015657d0d1f6544..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Cocktail Full Movie Tamil Download Movies.md +++ /dev/null @@ -1,19 +0,0 @@ - -

                          Cocktail Full Movie Tamil Download Movies: How to Watch the Latest Tamil Comedy Online

                          - -

                          If you are looking for a fun and entertaining movie to watch online, you might want to check out Cocktail, the latest Tamil comedy film starring Yogi Babu, Sayaji Shinde, and Reshmi Gopinath. The movie is directed by R. A. Vijaya Murugan and produced by P. G. Muthiah and M. Deepa under the banner of P. G. Media Works.

                          - -

                          Cocktail is a hilarious story of four friends who get involved in a series of mishaps after they steal a dog from a gangster. The dog, named Cocktail, has a special ability to sniff out drugs and money, which leads the friends into trouble with the police and the underworld. The movie is full of twists and turns, as well as witty dialogues and funny situations.

                          -

                          Cocktail Full Movie Tamil Download Movies


                          Download Ziphttps://urlcod.com/2uHvNa



                          - -

                          Cocktail was released in theatres on March 6, 2020, but due to the COVID-19 pandemic it had a limited run. However, the movie is now available online for streaming and downloading on various platforms such as Amazon Prime Video, Zee5, Hotstar, and more. You may also find the movie on some torrent sites, but we do not recommend that, as it is illegal and unethical.

                          - -

                          So, if you are looking for a good laugh and some quality entertainment, you should definitely watch Cocktail full movie Tamil online or download it from a legal source. You will not regret it!

                          - -

                          But what makes Cocktail so popular with audiences? Well, there are many reasons for that. First of all, the movie has a talented cast of actors who deliver excellent performances. Yogi Babu, who plays the lead role of Don Bosco, is one of the most popular comedians in Tamil cinema. He has a great sense of timing and expression, and he makes every scene hilarious with his antics. Sayaji Shinde, who plays the villainous gangster Baasha, is a veteran actor who has appeared in many Tamil and Telugu films. He brings a menacing yet humorous touch to his character. Reshmi Gopinath, who plays the female lead, Anjali, is a newcomer who has impressed viewers with her charm and acting skills. She has good chemistry with Yogi Babu and adds some romance to the story.

                          - -

                          Another reason for the movie's popularity is the direction and screenplay of R. A. Vijaya Murugan. He has crafted a well-paced and engaging story that keeps the audience hooked from start to finish. He has also infused the movie with social messages and satire on current issues such as corruption, drug abuse, and animal rights. He has also used some innovative techniques, such as split-screen and animation, to enhance the visual appeal of the movie.

                          - -

                          Finally, the movie is also loved by the audience for its music and songs. It has four songs composed by Sai Bhaskar, who has given some catchy and melodious tunes that suit the mood and theme of the movie. The songs are sung by popular singers such as Anirudh Ravichander, Sivakarthikeyan, Shweta Mohan, and more. The songs are also well-choreographed and picturized, adding more fun and color to the movie.

                          7b8c122e87
                          -
                          -
                          \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Free Download Stock It Easy For Windows 10 Pro 64bit Current Version.md b/spaces/tioseFevbu/cartoon-converter/scripts/Free Download Stock It Easy For Windows 10 Pro 64bit Current Version.md deleted file mode 100644 index 9c5d79352c29292654e04bcf13ac0c53cd2f3837..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Free Download Stock It Easy For Windows 10 Pro 64bit Current Version.md +++ /dev/null @@ -1,87 +0,0 @@ - -

                          Free download Stock It Easy for windows 10 pro 64bit current version

                          -

                          If you are looking for a simple, efficient, and professional software to manage your stock of goods, you might want to try Stock It Easy. Stock It Easy is a software that allows you to handle an unlimited number of items and storage locations, track your inventory movements, create and process orders, generate documents and reports, and much more. In this article, we will show you how to download and install Stock It Easy for windows 10 pro 64bit current version, and how to use it for your inventory management needs.

                          -

                          Free download Stock It Easy for windows 10 pro 64bit current version


                          Download Zip ->->->-> https://urlcod.com/2uHyly



                          -

                          What is Stock It Easy?

                          -

                          Stock It Easy is a program developed by Juste Un Clic SPRL, a Belgian company specializing in inventory management solutions. Stock It Easy is designed for small and medium-sized businesses, as well as training centers and schools. It is easy to use, yet highly customizable and adaptable to different business scenarios. Stock It Easy can be used in single-user or multi-user mode, with or without an internet connection.

                          -

                          Features and benefits of Stock It Easy

                          -

                          Some of the main features and benefits of Stock It Easy are:

                          -
                            -
                          • It supports multiple languages, currencies, units of measure, tax rates, etc.
                          • It allows you to manage multiple warehouses, locations, categories, suppliers, customers, etc.
                          • It lets you create and edit items with various attributes, such as barcode, serial number, expiration date, batch number, etc.
                          • It enables you to create and process purchase orders, sales orders, shipments, receipts, returns, etc.
                          • It provides you with various tools to optimize your stock levels, such as automatic replenishment, minimum and maximum quantities, alerts, etc.
                          • It generates various documents and reports, such as inventory valuation, stock movements, order status, labels, invoices, etc.
                          • It integrates with other software and devices, such as Excel, Word, Outlook, barcode scanners, printers, etc.
                          • It offers online help and support to help you with the software installation and usage.
                          -

                          Versions and pricing of Stock It Easy

                          -

                          Stock It Easy offers three versions: lite (free), standard (paid), and full (paid). The lite version is limited to 50 items and one warehouse. The standard version allows unlimited items and warehouses. The full version includes additional features such as CRM (Customer Relationship Management), WMS (Warehouse Management System), mobile app (Android), web access (cloud), etc. The pricing of the paid versions depends on the number of users and the duration of the license. You can check the pricing details on the official website. You can also test the full version for free for one month.

                          -

                          How to download and install Stock It Easy for windows 10 pro 64bit current version

                          -

                          To download and install Stock It Easy for windows 10 pro 64bit current version, you need to follow these steps:

                          -

                          Downloading Stock It Easy from the official website

                          -

                          The first step is to download the setup file of Stock It Easy from the official website. You can choose between the lite version or the full version. The setup file is about 60 MB in size. You can also download the user manual in PDF format from the same page.

                          -

                          Installing Stock It Easy on your PC

                          The second step is to install Stock It Easy on your PC. To do this, you need to run the setup file that you downloaded in the previous step. You will see a welcome screen that asks you to choose the language of the installation. You can select from English, French, Spanish, German, Italian, Portuguese, Dutch, or Turkish. Then, you will see a license agreement that you need to accept to continue. Next, you will see a screen that asks you to choose the destination folder for the installation. You can use the default folder or browse to another one. Then, you will see a screen that asks you to choose the components to install. You can select from Stock It Easy (required), Database (required), Mobile app (optional), and Web access (optional). After that, you will see a screen that shows the progress of the installation. When the installation is complete, you will see a screen that asks you to launch Stock It Easy.

                          -

                          Activating Stock It Easy with a license key

                          -

                          The third step is to activate Stock It Easy with a license key. If you downloaded the lite version, you don't need a license key. You can use the software for free with some limitations. If you downloaded the full version, you need a license key to use the software after the one-month trial period. You can buy a license key from the official website or request one by email. To activate Stock It Easy with a license key, you need to open the software and go to Help > Activate license. Then, you need to enter your name, email address, and license key in the corresponding fields. After that, you need to click on Activate and wait for the confirmation message.

                          -

                          How to use Stock It Easy for inventory management

                          -

                          Now that you have downloaded and installed Stock It Easy for windows 10 pro 64bit current version, you can start using it for your inventory management needs. Here are some of the main steps to use Stock It Easy:

                          -

                          Setting up your database and parameters

                          -

                          The first step is to set up your database and parameters. You need to do this before using any other features of Stock It Easy. To set up your database and parameters, you need to go to File > Parameters. Then, you will see a window that allows you to configure various settings, such as:

                          -
                            -
                          • General: You can set your company name, logo, address, phone number, email address, website, etc.
                          • Language: You can choose the language of the software interface and the documents.
                          • Currency: You can choose the currency of your transactions and set the exchange rates.
                          • Tax: You can set the tax rates and rules for your items and customers.
                          • Unit: You can set the units of measure and conversion factors for your items.
                          • Barcode: You can set the barcode format and parameters for your items.
                          • Document: You can set the document templates and options for your orders, shipments, receipts, labels, invoices, etc.
                          • Email: You can set the email settings and options for sending documents by email.
                          • Backup: You can set the backup settings and options for saving your data.
                          -

                          You can also import or export your data from or to Excel files by using the Import/Export buttons on the toolbar.

                          -

                          Managing your items, customers, suppliers, and warehouses

                          -

                          The second step is to manage your items, customers, suppliers, and warehouses. These are the main entities that you need to create and update in Stock It Easy. To manage them, you need to use the corresponding buttons on the toolbar or go to Edit > Items/Customers/Suppliers/Warehouses. Then, you will see a window that allows you to add, edit, delete, or search for any entity. For each entity, you can enter various information, such as:

                          -
                            -
                          • Items: You can enter the item code, name, description, category, barcode, serial number, expiration date, batch number, unit, price, tax, stock level, minimum and maximum quantities, reorder point, supplier, etc.
                          • Customers: You can enter the customer code, name, address, phone number, email address, website, contact person, tax number, discount rate, payment terms, currency, etc.
                          • Suppliers: You can enter the supplier code, name, address, phone number, email address, website, contact person, tax number, payment terms, currency, etc.
                          • Warehouses: You can enter the warehouse code, name, address, phone number, email address, contact person, capacity, etc.

                            You can also use the barcode scanner to scan the items or the labels to quickly enter or update the information. You can also use the filters and the search box to find any entity by any criteria.

                            -

                            Creating and processing orders, shipments, and receipts

                            -

                            The third step is to create and process orders, shipments, and receipts. These are the main transactions that you need to perform in Stock It Easy. To create and process them, you need to use the corresponding buttons on the toolbar or go to File > Orders/Shipments/Receipts. Then, you will see a window that allows you to add, edit, delete, or search for any transaction. For each transaction, you can enter various information, such as:

                            -
                              -
                            • Orders: You can enter the order number, date, customer or supplier, items, quantities, prices, taxes, discounts, shipping costs, payment method, etc.
                            • Shipments: You can enter the shipment number, date, customer, warehouse, items, quantities, tracking number, carrier, etc.
                            • Receipts: You can enter the receipt number, date, supplier, warehouse, items, quantities, invoice number, etc.
                            -

                            You can also use the barcode scanner to scan the items or the labels to quickly enter or update the information. You can also use the filters and the search box to find any transaction by any criteria.

                            -

                            Generating reports, labels, and invoices

                            -

                            The fourth step is to generate reports, labels, and invoices. These are the main documents that you need to produce in Stock It Easy. To generate them, you need to use the corresponding buttons on the toolbar or go to File > Reports/Labels/Invoices. Then, you will see a window that allows you to select, preview, print, or export any document. For each document, you can choose various options, such as:

                            -
                              -
                            • Reports: You can choose from various types of reports, such as inventory valuation, stock movements, order status, item history, customer history, supplier history, etc. You can also filter the reports by date range, warehouse, category, supplier, customer, etc.
                            • Labels: You can choose from various types of labels, such as item labels, customer labels, supplier labels, warehouse labels, etc. You can also customize the label size, layout, font, color, etc.
                            • Invoices: You can choose from various types of invoices, such as sales invoices, purchase invoices, credit notes, etc. You can also customize the invoice template, logo, header, footer, etc.
                            -

                            You can also export the documents to various formats, such as PDF, Excel, Word, HTML, etc. You can also send the documents by email directly from Stock It Easy.

                            -

                            Conclusion

                            -

                          In conclusion, Stock It Easy is a program that can help you manage your stock of goods easily and efficiently. It has many features and benefits that make it suitable for different business scenarios. It is compatible with windows 10 pro 64bit current version and easy to download and install. It is also easy to use and customize according to your preferences and needs. If you want to try Stock It Easy for free for one month or buy a license key for a longer period of time, you can visit the official website and follow the instructions. We hope this article has helped you learn more about Stock It Easy and how to use it for your inventory management needs.

                            -

                            FAQs

                            -

                            Here are some of the frequently asked questions about Stock It Easy:

                            -
                              -
                          1. What are the system requirements for Stock It Easy?

                              Stock It Easy can run on any PC with Windows XP, Vista, 7, 8, 8.1, or 10 (32 or 64 bit). It requires at least 512 MB of RAM and 100 MB of free disk space. It also requires an internet connection for activation and updates.

                              -
                          2. How can I get technical support for Stock It Easy?

                              You can get technical support for Stock It Easy by contacting the developer via email, phone, or online form. You can also access the online help and the user manual from the software interface. You can also visit the official website and check the FAQ section and the forum for more information.

                              -
                          3. How can I update Stock It Easy to the latest version?

                              You can update Stock It Easy to the latest version by using the Check for updates option in the software interface. You can also download the latest version from the official website and install it over the existing one. You don't need to uninstall or deactivate the previous version.

                              -
                          4. How can I backup and restore my data in Stock It Easy?

                              You can backup and restore your data in Stock It Easy by using the Backup/Restore option in the software interface. You can also use the Export/Import option to export or import your data to or from Excel files. You should backup your data regularly and store it in a safe place.

                              -
                          5. How can I use Stock It Easy on multiple PCs or devices?

                              You can use Stock It Easy on multiple PCs or devices by purchasing a multi-user license or a web access license. A multi-user license allows you to install and use Stock It Easy on multiple PCs in a local network. A web access license allows you to access and use Stock It Easy on any device with an internet browser. You can also use the mobile app (Android) to access and use Stock It Easy on your smartphone or tablet.

                              -

                            b2dd77e56b
                            -
                            -
                            \ No newline at end of file diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/idna/compat.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/idna/compat.py deleted file mode 100644 index 786e6bda63699b72d588ba91dd73df017570aee5..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/idna/compat.py +++ /dev/null @@ -1,13 +0,0 @@ -from .core import * -from .codec import * -from typing import Any, Union - -def ToASCII(label: str) -> bytes: - return encode(label) - -def ToUnicode(label: Union[bytes, bytearray]) -> str: - return decode(label) - -def nameprep(s: Any) -> None: - raise NotImplementedError('IDNA 2008 does not utilise nameprep protocol') - diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/requests/adapters.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/requests/adapters.py deleted file mode 100644 index f68f7d467530845447278f6c0ad104b4beca9531..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/requests/adapters.py +++ /dev/null @@ -1,584 +0,0 @@ -""" -requests.adapters -~~~~~~~~~~~~~~~~~ - -This module contains the transport adapters that Requests uses to define -and maintain connections. -""" - -import os.path -import socket # noqa: F401 - -from pip._vendor.urllib3.exceptions import ClosedPoolError, ConnectTimeoutError -from pip._vendor.urllib3.exceptions import HTTPError as _HTTPError -from pip._vendor.urllib3.exceptions import InvalidHeader as _InvalidHeader -from pip._vendor.urllib3.exceptions import ( - LocationValueError, - MaxRetryError, - NewConnectionError, - ProtocolError, -) -from pip._vendor.urllib3.exceptions import ProxyError as _ProxyError -from pip._vendor.urllib3.exceptions import ReadTimeoutError, ResponseError -from pip._vendor.urllib3.exceptions import SSLError as _SSLError -from pip._vendor.urllib3.poolmanager import PoolManager, proxy_from_url -from pip._vendor.urllib3.response import HTTPResponse -from pip._vendor.urllib3.util import Timeout as TimeoutSauce -from pip._vendor.urllib3.util import parse_url -from pip._vendor.urllib3.util.retry import Retry - -from .auth import _basic_auth_str -from .compat import basestring, urlparse -from .cookies import extract_cookies_to_jar -from .exceptions import ( - ConnectionError, - ConnectTimeout, - InvalidHeader, - InvalidProxyURL, - InvalidSchema, - InvalidURL, - ProxyError, - ReadTimeout, - RetryError, - SSLError, -) -from .models import Response -from .structures import CaseInsensitiveDict -from .utils import ( - DEFAULT_CA_BUNDLE_PATH, - extract_zipped_paths, - get_auth_from_url, - get_encoding_from_headers, - prepend_scheme_if_needed, - select_proxy, - urldefragauth, -) - -try: - from pip._vendor.urllib3.contrib.socks import SOCKSProxyManager -except ImportError: - - def SOCKSProxyManager(*args, **kwargs): - raise InvalidSchema("Missing dependencies for SOCKS support.") - - -DEFAULT_POOLBLOCK = False -DEFAULT_POOLSIZE = 10 -DEFAULT_RETRIES = 0 -DEFAULT_POOL_TIMEOUT = None - - -class BaseAdapter: - """The Base Transport Adapter""" - - def __init__(self): - super().__init__() - - def send( - self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None - ): - """Sends PreparedRequest object. Returns Response object. 
- - :param request: The :class:`PreparedRequest ` being sent. - :param stream: (optional) Whether to stream the request content. - :param timeout: (optional) How long to wait for the server to send - data before giving up, as a float, or a :ref:`(connect timeout, - read timeout) ` tuple. - :type timeout: float or tuple - :param verify: (optional) Either a boolean, in which case it controls whether we verify - the server's TLS certificate, or a string, in which case it must be a path - to a CA bundle to use - :param cert: (optional) Any user-provided SSL certificate to be trusted. - :param proxies: (optional) The proxies dictionary to apply to the request. - """ - raise NotImplementedError - - def close(self): - """Cleans up adapter specific items.""" - raise NotImplementedError - - -class HTTPAdapter(BaseAdapter): - """The built-in HTTP Adapter for urllib3. - - Provides a general-case interface for Requests sessions to contact HTTP and - HTTPS urls by implementing the Transport Adapter interface. This class will - usually be created by the :class:`Session ` class under the - covers. - - :param pool_connections: The number of urllib3 connection pools to cache. - :param pool_maxsize: The maximum number of connections to save in the pool. - :param max_retries: The maximum number of retries each connection - should attempt. Note, this applies only to failed DNS lookups, socket - connections and connection timeouts, never to requests where data has - made it to the server. By default, Requests does not retry failed - connections. If you need granular control over the conditions under - which we retry a request, import urllib3's ``Retry`` class and pass - that instead. - :param pool_block: Whether the connection pool should block for connections. - - Usage:: - - >>> import requests - >>> s = requests.Session() - >>> a = requests.adapters.HTTPAdapter(max_retries=3) - >>> s.mount('http://', a) - """ - - __attrs__ = [ - "max_retries", - "config", - "_pool_connections", - "_pool_maxsize", - "_pool_block", - ] - - def __init__( - self, - pool_connections=DEFAULT_POOLSIZE, - pool_maxsize=DEFAULT_POOLSIZE, - max_retries=DEFAULT_RETRIES, - pool_block=DEFAULT_POOLBLOCK, - ): - if max_retries == DEFAULT_RETRIES: - self.max_retries = Retry(0, read=False) - else: - self.max_retries = Retry.from_int(max_retries) - self.config = {} - self.proxy_manager = {} - - super().__init__() - - self._pool_connections = pool_connections - self._pool_maxsize = pool_maxsize - self._pool_block = pool_block - - self.init_poolmanager(pool_connections, pool_maxsize, block=pool_block) - - def __getstate__(self): - return {attr: getattr(self, attr, None) for attr in self.__attrs__} - - def __setstate__(self, state): - # Can't handle by adding 'proxy_manager' to self.__attrs__ because - # self.poolmanager uses a lambda function, which isn't pickleable. - self.proxy_manager = {} - self.config = {} - - for attr, value in state.items(): - setattr(self, attr, value) - - self.init_poolmanager( - self._pool_connections, self._pool_maxsize, block=self._pool_block - ) - - def init_poolmanager( - self, connections, maxsize, block=DEFAULT_POOLBLOCK, **pool_kwargs - ): - """Initializes a urllib3 PoolManager. - - This method should not be called from user code, and is only - exposed for use when subclassing the - :class:`HTTPAdapter `. - - :param connections: The number of urllib3 connection pools to cache. - :param maxsize: The maximum number of connections to save in the pool. - :param block: Block when no free connections are available. 
- :param pool_kwargs: Extra keyword arguments used to initialize the Pool Manager. - """ - # save these values for pickling - self._pool_connections = connections - self._pool_maxsize = maxsize - self._pool_block = block - - self.poolmanager = PoolManager( - num_pools=connections, - maxsize=maxsize, - block=block, - strict=True, - **pool_kwargs, - ) - - def proxy_manager_for(self, proxy, **proxy_kwargs): - """Return urllib3 ProxyManager for the given proxy. - - This method should not be called from user code, and is only - exposed for use when subclassing the - :class:`HTTPAdapter `. - - :param proxy: The proxy to return a urllib3 ProxyManager for. - :param proxy_kwargs: Extra keyword arguments used to configure the Proxy Manager. - :returns: ProxyManager - :rtype: urllib3.ProxyManager - """ - if proxy in self.proxy_manager: - manager = self.proxy_manager[proxy] - elif proxy.lower().startswith("socks"): - username, password = get_auth_from_url(proxy) - manager = self.proxy_manager[proxy] = SOCKSProxyManager( - proxy, - username=username, - password=password, - num_pools=self._pool_connections, - maxsize=self._pool_maxsize, - block=self._pool_block, - **proxy_kwargs, - ) - else: - proxy_headers = self.proxy_headers(proxy) - manager = self.proxy_manager[proxy] = proxy_from_url( - proxy, - proxy_headers=proxy_headers, - num_pools=self._pool_connections, - maxsize=self._pool_maxsize, - block=self._pool_block, - **proxy_kwargs, - ) - - return manager - - def cert_verify(self, conn, url, verify, cert): - """Verify a SSL certificate. This method should not be called from user - code, and is only exposed for use when subclassing the - :class:`HTTPAdapter `. - - :param conn: The urllib3 connection object associated with the cert. - :param url: The requested URL. - :param verify: Either a boolean, in which case it controls whether we verify - the server's TLS certificate, or a string, in which case it must be a path - to a CA bundle to use - :param cert: The SSL certificate to verify. - """ - if url.lower().startswith("https") and verify: - - cert_loc = None - - # Allow self-specified cert location. - if verify is not True: - cert_loc = verify - - if not cert_loc: - cert_loc = extract_zipped_paths(DEFAULT_CA_BUNDLE_PATH) - - if not cert_loc or not os.path.exists(cert_loc): - raise OSError( - f"Could not find a suitable TLS CA certificate bundle, " - f"invalid path: {cert_loc}" - ) - - conn.cert_reqs = "CERT_REQUIRED" - - if not os.path.isdir(cert_loc): - conn.ca_certs = cert_loc - else: - conn.ca_cert_dir = cert_loc - else: - conn.cert_reqs = "CERT_NONE" - conn.ca_certs = None - conn.ca_cert_dir = None - - if cert: - if not isinstance(cert, basestring): - conn.cert_file = cert[0] - conn.key_file = cert[1] - else: - conn.cert_file = cert - conn.key_file = None - if conn.cert_file and not os.path.exists(conn.cert_file): - raise OSError( - f"Could not find the TLS certificate file, " - f"invalid path: {conn.cert_file}" - ) - if conn.key_file and not os.path.exists(conn.key_file): - raise OSError( - f"Could not find the TLS key file, invalid path: {conn.key_file}" - ) - - def build_response(self, req, resp): - """Builds a :class:`Response ` object from a urllib3 - response. This should not be called from user code, and is only exposed - for use when subclassing the - :class:`HTTPAdapter ` - - :param req: The :class:`PreparedRequest ` used to generate the response. - :param resp: The urllib3 response object. 
- :rtype: requests.Response - """ - response = Response() - - # Fallback to None if there's no status_code, for whatever reason. - response.status_code = getattr(resp, "status", None) - - # Make headers case-insensitive. - response.headers = CaseInsensitiveDict(getattr(resp, "headers", {})) - - # Set encoding. - response.encoding = get_encoding_from_headers(response.headers) - response.raw = resp - response.reason = response.raw.reason - - if isinstance(req.url, bytes): - response.url = req.url.decode("utf-8") - else: - response.url = req.url - - # Add new cookies from the server. - extract_cookies_to_jar(response.cookies, req, resp) - - # Give the Response some context. - response.request = req - response.connection = self - - return response - - def get_connection(self, url, proxies=None): - """Returns a urllib3 connection for the given URL. This should not be - called from user code, and is only exposed for use when subclassing the - :class:`HTTPAdapter `. - - :param url: The URL to connect to. - :param proxies: (optional) A Requests-style dictionary of proxies used on this request. - :rtype: urllib3.ConnectionPool - """ - proxy = select_proxy(url, proxies) - - if proxy: - proxy = prepend_scheme_if_needed(proxy, "http") - proxy_url = parse_url(proxy) - if not proxy_url.host: - raise InvalidProxyURL( - "Please check proxy URL. It is malformed " - "and could be missing the host." - ) - proxy_manager = self.proxy_manager_for(proxy) - conn = proxy_manager.connection_from_url(url) - else: - # Only scheme should be lower case - parsed = urlparse(url) - url = parsed.geturl() - conn = self.poolmanager.connection_from_url(url) - - return conn - - def close(self): - """Disposes of any internal state. - - Currently, this closes the PoolManager and any active ProxyManager, - which closes any pooled connections. - """ - self.poolmanager.clear() - for proxy in self.proxy_manager.values(): - proxy.clear() - - def request_url(self, request, proxies): - """Obtain the url to use when making the final request. - - If the message is being sent through a HTTP proxy, the full URL has to - be used. Otherwise, we should only use the path portion of the URL. - - This should not be called from user code, and is only exposed for use - when subclassing the - :class:`HTTPAdapter `. - - :param request: The :class:`PreparedRequest ` being sent. - :param proxies: A dictionary of schemes or schemes and hosts to proxy URLs. - :rtype: str - """ - proxy = select_proxy(request.url, proxies) - scheme = urlparse(request.url).scheme - - is_proxied_http_request = proxy and scheme != "https" - using_socks_proxy = False - if proxy: - proxy_scheme = urlparse(proxy).scheme.lower() - using_socks_proxy = proxy_scheme.startswith("socks") - - url = request.path_url - if is_proxied_http_request and not using_socks_proxy: - url = urldefragauth(request.url) - - return url - - def add_headers(self, request, **kwargs): - """Add any headers needed by the connection. As of v2.0 this does - nothing by default, but is left for overriding by users that subclass - the :class:`HTTPAdapter `. - - This should not be called from user code, and is only exposed for use - when subclassing the - :class:`HTTPAdapter `. - - :param request: The :class:`PreparedRequest ` to add headers to. - :param kwargs: The keyword arguments from the call to send(). - """ - pass - - def proxy_headers(self, proxy): - """Returns a dictionary of the headers to add to any request sent - through a proxy. 
This works with urllib3 magic to ensure that they are - correctly sent to the proxy, rather than in a tunnelled request if - CONNECT is being used. - - This should not be called from user code, and is only exposed for use - when subclassing the - :class:`HTTPAdapter `. - - :param proxy: The url of the proxy being used for this request. - :rtype: dict - """ - headers = {} - username, password = get_auth_from_url(proxy) - - if username: - headers["Proxy-Authorization"] = _basic_auth_str(username, password) - - return headers - - def send( - self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None - ): - """Sends PreparedRequest object. Returns Response object. - - :param request: The :class:`PreparedRequest ` being sent. - :param stream: (optional) Whether to stream the request content. - :param timeout: (optional) How long to wait for the server to send - data before giving up, as a float, or a :ref:`(connect timeout, - read timeout) ` tuple. - :type timeout: float or tuple or urllib3 Timeout object - :param verify: (optional) Either a boolean, in which case it controls whether - we verify the server's TLS certificate, or a string, in which case it - must be a path to a CA bundle to use - :param cert: (optional) Any user-provided SSL certificate to be trusted. - :param proxies: (optional) The proxies dictionary to apply to the request. - :rtype: requests.Response - """ - - try: - conn = self.get_connection(request.url, proxies) - except LocationValueError as e: - raise InvalidURL(e, request=request) - - self.cert_verify(conn, request.url, verify, cert) - url = self.request_url(request, proxies) - self.add_headers( - request, - stream=stream, - timeout=timeout, - verify=verify, - cert=cert, - proxies=proxies, - ) - - chunked = not (request.body is None or "Content-Length" in request.headers) - - if isinstance(timeout, tuple): - try: - connect, read = timeout - timeout = TimeoutSauce(connect=connect, read=read) - except ValueError: - raise ValueError( - f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " - f"or a single float to set both timeouts to the same value." - ) - elif isinstance(timeout, TimeoutSauce): - pass - else: - timeout = TimeoutSauce(connect=timeout, read=timeout) - - try: - if not chunked: - resp = conn.urlopen( - method=request.method, - url=url, - body=request.body, - headers=request.headers, - redirect=False, - assert_same_host=False, - preload_content=False, - decode_content=False, - retries=self.max_retries, - timeout=timeout, - ) - - # Send the request. - else: - if hasattr(conn, "proxy_pool"): - conn = conn.proxy_pool - - low_conn = conn._get_conn(timeout=DEFAULT_POOL_TIMEOUT) - - try: - skip_host = "Host" in request.headers - low_conn.putrequest( - request.method, - url, - skip_accept_encoding=True, - skip_host=skip_host, - ) - - for header, value in request.headers.items(): - low_conn.putheader(header, value) - - low_conn.endheaders() - - for i in request.body: - low_conn.send(hex(len(i))[2:].encode("utf-8")) - low_conn.send(b"\r\n") - low_conn.send(i) - low_conn.send(b"\r\n") - low_conn.send(b"0\r\n\r\n") - - # Receive the response from the server - r = low_conn.getresponse() - - resp = HTTPResponse.from_httplib( - r, - pool=conn, - connection=low_conn, - preload_content=False, - decode_content=False, - ) - except Exception: - # If we hit any problems here, clean up the connection. - # Then, raise so that we can handle the actual exception. 
- low_conn.close() - raise - - except (ProtocolError, OSError) as err: - raise ConnectionError(err, request=request) - - except MaxRetryError as e: - if isinstance(e.reason, ConnectTimeoutError): - # TODO: Remove this in 3.0.0: see #2811 - if not isinstance(e.reason, NewConnectionError): - raise ConnectTimeout(e, request=request) - - if isinstance(e.reason, ResponseError): - raise RetryError(e, request=request) - - if isinstance(e.reason, _ProxyError): - raise ProxyError(e, request=request) - - if isinstance(e.reason, _SSLError): - # This branch is for urllib3 v1.22 and later. - raise SSLError(e, request=request) - - raise ConnectionError(e, request=request) - - except ClosedPoolError as e: - raise ConnectionError(e, request=request) - - except _ProxyError as e: - raise ProxyError(e) - - except (_SSLError, _HTTPError) as e: - if isinstance(e, _SSLError): - # This branch is for urllib3 versions earlier than v1.22 - raise SSLError(e, request=request) - elif isinstance(e, ReadTimeoutError): - raise ReadTimeout(e, request=request) - elif isinstance(e, _InvalidHeader): - raise InvalidHeader(e, request=request) - else: - raise - - return self.build_response(request, resp) diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/requests/status_codes.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/requests/status_codes.py deleted file mode 100644 index 4bd072be9769748a852740d037d5c63021472c9d..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/requests/status_codes.py +++ /dev/null @@ -1,128 +0,0 @@ -r""" -The ``codes`` object defines a mapping from common names for HTTP statuses -to their numerical codes, accessible either as attributes or as dictionary -items. - -Example:: - - >>> import requests - >>> requests.codes['temporary_redirect'] - 307 - >>> requests.codes.teapot - 418 - >>> requests.codes['\o/'] - 200 - -Some codes have multiple names, and both upper- and lower-case versions of -the names are allowed. For example, ``codes.ok``, ``codes.OK``, and -``codes.okay`` all correspond to the HTTP status code 200. -""" - -from .structures import LookupDict - -_codes = { - # Informational. - 100: ("continue",), - 101: ("switching_protocols",), - 102: ("processing",), - 103: ("checkpoint",), - 122: ("uri_too_long", "request_uri_too_long"), - 200: ("ok", "okay", "all_ok", "all_okay", "all_good", "\\o/", "✓"), - 201: ("created",), - 202: ("accepted",), - 203: ("non_authoritative_info", "non_authoritative_information"), - 204: ("no_content",), - 205: ("reset_content", "reset"), - 206: ("partial_content", "partial"), - 207: ("multi_status", "multiple_status", "multi_stati", "multiple_stati"), - 208: ("already_reported",), - 226: ("im_used",), - # Redirection. - 300: ("multiple_choices",), - 301: ("moved_permanently", "moved", "\\o-"), - 302: ("found",), - 303: ("see_other", "other"), - 304: ("not_modified",), - 305: ("use_proxy",), - 306: ("switch_proxy",), - 307: ("temporary_redirect", "temporary_moved", "temporary"), - 308: ( - "permanent_redirect", - "resume_incomplete", - "resume", - ), # "resume" and "resume_incomplete" to be removed in 3.0 - # Client Error. 
- 400: ("bad_request", "bad"), - 401: ("unauthorized",), - 402: ("payment_required", "payment"), - 403: ("forbidden",), - 404: ("not_found", "-o-"), - 405: ("method_not_allowed", "not_allowed"), - 406: ("not_acceptable",), - 407: ("proxy_authentication_required", "proxy_auth", "proxy_authentication"), - 408: ("request_timeout", "timeout"), - 409: ("conflict",), - 410: ("gone",), - 411: ("length_required",), - 412: ("precondition_failed", "precondition"), - 413: ("request_entity_too_large",), - 414: ("request_uri_too_large",), - 415: ("unsupported_media_type", "unsupported_media", "media_type"), - 416: ( - "requested_range_not_satisfiable", - "requested_range", - "range_not_satisfiable", - ), - 417: ("expectation_failed",), - 418: ("im_a_teapot", "teapot", "i_am_a_teapot"), - 421: ("misdirected_request",), - 422: ("unprocessable_entity", "unprocessable"), - 423: ("locked",), - 424: ("failed_dependency", "dependency"), - 425: ("unordered_collection", "unordered"), - 426: ("upgrade_required", "upgrade"), - 428: ("precondition_required", "precondition"), - 429: ("too_many_requests", "too_many"), - 431: ("header_fields_too_large", "fields_too_large"), - 444: ("no_response", "none"), - 449: ("retry_with", "retry"), - 450: ("blocked_by_windows_parental_controls", "parental_controls"), - 451: ("unavailable_for_legal_reasons", "legal_reasons"), - 499: ("client_closed_request",), - # Server Error. - 500: ("internal_server_error", "server_error", "/o\\", "✗"), - 501: ("not_implemented",), - 502: ("bad_gateway",), - 503: ("service_unavailable", "unavailable"), - 504: ("gateway_timeout",), - 505: ("http_version_not_supported", "http_version"), - 506: ("variant_also_negotiates",), - 507: ("insufficient_storage",), - 509: ("bandwidth_limit_exceeded", "bandwidth"), - 510: ("not_extended",), - 511: ("network_authentication_required", "network_auth", "network_authentication"), -} - -codes = LookupDict(name="status_codes") - - -def _init(): - for code, titles in _codes.items(): - for title in titles: - setattr(codes, title, code) - if not title.startswith(("\\", "/")): - setattr(codes, title.upper(), code) - - def doc(code): - names = ", ".join(f"``{n}``" for n in _codes[code]) - return "* %d: %s" % (code, names) - - global __doc__ - __doc__ = ( - __doc__ + "\n" + "\n".join(doc(code) for code in sorted(_codes)) - if __doc__ is not None - else None - ) - - -_init() diff --git a/spaces/tobiascz/demotime/pytorch_grad_cam/score_cam.py b/spaces/tobiascz/demotime/pytorch_grad_cam/score_cam.py deleted file mode 100644 index 2c814d226ab8a1452bc53d8e3770816a70a8a242..0000000000000000000000000000000000000000 --- a/spaces/tobiascz/demotime/pytorch_grad_cam/score_cam.py +++ /dev/null @@ -1,63 +0,0 @@ -import torch -import tqdm -from pytorch_grad_cam.base_cam import BaseCAM - - -class ScoreCAM(BaseCAM): - def __init__( - self, - model, - target_layers, - use_cuda=False, - reshape_transform=None): - super(ScoreCAM, self).__init__(model, - target_layers, - use_cuda, - reshape_transform=reshape_transform, - uses_gradients=False) - - if len(target_layers) > 0: - print("Warning: You are using ScoreCAM with target layers, " - "however ScoreCAM will ignore them.") - - def get_cam_weights(self, - input_tensor, - target_layer, - targets, - activations, - grads): - with torch.no_grad(): - upsample = torch.nn.UpsamplingBilinear2d( - size=input_tensor.shape[-2:]) - activation_tensor = torch.from_numpy(activations) - if self.cuda: - activation_tensor = activation_tensor.cuda() - - upsampled = upsample(activation_tensor) - - maxs = 
upsampled.view(upsampled.size(0), - upsampled.size(1), -1).max(dim=-1)[0] - mins = upsampled.view(upsampled.size(0), - upsampled.size(1), -1).min(dim=-1)[0] - - maxs, mins = maxs[:, :, None, None], mins[:, :, None, None] - upsampled = (upsampled - mins) / (maxs - mins) - - input_tensors = input_tensor[:, None, - :, :] * upsampled[:, :, None, :, :] - - if hasattr(self, "batch_size"): - BATCH_SIZE = self.batch_size - else: - BATCH_SIZE = 16 - - scores = [] - for target, tensor in zip(targets, input_tensors): - for i in tqdm.tqdm(range(0, tensor.size(0), BATCH_SIZE)): - batch = tensor[i: i + BATCH_SIZE, :] - outputs = [target(o).cpu().item() for o in self.model(batch)] - scores.extend(outputs) - scores = torch.Tensor(scores) - scores = scores.view(activations.shape[0], activations.shape[1]) - weights = torch.nn.Softmax(dim=-1)(scores).numpy() - return weights diff --git a/spaces/trttung1610/musicgen/audiocraft/grids/audiogen/audiogen_base_16khz.py b/spaces/trttung1610/musicgen/audiocraft/grids/audiogen/audiogen_base_16khz.py deleted file mode 100644 index 190cc1d0a1e316347e8ebbdfc8de7e2942c1b3d7..0000000000000000000000000000000000000000 --- a/spaces/trttung1610/musicgen/audiocraft/grids/audiogen/audiogen_base_16khz.py +++ /dev/null @@ -1,23 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from ..musicgen._explorers import LMExplorer -from ...environment import AudioCraftEnvironment - - -@LMExplorer -def explorer(launcher): - partitions = AudioCraftEnvironment.get_slurm_partitions(['team', 'global']) - launcher.slurm_(gpus=64, partition=partitions) - launcher.bind_(solver='audiogen/audiogen_base_16khz') - # replace this by the desired environmental sound dataset - launcher.bind_(dset='internal/sounds_16khz') - - fsdp = {'autocast': False, 'fsdp.use': True} - medium = {'model/lm/model_scale': 'medium'} - - launcher.bind_(fsdp) - launcher(medium) diff --git a/spaces/trttung1610/musicgen/audiocraft/optim/ema.py b/spaces/trttung1610/musicgen/audiocraft/optim/ema.py deleted file mode 100644 index 4337eaff066a8ca124dca3e3e63ee36e417c055c..0000000000000000000000000000000000000000 --- a/spaces/trttung1610/musicgen/audiocraft/optim/ema.py +++ /dev/null @@ -1,85 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -# ModelEMA implementation is taken from -# https://github.com/facebookresearch/demucs - -from collections import defaultdict -import typing as tp - -import torch -import torch.nn as nn - - -def _get_all_non_persistent_buffers_set(module: nn.Module, root: str = "") -> set: - names: set = set() - for (name, sub_module) in module.named_modules(): - if name == '': - buffer_names = module._non_persistent_buffers_set - buffer_names = {f"{root}.{buff_name}" if len(root) > 0 else buff_name - for buff_name in buffer_names} - names.update(buffer_names) - else: - sub_name = f"{root}.{name}" if len(root) > 0 else name - sub_buffer_names = _get_all_non_persistent_buffers_set(sub_module, sub_name) - names.update(sub_buffer_names) - return names - - -def _get_named_tensors(module: nn.Module): - non_persistent_buffers_set = _get_all_non_persistent_buffers_set(module) - named_buffers = [(name, buffer) for (name, buffer) in module.named_buffers() - if name not in non_persistent_buffers_set] - named_parameters = list(module.named_parameters()) - return named_parameters + named_buffers - - -class ModuleDictEMA: - """Exponential Moving Average over a nn.ModuleDict. - - You can switch to the EMA weights temporarily. - """ - def __init__(self, module_dict: nn.ModuleDict, decay: float = 0.999, - unbias: bool = True, device: tp.Union[torch.device, str] = 'cpu'): - self.decay = decay - self.module_dict = module_dict - self.state: dict = defaultdict(dict) - self.count = 0 - self.device = device - self.unbias = unbias - self._init() - - def _init(self): - for module_name, module in self.module_dict.items(): - for key, val in _get_named_tensors(module): - if not val.is_floating_point(): - continue - device = self.device or val.device - if key not in self.state[module_name]: - self.state[module_name][key] = val.detach().to(device, copy=True) - - def step(self): - if self.unbias: - self.count = self.count * self.decay + 1 - w = 1 / self.count - else: - w = 1 - self.decay - for module_name, module in self.module_dict.items(): - for key, val in _get_named_tensors(module): - if not val.is_floating_point(): - continue - device = self.device or val.device - self.state[module_name][key].mul_(1 - w) - self.state[module_name][key].add_(val.detach().to(device), alpha=w) - - def state_dict(self): - return {'state': self.state, 'count': self.count} - - def load_state_dict(self, state): - self.count = state['count'] - for module_name, module in state['state'].items(): - for key, val in module.items(): - self.state[module_name][key].copy_(val) diff --git a/spaces/twdac/BuChengFangYuan-ChineseJapaneseTranslation/app/my_py_lib/universal_batch_generator.py b/spaces/twdac/BuChengFangYuan-ChineseJapaneseTranslation/app/my_py_lib/universal_batch_generator.py deleted file mode 100644 index 7d0c5afba3a939d5f207ac59ac22b9325f023d6c..0000000000000000000000000000000000000000 --- a/spaces/twdac/BuChengFangYuan-ChineseJapaneseTranslation/app/my_py_lib/universal_batch_generator.py +++ /dev/null @@ -1,46 +0,0 @@ -''' -用于把生成器的多个输出打包成多个批量 -例如 -生成器1生成 (1,2,3) (4,5,6) -universal_batch_generator 将会输出 (1,4) (2,5) (3,6) -''' - - -def universal_batch_generator(g, batch_size): - elems = next(g) - assert isinstance(elems, tuple), '请务必确保生成器返回值为输出元组' - batch_bulk = [[i] for i in elems] - - for items in g: - assert len(items) == len(batch_bulk) - if len(batch_bulk[0]) == batch_size: - # 注意线程安全,所以需要返回 batch_bulk 的浅复制副本 - yield tuple(batch_bulk) - for i in range(len(batch_bulk)): - batch_bulk[i] = [] - - for i, bulk in zip(items, batch_bulk): - bulk.append(i) 
- - if len(batch_bulk[0]) > 0: - yield tuple(batch_bulk) - for i in range(len(batch_bulk)): - batch_bulk[i] = [] - - -if __name__ == '__main__': - def gen1(n_out, epochs): - ''' - 测试用任意生成器 - :param n_out: 多少个输出 - :param epochs: 多少轮 - :return: - ''' - for e in range(epochs): - out = (e,) * n_out - yield out - - g = gen1(5, 100) - g2 = universal_batch_generator(g, 6) - for i in g2: - print(i) diff --git a/spaces/user238921933/stable-diffusion-webui/extensions/deforum/install.py b/spaces/user238921933/stable-diffusion-webui/extensions/deforum/install.py deleted file mode 100644 index b9166e71c44972d8582836239636d0f483a51ff5..0000000000000000000000000000000000000000 --- a/spaces/user238921933/stable-diffusion-webui/extensions/deforum/install.py +++ /dev/null @@ -1,14 +0,0 @@ -import launch -import os -import sys - -req_file = os.path.join(os.path.dirname(os.path.realpath(__file__)), "requirements.txt") - -with open(req_file) as file: - for lib in file: - lib = lib.strip() - if not launch.is_installed(lib): - if lib == 'rich': - launch.run(f'"{sys.executable}" -m pip install {lib}', desc=f"Installing Deforum requirement: {lib}", errdesc=f"Couldn't install {lib}") - else: - launch.run_pip(f"install {lib}", f"Deforum requirement: {lib}") diff --git a/spaces/vaibhavsharda/semantic_clustering/app.py b/spaces/vaibhavsharda/semantic_clustering/app.py deleted file mode 100644 index 8fbaee1c322fcbdb1db250bfbdcf8d9f11ec78a2..0000000000000000000000000000000000000000 --- a/spaces/vaibhavsharda/semantic_clustering/app.py +++ /dev/null @@ -1,321 +0,0 @@ -import time -import sys -import streamlit as st -import string -from io import StringIO -import pdb -import json -from twc_embeddings import HFModel,SimCSEModel,SGPTModel,CausalLMModel,SGPTQnAModel -from twc_openai_embeddings import OpenAIModel -from twc_clustering import TWCClustering -import torch -import requests -import socket - - -MAX_INPUT = 10000 - -SEM_SIMILARITY="1" -DOC_RETRIEVAL="2" -CLUSTERING="3" - - -use_case = {"1":"Finding similar phrases/sentences","2":"Retrieving semantically matching information to a query. It may not be a factual match","3":"Clustering"} -use_case_url = {"1":"https://huggingface.co/spaces/taskswithcode/semantic_similarity","2":"https://huggingface.co/spaces/taskswithcode/semantic_search","3":""} - - - -from transformers import BertTokenizer, BertForMaskedLM - - -APP_NAME = "hf/semantic_clustering" -INFO_URL = "https://www.taskswithcode.com/stats/" - - - - - -def get_views(action): - ret_val = 0 - hostname = socket.gethostname() - ip_address = socket.gethostbyname(hostname) - if ("view_count" not in st.session_state): - try: - app_info = {'name': APP_NAME,"action":action,"host":hostname,"ip":ip_address} - #res = requests.post(INFO_URL, json = app_info).json() - #print(res) - data = res["count"] - except: - data = 0 - ret_val = data - st.session_state["view_count"] = data - else: - ret_val = st.session_state["view_count"] - if (action != "init"): - app_info = {'name': APP_NAME,"action":action,"host":hostname,"ip":ip_address} - #res = requests.post(INFO_URL, json = app_info).json() - return "{:,}".format(ret_val) - - - - -def construct_model_info_for_display(model_names): - options_arr = [] - markdown_str = f"

                            Models evaluated ({len(model_names)})
                            The selected models satisfy one or more of the following: (1) they are state-of-the-art, (2) they are among the most downloaded models on Hugging Face, or (3) they are Large Language Models (e.g. GPT-3)
                            " - markdown_str += f"

                            " - for node in model_names: - options_arr .append(node["name"]) - if (node["mark"] == "True"): - markdown_str += f"
                             • Model: {node['name']}
                                Code released by: {node['orig_author']}
                                Model info: {node['sota_info']['task']}
                            " - if ("Note" in node): - markdown_str += f"
                                {node['Note']}link
                            " - markdown_str += "

                            " - - markdown_str += "
                            Note:
                            • Uploaded files are loaded into non-persistent memory for the duration of the computation. They are not cached
                            " - limit = "{:,}".format(MAX_INPUT) - markdown_str += f"
                            • User uploaded file has a maximum limit of {limit} sentences.
                            " - return options_arr,markdown_str - - -st.set_page_config(page_title='TWC - Compare popular/state-of-the-art models for semantic clustering using sentence embeddings', page_icon="logo.jpg", layout='centered', initial_sidebar_state='auto', - menu_items={ - 'About': 'This app was created by taskswithcode. http://taskswithcode.com' - - }) -col,pad = st.columns([85,15]) - -with col: - st.image("long_form_logo_with_icon.png") - - -@st.experimental_memo -def load_model(model_name,model_class,load_model_name): - try: - ret_model = None - obj_class = globals()[model_class] - ret_model = obj_class() - ret_model.init_model(load_model_name) - assert(ret_model is not None) - except Exception as e: - st.error(f"Unable to load model class:{model_class} model_name: {model_name} load_model_name: {load_model_name} {str(e)}") - pass - return ret_model - - - -@st.experimental_memo -def cached_compute_similarity(input_file_name,sentences,_model,model_name,threshold,_cluster,clustering_type): - texts,embeddings = _model.compute_embeddings(input_file_name,sentences,is_file=False) - results = _cluster.cluster(None,texts,embeddings,threshold,clustering_type) - return results - - -def uncached_compute_similarity(input_file_name,sentences,_model,model_name,threshold,cluster,clustering_type): - with st.spinner('Computing vectors for sentences'): - texts,embeddings = _model.compute_embeddings(input_file_name,sentences,is_file=False) - results = cluster.cluster(None,texts,embeddings,threshold,clustering_type) - #st.success("Similarity computation complete") - return results - -DEFAULT_HF_MODEL = "sentence-transformers/paraphrase-MiniLM-L6-v2" -def get_model_info(model_names,model_name): - for node in model_names: - if (model_name == node["name"]): - return node,model_name - return get_model_info(model_names,DEFAULT_HF_MODEL) - - -def run_test(model_names,model_name,input_file_name,sentences,display_area,threshold,user_uploaded,custom_model,clustering_type): - display_area.text("Loading model:" + model_name) - #Note. model_name may get mapped to new name in the call below for custom models - orig_model_name = model_name - model_info,model_name = get_model_info(model_names,model_name) - if (model_name != orig_model_name): - load_model_name = orig_model_name - else: - load_model_name = model_info["model"] - if ("Note" in model_info): - fail_link = f"{model_info['Note']} [link]({model_info['alt_url']})" - display_area.write(fail_link) - if (user_uploaded and "custom_load" in model_info and model_info["custom_load"] == "False"): - fail_link = f"{model_info['Note']} [link]({model_info['alt_url']})" - display_area.write(fail_link) - return {"error":fail_link} - model = load_model(model_name,model_info["class"],load_model_name) - display_area.text("Model " + model_name + " load complete") - try: - if (user_uploaded): - results = uncached_compute_similarity(input_file_name,sentences,model,model_name,threshold,st.session_state["cluster"],clustering_type) - else: - display_area.text("Computing vectors for sentences") - results = cached_compute_similarity(input_file_name,sentences,model,model_name,threshold,st.session_state["cluster"],clustering_type) - display_area.text("Similarity computation complete") - return results - - except Exception as e: - st.error("Some error occurred during prediction" + str(e)) - st.stop() - return {} - - - - - -def display_results(orig_sentences,results,response_info,app_mode,model_name): - main_sent = f"
                            {response_info}

                            " - main_sent += f"
                            Showing results for model: {model_name}
                            " - score_text = "cosine distance" - main_sent += f"
                            Clustering by {score_text}. {len(results['clusters'])} clusters.  mean:{results['info']['mean']:.2f}; std:{results['info']['std']:.2f}; current threshold:{results['info']['current_threshold']}
                            Threshold hints:{str(results['info']['zscores'])}
                            Overlap stats(overlap,freq):{str(results['info']['overlap'])}
                            " - body_sent = [] - download_data = {} - for i in range(len(results["clusters"])): - pivot_index = results["clusters"][i]["pivot_index"] - pivot_sent = orig_sentences[pivot_index] - pivot_index += 1 - d_cluster = {} - download_data[i + 1] = d_cluster - d_cluster["pivot"] = {"pivot_index":pivot_index,"sent":pivot_sent,"children":{}} - body_sent.append(f"
                            {pivot_index}] {pivot_sent} (Cluster {i+1})  
                            ") - neighs_dict = results["clusters"][i]["neighs"] - for key in neighs_dict: - cosine_dist = neighs_dict[key] - child_index = key - sentence = orig_sentences[child_index] - child_index += 1 - body_sent.append(f"
                            {child_index}] {sentence}   {cosine_dist:.2f}
                            ") - d_cluster["pivot"]["children"][sentence] = f"{cosine_dist:.2f}" - body_sent.append(f"
                             
                            ") - main_sent = main_sent + "\n" + '\n'.join(body_sent) - st.markdown(main_sent,unsafe_allow_html=True) - st.session_state["download_ready"] = json.dumps(download_data,indent=4) - get_views("submit") - - -def init_session(): - if ("model_name" not in st.session_state): - st.session_state["model_name"] = "ss_test" - st.session_state["download_ready"] = None - st.session_state["model_name"] = "ss_test" - st.session_state["threshold"] = 1.5 - st.session_state["file_name"] = "default" - st.session_state["overlapped"] = "overlapped" - st.session_state["cluster"] = TWCClustering() - else: - print("Skipping init session") - -def app_main(app_mode,example_files,model_name_files,clus_types): - init_session() - with open(example_files) as fp: - example_file_names = json.load(fp) - with open(model_name_files) as fp: - model_names = json.load(fp) - with open(clus_types) as fp: - cluster_types = json.load(fp) - curr_use_case = use_case[app_mode].split(".")[0] - st.markdown("
                            Compare popular/state-of-the-art models for semantic clustering using sentence embeddings
                            ", unsafe_allow_html=True) - st.markdown(f"

                            Or compare your own model with state-of-the-art/popular models

                            ", unsafe_allow_html=True) - st.markdown(f"
                            Use cases for sentence embeddings
                               •  {use_case['1']}
                               •  {use_case['2']}
                               •  {use_case['3']}
                            This app illustrates the '{curr_use_case}' use case
                            ", unsafe_allow_html=True) - st.markdown(f"
                            views: {get_views('init')}
                            ", unsafe_allow_html=True) - - - try: - - with st.form('twc_form'): - - step1_line = "Upload text file(one sentence in a line) or choose an example text file below" - if (app_mode == DOC_RETRIEVAL): - step1_line += ". The first line is treated as the query" - uploaded_file = st.file_uploader(step1_line, type=".txt") - - selected_file_index = st.selectbox(label=f'Example files ({len(example_file_names)})', - options = list(dict.keys(example_file_names)), index=0, key = "twc_file") - st.write("") - options_arr,markdown_str = construct_model_info_for_display(model_names) - selection_label = 'Select Model' - selected_model = st.selectbox(label=selection_label, - options = options_arr, index=0, key = "twc_model") - st.write("") - custom_model_selection = st.text_input("Model not listed above? Type any Hugging Face sentence embedding model name ", "",key="custom_model") - hf_link_str = "" - st.markdown(hf_link_str, unsafe_allow_html=True) - threshold = st.number_input('Choose a zscore threshold (number of std devs from mean)',value=st.session_state["threshold"],min_value = 0.0,step=.01) - st.write("") - clustering_type = st.selectbox(label=f'Select type of clustering', - options = list(dict.keys(cluster_types)), index=0, key = "twc_cluster_types") - st.write("") - submit_button = st.form_submit_button('Run') - - - input_status_area = st.empty() - display_area = st.empty() - if submit_button: - start = time.time() - if uploaded_file is not None: - st.session_state["file_name"] = uploaded_file.name - sentences = StringIO(uploaded_file.getvalue().decode("utf-8")).read() - else: - st.session_state["file_name"] = example_file_names[selected_file_index]["name"] - sentences = open(example_file_names[selected_file_index]["name"]).read() - sentences = sentences.split("\n")[:-1] - if (len(sentences) > MAX_INPUT): - st.info(f"Input sentence count exceeds maximum sentence limit. First {MAX_INPUT} out of {len(sentences)} sentences chosen") - sentences = sentences[:MAX_INPUT] - if (len(custom_model_selection) != 0): - run_model = custom_model_selection - else: - run_model = selected_model - st.session_state["model_name"] = selected_model - st.session_state["threshold"] = threshold - st.session_state["overlapped"] = cluster_types[clustering_type]["type"] - results = run_test(model_names,run_model,st.session_state["file_name"],sentences,display_area,threshold,(uploaded_file is not None),(len(custom_model_selection) != 0),cluster_types[clustering_type]["type"]) - display_area.empty() - with display_area.container(): - if ("error" in results): - st.error(results["error"]) - else: - device = 'GPU' if torch.cuda.is_available() else 'CPU' - response_info = f"Computation time on {device}: {time.time() - start:.2f} secs for {len(sentences)} sentences" - if (len(custom_model_selection) != 0): - st.info("Custom model overrides model selection in step 2 above. 
So please clear the custom model text box to choose models from step 2") - display_results(sentences,results,response_info,app_mode,run_model) - #st.json(results) - - if submit_button: - st.download_button( - label="Download results as JSON", - data=st.session_state["download_ready"] if st.session_state["download_ready"] is not None else "", - disabled=not st.session_state["download_ready"], - file_name=(st.session_state["model_name"] + "_" + str(st.session_state["threshold"]) + "_" + - st.session_state["overlapped"] + "_" + '_'.join(st.session_state["file_name"].split(".")[:-1]) + - ".json").replace("/", "_"), - mime='text/json', - key="download" - ) - - - - except Exception as e: - st.error("Some error occurred during loading" + str(e)) - if submit_button: - st.download_button( - label="Download results as JSON", - data=st.session_state["download_ready"] if st.session_state["download_ready"] is not None else "", - disabled=not st.session_state["download_ready"], - file_name=(st.session_state["model_name"] + "_" + str(st.session_state["threshold"]) + "_" + - st.session_state["overlapped"] + "_" + '_'.join(st.session_state["file_name"].split(".")[:-1]) + - ".json").replace("/", "_"), - mime='text/json', - key="download" - ) - #st.stop() - - st.markdown(markdown_str, unsafe_allow_html=True) - - - -if __name__ == "__main__": - #print("comand line input:",len(sys.argv),str(sys.argv)) - #app_main(sys.argv[1],sys.argv[2],sys.argv[3]) - #app_main("1","sim_app_examples.json","sim_app_models.json") - app_main("3","clus_app_examples.json","clus_app_models.json","clus_app_clustypes.json") - diff --git a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/reference/yolo/utils/callbacks/dvc.md b/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/reference/yolo/utils/callbacks/dvc.md deleted file mode 100644 index b32fc7a47e8b3acaaeca39b4eaf3fb62362db45d..0000000000000000000000000000000000000000 --- a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/reference/yolo/utils/callbacks/dvc.md +++ /dev/null @@ -1,54 +0,0 @@ ---- -description: Explore Ultralytics YOLO Utils DVC Callbacks such as logging images, plots, confusion matrices, and training progress. -keywords: Ultralytics, YOLO, Utils, DVC, Callbacks, images, plots, confusion matrices, training progress ---- - -## _logger_disabled ---- -### ::: ultralytics.yolo.utils.callbacks.dvc._logger_disabled -

                            - -## _log_images ---- -### ::: ultralytics.yolo.utils.callbacks.dvc._log_images -

                            - -## _log_plots ---- -### ::: ultralytics.yolo.utils.callbacks.dvc._log_plots -

                            - -## _log_confusion_matrix ---- -### ::: ultralytics.yolo.utils.callbacks.dvc._log_confusion_matrix -

                            - -## on_pretrain_routine_start ---- -### ::: ultralytics.yolo.utils.callbacks.dvc.on_pretrain_routine_start -

                            - -## on_pretrain_routine_end ---- -### ::: ultralytics.yolo.utils.callbacks.dvc.on_pretrain_routine_end -

                            - -## on_train_start ---- -### ::: ultralytics.yolo.utils.callbacks.dvc.on_train_start -

                            - -## on_train_epoch_start ---- -### ::: ultralytics.yolo.utils.callbacks.dvc.on_train_epoch_start -

                            - -## on_fit_epoch_end ---- -### ::: ultralytics.yolo.utils.callbacks.dvc.on_fit_epoch_end -

                            - -## on_train_end ---- -### ::: ultralytics.yolo.utils.callbacks.dvc.on_train_end -

                            diff --git a/spaces/venkatks515/VenkatASR/app.py b/spaces/venkatks515/VenkatASR/app.py deleted file mode 100644 index 0c6e9e01d3943e315d1a360c4ffff7b27afb8c51..0000000000000000000000000000000000000000 --- a/spaces/venkatks515/VenkatASR/app.py +++ /dev/null @@ -1,138 +0,0 @@ -import gradio as gr -import torch -import time -import librosa -import soundfile -import nemo.collections.asr as nemo_asr -import tempfile -import os -import uuid - -from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration -import torch - -# PersistDataset ----- -import os -import csv -import gradio as gr -from gradio import inputs, outputs -import huggingface_hub -from huggingface_hub import Repository, hf_hub_download, upload_file -from datetime import datetime - -# --------------------------------------------- -# Dataset and Token links - change awacke1 to your own HF id, and add a HF_TOKEN copy to your repo for write permissions -# This should allow you to save your results to your own Dataset hosted on HF. - -DATASET_REPO_URL = "https://huggingface.co/datasets/awacke1/ASRLive.csv" -DATASET_REPO_ID = "awacke1/ASRLive.csv" -DATA_FILENAME = "ASRLive.csv" -DATA_FILE = os.path.join("data", DATA_FILENAME) -HF_TOKEN = os.environ.get("HF_TOKEN") - -PersistToDataset = False -#PersistToDataset = True # uncomment to save inference output to ASRLive.csv dataset - -if PersistToDataset: - try: - hf_hub_download( - repo_id=DATASET_REPO_ID, - filename=DATA_FILENAME, - cache_dir=DATA_DIRNAME, - force_filename=DATA_FILENAME - ) - except: - print("file not found") - repo = Repository( - local_dir="data", clone_from=DATASET_REPO_URL, use_auth_token=HF_TOKEN - ) - -def store_message(name: str, message: str): - if name and message: - with open(DATA_FILE, "a") as csvfile: - writer = csv.DictWriter(csvfile, fieldnames=["name", "message", "time"]) - writer.writerow( - {"name": name.strip(), "message": message.strip(), "time": str(datetime.now())} - ) - # uncomment line below to begin saving - - commit_url = repo.push_to_hub() - ret = "" - with open(DATA_FILE, "r") as csvfile: - reader = csv.DictReader(csvfile) - - for row in reader: - ret += row - ret += "\r\n" - return ret - -# main ------------------------- -mname = "facebook/blenderbot-400M-distill" -model = BlenderbotForConditionalGeneration.from_pretrained(mname) -tokenizer = BlenderbotTokenizer.from_pretrained(mname) - -def take_last_tokens(inputs, note_history, history): - filterTokenCount = 128 # filter last 128 tokens - if inputs['input_ids'].shape[1] > filterTokenCount: - inputs['input_ids'] = torch.tensor([inputs['input_ids'][0][-filterTokenCount:].tolist()]) - inputs['attention_mask'] = torch.tensor([inputs['attention_mask'][0][-filterTokenCount:].tolist()]) - note_history = [' '.join(note_history[0].split(' ')[2:])] - history = history[1:] - return inputs, note_history, history - -def add_note_to_history(note, note_history): - note_history.append(note) - note_history = ' '.join(note_history) - return [note_history] - - - -SAMPLE_RATE = 16000 -model = nemo_asr.models.EncDecRNNTBPEModel.from_pretrained("nvidia/stt_en_conformer_transducer_xlarge") -model.change_decoding_strategy(None) -model.eval() - -def process_audio_file(file): - data, sr = librosa.load(file) - if sr != SAMPLE_RATE: - data = librosa.resample(data, orig_sr=sr, target_sr=SAMPLE_RATE) - data = librosa.to_mono(data) - return data - - -def transcribe(audio, state = ""): - if state is None: - state = "" - audio_data = process_audio_file(audio) - with 
tempfile.TemporaryDirectory() as tmpdir: - audio_path = os.path.join(tmpdir, f'audio_{uuid.uuid4()}.wav') - soundfile.write(audio_path, audio_data, SAMPLE_RATE) - transcriptions = model.transcribe([audio_path]) - if type(transcriptions) == tuple and len(transcriptions) == 2: - transcriptions = transcriptions[0] - transcriptions = transcriptions[0] - - if PersistToDataset: - ret = store_message(transcriptions, state) # Save to dataset - uncomment to store into a dataset - hint you will need your HF_TOKEN - state = state + transcriptions + " " + ret - else: - state = state + transcriptions - return state, state - -gr.Interface( - fn=transcribe, - inputs=[ - gr.Audio(source="microphone", type='filepath', streaming=True), - "state", - ], - outputs=[ - "textbox", - "state" - ], - layout="horizontal", - theme="huggingface", - title="🗣️ASR-Live🧠Memory💾", - description=f"Live Automatic Speech Recognition (ASR) with Memory💾 Dataset.", - allow_flagging='never', - live=True, - article=f"Result Output Saved to Memory💾 Dataset: [{DATASET_REPO_URL}]({DATASET_REPO_URL})" -).launch(debug=True) diff --git a/spaces/voices/voice-directory/Dockerfile b/spaces/voices/voice-directory/Dockerfile deleted file mode 100644 index 49fcf4378af4233b27bee290d3b439cb4ba771b1..0000000000000000000000000000000000000000 --- a/spaces/voices/voice-directory/Dockerfile +++ /dev/null @@ -1,42 +0,0 @@ -FROM python:3.9 - -# Update apt -RUN apt-get update -y - -# Add apt packages -RUN apt-get install libsndfile1 curl wget git-lfs espeak-ng -y - -# Deps -# RUN apt-get install libsndfile1 espeak-ng -y - -# Set up a new user named "user" with user ID 1000 -RUN useradd -m -u 1000 user - -# Switch to the "user" user -USER user - -# Set home to the user's home directory -ENV HOME=/home/user \ - PATH=/home/user/.local/bin:$PATH - -# Set the working directory to the user's home directory -WORKDIR $HOME/app - -# Clone the GitHub repository -RUN git clone https://github.com/neural-loop/TTS.git . - -RUN pip install --no-cache-dir --upgrade tts - -# Install dependencies -RUN pip install --no-cache-dir -r requirements.txt - -RUN git lfs install -RUN git clone https://huggingface.co/voices/VCTK_Canadian_English model - -# Copy the current directory contents into the container at $HOME/app, setting the owner to the user -COPY --chown=user . $HOME/app - -RUN sed -i 's/supplemental\//model\/supplemental\//g' model/config.json - -# Set the command to run the server -CMD ["python", "TTS/server/server.py", "--model_path", "model/checkpoint_40000.pth", "--config_path", "model/config.json", "--port", "7860"] \ No newline at end of file diff --git a/spaces/vumichien/canvas_controlnet/annotator/midas/utils.py b/spaces/vumichien/canvas_controlnet/annotator/midas/utils.py deleted file mode 100644 index 9a9d3b5b66370fa98da9e067ba53ead848ea9a59..0000000000000000000000000000000000000000 --- a/spaces/vumichien/canvas_controlnet/annotator/midas/utils.py +++ /dev/null @@ -1,189 +0,0 @@ -"""Utils for monoDepth.""" -import sys -import re -import numpy as np -import cv2 -import torch - - -def read_pfm(path): - """Read pfm file. 
- - Args: - path (str): path to file - - Returns: - tuple: (data, scale) - """ - with open(path, "rb") as file: - - color = None - width = None - height = None - scale = None - endian = None - - header = file.readline().rstrip() - if header.decode("ascii") == "PF": - color = True - elif header.decode("ascii") == "Pf": - color = False - else: - raise Exception("Not a PFM file: " + path) - - dim_match = re.match(r"^(\d+)\s(\d+)\s$", file.readline().decode("ascii")) - if dim_match: - width, height = list(map(int, dim_match.groups())) - else: - raise Exception("Malformed PFM header.") - - scale = float(file.readline().decode("ascii").rstrip()) - if scale < 0: - # little-endian - endian = "<" - scale = -scale - else: - # big-endian - endian = ">" - - data = np.fromfile(file, endian + "f") - shape = (height, width, 3) if color else (height, width) - - data = np.reshape(data, shape) - data = np.flipud(data) - - return data, scale - - -def write_pfm(path, image, scale=1): - """Write pfm file. - - Args: - path (str): pathto file - image (array): data - scale (int, optional): Scale. Defaults to 1. - """ - - with open(path, "wb") as file: - color = None - - if image.dtype.name != "float32": - raise Exception("Image dtype must be float32.") - - image = np.flipud(image) - - if len(image.shape) == 3 and image.shape[2] == 3: # color image - color = True - elif ( - len(image.shape) == 2 or len(image.shape) == 3 and image.shape[2] == 1 - ): # greyscale - color = False - else: - raise Exception("Image must have H x W x 3, H x W x 1 or H x W dimensions.") - - file.write("PF\n" if color else "Pf\n".encode()) - file.write("%d %d\n".encode() % (image.shape[1], image.shape[0])) - - endian = image.dtype.byteorder - - if endian == "<" or endian == "=" and sys.byteorder == "little": - scale = -scale - - file.write("%f\n".encode() % scale) - - image.tofile(file) - - -def read_image(path): - """Read image and output RGB image (0-1). - - Args: - path (str): path to file - - Returns: - array: RGB image (0-1) - """ - img = cv2.imread(path) - - if img.ndim == 2: - img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) - - img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) / 255.0 - - return img - - -def resize_image(img): - """Resize image and make it fit for network. - - Args: - img (array): image - - Returns: - tensor: data ready for network - """ - height_orig = img.shape[0] - width_orig = img.shape[1] - - if width_orig > height_orig: - scale = width_orig / 384 - else: - scale = height_orig / 384 - - height = (np.ceil(height_orig / scale / 32) * 32).astype(int) - width = (np.ceil(width_orig / scale / 32) * 32).astype(int) - - img_resized = cv2.resize(img, (width, height), interpolation=cv2.INTER_AREA) - - img_resized = ( - torch.from_numpy(np.transpose(img_resized, (2, 0, 1))).contiguous().float() - ) - img_resized = img_resized.unsqueeze(0) - - return img_resized - - -def resize_depth(depth, width, height): - """Resize depth map and bring to CPU (numpy). - - Args: - depth (tensor): depth - width (int): image width - height (int): image height - - Returns: - array: processed depth - """ - depth = torch.squeeze(depth[0, :, :, :]).to("cpu") - - depth_resized = cv2.resize( - depth.numpy(), (width, height), interpolation=cv2.INTER_CUBIC - ) - - return depth_resized - -def write_depth(path, depth, bits=1): - """Write depth map to pfm and png file. 
- - Args: - path (str): filepath without extension - depth (array): depth - """ - write_pfm(path + ".pfm", depth.astype(np.float32)) - - depth_min = depth.min() - depth_max = depth.max() - - max_val = (2**(8*bits))-1 - - if depth_max - depth_min > np.finfo("float").eps: - out = max_val * (depth - depth_min) / (depth_max - depth_min) - else: - out = np.zeros(depth.shape, dtype=depth.type) - - if bits == 1: - cv2.imwrite(path + ".png", out.astype("uint8")) - elif bits == 2: - cv2.imwrite(path + ".png", out.astype("uint16")) - - return diff --git a/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmseg/core/seg/builder.py b/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmseg/core/seg/builder.py deleted file mode 100644 index db61f03d4abb2072f2532ce4429c0842495e015b..0000000000000000000000000000000000000000 --- a/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmseg/core/seg/builder.py +++ /dev/null @@ -1,8 +0,0 @@ -from annotator.uniformer.mmcv.utils import Registry, build_from_cfg - -PIXEL_SAMPLERS = Registry('pixel sampler') - - -def build_pixel_sampler(cfg, **default_args): - """Build pixel sampler for segmentation map.""" - return build_from_cfg(cfg, PIXEL_SAMPLERS, default_args) diff --git a/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmseg/datasets/voc.py b/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmseg/datasets/voc.py deleted file mode 100644 index a8855203b14ee0dc4da9099a2945d4aedcffbcd6..0000000000000000000000000000000000000000 --- a/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmseg/datasets/voc.py +++ /dev/null @@ -1,29 +0,0 @@ -import os.path as osp - -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class PascalVOCDataset(CustomDataset): - """Pascal VOC dataset. - - Args: - split (str): Split txt file for Pascal VOC. - """ - - CLASSES = ('background', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle', - 'bus', 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog', - 'horse', 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', - 'train', 'tvmonitor') - - PALETTE = [[0, 0, 0], [128, 0, 0], [0, 128, 0], [128, 128, 0], [0, 0, 128], - [128, 0, 128], [0, 128, 128], [128, 128, 128], [64, 0, 0], - [192, 0, 0], [64, 128, 0], [192, 128, 0], [64, 0, 128], - [192, 0, 128], [64, 128, 128], [192, 128, 128], [0, 64, 0], - [128, 64, 0], [0, 192, 0], [128, 192, 0], [0, 64, 128]] - - def __init__(self, split, **kwargs): - super(PascalVOCDataset, self).__init__( - img_suffix='.jpg', seg_map_suffix='.png', split=split, **kwargs) - assert osp.exists(self.img_dir) and self.split is not None diff --git a/spaces/w1zrd/MusicGen/tests/modules/test_conv.py b/spaces/w1zrd/MusicGen/tests/modules/test_conv.py deleted file mode 100644 index 28fbc4f1a0ebaf41b56947b767958ae696e75eec..0000000000000000000000000000000000000000 --- a/spaces/w1zrd/MusicGen/tests/modules/test_conv.py +++ /dev/null @@ -1,203 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from itertools import product -import math -import random - -import pytest -import torch -from torch import nn - -from audiocraft.modules import ( - NormConv1d, - NormConvTranspose1d, - StreamableConv1d, - StreamableConvTranspose1d, - pad1d, - unpad1d, -) - - -def test_get_extra_padding_for_conv1d(): - # TODO: Implement me! 
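    # A possible implementation (sketch): the helper under test is assumed to be
    # importable as audiocraft.modules.get_extra_padding_for_conv1d and to return
    # the extra right-padding needed so that the last convolution window is full,
    # i.e. the same frame arithmetic used by get_streamable_conv1d_output_length
    # further down in this file.
    #
    #   x = torch.randn(1, 1, 20)
    #   kernel_size, stride, padding_total = 4, 3, 0
    #   extra = get_extra_padding_for_conv1d(x, kernel_size, stride, padding_total)
    #   n_frames = (20 - kernel_size + padding_total) / stride + 1
    #   ideal_length = (math.ceil(n_frames) - 1) * stride + (kernel_size - padding_total)
    #   assert extra == ideal_length - 20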
- pass - - -def test_pad1d_zeros(): - x = torch.randn(1, 1, 20) - - xp1 = pad1d(x, (0, 5), mode='constant', value=0.) - assert xp1.shape[-1] == 25 - xp2 = pad1d(x, (5, 5), mode='constant', value=0.) - assert xp2.shape[-1] == 30 - xp3 = pad1d(x, (0, 0), mode='constant', value=0.) - assert xp3.shape[-1] == 20 - xp4 = pad1d(x, (10, 30), mode='constant', value=0.) - assert xp4.shape[-1] == 60 - - with pytest.raises(AssertionError): - pad1d(x, (-1, 0), mode='constant', value=0.) - - with pytest.raises(AssertionError): - pad1d(x, (0, -1), mode='constant', value=0.) - - with pytest.raises(AssertionError): - pad1d(x, (-1, -1), mode='constant', value=0.) - - -def test_pad1d_reflect(): - x = torch.randn(1, 1, 20) - - xp1 = pad1d(x, (0, 5), mode='reflect', value=0.) - assert xp1.shape[-1] == 25 - xp2 = pad1d(x, (5, 5), mode='reflect', value=0.) - assert xp2.shape[-1] == 30 - xp3 = pad1d(x, (0, 0), mode='reflect', value=0.) - assert xp3.shape[-1] == 20 - xp4 = pad1d(x, (10, 30), mode='reflect', value=0.) - assert xp4.shape[-1] == 60 - - with pytest.raises(AssertionError): - pad1d(x, (-1, 0), mode='reflect', value=0.) - - with pytest.raises(AssertionError): - pad1d(x, (0, -1), mode='reflect', value=0.) - - with pytest.raises(AssertionError): - pad1d(x, (-1, -1), mode='reflect', value=0.) - - -def test_unpad1d(): - x = torch.randn(1, 1, 20) - - u1 = unpad1d(x, (5, 5)) - assert u1.shape[-1] == 10 - u2 = unpad1d(x, (0, 5)) - assert u2.shape[-1] == 15 - u3 = unpad1d(x, (5, 0)) - assert u3.shape[-1] == 15 - u4 = unpad1d(x, (0, 0)) - assert u4.shape[-1] == x.shape[-1] - - with pytest.raises(AssertionError): - unpad1d(x, (-1, 0)) - - with pytest.raises(AssertionError): - unpad1d(x, (0, -1)) - - with pytest.raises(AssertionError): - unpad1d(x, (-1, -1)) - - -class TestNormConv1d: - - def test_norm_conv1d_modules(self): - N, C, T = 2, 2, random.randrange(1, 100_000) - t0 = torch.randn(N, C, T) - - C_out, kernel_size, stride = 1, 4, 1 - expected_out_length = int((T - kernel_size) / stride + 1) - wn_conv = NormConv1d(C, 1, kernel_size=4, norm='weight_norm') - gn_conv = NormConv1d(C, 1, kernel_size=4, norm='time_group_norm') - nn_conv = NormConv1d(C, 1, kernel_size=4, norm='none') - - assert isinstance(wn_conv.norm, nn.Identity) - assert isinstance(wn_conv.conv, nn.Conv1d) - - assert isinstance(gn_conv.norm, nn.GroupNorm) - assert isinstance(gn_conv.conv, nn.Conv1d) - - assert isinstance(nn_conv.norm, nn.Identity) - assert isinstance(nn_conv.conv, nn.Conv1d) - - for conv_layer in [wn_conv, gn_conv, nn_conv]: - out = conv_layer(t0) - assert isinstance(out, torch.Tensor) - assert list(out.shape) == [N, C_out, expected_out_length] - - -class TestNormConvTranspose1d: - - def test_normalizations(self): - N, C, T = 2, 2, random.randrange(1, 100_000) - t0 = torch.randn(N, C, T) - - C_out, kernel_size, stride = 1, 4, 1 - expected_out_length = (T - 1) * stride + (kernel_size - 1) + 1 - - wn_convtr = NormConvTranspose1d(C, C_out, kernel_size=kernel_size, stride=stride, norm='weight_norm') - gn_convtr = NormConvTranspose1d(C, C_out, kernel_size=kernel_size, stride=stride, norm='time_group_norm') - nn_convtr = NormConvTranspose1d(C, C_out, kernel_size=kernel_size, stride=stride, norm='none') - - assert isinstance(wn_convtr.norm, nn.Identity) - assert isinstance(wn_convtr.convtr, nn.ConvTranspose1d) - - assert isinstance(gn_convtr.norm, nn.GroupNorm) - assert isinstance(gn_convtr.convtr, nn.ConvTranspose1d) - - assert isinstance(nn_convtr.norm, nn.Identity) - assert isinstance(nn_convtr.convtr, nn.ConvTranspose1d) - - for 
convtr_layer in [wn_convtr, gn_convtr, nn_convtr]: - out = convtr_layer(t0) - assert isinstance(out, torch.Tensor) - assert list(out.shape) == [N, C_out, expected_out_length] - - -class TestStreamableConv1d: - - def get_streamable_conv1d_output_length(self, length, kernel_size, stride, dilation): - # StreamableConv1d internally pads to make sure that the last window is full - padding_total = (kernel_size - 1) * dilation - (stride - 1) - n_frames = (length - kernel_size + padding_total) / stride + 1 - ideal_length = (math.ceil(n_frames) - 1) * stride + (kernel_size - padding_total) - return ideal_length // stride - - def test_streamable_conv1d(self): - N, C, T = 2, 2, random.randrange(1, 100_000) - t0 = torch.randn(N, C, T) - C_out = 1 - - # conv params are [(kernel_size, stride, dilation)] - conv_params = [(4, 1, 1), (4, 2, 1), (3, 1, 3), (10, 5, 1), (3, 2, 3)] - for causal, (kernel_size, stride, dilation) in product([False, True], conv_params): - expected_out_length = self.get_streamable_conv1d_output_length(T, kernel_size, stride, dilation) - sconv = StreamableConv1d(C, C_out, kernel_size=kernel_size, stride=stride, dilation=dilation, causal=causal) - out = sconv(t0) - assert isinstance(out, torch.Tensor) - print(list(out.shape), [N, C_out, expected_out_length]) - assert list(out.shape) == [N, C_out, expected_out_length] - - -class TestStreamableConvTranspose1d: - - def get_streamable_convtr1d_output_length(self, length, kernel_size, stride): - padding_total = (kernel_size - stride) - return (length - 1) * stride - padding_total + (kernel_size - 1) + 1 - - def test_streamable_convtr1d(self): - N, C, T = 2, 2, random.randrange(1, 100_000) - t0 = torch.randn(N, C, T) - - C_out = 1 - - with pytest.raises(AssertionError): - StreamableConvTranspose1d(C, C_out, kernel_size=4, causal=False, trim_right_ratio=0.5) - StreamableConvTranspose1d(C, C_out, kernel_size=4, causal=True, trim_right_ratio=-1.) 
- StreamableConvTranspose1d(C, C_out, kernel_size=4, causal=True, trim_right_ratio=2) - - # causal params are [(causal, trim_right)] - causal_params = [(False, 1.0), (True, 1.0), (True, 0.5), (True, 0.0)] - # conv params are [(kernel_size, stride)] - conv_params = [(4, 1), (4, 2), (3, 1), (10, 5)] - for ((causal, trim_right_ratio), (kernel_size, stride)) in product(causal_params, conv_params): - expected_out_length = self.get_streamable_convtr1d_output_length(T, kernel_size, stride) - sconvtr = StreamableConvTranspose1d(C, C_out, kernel_size=kernel_size, stride=stride, - causal=causal, trim_right_ratio=trim_right_ratio) - out = sconvtr(t0) - assert isinstance(out, torch.Tensor) - assert list(out.shape) == [N, C_out, expected_out_length] diff --git a/spaces/wangfuchao/bingo-wangfuchao/Dockerfile b/spaces/wangfuchao/bingo-wangfuchao/Dockerfile deleted file mode 100644 index c677b05b75f7e4b2beee8c97fb47957a0861a83e..0000000000000000000000000000000000000000 --- a/spaces/wangfuchao/bingo-wangfuchao/Dockerfile +++ /dev/null @@ -1,7 +0,0 @@ -FROM weaigc/bingo:latest - -ARG DEBIAN_FRONTEND=noninteractive - -ENV BING_HEADER "" - -CMD npm start diff --git a/spaces/wangyanbing1989/text2image/README.md b/spaces/wangyanbing1989/text2image/README.md deleted file mode 100644 index ef00c275c391ebc588107c7b17786960ea79a1c0..0000000000000000000000000000000000000000 --- a/spaces/wangyanbing1989/text2image/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Text2image -emoji: 🏢 -colorFrom: pink -colorTo: purple -sdk: gradio -sdk_version: 3.9 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/wilson1/bingo/src/app/page.tsx b/spaces/wilson1/bingo/src/app/page.tsx deleted file mode 100644 index 0dff3431b098ce4fe282cc83fc87a93a28a43090..0000000000000000000000000000000000000000 --- a/spaces/wilson1/bingo/src/app/page.tsx +++ /dev/null @@ -1,15 +0,0 @@ -import dynamic from 'next/dynamic' - -const DynamicComponentWithNoSSR = dynamic( - () => import('../components/chat'), - { ssr: false } -) - -export default function IndexPage() { - return ( - <> -
                            - - - ) -} diff --git a/spaces/wuhuik/bingo/src/components/chat-notification.tsx b/spaces/wuhuik/bingo/src/components/chat-notification.tsx deleted file mode 100644 index 3474e522992c43a4d1d0eadcf205a9760d5b930b..0000000000000000000000000000000000000000 --- a/spaces/wuhuik/bingo/src/components/chat-notification.tsx +++ /dev/null @@ -1,91 +0,0 @@ -import { useEffect } from 'react' -import Image from 'next/image' - -import IconWarning from '@/assets/images/warning.svg' -import { ChatError, ErrorCode, ChatMessageModel } from '@/lib/bots/bing/types' -import { ExternalLink } from './external-link' -import { useBing } from '@/lib/hooks/use-bing' - -export interface ChatNotificationProps extends Pick, 'bot'> { - message?: ChatMessageModel -} - -function getAction(error: ChatError, reset: () => void) { - if (error.code === ErrorCode.THROTTLE_LIMIT) { - reset() - return ( -
                            - 你已达到每日最大发送消息次数,请更换账号或隔一天后重试 -
                            - ) - } - if (error.code === ErrorCode.BING_IP_FORBIDDEN) { - return ( - - 你的服务器或代理已被封禁,请更换服务器或使用代理重试 - - ) - } - if (error.code === ErrorCode.BING_TRY_LATER) { - return ( - - 创建会话失败,请稍候重试 - - ) - } - if (error.code === ErrorCode.BING_FORBIDDEN) { - return ( - - 你的账号已在黑名单,请尝试更换账号及申请解封 - - ) - } - if (error.code === ErrorCode.CONVERSATION_LIMIT) { - return ( -
                            - 当前话题已中止,请点 - 重新开始 - 开启新的对话 -
                            - ) - } - if (error.code === ErrorCode.BING_CAPTCHA) { - return ( - - 点击通过人机验证 - - ) - } - if (error.code === ErrorCode.BING_UNAUTHORIZED) { - reset() - return ( - 没有获取到身份信息或身份信息失效,点此重新设置 - ) - } - return error.message -} - -export function ChatNotification({ message, bot }: ChatNotificationProps) { - useEffect(() => { - window.scrollBy(0, 2000) - }, [message]) - - if (!message?.error) return - - return ( -
                            -
                            -
                            -
                            -
                            - error - {getAction(message.error, () => bot.resetConversation())} -
                            -
                            -
                            -
                            -
                            - ) -} diff --git a/spaces/wwwwwwww2/bingo/Dockerfile b/spaces/wwwwwwww2/bingo/Dockerfile deleted file mode 100644 index 3aa2b29b5fc4fa8b8238955acd7f1fde13ce5e1a..0000000000000000000000000000000000000000 --- a/spaces/wwwwwwww2/bingo/Dockerfile +++ /dev/null @@ -1,36 +0,0 @@ -FROM node:18 - - -ARG DEBIAN_FRONTEND=noninteractive - -ENV BING_HEADER "" - -# Set home to the user's home directory -ENV HOME=/home/user \ - PATH=/home/user/.local/bin:$PATH - -# Set up a new user named "user" with user ID 1000 -RUN useradd -o -u 1000 user && mkdir -p $HOME/app && chown -R user $HOME - -# Switch to the "user" user -USER user - -# Set the working directory to the user's home directory -WORKDIR $HOME/app - -# Install app dependencies -# A wildcard is used to ensure both package.json AND package-lock.json are copied -# where available (npm@5+) -COPY --chown=user package*.json $HOME/app/ - -RUN npm install - -# Copy the current directory contents into the container at $HOME/app setting the owner to the user -COPY --chown=user . $HOME/app/ - -RUN npm run build - -ENV PORT 7860 -EXPOSE 7860 - -CMD npm start diff --git a/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/reid/torchreid/models/densenet.py b/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/reid/torchreid/models/densenet.py deleted file mode 100644 index a1d9b7ef85a79cbc4c4e8a81840935531df636b8..0000000000000000000000000000000000000000 --- a/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/reid/torchreid/models/densenet.py +++ /dev/null @@ -1,380 +0,0 @@ -""" -Code source: https://github.com/pytorch/vision -""" -from __future__ import division, absolute_import -import re -from collections import OrderedDict -import torch -import torch.nn as nn -from torch.nn import functional as F -from torch.utils import model_zoo - -__all__ = [ - 'densenet121', 'densenet169', 'densenet201', 'densenet161', - 'densenet121_fc512' -] - -model_urls = { - 'densenet121': - 'https://download.pytorch.org/models/densenet121-a639ec97.pth', - 'densenet169': - 'https://download.pytorch.org/models/densenet169-b2777c0a.pth', - 'densenet201': - 'https://download.pytorch.org/models/densenet201-c1103571.pth', - 'densenet161': - 'https://download.pytorch.org/models/densenet161-8d451a50.pth', -} - - -class _DenseLayer(nn.Sequential): - - def __init__(self, num_input_features, growth_rate, bn_size, drop_rate): - super(_DenseLayer, self).__init__() - self.add_module('norm1', nn.BatchNorm2d(num_input_features)), - self.add_module('relu1', nn.ReLU(inplace=True)), - self.add_module( - 'conv1', - nn.Conv2d( - num_input_features, - bn_size * growth_rate, - kernel_size=1, - stride=1, - bias=False - ) - ), - self.add_module('norm2', nn.BatchNorm2d(bn_size * growth_rate)), - self.add_module('relu2', nn.ReLU(inplace=True)), - self.add_module( - 'conv2', - nn.Conv2d( - bn_size * growth_rate, - growth_rate, - kernel_size=3, - stride=1, - padding=1, - bias=False - ) - ), - self.drop_rate = drop_rate - - def forward(self, x): - new_features = super(_DenseLayer, self).forward(x) - if self.drop_rate > 0: - new_features = F.dropout( - new_features, p=self.drop_rate, training=self.training - ) - return torch.cat([x, new_features], 1) - - -class _DenseBlock(nn.Sequential): - - def __init__( - self, num_layers, num_input_features, bn_size, growth_rate, drop_rate - ): - super(_DenseBlock, self).__init__() - for i in range(num_layers): - layer = _DenseLayer( - num_input_features + i*growth_rate, growth_rate, bn_size, - drop_rate - ) - 
self.add_module('denselayer%d' % (i+1), layer) - - -class _Transition(nn.Sequential): - - def __init__(self, num_input_features, num_output_features): - super(_Transition, self).__init__() - self.add_module('norm', nn.BatchNorm2d(num_input_features)) - self.add_module('relu', nn.ReLU(inplace=True)) - self.add_module( - 'conv', - nn.Conv2d( - num_input_features, - num_output_features, - kernel_size=1, - stride=1, - bias=False - ) - ) - self.add_module('pool', nn.AvgPool2d(kernel_size=2, stride=2)) - - -class DenseNet(nn.Module): - """Densely connected network. - - Reference: - Huang et al. Densely Connected Convolutional Networks. CVPR 2017. - - Public keys: - - ``densenet121``: DenseNet121. - - ``densenet169``: DenseNet169. - - ``densenet201``: DenseNet201. - - ``densenet161``: DenseNet161. - - ``densenet121_fc512``: DenseNet121 + FC. - """ - - def __init__( - self, - num_classes, - loss, - growth_rate=32, - block_config=(6, 12, 24, 16), - num_init_features=64, - bn_size=4, - drop_rate=0, - fc_dims=None, - dropout_p=None, - **kwargs - ): - - super(DenseNet, self).__init__() - self.loss = loss - - # First convolution - self.features = nn.Sequential( - OrderedDict( - [ - ( - 'conv0', - nn.Conv2d( - 3, - num_init_features, - kernel_size=7, - stride=2, - padding=3, - bias=False - ) - ), - ('norm0', nn.BatchNorm2d(num_init_features)), - ('relu0', nn.ReLU(inplace=True)), - ( - 'pool0', - nn.MaxPool2d(kernel_size=3, stride=2, padding=1) - ), - ] - ) - ) - - # Each denseblock - num_features = num_init_features - for i, num_layers in enumerate(block_config): - block = _DenseBlock( - num_layers=num_layers, - num_input_features=num_features, - bn_size=bn_size, - growth_rate=growth_rate, - drop_rate=drop_rate - ) - self.features.add_module('denseblock%d' % (i+1), block) - num_features = num_features + num_layers*growth_rate - if i != len(block_config) - 1: - trans = _Transition( - num_input_features=num_features, - num_output_features=num_features // 2 - ) - self.features.add_module('transition%d' % (i+1), trans) - num_features = num_features // 2 - - # Final batch norm - self.features.add_module('norm5', nn.BatchNorm2d(num_features)) - - self.global_avgpool = nn.AdaptiveAvgPool2d(1) - self.feature_dim = num_features - self.fc = self._construct_fc_layer(fc_dims, num_features, dropout_p) - - # Linear layer - self.classifier = nn.Linear(self.feature_dim, num_classes) - - self._init_params() - - def _construct_fc_layer(self, fc_dims, input_dim, dropout_p=None): - """Constructs fully connected layer. 
- - Args: - fc_dims (list or tuple): dimensions of fc layers, if None, no fc layers are constructed - input_dim (int): input dimension - dropout_p (float): dropout probability, if None, dropout is unused - """ - if fc_dims is None: - self.feature_dim = input_dim - return None - - assert isinstance( - fc_dims, (list, tuple) - ), 'fc_dims must be either list or tuple, but got {}'.format( - type(fc_dims) - ) - - layers = [] - for dim in fc_dims: - layers.append(nn.Linear(input_dim, dim)) - layers.append(nn.BatchNorm1d(dim)) - layers.append(nn.ReLU(inplace=True)) - if dropout_p is not None: - layers.append(nn.Dropout(p=dropout_p)) - input_dim = dim - - self.feature_dim = fc_dims[-1] - - return nn.Sequential(*layers) - - def _init_params(self): - for m in self.modules(): - if isinstance(m, nn.Conv2d): - nn.init.kaiming_normal_( - m.weight, mode='fan_out', nonlinearity='relu' - ) - if m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.BatchNorm2d): - nn.init.constant_(m.weight, 1) - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.BatchNorm1d): - nn.init.constant_(m.weight, 1) - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.Linear): - nn.init.normal_(m.weight, 0, 0.01) - if m.bias is not None: - nn.init.constant_(m.bias, 0) - - def forward(self, x): - f = self.features(x) - f = F.relu(f, inplace=True) - v = self.global_avgpool(f) - v = v.view(v.size(0), -1) - - if self.fc is not None: - v = self.fc(v) - - if not self.training: - return v - - y = self.classifier(v) - - if self.loss == 'softmax': - return y - elif self.loss == 'triplet': - return y, v - else: - raise KeyError('Unsupported loss: {}'.format(self.loss)) - - -def init_pretrained_weights(model, model_url): - """Initializes model with pretrained weights. - - Layers that don't match with pretrained layers in name or size are kept unchanged. - """ - pretrain_dict = model_zoo.load_url(model_url) - - # '.'s are no longer allowed in module names, but pervious _DenseLayer - # has keys 'norm.1', 'relu.1', 'conv.1', 'norm.2', 'relu.2', 'conv.2'. - # They are also in the checkpoints in model_urls. This pattern is used - # to find such keys. 
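    # For example, a checkpoint key such as
    #   'features.denseblock1.denselayer1.norm.1.weight'
    # is rewritten by the pattern below to
    #   'features.denseblock1.denselayer1.norm1.weight',
    # matching the attribute names ('norm1', 'conv1', ...) used by _DenseLayer above.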
- pattern = re.compile( - r'^(.*denselayer\d+\.(?:norm|relu|conv))\.((?:[12])\.(?:weight|bias|running_mean|running_var))$' - ) - for key in list(pretrain_dict.keys()): - res = pattern.match(key) - if res: - new_key = res.group(1) + res.group(2) - pretrain_dict[new_key] = pretrain_dict[key] - del pretrain_dict[key] - - model_dict = model.state_dict() - pretrain_dict = { - k: v - for k, v in pretrain_dict.items() - if k in model_dict and model_dict[k].size() == v.size() - } - model_dict.update(pretrain_dict) - model.load_state_dict(model_dict) - - -""" -Dense network configurations: --- -densenet121: num_init_features=64, growth_rate=32, block_config=(6, 12, 24, 16) -densenet169: num_init_features=64, growth_rate=32, block_config=(6, 12, 32, 32) -densenet201: num_init_features=64, growth_rate=32, block_config=(6, 12, 48, 32) -densenet161: num_init_features=96, growth_rate=48, block_config=(6, 12, 36, 24) -""" - - -def densenet121(num_classes, loss='softmax', pretrained=True, **kwargs): - model = DenseNet( - num_classes=num_classes, - loss=loss, - num_init_features=64, - growth_rate=32, - block_config=(6, 12, 24, 16), - fc_dims=None, - dropout_p=None, - **kwargs - ) - if pretrained: - init_pretrained_weights(model, model_urls['densenet121']) - return model - - -def densenet169(num_classes, loss='softmax', pretrained=True, **kwargs): - model = DenseNet( - num_classes=num_classes, - loss=loss, - num_init_features=64, - growth_rate=32, - block_config=(6, 12, 32, 32), - fc_dims=None, - dropout_p=None, - **kwargs - ) - if pretrained: - init_pretrained_weights(model, model_urls['densenet169']) - return model - - -def densenet201(num_classes, loss='softmax', pretrained=True, **kwargs): - model = DenseNet( - num_classes=num_classes, - loss=loss, - num_init_features=64, - growth_rate=32, - block_config=(6, 12, 48, 32), - fc_dims=None, - dropout_p=None, - **kwargs - ) - if pretrained: - init_pretrained_weights(model, model_urls['densenet201']) - return model - - -def densenet161(num_classes, loss='softmax', pretrained=True, **kwargs): - model = DenseNet( - num_classes=num_classes, - loss=loss, - num_init_features=96, - growth_rate=48, - block_config=(6, 12, 36, 24), - fc_dims=None, - dropout_p=None, - **kwargs - ) - if pretrained: - init_pretrained_weights(model, model_urls['densenet161']) - return model - - -def densenet121_fc512(num_classes, loss='softmax', pretrained=True, **kwargs): - model = DenseNet( - num_classes=num_classes, - loss=loss, - num_init_features=64, - growth_rate=32, - block_config=(6, 12, 24, 16), - fc_dims=[512], - dropout_p=None, - **kwargs - ) - if pretrained: - init_pretrained_weights(model, model_urls['densenet121']) - return model diff --git a/spaces/xxbb/VITS-Umamusume-voice-synthesizer/text/mandarin.py b/spaces/xxbb/VITS-Umamusume-voice-synthesizer/text/mandarin.py deleted file mode 100644 index 093d8826809aa2681f6088174427337a59e0c882..0000000000000000000000000000000000000000 --- a/spaces/xxbb/VITS-Umamusume-voice-synthesizer/text/mandarin.py +++ /dev/null @@ -1,329 +0,0 @@ -import os -import sys -import re -from pypinyin import lazy_pinyin, BOPOMOFO -import jieba -import cn2an -import logging - -logging.getLogger('jieba').setLevel(logging.WARNING) -jieba.initialize() - - -# List of (Latin alphabet, bopomofo) pairs: -_latin_to_bopomofo = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('a', 'ㄟˉ'), - ('b', 'ㄅㄧˋ'), - ('c', 'ㄙㄧˉ'), - ('d', 'ㄉㄧˋ'), - ('e', 'ㄧˋ'), - ('f', 'ㄝˊㄈㄨˋ'), - ('g', 'ㄐㄧˋ'), - ('h', 'ㄝˇㄑㄩˋ'), - ('i', 'ㄞˋ'), - ('j', 'ㄐㄟˋ'), - ('k', 'ㄎㄟˋ'), - 
('l', 'ㄝˊㄛˋ'), - ('m', 'ㄝˊㄇㄨˋ'), - ('n', 'ㄣˉ'), - ('o', 'ㄡˉ'), - ('p', 'ㄆㄧˉ'), - ('q', 'ㄎㄧㄡˉ'), - ('r', 'ㄚˋ'), - ('s', 'ㄝˊㄙˋ'), - ('t', 'ㄊㄧˋ'), - ('u', 'ㄧㄡˉ'), - ('v', 'ㄨㄧˉ'), - ('w', 'ㄉㄚˋㄅㄨˋㄌㄧㄡˋ'), - ('x', 'ㄝˉㄎㄨˋㄙˋ'), - ('y', 'ㄨㄞˋ'), - ('z', 'ㄗㄟˋ') -]] - -# List of (bopomofo, romaji) pairs: -_bopomofo_to_romaji = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ㄅㄛ', 'p⁼wo'), - ('ㄆㄛ', 'pʰwo'), - ('ㄇㄛ', 'mwo'), - ('ㄈㄛ', 'fwo'), - ('ㄅ', 'p⁼'), - ('ㄆ', 'pʰ'), - ('ㄇ', 'm'), - ('ㄈ', 'f'), - ('ㄉ', 't⁼'), - ('ㄊ', 'tʰ'), - ('ㄋ', 'n'), - ('ㄌ', 'l'), - ('ㄍ', 'k⁼'), - ('ㄎ', 'kʰ'), - ('ㄏ', 'h'), - ('ㄐ', 'ʧ⁼'), - ('ㄑ', 'ʧʰ'), - ('ㄒ', 'ʃ'), - ('ㄓ', 'ʦ`⁼'), - ('ㄔ', 'ʦ`ʰ'), - ('ㄕ', 's`'), - ('ㄖ', 'ɹ`'), - ('ㄗ', 'ʦ⁼'), - ('ㄘ', 'ʦʰ'), - ('ㄙ', 's'), - ('ㄚ', 'a'), - ('ㄛ', 'o'), - ('ㄜ', 'ə'), - ('ㄝ', 'e'), - ('ㄞ', 'ai'), - ('ㄟ', 'ei'), - ('ㄠ', 'au'), - ('ㄡ', 'ou'), - ('ㄧㄢ', 'yeNN'), - ('ㄢ', 'aNN'), - ('ㄧㄣ', 'iNN'), - ('ㄣ', 'əNN'), - ('ㄤ', 'aNg'), - ('ㄧㄥ', 'iNg'), - ('ㄨㄥ', 'uNg'), - ('ㄩㄥ', 'yuNg'), - ('ㄥ', 'əNg'), - ('ㄦ', 'əɻ'), - ('ㄧ', 'i'), - ('ㄨ', 'u'), - ('ㄩ', 'ɥ'), - ('ˉ', '→'), - ('ˊ', '↑'), - ('ˇ', '↓↑'), - ('ˋ', '↓'), - ('˙', ''), - (',', ','), - ('。', '.'), - ('!', '!'), - ('?', '?'), - ('—', '-') -]] - -# List of (romaji, ipa) pairs: -_romaji_to_ipa = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('ʃy', 'ʃ'), - ('ʧʰy', 'ʧʰ'), - ('ʧ⁼y', 'ʧ⁼'), - ('NN', 'n'), - ('Ng', 'ŋ'), - ('y', 'j'), - ('h', 'x') -]] - -# List of (bopomofo, ipa) pairs: -_bopomofo_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ㄅㄛ', 'p⁼wo'), - ('ㄆㄛ', 'pʰwo'), - ('ㄇㄛ', 'mwo'), - ('ㄈㄛ', 'fwo'), - ('ㄅ', 'p⁼'), - ('ㄆ', 'pʰ'), - ('ㄇ', 'm'), - ('ㄈ', 'f'), - ('ㄉ', 't⁼'), - ('ㄊ', 'tʰ'), - ('ㄋ', 'n'), - ('ㄌ', 'l'), - ('ㄍ', 'k⁼'), - ('ㄎ', 'kʰ'), - ('ㄏ', 'x'), - ('ㄐ', 'tʃ⁼'), - ('ㄑ', 'tʃʰ'), - ('ㄒ', 'ʃ'), - ('ㄓ', 'ts`⁼'), - ('ㄔ', 'ts`ʰ'), - ('ㄕ', 's`'), - ('ㄖ', 'ɹ`'), - ('ㄗ', 'ts⁼'), - ('ㄘ', 'tsʰ'), - ('ㄙ', 's'), - ('ㄚ', 'a'), - ('ㄛ', 'o'), - ('ㄜ', 'ə'), - ('ㄝ', 'ɛ'), - ('ㄞ', 'aɪ'), - ('ㄟ', 'eɪ'), - ('ㄠ', 'ɑʊ'), - ('ㄡ', 'oʊ'), - ('ㄧㄢ', 'jɛn'), - ('ㄩㄢ', 'ɥæn'), - ('ㄢ', 'an'), - ('ㄧㄣ', 'in'), - ('ㄩㄣ', 'ɥn'), - ('ㄣ', 'ən'), - ('ㄤ', 'ɑŋ'), - ('ㄧㄥ', 'iŋ'), - ('ㄨㄥ', 'ʊŋ'), - ('ㄩㄥ', 'jʊŋ'), - ('ㄥ', 'əŋ'), - ('ㄦ', 'əɻ'), - ('ㄧ', 'i'), - ('ㄨ', 'u'), - ('ㄩ', 'ɥ'), - ('ˉ', '→'), - ('ˊ', '↑'), - ('ˇ', '↓↑'), - ('ˋ', '↓'), - ('˙', ''), - (',', ','), - ('。', '.'), - ('!', '!'), - ('?', '?'), - ('—', '-') -]] - -# List of (bopomofo, ipa2) pairs: -_bopomofo_to_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ㄅㄛ', 'pwo'), - ('ㄆㄛ', 'pʰwo'), - ('ㄇㄛ', 'mwo'), - ('ㄈㄛ', 'fwo'), - ('ㄅ', 'p'), - ('ㄆ', 'pʰ'), - ('ㄇ', 'm'), - ('ㄈ', 'f'), - ('ㄉ', 't'), - ('ㄊ', 'tʰ'), - ('ㄋ', 'n'), - ('ㄌ', 'l'), - ('ㄍ', 'k'), - ('ㄎ', 'kʰ'), - ('ㄏ', 'h'), - ('ㄐ', 'tɕ'), - ('ㄑ', 'tɕʰ'), - ('ㄒ', 'ɕ'), - ('ㄓ', 'tʂ'), - ('ㄔ', 'tʂʰ'), - ('ㄕ', 'ʂ'), - ('ㄖ', 'ɻ'), - ('ㄗ', 'ts'), - ('ㄘ', 'tsʰ'), - ('ㄙ', 's'), - ('ㄚ', 'a'), - ('ㄛ', 'o'), - ('ㄜ', 'ɤ'), - ('ㄝ', 'ɛ'), - ('ㄞ', 'aɪ'), - ('ㄟ', 'eɪ'), - ('ㄠ', 'ɑʊ'), - ('ㄡ', 'oʊ'), - ('ㄧㄢ', 'jɛn'), - ('ㄩㄢ', 'yæn'), - ('ㄢ', 'an'), - ('ㄧㄣ', 'in'), - ('ㄩㄣ', 'yn'), - ('ㄣ', 'ən'), - ('ㄤ', 'ɑŋ'), - ('ㄧㄥ', 'iŋ'), - ('ㄨㄥ', 'ʊŋ'), - ('ㄩㄥ', 'jʊŋ'), - ('ㄥ', 'ɤŋ'), - ('ㄦ', 'əɻ'), - ('ㄧ', 'i'), - ('ㄨ', 'u'), - ('ㄩ', 'y'), - ('ˉ', '˥'), - ('ˊ', '˧˥'), - ('ˇ', '˨˩˦'), - ('ˋ', '˥˩'), - ('˙', ''), - (',', ','), - ('。', '.'), - ('!', '!'), - ('?', '?'), - ('—', '-') -]] - - -def number_to_chinese(text): - numbers = re.findall(r'\d+(?:\.?\d+)?', text) - for number in numbers: - text = text.replace(number, cn2an.an2cn(number), 1) - return text - - 
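# Usage sketch (assuming cn2an.an2cn behaves as documented): number_to_chinese("共有25人")
# replaces each run of Arabic digits via cn2an, giving "共有二十五人".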
-def chinese_to_bopomofo(text): - text = text.replace('、', ',').replace(';', ',').replace(':', ',') - words = jieba.lcut(text, cut_all=False) - text = '' - for word in words: - bopomofos = lazy_pinyin(word, BOPOMOFO) - if not re.search('[\u4e00-\u9fff]', word): - text += word - continue - for i in range(len(bopomofos)): - bopomofos[i] = re.sub(r'([\u3105-\u3129])$', r'\1ˉ', bopomofos[i]) - if text != '': - text += ' ' - text += ''.join(bopomofos) - return text - - -def latin_to_bopomofo(text): - for regex, replacement in _latin_to_bopomofo: - text = re.sub(regex, replacement, text) - return text - - -def bopomofo_to_romaji(text): - for regex, replacement in _bopomofo_to_romaji: - text = re.sub(regex, replacement, text) - return text - - -def bopomofo_to_ipa(text): - for regex, replacement in _bopomofo_to_ipa: - text = re.sub(regex, replacement, text) - return text - - -def bopomofo_to_ipa2(text): - for regex, replacement in _bopomofo_to_ipa2: - text = re.sub(regex, replacement, text) - return text - - -def chinese_to_romaji(text): - text = number_to_chinese(text) - text = chinese_to_bopomofo(text) - text = latin_to_bopomofo(text) - text = bopomofo_to_romaji(text) - text = re.sub('i([aoe])', r'y\1', text) - text = re.sub('u([aoəe])', r'w\1', text) - text = re.sub('([ʦsɹ]`[⁼ʰ]?)([→↓↑ ]+|$)', - r'\1ɹ`\2', text).replace('ɻ', 'ɹ`') - text = re.sub('([ʦs][⁼ʰ]?)([→↓↑ ]+|$)', r'\1ɹ\2', text) - return text - - -def chinese_to_lazy_ipa(text): - text = chinese_to_romaji(text) - for regex, replacement in _romaji_to_ipa: - text = re.sub(regex, replacement, text) - return text - - -def chinese_to_ipa(text): - text = number_to_chinese(text) - text = chinese_to_bopomofo(text) - text = latin_to_bopomofo(text) - text = bopomofo_to_ipa(text) - text = re.sub('i([aoe])', r'j\1', text) - text = re.sub('u([aoəe])', r'w\1', text) - text = re.sub('([sɹ]`[⁼ʰ]?)([→↓↑ ]+|$)', - r'\1ɹ`\2', text).replace('ɻ', 'ɹ`') - text = re.sub('([s][⁼ʰ]?)([→↓↑ ]+|$)', r'\1ɹ\2', text) - return text - - -def chinese_to_ipa2(text): - text = number_to_chinese(text) - text = chinese_to_bopomofo(text) - text = latin_to_bopomofo(text) - text = bopomofo_to_ipa2(text) - text = re.sub(r'i([aoe])', r'j\1', text) - text = re.sub(r'u([aoəe])', r'w\1', text) - text = re.sub(r'([ʂɹ]ʰ?)([˩˨˧˦˥ ]+|$)', r'\1ʅ\2', text) - text = re.sub(r'(sʰ?)([˩˨˧˦˥ ]+|$)', r'\1ɿ\2', text) - return text \ No newline at end of file diff --git a/spaces/xxccc/gpt-academic/request_llm/bridge_chatgpt.py b/spaces/xxccc/gpt-academic/request_llm/bridge_chatgpt.py deleted file mode 100644 index eef8fbf0b43f30b915f770f4bc54120c84ebd092..0000000000000000000000000000000000000000 --- a/spaces/xxccc/gpt-academic/request_llm/bridge_chatgpt.py +++ /dev/null @@ -1,285 +0,0 @@ -# 借鉴了 https://github.com/GaiZhenbiao/ChuanhuChatGPT 项目 - -""" - 该文件中主要包含三个函数 - - 不具备多线程能力的函数: - 1. predict: 正常对话时使用,具备完备的交互功能,不可多线程 - - 具备多线程调用能力的函数 - 2. predict_no_ui:高级实验性功能模块调用,不会实时显示在界面上,参数简单,可以多线程并行,方便实现复杂的功能逻辑 - 3. 
predict_no_ui_long_connection:在实验过程中发现调用predict_no_ui处理长文档时,和openai的连接容易断掉,这个函数用stream的方式解决这个问题,同样支持多线程 -""" - -import json -import time -import gradio as gr -import logging -import traceback -import requests -import importlib - -# config_private.py放自己的秘密如API和代理网址 -# 读取时首先看是否存在私密的config_private配置文件(不受git管控),如果有,则覆盖原config文件 -from toolbox import get_conf, update_ui, is_any_api_key, select_api_key, what_keys, clip_history, trimmed_format_exc -proxies, API_KEY, TIMEOUT_SECONDS, MAX_RETRY = \ - get_conf('proxies', 'API_KEY', 'TIMEOUT_SECONDS', 'MAX_RETRY') - -timeout_bot_msg = '[Local Message] Request timeout. Network error. Please check proxy settings in config.py.' + \ - '网络错误,检查代理服务器是否可用,以及代理设置的格式是否正确,格式须是[协议]://[地址]:[端口],缺一不可。' - -def get_full_error(chunk, stream_response): - """ - 获取完整的从Openai返回的报错 - """ - while True: - try: - chunk += next(stream_response) - except: - break - return chunk - - -def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=None, console_slience=False): - """ - 发送至chatGPT,等待回复,一次性完成,不显示中间过程。但内部用stream的方法避免中途网线被掐。 - inputs: - 是本次问询的输入 - sys_prompt: - 系统静默prompt - llm_kwargs: - chatGPT的内部调优参数 - history: - 是之前的对话列表 - observe_window = None: - 用于负责跨越线程传递已经输出的部分,大部分时候仅仅为了fancy的视觉效果,留空即可。observe_window[0]:观测窗。observe_window[1]:看门狗 - """ - watch_dog_patience = 5 # 看门狗的耐心, 设置5秒即可 - headers, payload = generate_payload(inputs, llm_kwargs, history, system_prompt=sys_prompt, stream=True) - retry = 0 - while True: - try: - # make a POST request to the API endpoint, stream=False - from .bridge_all import model_info - endpoint = model_info[llm_kwargs['llm_model']]['endpoint'] - response = requests.post(endpoint, headers=headers, proxies=proxies, - json=payload, stream=True, timeout=TIMEOUT_SECONDS); break - except requests.exceptions.ReadTimeout as e: - retry += 1 - traceback.print_exc() - if retry > MAX_RETRY: raise TimeoutError - if MAX_RETRY!=0: print(f'请求超时,正在重试 ({retry}/{MAX_RETRY}) ……') - - stream_response = response.iter_lines() - result = '' - while True: - try: chunk = next(stream_response).decode() - except StopIteration: - break - except requests.exceptions.ConnectionError: - chunk = next(stream_response).decode() # 失败了,重试一次?再失败就没办法了。 - if len(chunk)==0: continue - if not chunk.startswith('data:'): - error_msg = get_full_error(chunk.encode('utf8'), stream_response).decode() - if "reduce the length" in error_msg: - raise ConnectionAbortedError("OpenAI拒绝了请求:" + error_msg) - else: - raise RuntimeError("OpenAI拒绝了请求:" + error_msg) - if ('data: [DONE]' in chunk): break # api2d 正常完成 - json_data = json.loads(chunk.lstrip('data:'))['choices'][0] - delta = json_data["delta"] - if len(delta) == 0: break - if "role" in delta: continue - if "content" in delta: - result += delta["content"] - if not console_slience: print(delta["content"], end='') - if observe_window is not None: - # 观测窗,把已经获取的数据显示出去 - if len(observe_window) >= 1: observe_window[0] += delta["content"] - # 看门狗,如果超过期限没有喂狗,则终止 - if len(observe_window) >= 2: - if (time.time()-observe_window[1]) > watch_dog_patience: - raise RuntimeError("用户取消了程序。") - else: raise RuntimeError("意外Json结构:"+delta) - if json_data['finish_reason'] == 'length': - raise ConnectionAbortedError("正常结束,但显示Token不足,导致输出不完整,请削减单次输入的文本量。") - return result - - -def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None): - """ - 发送至chatGPT,流式获取输出。 - 用于基础的对话功能。 - inputs 是本次问询的输入 - top_p, temperature是chatGPT的内部调优参数 - history 
是之前的对话列表(注意无论是inputs还是history,内容太长了都会触发token数量溢出的错误) - chatbot 为WebUI中显示的对话列表,修改它,然后yeild出去,可以直接修改对话界面内容 - additional_fn代表点击的哪个按钮,按钮见functional.py - """ - if is_any_api_key(inputs): - chatbot._cookies['api_key'] = inputs - chatbot.append(("输入已识别为openai的api_key", what_keys(inputs))) - yield from update_ui(chatbot=chatbot, history=history, msg="api_key已导入") # 刷新界面 - return - elif not is_any_api_key(chatbot._cookies['api_key']): - chatbot.append((inputs, "缺少api_key。\n\n1. 临时解决方案:直接在输入区键入api_key,然后回车提交。\n\n2. 长效解决方案:在config.py中配置。")) - yield from update_ui(chatbot=chatbot, history=history, msg="缺少api_key") # 刷新界面 - return - - if additional_fn is not None: - import core_functional - importlib.reload(core_functional) # 热更新prompt - core_functional = core_functional.get_core_functions() - if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs) # 获取预处理函数(如果有的话) - inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"] - - raw_input = inputs - logging.info(f'[raw_input] {raw_input}') - chatbot.append((inputs, "")) - yield from update_ui(chatbot=chatbot, history=history, msg="等待响应") # 刷新界面 - - try: - headers, payload = generate_payload(inputs, llm_kwargs, history, system_prompt, stream) - except RuntimeError as e: - chatbot[-1] = (inputs, f"您提供的api-key不满足要求,不包含任何可用于{llm_kwargs['llm_model']}的api-key。您可能选择了错误的模型或请求源。") - yield from update_ui(chatbot=chatbot, history=history, msg="api-key不满足要求") # 刷新界面 - return - - history.append(inputs); history.append("") - - retry = 0 - while True: - try: - # make a POST request to the API endpoint, stream=True - from .bridge_all import model_info - endpoint = model_info[llm_kwargs['llm_model']]['endpoint'] - response = requests.post(endpoint, headers=headers, proxies=proxies, - json=payload, stream=True, timeout=TIMEOUT_SECONDS);break - except: - retry += 1 - chatbot[-1] = ((chatbot[-1][0], timeout_bot_msg)) - retry_msg = f",正在重试 ({retry}/{MAX_RETRY}) ……" if MAX_RETRY > 0 else "" - yield from update_ui(chatbot=chatbot, history=history, msg="请求超时"+retry_msg) # 刷新界面 - if retry > MAX_RETRY: raise TimeoutError - - gpt_replying_buffer = "" - - is_head_of_the_stream = True - if stream: - stream_response = response.iter_lines() - while True: - try: - chunk = next(stream_response) - except StopIteration: - # 非OpenAI官方接口的出现这样的报错,OpenAI和API2D不会走这里 - from toolbox import regular_txt_to_markdown; tb_str = '```\n' + trimmed_format_exc() + '```' - chatbot[-1] = (chatbot[-1][0], f"[Local Message] 远程返回错误: \n\n{tb_str} \n\n{regular_txt_to_markdown(chunk.decode())}") - yield from update_ui(chatbot=chatbot, history=history, msg="远程返回错误:" + chunk.decode()) # 刷新界面 - return - - # print(chunk.decode()[6:]) - if is_head_of_the_stream and (r'"object":"error"' not in chunk.decode()): - # 数据流的第一帧不携带content - is_head_of_the_stream = False; continue - - if chunk: - try: - chunk_decoded = chunk.decode() - # 前者API2D的 - if ('data: [DONE]' in chunk_decoded) or (len(json.loads(chunk_decoded[6:])['choices'][0]["delta"]) == 0): - # 判定为数据流的结束,gpt_replying_buffer也写完了 - logging.info(f'[response] {gpt_replying_buffer}') - break - # 处理数据流的主体 - chunkjson = json.loads(chunk_decoded[6:]) - status_text = f"finish_reason: {chunkjson['choices'][0]['finish_reason']}" - # 如果这里抛出异常,一般是文本过长,详情见get_full_error的输出 - gpt_replying_buffer = gpt_replying_buffer + json.loads(chunk_decoded[6:])['choices'][0]["delta"]["content"] - history[-1] = gpt_replying_buffer - chatbot[-1] = (history[-2], history[-1]) - yield from 
update_ui(chatbot=chatbot, history=history, msg=status_text) # 刷新界面 - - except Exception as e: - traceback.print_exc() - yield from update_ui(chatbot=chatbot, history=history, msg="Json解析不合常规") # 刷新界面 - chunk = get_full_error(chunk, stream_response) - chunk_decoded = chunk.decode() - error_msg = chunk_decoded - if "reduce the length" in error_msg: - if len(history) >= 2: history[-1] = ""; history[-2] = "" # 清除当前溢出的输入:history[-2] 是本次输入, history[-1] 是本次输出 - history = clip_history(inputs=inputs, history=history, tokenizer=model_info[llm_kwargs['llm_model']]['tokenizer'], - max_token_limit=(model_info[llm_kwargs['llm_model']]['max_token'])) # history至少释放二分之一 - chatbot[-1] = (chatbot[-1][0], "[Local Message] Reduce the length. 本次输入过长, 或历史数据过长. 历史缓存数据已部分释放, 您可以请再次尝试. (若再次失败则更可能是因为输入过长.)") - # history = [] # 清除历史 - elif "does not exist" in error_msg: - chatbot[-1] = (chatbot[-1][0], f"[Local Message] Model {llm_kwargs['llm_model']} does not exist. 模型不存在, 或者您没有获得体验资格.") - elif "Incorrect API key" in error_msg: - chatbot[-1] = (chatbot[-1][0], "[Local Message] Incorrect API key. OpenAI以提供了不正确的API_KEY为由, 拒绝服务.") - elif "exceeded your current quota" in error_msg: - chatbot[-1] = (chatbot[-1][0], "[Local Message] You exceeded your current quota. OpenAI以账户额度不足为由, 拒绝服务.") - elif "bad forward key" in error_msg: - chatbot[-1] = (chatbot[-1][0], "[Local Message] Bad forward key. API2D账户额度不足.") - elif "Not enough point" in error_msg: - chatbot[-1] = (chatbot[-1][0], "[Local Message] Not enough point. API2D账户点数不足.") - else: - from toolbox import regular_txt_to_markdown - tb_str = '```\n' + trimmed_format_exc() + '```' - chatbot[-1] = (chatbot[-1][0], f"[Local Message] 异常 \n\n{tb_str} \n\n{regular_txt_to_markdown(chunk_decoded)}") - yield from update_ui(chatbot=chatbot, history=history, msg="Json异常" + error_msg) # 刷新界面 - return - -def generate_payload(inputs, llm_kwargs, history, system_prompt, stream): - """ - 整合所有信息,选择LLM模型,生成http请求,为发送请求做准备 - """ - if not is_any_api_key(llm_kwargs['api_key']): - raise AssertionError("你提供了错误的API_KEY。\n\n1. 临时解决方案:直接在输入区键入api_key,然后回车提交。\n\n2. 
长效解决方案:在config.py中配置。") - - api_key = select_api_key(llm_kwargs['api_key'], llm_kwargs['llm_model']) - - headers = { - "Content-Type": "application/json", - "Authorization": f"Bearer {api_key}" - } - - conversation_cnt = len(history) // 2 - - messages = [{"role": "system", "content": system_prompt}] - if conversation_cnt: - for index in range(0, 2*conversation_cnt, 2): - what_i_have_asked = {} - what_i_have_asked["role"] = "user" - what_i_have_asked["content"] = history[index] - what_gpt_answer = {} - what_gpt_answer["role"] = "assistant" - what_gpt_answer["content"] = history[index+1] - if what_i_have_asked["content"] != "": - if what_gpt_answer["content"] == "": continue - if what_gpt_answer["content"] == timeout_bot_msg: continue - messages.append(what_i_have_asked) - messages.append(what_gpt_answer) - else: - messages[-1]['content'] = what_gpt_answer['content'] - - what_i_ask_now = {} - what_i_ask_now["role"] = "user" - what_i_ask_now["content"] = inputs - messages.append(what_i_ask_now) - - payload = { - "model": llm_kwargs['llm_model'].strip('api2d-'), - "messages": messages, - "temperature": llm_kwargs['temperature'], # 1.0, - "top_p": llm_kwargs['top_p'], # 1.0, - "n": 1, - "stream": stream, - "presence_penalty": 0, - "frequency_penalty": 0, - } - try: - print(f" {llm_kwargs['llm_model']} : {conversation_cnt} : {inputs[:100]} ..........") - except: - print('输入中可能存在乱码。') - return headers,payload - - diff --git a/spaces/yangheng/Super-Resolution-Anime-Diffusion/RealESRGANv030/realesrgan/__init__.py b/spaces/yangheng/Super-Resolution-Anime-Diffusion/RealESRGANv030/realesrgan/__init__.py deleted file mode 100644 index 2276f1eecded80d1f00ff97b45c66c7a8922b987..0000000000000000000000000000000000000000 --- a/spaces/yangheng/Super-Resolution-Anime-Diffusion/RealESRGANv030/realesrgan/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# flake8: noqa -from .archs import * -from .data import * -from .models import * -from .utils import * -from .version import * diff --git a/spaces/yash161101/deepwords/app.py b/spaces/yash161101/deepwords/app.py deleted file mode 100644 index 057ff8c1cfcb7ab56668002bbc7a1a53f798e4e7..0000000000000000000000000000000000000000 --- a/spaces/yash161101/deepwords/app.py +++ /dev/null @@ -1,46 +0,0 @@ -import streamlit as st -import numpy as np -import pandas as pd -import os -import torch -import torch.nn as nn -from transformers.activations import get_activation -from transformers import AutoTokenizer, AutoModelWithLMHead, AutoModelForCausalLM - -st.title('DeepWords') -st.text('Still under Construction.') -st.text('Tip: Try writing a sentence and making the model predict final word.') -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - -@st.cache(allow_output_mutation=True) -def get_model(): - tokenizer = AutoTokenizer.from_pretrained("ml6team/gpt-2-medium-conditional-quote-generator") - model = AutoModelForCausalLM.from_pretrained("ml6team/gpt-2-medium-conditional-quote-generator") - return model, tokenizer - - -model, tokenizer = get_model() -#g = -c = 5 -with st.form(key='my_form'): - prompt = st.text_input('Enter sentence:', '') - c = st.number_input('Enter Number of words: ', 1) - submit_button = st.form_submit_button(label='Submit') - if submit_button: - with torch.no_grad(): - text = tokenizer.encode(prompt) - myinput, past_key_values = torch.tensor([text]), None - myinput = myinput - myinput= myinput.to(device) - logits, past_key_values = model(myinput, past_key_values = past_key_values, return_dict=False) - logits = logits[0,-1] - probabilities 
= torch.nn.functional.softmax(logits) - best_logits, best_indices = logits.topk(200) - best_words = [tokenizer.decode([idx.item()]) for idx in best_indices] - text.append(best_indices[0].item()) - best_probabilities = probabilities[best_indices].tolist() - words = [] - - best_words = ' '.join(best_words[0:c]) - final_string = prompt + best_words - st.write(final_string) \ No newline at end of file diff --git a/spaces/yderre-aubay/midi-player-demo/src/main/components/TempoGraph/TempoGraphToolbar.tsx b/spaces/yderre-aubay/midi-player-demo/src/main/components/TempoGraph/TempoGraphToolbar.tsx deleted file mode 100644 index f83b9c0d97398f344900332645ff3e42c165ee70..0000000000000000000000000000000000000000 --- a/spaces/yderre-aubay/midi-player-demo/src/main/components/TempoGraph/TempoGraphToolbar.tsx +++ /dev/null @@ -1,64 +0,0 @@ -import styled from "@emotion/styled" -import { observer } from "mobx-react-lite" -import { FC, useCallback } from "react" -import { Localized } from "../../../components/Localized" -import { useStores } from "../../hooks/useStores" -import { AutoScrollButton } from "../Toolbar/AutoScrollButton" -import QuantizeSelector from "../Toolbar/QuantizeSelector/QuantizeSelector" -import { Toolbar } from "../Toolbar/Toolbar" -import { ToolSelector } from "../Toolbar/ToolSelector" - -const Title = styled.span` - font-weight: bold; - margin-right: 2em; - font-size: 1rem; -` - -const FlexibleSpacer = styled.div` - flex-grow: 1; -` - -export const TempoGraphToolbar: FC = observer(() => { - const { tempoEditorStore } = useStores() - const { autoScroll, quantize, isQuantizeEnabled, mouseMode } = - tempoEditorStore - - const onSelectQuantize = useCallback( - (denominator: number) => (tempoEditorStore.quantize = denominator), - [tempoEditorStore], - ) - - const onClickQuantizeSwitch = useCallback(() => { - tempoEditorStore.isQuantizeEnabled = !tempoEditorStore.isQuantizeEnabled - }, [tempoEditorStore]) - - return ( - - - <Localized default="Tempo">tempo</Localized> - - - - - (tempoEditorStore.mouseMode = mouseMode), - [], - )} - /> - - - - (tempoEditorStore.autoScroll = !autoScroll)} - selected={autoScroll} - /> - - ) -}) diff --git a/spaces/ygangang/VToonify/vtoonify/smooth_parsing_map.py b/spaces/ygangang/VToonify/vtoonify/smooth_parsing_map.py deleted file mode 100644 index 7720d0c7786925db38d3e793d6a3a8f68f6e663e..0000000000000000000000000000000000000000 --- a/spaces/ygangang/VToonify/vtoonify/smooth_parsing_map.py +++ /dev/null @@ -1,172 +0,0 @@ -import os -#os.environ['CUDA_VISIBLE_DEVICES'] = "0" -import numpy as np -import cv2 -import math -import argparse -from tqdm import tqdm -import torch -from torch import nn -from torchvision import transforms -import torch.nn.functional as F -from model.raft.core.raft import RAFT -from model.raft.core.utils.utils import InputPadder -from model.bisenet.model import BiSeNet -from model.stylegan.model import Downsample - -class Options(): - def __init__(self): - - self.parser = argparse.ArgumentParser(description="Smooth Parsing Maps") - self.parser.add_argument("--window_size", type=int, default=5, help="temporal window size") - - self.parser.add_argument("--faceparsing_path", type=str, default='./checkpoint/faceparsing.pth', help="path of the face parsing model") - self.parser.add_argument("--raft_path", type=str, default='./checkpoint/raft-things.pth', help="path of the RAFT model") - - self.parser.add_argument("--video_path", type=str, help="path of the target video") - self.parser.add_argument("--output_path", type=str, 
default='./output/', help="path of the output parsing maps") - - def parse(self): - self.opt = self.parser.parse_args() - args = vars(self.opt) - print('Load options') - for name, value in sorted(args.items()): - print('%s: %s' % (str(name), str(value))) - return self.opt - -# from RAFT -def warp(x, flo): - """ - warp an image/tensor (im2) back to im1, according to the optical flow - x: [B, C, H, W] (im2) - flo: [B, 2, H, W] flow - """ - B, C, H, W = x.size() - # mesh grid - xx = torch.arange(0, W).view(1,-1).repeat(H,1) - yy = torch.arange(0, H).view(-1,1).repeat(1,W) - xx = xx.view(1,1,H,W).repeat(B,1,1,1) - yy = yy.view(1,1,H,W).repeat(B,1,1,1) - grid = torch.cat((xx,yy),1).float() - - - #x = x.cuda() - grid = grid.cuda() - vgrid = grid + flo # B,2,H,W - - # scale grid to [-1,1] - ##2019 code - vgrid[:,0,:,:] = 2.0*vgrid[:,0,:,:].clone()/max(W-1,1)-1.0 - vgrid[:,1,:,:] = 2.0*vgrid[:,1,:,:].clone()/max(H-1,1)-1.0 - - vgrid = vgrid.permute(0,2,3,1) - output = nn.functional.grid_sample(x, vgrid,align_corners=True) - mask = torch.autograd.Variable(torch.ones(x.size())).cuda() - mask = nn.functional.grid_sample(mask, vgrid,align_corners=True) - - ##2019 author - mask[mask<0.9999] = 0 - mask[mask>0] = 1 - - ##2019 code - # mask = torch.floor(torch.clamp(mask, 0 ,1)) - - return output*mask, mask - - -if __name__ == "__main__": - - parser = Options() - args = parser.parse() - print('*'*98) - - - device = "cuda" - - transform = transforms.Compose([ - transforms.ToTensor(), - transforms.Normalize(mean=[0.5, 0.5, 0.5],std=[0.5,0.5,0.5]), - ]) - - parser = argparse.ArgumentParser() - parser.add_argument('--model', help="restore checkpoint") - parser.add_argument('--small', action='store_true', help='use small model') - parser.add_argument('--mixed_precision', action='store_true', help='use mixed precision') - parser.add_argument('--alternate_corr', action='store_true', help='use efficent correlation implementation') - - raft_model = torch.nn.DataParallel(RAFT(parser.parse_args(['--model', args.raft_path]))) - raft_model.load_state_dict(torch.load(args.raft_path)) - - raft_model = raft_model.module - raft_model.to(device) - raft_model.eval() - - parsingpredictor = BiSeNet(n_classes=19) - parsingpredictor.load_state_dict(torch.load(args.faceparsing_path, map_location=lambda storage, loc: storage)) - parsingpredictor.to(device).eval() - - down = Downsample(kernel=[1, 3, 3, 1], factor=2).to(device).eval() - - print('Load models successfully!') - - window = args.window_size - - video_cap = cv2.VideoCapture(args.video_path) - num = int(video_cap.get(7)) - - Is = [] - for i in range(num): - success, frame = video_cap.read() - if success == False: - break - frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) - with torch.no_grad(): - Is += [transform(frame).unsqueeze(dim=0).cpu()] - video_cap.release() - - # enlarge frames for more accurate parsing maps and optical flows - Is = F.upsample(torch.cat(Is, dim=0), scale_factor=2, mode='bilinear') - Is_ = torch.cat((Is[0:window], Is, Is[-window:]), dim=0) - - print('Load video with %d frames successfully!'%(len(Is))) - - Ps = [] - for i in tqdm(range(len(Is))): - with torch.no_grad(): - Ps += [parsingpredictor(2*Is[i:i+1].to(device))[0].detach().cpu()] - Ps = torch.cat(Ps, dim=0) - Ps_ = torch.cat((Ps[0:window], Ps, Ps[-window:]), dim=0) - - print('Predict parsing maps successfully!') - - - # temporal weights of the (2*args.window_size+1) frames - wt = torch.exp(-(torch.arange(2*window+1).float()-window)**2/(2*((window+0.5)**2))).reshape(2*window+1,1,1,1).to(device) 
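    # --- Illustrative note (editorial addition, not part of the original file) --
    # The statement above builds a Gaussian over frame offsets:
    #   wt[k] = exp(-(k - window)^2 / (2 * (window + 0.5)^2)),  k = 0 .. 2*window
    # so the centre frame (k == window) gets weight 1.0 and its neighbours decay
    # smoothly. With the default window of 5, for instance, the immediate
    # neighbours get roughly exp(-1/60.5) ≈ 0.98 and the farthest frames roughly
    # exp(-25/60.5) ≈ 0.66 (values shown for illustration only). In the loop that
    # follows, these temporal weights are multiplied by the per-pixel spatial
    # weights ws and normalised over the window, so each smoothed parsing map is
    # a weighted average of the flow-aligned maps around the current frame.
    # ------------------------------------------------------------------------------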
- - parse = [] - for ii in tqdm(range(len(Is))): - i = ii + window - image2 = Is_[i-window:i+window+1].to(device) - image1 = Is_[i].repeat(2*window+1,1,1,1).to(device) - padder = InputPadder(image1.shape) - image1, image2 = padder.pad(image1, image2) - with torch.no_grad(): - flow_low, flow_up = raft_model((image1+1)*255.0/2, (image2+1)*255.0/2, iters=20, test_mode=True) - output, mask = warp(torch.cat((image2, Ps_[i-window:i+window+1].to(device)), dim=1), flow_up) - aligned_Is = output[:,0:3].detach() - aligned_Ps = output[:,3:].detach() - # the spatial weight - ws = torch.exp(-((aligned_Is-image1)**2).mean(dim=1, keepdims=True)/(2*(0.2**2))) * mask[:,0:1] - aligned_Ps[window] = Ps_[i].to(device) - # the weight between i and i shoud be 1.0 - ws[window,:,:,:] = 1.0 - weights = ws*wt - weights = weights / weights.sum(dim=(0), keepdims=True) - fused_Ps = (aligned_Ps * weights).sum(dim=0, keepdims=True) - parse += [down(fused_Ps).detach().cpu()] - parse = torch.cat(parse, dim=0) - - basename = os.path.basename(args.video_path).split('.')[0] - np.save(os.path.join(args.output_path, basename+'_parsingmap.npy'), parse.numpy()) - - print('Done!') \ No newline at end of file diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/layoutlmv2/__init__.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/layoutlmv2/__init__.py deleted file mode 100644 index 9eccb238780f7e3615dc155d4cc3cdcc763b903b..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/layoutlmv2/__init__.py +++ /dev/null @@ -1,104 +0,0 @@ -# Copyright 2021 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -from typing import TYPE_CHECKING - -from ...utils import ( - OptionalDependencyNotAvailable, - _LazyModule, - is_tokenizers_available, - is_torch_available, - is_vision_available, -) - - -_import_structure = { - "configuration_layoutlmv2": ["LAYOUTLMV2_PRETRAINED_CONFIG_ARCHIVE_MAP", "LayoutLMv2Config"], - "processing_layoutlmv2": ["LayoutLMv2Processor"], - "tokenization_layoutlmv2": ["LayoutLMv2Tokenizer"], -} - -try: - if not is_tokenizers_available(): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - pass -else: - _import_structure["tokenization_layoutlmv2_fast"] = ["LayoutLMv2TokenizerFast"] - -try: - if not is_vision_available(): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - pass -else: - _import_structure["feature_extraction_layoutlmv2"] = ["LayoutLMv2FeatureExtractor"] - _import_structure["image_processing_layoutlmv2"] = ["LayoutLMv2ImageProcessor"] - -try: - if not is_torch_available(): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - pass -else: - _import_structure["modeling_layoutlmv2"] = [ - "LAYOUTLMV2_PRETRAINED_MODEL_ARCHIVE_LIST", - "LayoutLMv2ForQuestionAnswering", - "LayoutLMv2ForSequenceClassification", - "LayoutLMv2ForTokenClassification", - "LayoutLMv2Layer", - "LayoutLMv2Model", - "LayoutLMv2PreTrainedModel", - ] - -if TYPE_CHECKING: - from .configuration_layoutlmv2 import LAYOUTLMV2_PRETRAINED_CONFIG_ARCHIVE_MAP, LayoutLMv2Config - from .processing_layoutlmv2 import LayoutLMv2Processor - from .tokenization_layoutlmv2 import LayoutLMv2Tokenizer - - try: - if not is_tokenizers_available(): - raise OptionalDependencyNotAvailable() - except OptionalDependencyNotAvailable: - pass - else: - from .tokenization_layoutlmv2_fast import LayoutLMv2TokenizerFast - - try: - if not is_vision_available(): - raise OptionalDependencyNotAvailable() - except OptionalDependencyNotAvailable: - pass - else: - from .feature_extraction_layoutlmv2 import LayoutLMv2FeatureExtractor, LayoutLMv2ImageProcessor - - try: - if not is_torch_available(): - raise OptionalDependencyNotAvailable() - except OptionalDependencyNotAvailable: - pass - else: - from .modeling_layoutlmv2 import ( - LAYOUTLMV2_PRETRAINED_MODEL_ARCHIVE_LIST, - LayoutLMv2ForQuestionAnswering, - LayoutLMv2ForSequenceClassification, - LayoutLMv2ForTokenClassification, - LayoutLMv2Layer, - LayoutLMv2Model, - LayoutLMv2PreTrainedModel, - ) -else: - import sys - - sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__) diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/llama/modeling_llama.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/llama/modeling_llama.py deleted file mode 100644 index 55753d5f75d9af6abcc4350f1c79b37ad8c1bf5e..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/llama/modeling_llama.py +++ /dev/null @@ -1,1239 +0,0 @@ -# coding=utf-8 -# Copyright 2022 EleutherAI and the HuggingFace Inc. team. All rights reserved. -# -# This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX -# and OPT implementations in this library. It has been modified from its -# original forms to accommodate minor architectural differences compared -# to GPT-NeoX and OPT used by the Meta AI team that trained the model. 
-# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" PyTorch LLaMA model.""" -import math -from typing import List, Optional, Tuple, Union - -import torch -import torch.nn.functional as F -import torch.utils.checkpoint -from torch import nn -from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss - -from ...activations import ACT2FN -from ...modeling_outputs import BaseModelOutputWithPast, CausalLMOutputWithPast, SequenceClassifierOutputWithPast -from ...modeling_utils import PreTrainedModel -from ...pytorch_utils import ALL_LAYERNORM_LAYERS -from ...utils import ( - add_start_docstrings, - add_start_docstrings_to_model_forward, - is_flash_attn_available, - logging, - replace_return_docstrings, -) -from .configuration_llama import LlamaConfig - - -if is_flash_attn_available(): - from flash_attn import flash_attn_func, flash_attn_varlen_func - from flash_attn.bert_padding import index_first_axis, pad_input, unpad_input # noqa - - -logger = logging.get_logger(__name__) - -_CONFIG_FOR_DOC = "LlamaConfig" - - -def _get_unpad_data(padding_mask): - seqlens_in_batch = padding_mask.sum(dim=-1, dtype=torch.int32) - indices = torch.nonzero(padding_mask.flatten(), as_tuple=False).flatten() - max_seqlen_in_batch = seqlens_in_batch.max().item() - cu_seqlens = F.pad(torch.cumsum(seqlens_in_batch, dim=0, dtype=torch.torch.int32), (1, 0)) - return ( - indices, - cu_seqlens, - max_seqlen_in_batch, - ) - - -# Copied from transformers.models.bart.modeling_bart._make_causal_mask -def _make_causal_mask( - input_ids_shape: torch.Size, dtype: torch.dtype, device: torch.device, past_key_values_length: int = 0 -): - """ - Make causal mask used for bi-directional self-attention. - """ - bsz, tgt_len = input_ids_shape - mask = torch.full((tgt_len, tgt_len), torch.finfo(dtype).min, device=device) - mask_cond = torch.arange(mask.size(-1), device=device) - mask.masked_fill_(mask_cond < (mask_cond + 1).view(mask.size(-1), 1), 0) - mask = mask.to(dtype) - - if past_key_values_length > 0: - mask = torch.cat([torch.zeros(tgt_len, past_key_values_length, dtype=dtype, device=device), mask], dim=-1) - return mask[None, None, :, :].expand(bsz, 1, tgt_len, tgt_len + past_key_values_length) - - -# Copied from transformers.models.bart.modeling_bart._expand_mask -def _expand_mask(mask: torch.Tensor, dtype: torch.dtype, tgt_len: Optional[int] = None): - """ - Expands attention_mask from `[bsz, seq_len]` to `[bsz, 1, tgt_seq_len, src_seq_len]`. 
- """ - bsz, src_len = mask.size() - tgt_len = tgt_len if tgt_len is not None else src_len - - expanded_mask = mask[:, None, None, :].expand(bsz, 1, tgt_len, src_len).to(dtype) - - inverted_mask = 1.0 - expanded_mask - - return inverted_mask.masked_fill(inverted_mask.to(torch.bool), torch.finfo(dtype).min) - - -class LlamaRMSNorm(nn.Module): - def __init__(self, hidden_size, eps=1e-6): - """ - LlamaRMSNorm is equivalent to T5LayerNorm - """ - super().__init__() - self.weight = nn.Parameter(torch.ones(hidden_size)) - self.variance_epsilon = eps - - def forward(self, hidden_states): - input_dtype = hidden_states.dtype - hidden_states = hidden_states.to(torch.float32) - variance = hidden_states.pow(2).mean(-1, keepdim=True) - hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon) - return self.weight * hidden_states.to(input_dtype) - - -ALL_LAYERNORM_LAYERS.append(LlamaRMSNorm) - - -class LlamaRotaryEmbedding(nn.Module): - def __init__(self, dim, max_position_embeddings=2048, base=10000, device=None): - super().__init__() - - self.dim = dim - self.max_position_embeddings = max_position_embeddings - self.base = base - inv_freq = 1.0 / (self.base ** (torch.arange(0, self.dim, 2).float().to(device) / self.dim)) - self.register_buffer("inv_freq", inv_freq, persistent=False) - - # Build here to make `torch.jit.trace` work. - self._set_cos_sin_cache( - seq_len=max_position_embeddings, device=self.inv_freq.device, dtype=torch.get_default_dtype() - ) - - def _set_cos_sin_cache(self, seq_len, device, dtype): - self.max_seq_len_cached = seq_len - t = torch.arange(self.max_seq_len_cached, device=device, dtype=self.inv_freq.dtype) - - freqs = torch.einsum("i,j->ij", t, self.inv_freq) - # Different from paper, but it uses a different permutation in order to obtain the same calculation - emb = torch.cat((freqs, freqs), dim=-1) - self.register_buffer("cos_cached", emb.cos().to(dtype), persistent=False) - self.register_buffer("sin_cached", emb.sin().to(dtype), persistent=False) - - def forward(self, x, seq_len=None): - # x: [bs, num_attention_heads, seq_len, head_size] - if seq_len > self.max_seq_len_cached: - self._set_cos_sin_cache(seq_len=seq_len, device=x.device, dtype=x.dtype) - - return ( - self.cos_cached[:seq_len].to(dtype=x.dtype), - self.sin_cached[:seq_len].to(dtype=x.dtype), - ) - - -class LlamaLinearScalingRotaryEmbedding(LlamaRotaryEmbedding): - """LlamaRotaryEmbedding extended with linear scaling. Credits to the Reddit user /u/kaiokendev""" - - def __init__(self, dim, max_position_embeddings=2048, base=10000, device=None, scaling_factor=1.0): - self.scaling_factor = scaling_factor - super().__init__(dim, max_position_embeddings, base, device) - - def _set_cos_sin_cache(self, seq_len, device, dtype): - self.max_seq_len_cached = seq_len - t = torch.arange(self.max_seq_len_cached, device=device, dtype=self.inv_freq.dtype) - t = t / self.scaling_factor - - freqs = torch.einsum("i,j->ij", t, self.inv_freq) - # Different from paper, but it uses a different permutation in order to obtain the same calculation - emb = torch.cat((freqs, freqs), dim=-1) - self.register_buffer("cos_cached", emb.cos().to(dtype), persistent=False) - self.register_buffer("sin_cached", emb.sin().to(dtype), persistent=False) - - -class LlamaDynamicNTKScalingRotaryEmbedding(LlamaRotaryEmbedding): - """LlamaRotaryEmbedding extended with Dynamic NTK scaling. 
Credits to the Reddit users /u/bloc97 and /u/emozilla""" - - def __init__(self, dim, max_position_embeddings=2048, base=10000, device=None, scaling_factor=1.0): - self.scaling_factor = scaling_factor - super().__init__(dim, max_position_embeddings, base, device) - - def _set_cos_sin_cache(self, seq_len, device, dtype): - self.max_seq_len_cached = seq_len - - if seq_len > self.max_position_embeddings: - base = self.base * ( - (self.scaling_factor * seq_len / self.max_position_embeddings) - (self.scaling_factor - 1) - ) ** (self.dim / (self.dim - 2)) - inv_freq = 1.0 / (base ** (torch.arange(0, self.dim, 2).float().to(device) / self.dim)) - self.register_buffer("inv_freq", inv_freq, persistent=False) - - t = torch.arange(self.max_seq_len_cached, device=device, dtype=self.inv_freq.dtype) - - freqs = torch.einsum("i,j->ij", t, self.inv_freq) - # Different from paper, but it uses a different permutation in order to obtain the same calculation - emb = torch.cat((freqs, freqs), dim=-1) - self.register_buffer("cos_cached", emb.cos().to(dtype), persistent=False) - self.register_buffer("sin_cached", emb.sin().to(dtype), persistent=False) - - -def rotate_half(x): - """Rotates half the hidden dims of the input.""" - x1 = x[..., : x.shape[-1] // 2] - x2 = x[..., x.shape[-1] // 2 :] - return torch.cat((-x2, x1), dim=-1) - - -# Copied from transformers.models.gpt_neox.modeling_gpt_neox.apply_rotary_pos_emb -def apply_rotary_pos_emb(q, k, cos, sin, position_ids): - cos = cos[position_ids].unsqueeze(1) # [seq_len, dim] -> [batch_size, 1, seq_len, head_dim] - sin = sin[position_ids].unsqueeze(1) - q_embed = (q * cos) + (rotate_half(q) * sin) - k_embed = (k * cos) + (rotate_half(k) * sin) - return q_embed, k_embed - - -class LlamaMLP(nn.Module): - def __init__(self, config): - super().__init__() - self.config = config - self.hidden_size = config.hidden_size - self.intermediate_size = config.intermediate_size - self.gate_proj = nn.Linear(self.hidden_size, self.intermediate_size, bias=False) - self.up_proj = nn.Linear(self.hidden_size, self.intermediate_size, bias=False) - self.down_proj = nn.Linear(self.intermediate_size, self.hidden_size, bias=False) - self.act_fn = ACT2FN[config.hidden_act] - - def forward(self, x): - if self.config.pretraining_tp > 1: - slice = self.intermediate_size // self.config.pretraining_tp - gate_proj_slices = self.gate_proj.weight.split(slice, dim=0) - up_proj_slices = self.up_proj.weight.split(slice, dim=0) - down_proj_slices = self.down_proj.weight.split(slice, dim=1) - - gate_proj = torch.cat( - [F.linear(x, gate_proj_slices[i]) for i in range(self.config.pretraining_tp)], dim=-1 - ) - up_proj = torch.cat([F.linear(x, up_proj_slices[i]) for i in range(self.config.pretraining_tp)], dim=-1) - - intermediate_states = (self.act_fn(gate_proj) * up_proj).split(slice, dim=2) - down_proj = [ - F.linear(intermediate_states[i], down_proj_slices[i]) for i in range(self.config.pretraining_tp) - ] - down_proj = sum(down_proj) - else: - down_proj = self.down_proj(self.act_fn(self.gate_proj(x)) * self.up_proj(x)) - - return down_proj - - -def repeat_kv(hidden_states: torch.Tensor, n_rep: int) -> torch.Tensor: - """ - This is the equivalent of torch.repeat_interleave(x, dim=1, repeats=n_rep). 
The hidden states go from (batch, - num_key_value_heads, seqlen, head_dim) to (batch, num_attention_heads, seqlen, head_dim) - """ - batch, num_key_value_heads, slen, head_dim = hidden_states.shape - if n_rep == 1: - return hidden_states - hidden_states = hidden_states[:, :, None, :, :].expand(batch, num_key_value_heads, n_rep, slen, head_dim) - return hidden_states.reshape(batch, num_key_value_heads * n_rep, slen, head_dim) - - -class LlamaAttention(nn.Module): - """Multi-headed attention from 'Attention Is All You Need' paper""" - - def __init__(self, config: LlamaConfig): - super().__init__() - self.config = config - self.hidden_size = config.hidden_size - self.num_heads = config.num_attention_heads - self.head_dim = self.hidden_size // self.num_heads - self.num_key_value_heads = config.num_key_value_heads - self.num_key_value_groups = self.num_heads // self.num_key_value_heads - self.max_position_embeddings = config.max_position_embeddings - self.rope_theta = config.rope_theta - - if (self.head_dim * self.num_heads) != self.hidden_size: - raise ValueError( - f"hidden_size must be divisible by num_heads (got `hidden_size`: {self.hidden_size}" - f" and `num_heads`: {self.num_heads})." - ) - self.q_proj = nn.Linear(self.hidden_size, self.num_heads * self.head_dim, bias=config.attention_bias) - self.k_proj = nn.Linear(self.hidden_size, self.num_key_value_heads * self.head_dim, bias=config.attention_bias) - self.v_proj = nn.Linear(self.hidden_size, self.num_key_value_heads * self.head_dim, bias=config.attention_bias) - self.o_proj = nn.Linear(self.num_heads * self.head_dim, self.hidden_size, bias=config.attention_bias) - self._init_rope() - - def _init_rope(self): - if self.config.rope_scaling is None: - self.rotary_emb = LlamaRotaryEmbedding( - self.head_dim, - max_position_embeddings=self.max_position_embeddings, - base=self.rope_theta, - ) - else: - scaling_type = self.config.rope_scaling["type"] - scaling_factor = self.config.rope_scaling["factor"] - if scaling_type == "linear": - self.rotary_emb = LlamaLinearScalingRotaryEmbedding( - self.head_dim, - max_position_embeddings=self.max_position_embeddings, - scaling_factor=scaling_factor, - base=self.rope_theta, - ) - elif scaling_type == "dynamic": - self.rotary_emb = LlamaDynamicNTKScalingRotaryEmbedding( - self.head_dim, - max_position_embeddings=self.max_position_embeddings, - scaling_factor=scaling_factor, - base=self.rope_theta, - ) - else: - raise ValueError(f"Unknown RoPE scaling type {scaling_type}") - - def _shape(self, tensor: torch.Tensor, seq_len: int, bsz: int): - return tensor.view(bsz, seq_len, self.num_heads, self.head_dim).transpose(1, 2).contiguous() - - def forward( - self, - hidden_states: torch.Tensor, - attention_mask: Optional[torch.Tensor] = None, - position_ids: Optional[torch.LongTensor] = None, - past_key_value: Optional[Tuple[torch.Tensor]] = None, - output_attentions: bool = False, - use_cache: bool = False, - padding_mask: Optional[torch.LongTensor] = None, - ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]: - bsz, q_len, _ = hidden_states.size() - - if self.config.pretraining_tp > 1: - key_value_slicing = (self.num_key_value_heads * self.head_dim) // self.config.pretraining_tp - query_slices = self.q_proj.weight.split( - (self.num_heads * self.head_dim) // self.config.pretraining_tp, dim=0 - ) - key_slices = self.k_proj.weight.split(key_value_slicing, dim=0) - value_slices = self.v_proj.weight.split(key_value_slicing, dim=0) - - query_states = [F.linear(hidden_states, 
query_slices[i]) for i in range(self.config.pretraining_tp)] - query_states = torch.cat(query_states, dim=-1) - - key_states = [F.linear(hidden_states, key_slices[i]) for i in range(self.config.pretraining_tp)] - key_states = torch.cat(key_states, dim=-1) - - value_states = [F.linear(hidden_states, value_slices[i]) for i in range(self.config.pretraining_tp)] - value_states = torch.cat(value_states, dim=-1) - - else: - query_states = self.q_proj(hidden_states) - key_states = self.k_proj(hidden_states) - value_states = self.v_proj(hidden_states) - - query_states = query_states.view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2) - key_states = key_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2) - value_states = value_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2) - - kv_seq_len = key_states.shape[-2] - if past_key_value is not None: - kv_seq_len += past_key_value[0].shape[-2] - cos, sin = self.rotary_emb(value_states, seq_len=kv_seq_len) - query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids) - - if past_key_value is not None: - # reuse k, v, self_attention - key_states = torch.cat([past_key_value[0], key_states], dim=2) - value_states = torch.cat([past_key_value[1], value_states], dim=2) - - past_key_value = (key_states, value_states) if use_cache else None - - key_states = repeat_kv(key_states, self.num_key_value_groups) - value_states = repeat_kv(value_states, self.num_key_value_groups) - - attn_weights = torch.matmul(query_states, key_states.transpose(2, 3)) / math.sqrt(self.head_dim) - - if attn_weights.size() != (bsz, self.num_heads, q_len, kv_seq_len): - raise ValueError( - f"Attention weights should be of size {(bsz, self.num_heads, q_len, kv_seq_len)}, but is" - f" {attn_weights.size()}" - ) - - if attention_mask is not None: - if attention_mask.size() != (bsz, 1, q_len, kv_seq_len): - raise ValueError( - f"Attention mask should be of size {(bsz, 1, q_len, kv_seq_len)}, but is {attention_mask.size()}" - ) - attn_weights = attn_weights + attention_mask - - # upcast attention to fp32 - attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query_states.dtype) - attn_output = torch.matmul(attn_weights, value_states) - - if attn_output.size() != (bsz, self.num_heads, q_len, self.head_dim): - raise ValueError( - f"`attn_output` should be of size {(bsz, self.num_heads, q_len, self.head_dim)}, but is" - f" {attn_output.size()}" - ) - - attn_output = attn_output.transpose(1, 2).contiguous() - - attn_output = attn_output.reshape(bsz, q_len, self.hidden_size) - - if self.config.pretraining_tp > 1: - attn_output = attn_output.split(self.hidden_size // self.config.pretraining_tp, dim=2) - o_proj_slices = self.o_proj.weight.split(self.hidden_size // self.config.pretraining_tp, dim=1) - attn_output = sum([F.linear(attn_output[i], o_proj_slices[i]) for i in range(self.config.pretraining_tp)]) - else: - attn_output = self.o_proj(attn_output) - - if not output_attentions: - attn_weights = None - - return attn_output, attn_weights, past_key_value - - -class LlamaFlashAttention2(LlamaAttention): - """ - Llama flash attention module. This module inherits from `LlamaAttention` as the weights of the module stays - untouched. The only required change would be on the forward pass where it needs to correctly call the public API of - flash attention and deal with padding tokens in case the input contains any of them. 
- """ - - def forward( - self, - hidden_states: torch.Tensor, - attention_mask: Optional[torch.Tensor] = None, - position_ids: Optional[torch.LongTensor] = None, - past_key_value: Optional[Tuple[torch.Tensor]] = None, - output_attentions: bool = False, - use_cache: bool = False, - padding_mask: Optional[torch.LongTensor] = None, - ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]: - # LlamaFlashAttention2 attention does not support output_attentions - output_attentions = False - - bsz, q_len, _ = hidden_states.size() - - query_states = self.q_proj(hidden_states) - key_states = self.k_proj(hidden_states) - value_states = self.v_proj(hidden_states) - - # Flash attention requires the input to have the shape - # batch_size x seq_length x head_dime x hidden_dim - # therefore we just need to keep the original shape - query_states = query_states.view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2) - key_states = key_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2) - value_states = value_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2) - - kv_seq_len = key_states.shape[-2] - if past_key_value is not None: - kv_seq_len += past_key_value[0].shape[-2] - - cos, sin = self.rotary_emb(value_states, seq_len=kv_seq_len) - - query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids) - - if past_key_value is not None: - # reuse k, v, self_attention - key_states = torch.cat([past_key_value[0], key_states], dim=2) - value_states = torch.cat([past_key_value[1], value_states], dim=2) - - past_key_value = (key_states, value_states) if use_cache else None - - query_states = query_states.transpose(1, 2) - key_states = key_states.transpose(1, 2) - value_states = value_states.transpose(1, 2) - - # TODO: llama does not have dropout in the config?? - # It is recommended to use dropout with FA according to the docs - # when training. - dropout_rate = 0.0 # if not self.training else self.attn_dropout - - # In PEFT, usually we cast the layer norms in float32 for training stability reasons - # therefore the input hidden states gets silently casted in float32. Hence, we need - # cast them back in float16 just to be sure everything works as expected. - # This might slowdown training & inference so it is recommended to not cast the LayerNorms - # in fp32. (LlamaRMSNorm handles it correctly) - input_dtype = query_states.dtype - if input_dtype == torch.float32: - logger.warning_once( - "The input hidden states seems to be silently casted in float32, this might be related to" - " the fact you have upcasted embedding or layer norm layers in float32. We will cast back the input in" - " float16." 
- ) - - query_states = query_states.to(torch.float16) - key_states = key_states.to(torch.float16) - value_states = value_states.to(torch.float16) - - attn_output = self._flash_attention_forward( - query_states, key_states, value_states, padding_mask, q_len, dropout=dropout_rate - ) - - attn_output = attn_output.reshape(bsz, q_len, self.hidden_size).contiguous() - attn_output = self.o_proj(attn_output) - - if not output_attentions: - attn_weights = None - - return attn_output, attn_weights, past_key_value - - def _flash_attention_forward( - self, query_states, key_states, value_states, padding_mask, query_length, dropout=0.0, softmax_scale=None - ): - """ - Calls the forward method of Flash Attention - if the input hidden states contain at least one padding token - first unpad the input, then computes the attention scores and pad the final attention scores. - - Args: - query_states (`torch.Tensor`): - Input query states to be passed to Flash Attention API - key_states (`torch.Tensor`): - Input key states to be passed to Flash Attention API - value_states (`torch.Tensor`): - Input value states to be passed to Flash Attention API - padding_mask (`torch.Tensor`): - The padding mask - corresponds to a tensor of size `(batch_size, seq_len)` where 0 stands for the - position of padding tokens and 1 for the position of non-padding tokens. - dropout (`int`, *optional*): - Attention dropout - softmax_scale (`float`, *optional*): - The scaling of QK^T before applying softmax. Default to 1 / sqrt(head_dim) - """ - # Contains at least one padding token in the sequence - if padding_mask is not None: - batch_size = query_states.shape[0] - query_states, key_states, value_states, indices_q, cu_seq_lens, max_seq_lens = self._upad_input( - query_states, key_states, value_states, padding_mask, query_length - ) - - cu_seqlens_q, cu_seqlens_k = cu_seq_lens - max_seqlen_in_batch_q, max_seqlen_in_batch_k = max_seq_lens - - attn_output_unpad = flash_attn_varlen_func( - query_states, - key_states, - value_states, - cu_seqlens_q=cu_seqlens_q, - cu_seqlens_k=cu_seqlens_k, - max_seqlen_q=max_seqlen_in_batch_q, - max_seqlen_k=max_seqlen_in_batch_k, - dropout_p=dropout, - softmax_scale=softmax_scale, - causal=True, - ) - - attn_output = pad_input(attn_output_unpad, indices_q, batch_size, query_length) - else: - attn_output = flash_attn_func( - query_states, key_states, value_states, dropout, softmax_scale=softmax_scale, causal=True - ) - - return attn_output - - def _upad_input(self, query_layer, key_layer, value_layer, padding_mask, query_length): - indices_k, cu_seqlens_k, max_seqlen_in_batch_k = _get_unpad_data(padding_mask) - batch_size, kv_seq_len, num_key_value_heads, head_dim = key_layer.shape - - key_layer = index_first_axis( - key_layer.reshape(batch_size * kv_seq_len, num_key_value_heads, head_dim), indices_k - ) - value_layer = index_first_axis( - value_layer.reshape(batch_size * kv_seq_len, num_key_value_heads, head_dim), indices_k - ) - if query_length == kv_seq_len: - query_layer = index_first_axis( - query_layer.reshape(batch_size * kv_seq_len, self.num_heads, head_dim), indices_k - ) - cu_seqlens_q = cu_seqlens_k - max_seqlen_in_batch_q = max_seqlen_in_batch_k - indices_q = indices_k - elif query_length == 1: - max_seqlen_in_batch_q = 1 - cu_seqlens_q = torch.arange( - batch_size + 1, dtype=torch.int32, device=query_layer.device - ) # There is a memcpy here, that is very bad. - indices_q = cu_seqlens_q[:-1] - query_layer = query_layer.squeeze(1) - else: - # The -q_len: slice assumes left padding. 
- padding_mask = padding_mask[:, -query_length:] - query_layer, indices_q, cu_seqlens_q, max_seqlen_in_batch_q = unpad_input(query_layer, padding_mask) - - return ( - query_layer, - key_layer, - value_layer, - indices_q, - (cu_seqlens_q, cu_seqlens_k), - (max_seqlen_in_batch_q, max_seqlen_in_batch_k), - ) - - -class LlamaDecoderLayer(nn.Module): - def __init__(self, config: LlamaConfig): - super().__init__() - self.hidden_size = config.hidden_size - self.self_attn = ( - LlamaAttention(config=config) - if not getattr(config, "_flash_attn_2_enabled", False) - else LlamaFlashAttention2(config=config) - ) - self.mlp = LlamaMLP(config) - self.input_layernorm = LlamaRMSNorm(config.hidden_size, eps=config.rms_norm_eps) - self.post_attention_layernorm = LlamaRMSNorm(config.hidden_size, eps=config.rms_norm_eps) - - def forward( - self, - hidden_states: torch.Tensor, - attention_mask: Optional[torch.Tensor] = None, - position_ids: Optional[torch.LongTensor] = None, - past_key_value: Optional[Tuple[torch.Tensor]] = None, - output_attentions: Optional[bool] = False, - use_cache: Optional[bool] = False, - padding_mask: Optional[torch.LongTensor] = None, - ) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]: - """ - Args: - hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)` - attention_mask (`torch.FloatTensor`, *optional*): attention mask of size - `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under - returned tensors for more detail. - use_cache (`bool`, *optional*): - If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding - (see `past_key_values`). - past_key_value (`Tuple(torch.FloatTensor)`, *optional*): cached past key and value projection states - """ - - residual = hidden_states - - hidden_states = self.input_layernorm(hidden_states) - - # Self Attention - hidden_states, self_attn_weights, present_key_value = self.self_attn( - hidden_states=hidden_states, - attention_mask=attention_mask, - position_ids=position_ids, - past_key_value=past_key_value, - output_attentions=output_attentions, - use_cache=use_cache, - padding_mask=padding_mask, - ) - hidden_states = residual + hidden_states - - # Fully Connected - residual = hidden_states - hidden_states = self.post_attention_layernorm(hidden_states) - hidden_states = self.mlp(hidden_states) - hidden_states = residual + hidden_states - - outputs = (hidden_states,) - - if output_attentions: - outputs += (self_attn_weights,) - - if use_cache: - outputs += (present_key_value,) - - return outputs - - -LLAMA_START_DOCSTRING = r""" - This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the - library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads - etc.) - - This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. - Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage - and behavior. - - Parameters: - config ([`LlamaConfig`]): - Model configuration class with all the parameters of the model. Initializing with a config file does not - load the weights associated with the model, only the configuration. 
Check out the - [`~PreTrainedModel.from_pretrained`] method to load the model weights. -""" - - -@add_start_docstrings( - "The bare LLaMA Model outputting raw hidden-states without any specific head on top.", - LLAMA_START_DOCSTRING, -) -class LlamaPreTrainedModel(PreTrainedModel): - config_class = LlamaConfig - base_model_prefix = "model" - supports_gradient_checkpointing = True - _no_split_modules = ["LlamaDecoderLayer"] - _skip_keys_device_placement = "past_key_values" - _supports_flash_attn_2 = True - - def _init_weights(self, module): - std = self.config.initializer_range - if isinstance(module, nn.Linear): - module.weight.data.normal_(mean=0.0, std=std) - if module.bias is not None: - module.bias.data.zero_() - elif isinstance(module, nn.Embedding): - module.weight.data.normal_(mean=0.0, std=std) - if module.padding_idx is not None: - module.weight.data[module.padding_idx].zero_() - - def _set_gradient_checkpointing(self, module, value=False): - if isinstance(module, LlamaModel): - module.gradient_checkpointing = value - - -LLAMA_INPUTS_DOCSTRING = r""" - Args: - input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`): - Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide - it. - - Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and - [`PreTrainedTokenizer.__call__`] for details. - - [What are input IDs?](../glossary#input-ids) - attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*): - Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - - [What are attention masks?](../glossary#attention-mask) - - Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and - [`PreTrainedTokenizer.__call__`] for details. - - If `past_key_values` is used, optionally only the last `input_ids` have to be input (see - `past_key_values`). - - If you want to change padding behavior, you should read [`modeling_opt._prepare_decoder_attention_mask`] - and modify to your needs. See diagram 1 in [the paper](https://arxiv.org/abs/1910.13461) for more - information on the default strategy. - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, - config.n_positions - 1]`. - - [What are position IDs?](../glossary#position-ids) - past_key_values (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`): - Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape - `(batch_size, num_heads, sequence_length, embed_size_per_head)`) and 2 additional tensors of shape - `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`. - - Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention - blocks) that can be used (see `past_key_values` input) to speed up sequential decoding. - - If `past_key_values` are used, the user can optionally input only the last `input_ids` (those that don't - have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `input_ids` - of shape `(batch_size, sequence_length)`. 
- inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): - Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This - is useful if you want more control over how to convert `input_ids` indices into associated vectors than the - model's internal embedding lookup matrix. - use_cache (`bool`, *optional*): - If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see - `past_key_values`). - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned - tensors for more detail. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for - more detail. - return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. -""" - - -@add_start_docstrings( - "The bare LLaMA Model outputting raw hidden-states without any specific head on top.", - LLAMA_START_DOCSTRING, -) -class LlamaModel(LlamaPreTrainedModel): - """ - Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`LlamaDecoderLayer`] - - Args: - config: LlamaConfig - """ - - def __init__(self, config: LlamaConfig): - super().__init__(config) - self.padding_idx = config.pad_token_id - self.vocab_size = config.vocab_size - - self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx) - self.layers = nn.ModuleList([LlamaDecoderLayer(config) for _ in range(config.num_hidden_layers)]) - self.norm = LlamaRMSNorm(config.hidden_size, eps=config.rms_norm_eps) - - self.gradient_checkpointing = False - # Initialize weights and apply final processing - self.post_init() - - def get_input_embeddings(self): - return self.embed_tokens - - def set_input_embeddings(self, value): - self.embed_tokens = value - - # Copied from transformers.models.bart.modeling_bart.BartDecoder._prepare_decoder_attention_mask - def _prepare_decoder_attention_mask(self, attention_mask, input_shape, inputs_embeds, past_key_values_length): - # create causal mask - # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] - combined_attention_mask = None - if input_shape[-1] > 1: - combined_attention_mask = _make_causal_mask( - input_shape, - inputs_embeds.dtype, - device=inputs_embeds.device, - past_key_values_length=past_key_values_length, - ) - - if attention_mask is not None: - # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] - expanded_attn_mask = _expand_mask(attention_mask, inputs_embeds.dtype, tgt_len=input_shape[-1]).to( - inputs_embeds.device - ) - combined_attention_mask = ( - expanded_attn_mask if combined_attention_mask is None else expanded_attn_mask + combined_attention_mask - ) - - return combined_attention_mask - - @add_start_docstrings_to_model_forward(LLAMA_INPUTS_DOCSTRING) - def forward( - self, - input_ids: torch.LongTensor = None, - attention_mask: Optional[torch.Tensor] = None, - position_ids: Optional[torch.LongTensor] = None, - past_key_values: Optional[List[torch.FloatTensor]] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, BaseModelOutputWithPast]: - output_attentions = output_attentions if output_attentions is not None else 
self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - use_cache = use_cache if use_cache is not None else self.config.use_cache - - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - # retrieve input_ids and inputs_embeds - if input_ids is not None and inputs_embeds is not None: - raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") - elif input_ids is not None: - batch_size, seq_length = input_ids.shape - elif inputs_embeds is not None: - batch_size, seq_length, _ = inputs_embeds.shape - else: - raise ValueError("You have to specify either input_ids or inputs_embeds") - - seq_length_with_past = seq_length - past_key_values_length = 0 - - if past_key_values is not None: - past_key_values_length = past_key_values[0][0].shape[2] - seq_length_with_past = seq_length_with_past + past_key_values_length - - if position_ids is None: - device = input_ids.device if input_ids is not None else inputs_embeds.device - position_ids = torch.arange( - past_key_values_length, seq_length + past_key_values_length, dtype=torch.long, device=device - ) - position_ids = position_ids.unsqueeze(0) - - if inputs_embeds is None: - inputs_embeds = self.embed_tokens(input_ids) - # embed positions - if attention_mask is None: - attention_mask = torch.ones( - (batch_size, seq_length_with_past), dtype=torch.bool, device=inputs_embeds.device - ) - padding_mask = None - else: - if 0 in attention_mask: - padding_mask = attention_mask - else: - padding_mask = None - - attention_mask = self._prepare_decoder_attention_mask( - attention_mask, (batch_size, seq_length), inputs_embeds, past_key_values_length - ) - - hidden_states = inputs_embeds - - if self.gradient_checkpointing and self.training: - if use_cache: - logger.warning_once( - "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..." 
- ) - use_cache = False - - # decoder layers - all_hidden_states = () if output_hidden_states else None - all_self_attns = () if output_attentions else None - next_decoder_cache = () if use_cache else None - - for idx, decoder_layer in enumerate(self.layers): - if output_hidden_states: - all_hidden_states += (hidden_states,) - - past_key_value = past_key_values[idx] if past_key_values is not None else None - - if self.gradient_checkpointing and self.training: - - def create_custom_forward(module): - def custom_forward(*inputs): - # None for past_key_value - return module(*inputs, past_key_value, output_attentions, padding_mask=padding_mask) - - return custom_forward - - layer_outputs = torch.utils.checkpoint.checkpoint( - create_custom_forward(decoder_layer), hidden_states, attention_mask, position_ids - ) - else: - layer_outputs = decoder_layer( - hidden_states, - attention_mask=attention_mask, - position_ids=position_ids, - past_key_value=past_key_value, - output_attentions=output_attentions, - use_cache=use_cache, - padding_mask=padding_mask, - ) - - hidden_states = layer_outputs[0] - - if use_cache: - next_decoder_cache += (layer_outputs[2 if output_attentions else 1],) - - if output_attentions: - all_self_attns += (layer_outputs[1],) - - hidden_states = self.norm(hidden_states) - - # add hidden states from the last decoder layer - if output_hidden_states: - all_hidden_states += (hidden_states,) - - next_cache = next_decoder_cache if use_cache else None - if not return_dict: - return tuple(v for v in [hidden_states, next_cache, all_hidden_states, all_self_attns] if v is not None) - return BaseModelOutputWithPast( - last_hidden_state=hidden_states, - past_key_values=next_cache, - hidden_states=all_hidden_states, - attentions=all_self_attns, - ) - - -class LlamaForCausalLM(LlamaPreTrainedModel): - _tied_weights_keys = ["lm_head.weight"] - - def __init__(self, config): - super().__init__(config) - self.model = LlamaModel(config) - self.vocab_size = config.vocab_size - self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False) - - # Initialize weights and apply final processing - self.post_init() - - def get_input_embeddings(self): - return self.model.embed_tokens - - def set_input_embeddings(self, value): - self.model.embed_tokens = value - - def get_output_embeddings(self): - return self.lm_head - - def set_output_embeddings(self, new_embeddings): - self.lm_head = new_embeddings - - def set_decoder(self, decoder): - self.model = decoder - - def get_decoder(self): - return self.model - - @add_start_docstrings_to_model_forward(LLAMA_INPUTS_DOCSTRING) - @replace_return_docstrings(output_type=CausalLMOutputWithPast, config_class=_CONFIG_FOR_DOC) - def forward( - self, - input_ids: torch.LongTensor = None, - attention_mask: Optional[torch.Tensor] = None, - position_ids: Optional[torch.LongTensor] = None, - past_key_values: Optional[List[torch.FloatTensor]] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - labels: Optional[torch.LongTensor] = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, CausalLMOutputWithPast]: - r""" - Args: - labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Labels for computing the masked language modeling loss. Indices should either be in `[0, ..., - config.vocab_size]` or -100 (see `input_ids` docstring). 
Tokens with indices set to `-100` are ignored - (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`. - - Returns: - - Example: - - ```python - >>> from transformers import AutoTokenizer, LlamaForCausalLM - - >>> model = LlamaForCausalLM.from_pretrained(PATH_TO_CONVERTED_WEIGHTS) - >>> tokenizer = AutoTokenizer.from_pretrained(PATH_TO_CONVERTED_TOKENIZER) - - >>> prompt = "Hey, are you conscious? Can you talk to me?" - >>> inputs = tokenizer(prompt, return_tensors="pt") - - >>> # Generate - >>> generate_ids = model.generate(inputs.input_ids, max_length=30) - >>> tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0] - "Hey, are you conscious? Can you talk to me?\nI'm not conscious, but I can talk to you." - ```""" - - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn) - outputs = self.model( - input_ids=input_ids, - attention_mask=attention_mask, - position_ids=position_ids, - past_key_values=past_key_values, - inputs_embeds=inputs_embeds, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - hidden_states = outputs[0] - if self.config.pretraining_tp > 1: - lm_head_slices = self.lm_head.weight.split(self.vocab_size // self.config.pretraining_tp, dim=0) - logits = [F.linear(hidden_states, lm_head_slices[i]) for i in range(self.config.pretraining_tp)] - logits = torch.cat(logits, dim=-1) - else: - logits = self.lm_head(hidden_states) - logits = logits.float() - - loss = None - if labels is not None: - # Shift so that tokens < n predict n - shift_logits = logits[..., :-1, :].contiguous() - shift_labels = labels[..., 1:].contiguous() - # Flatten the tokens - loss_fct = CrossEntropyLoss() - shift_logits = shift_logits.view(-1, self.config.vocab_size) - shift_labels = shift_labels.view(-1) - # Enable model parallelism - shift_labels = shift_labels.to(shift_logits.device) - loss = loss_fct(shift_logits, shift_labels) - - if not return_dict: - output = (logits,) + outputs[1:] - return (loss,) + output if loss is not None else output - - return CausalLMOutputWithPast( - loss=loss, - logits=logits, - past_key_values=outputs.past_key_values, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - def prepare_inputs_for_generation( - self, input_ids, past_key_values=None, attention_mask=None, inputs_embeds=None, **kwargs - ): - if past_key_values: - input_ids = input_ids[:, -1:] - - position_ids = kwargs.get("position_ids", None) - if attention_mask is not None and position_ids is None: - # create position_ids on the fly for batch generation - position_ids = attention_mask.long().cumsum(-1) - 1 - position_ids.masked_fill_(attention_mask == 0, 1) - if past_key_values: - position_ids = position_ids[:, -1].unsqueeze(-1) - - # if `inputs_embeds` are passed, we only want to use them in the 1st generation step - if inputs_embeds is not None and past_key_values is None: - model_inputs = {"inputs_embeds": inputs_embeds} - else: - model_inputs = {"input_ids": input_ids} - - model_inputs.update( - { - "position_ids": position_ids, - "past_key_values": 
past_key_values, - "use_cache": kwargs.get("use_cache"), - "attention_mask": attention_mask, - } - ) - return model_inputs - - @staticmethod - def _reorder_cache(past_key_values, beam_idx): - reordered_past = () - for layer_past in past_key_values: - reordered_past += ( - tuple(past_state.index_select(0, beam_idx.to(past_state.device)) for past_state in layer_past), - ) - return reordered_past - - -@add_start_docstrings( - """ - The LLaMa Model transformer with a sequence classification head on top (linear layer). - - [`LlamaForSequenceClassification`] uses the last token in order to do the classification, as other causal models - (e.g. GPT-2) do. - - Since it does classification on the last token, it requires to know the position of the last token. If a - `pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If - no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the - padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (take the last value in - each row of the batch). - """, - LLAMA_START_DOCSTRING, -) -class LlamaForSequenceClassification(LlamaPreTrainedModel): - def __init__(self, config): - super().__init__(config) - self.num_labels = config.num_labels - self.model = LlamaModel(config) - self.score = nn.Linear(config.hidden_size, self.num_labels, bias=False) - - # Initialize weights and apply final processing - self.post_init() - - def get_input_embeddings(self): - return self.model.embed_tokens - - def set_input_embeddings(self, value): - self.model.embed_tokens = value - - @add_start_docstrings_to_model_forward(LLAMA_INPUTS_DOCSTRING) - def forward( - self, - input_ids: torch.LongTensor = None, - attention_mask: Optional[torch.Tensor] = None, - position_ids: Optional[torch.LongTensor] = None, - past_key_values: Optional[List[torch.FloatTensor]] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - labels: Optional[torch.LongTensor] = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, SequenceClassifierOutputWithPast]: - r""" - labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., - config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If - `config.num_labels > 1` a classification loss is computed (Cross-Entropy). 
- """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - transformer_outputs = self.model( - input_ids, - attention_mask=attention_mask, - position_ids=position_ids, - past_key_values=past_key_values, - inputs_embeds=inputs_embeds, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - hidden_states = transformer_outputs[0] - logits = self.score(hidden_states) - - if input_ids is not None: - batch_size = input_ids.shape[0] - else: - batch_size = inputs_embeds.shape[0] - - if self.config.pad_token_id is None and batch_size != 1: - raise ValueError("Cannot handle batch sizes > 1 if no padding token is defined.") - if self.config.pad_token_id is None: - sequence_lengths = -1 - else: - if input_ids is not None: - sequence_lengths = (torch.eq(input_ids, self.config.pad_token_id).long().argmax(-1) - 1).to( - logits.device - ) - else: - sequence_lengths = -1 - - pooled_logits = logits[torch.arange(batch_size, device=logits.device), sequence_lengths] - - loss = None - if labels is not None: - labels = labels.to(logits.device) - if self.config.problem_type is None: - if self.num_labels == 1: - self.config.problem_type = "regression" - elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int): - self.config.problem_type = "single_label_classification" - else: - self.config.problem_type = "multi_label_classification" - - if self.config.problem_type == "regression": - loss_fct = MSELoss() - if self.num_labels == 1: - loss = loss_fct(pooled_logits.squeeze(), labels.squeeze()) - else: - loss = loss_fct(pooled_logits, labels) - elif self.config.problem_type == "single_label_classification": - loss_fct = CrossEntropyLoss() - loss = loss_fct(pooled_logits.view(-1, self.num_labels), labels.view(-1)) - elif self.config.problem_type == "multi_label_classification": - loss_fct = BCEWithLogitsLoss() - loss = loss_fct(pooled_logits, labels) - if not return_dict: - output = (pooled_logits,) + transformer_outputs[1:] - return ((loss,) + output) if loss is not None else output - - return SequenceClassifierOutputWithPast( - loss=loss, - logits=pooled_logits, - past_key_values=transformer_outputs.past_key_values, - hidden_states=transformer_outputs.hidden_states, - attentions=transformer_outputs.attentions, - ) diff --git a/spaces/yl12053/so-vits-4.1-Matikanefukukitaru/modules/F0Predictor/F0Predictor.py b/spaces/yl12053/so-vits-4.1-Matikanefukukitaru/modules/F0Predictor/F0Predictor.py deleted file mode 100644 index 69d8a9bd28729e33d092a5af8e2ce544c1330c3b..0000000000000000000000000000000000000000 --- a/spaces/yl12053/so-vits-4.1-Matikanefukukitaru/modules/F0Predictor/F0Predictor.py +++ /dev/null @@ -1,16 +0,0 @@ -class F0Predictor(object): - def compute_f0(self,wav,p_len): - ''' - input: wav:[signal_length] - p_len:int - output: f0:[signal_length//hop_length] - ''' - pass - - def compute_f0_uv(self,wav,p_len): - ''' - input: wav:[signal_length] - p_len:int - output: f0:[signal_length//hop_length],uv:[signal_length//hop_length] - ''' - pass \ No newline at end of file diff --git a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/dev/run_instant_tests.sh b/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/dev/run_instant_tests.sh deleted file mode 100644 index 9fd9ba0c239d3e982c17711c9db872de3730decf..0000000000000000000000000000000000000000 --- a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/dev/run_instant_tests.sh +++ 
/dev/null @@ -1,27 +0,0 @@ -#!/bin/bash -e -# Copyright (c) Facebook, Inc. and its affiliates. - -BIN="python tools/train_net.py" -OUTPUT="instant_test_output" -NUM_GPUS=2 - -CFG_LIST=( "${@:1}" ) -if [ ${#CFG_LIST[@]} -eq 0 ]; then - CFG_LIST=( ./configs/quick_schedules/*instant_test.yaml ) -fi - -echo "========================================================================" -echo "Configs to run:" -echo "${CFG_LIST[@]}" -echo "========================================================================" - -for cfg in "${CFG_LIST[@]}"; do - echo "========================================================================" - echo "Running $cfg ..." - echo "========================================================================" - $BIN --num-gpus $NUM_GPUS --config-file "$cfg" \ - SOLVER.IMS_PER_BATCH $(($NUM_GPUS * 2)) \ - OUTPUT_DIR "$OUTPUT" - rm -rf "$OUTPUT" -done - diff --git a/spaces/yoru-tomosu/Translate_video/app.py b/spaces/yoru-tomosu/Translate_video/app.py deleted file mode 100644 index f1bd6adbb17bd3304072a0aef9a72836dc5c3b2b..0000000000000000000000000000000000000000 --- a/spaces/yoru-tomosu/Translate_video/app.py +++ /dev/null @@ -1,60 +0,0 @@ -import whisper -import deepl -import os - -model = whisper.load_model("base") -deepl_auth_key = os.environ["Deepl_API"] - -def translate(text, target_lang): - translator = deepl.Translator(deepl_auth_key) - translated_text = translator.translate_text(text, target_lang=target_lang) - return translated_text - -def transcribe(audio): - - # load audio and pad/trim it to fit 30 seconds - audio = whisper.load_audio(audio) - audio = whisper.pad_or_trim(audio) - - # make log-Mel spectrogram and move to the same device as the model - mel = whisper.log_mel_spectrogram(audio).to(model.device) - - - # detect the spoken language - _, probs = model.detect_language(mel) - - print(f"Detected language: {max(probs, key=probs.get)}") - detect_lang = max(probs, key=probs.get) - - - - # decode the audio - # options = whisper.DecodingOptions() - options = whisper.DecodingOptions(fp16 = False) - result = whisper.decode(model, mel, options) - - # if detect_lang == "en": - # print("Text: ", result.text) - # translated_text = translate(result.text, "JA") - # print("translated_text: ", translated_text) - - # generated_video = text_to_speech(translated_text) - # print("generated_video 01: ", generated_video) - - # elif detect_lang == "ja": - # print("Text: ", result.text) - # translated_text = translate(result.text, "EN-US") - - translated_text = translate(result.text, "JA") - return translated_text - - - -import gradio as gr - -title = 'Translator_Video' - -inputs = gr.Video() -outputs = gr.Text() -interface = gr.Interface(title=title, fn=transcribe, inputs=inputs, outputs=outputs) -interface.launch(debug=True) \ No newline at end of file diff --git a/spaces/yuan1615/EmpathyVC/app.py b/spaces/yuan1615/EmpathyVC/app.py deleted file mode 100644 index 505f478c9ad450136b1896caa40a3f9bee1bb178..0000000000000000000000000000000000000000 --- a/spaces/yuan1615/EmpathyVC/app.py +++ /dev/null @@ -1,116 +0,0 @@ -import json - -import librosa -import torch -import numpy as np -import utils -from models import SynthesizerTrn -from text.symbols import symbols -from mel_processing import spectrogram_torch, create_wav_header -import gradio as gr -import stream -import os -inference = stream.Inference(onnx_path='./ckpt/model.onnx', lang='chs') - -def cs(a, b): - return np.dot(a, b.reshape(-1, 1)).T / (np.linalg.norm(a, axis=1) * np.linalg.norm(b)) - -devices = "cpu" - - -# 加载配置文件 -hps 
= utils.get_hparams_from_file("./configs/aishell3_base.json") - -# 加载模型 -net_g = SynthesizerTrn( - len(symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=175, - **hps.model) -if devices == 'cuda': - net_g = net_g.cuda() -_ = net_g.eval() - -_ = utils.load_checkpoint("./ckpt/G_000.pth", net_g, None) -# _ = utils.load_checkpoint("./ckpt/G_910000.pth", net_g, None) - -# 加载声纹信息 -speaker_embedding = [] -with open('speaker_embedding.txt', 'r', encoding='utf-8') as f: - temp = f.readlines() - for t in temp: - speaker_embedding.append(eval(t.split('|')[1])) -speaker_embedding = np.array(speaker_embedding) - -def vc(mic, tag_s, tt): - if tag_s == 'Male': - tag_s = 41 - else: - tag_s = 112 - sr, data = mic - # data 为 numpy 数组 - if sr != 22050: - data = librosa.resample(data.astype(np.float32) / 32767.0, sr, 22050) - else: - data = data.astype(np.float32) / 32767.0 - contents = torch.FloatTensor(data.astype(np.float32)) - audio_norm = contents.unsqueeze(0) - temp_speaker_embedding = inference.extract_embedding_wav(audio_norm) - # 计算余弦相似度,获得最相似的 speaker_id - dist = cs(speaker_embedding, temp_speaker_embedding).reshape(-1) - sid = dist.tolist().index(max(dist.tolist())) - print('最相似的sid为 %d' % sid) - spec = spectrogram_torch(audio_norm, 1024, - 22050, 256, 1024, - center=False) - with torch.no_grad(): - if devices == 'cuda': - spec = spec.cuda() - spec_lengths = torch.LongTensor([spec.shape[2]]).cuda() - sid_src = torch.LongTensor([sid + 1]).cuda() - sid_tgt = torch.LongTensor([tag_s]).cuda() - else: - spec = spec - spec_lengths = torch.LongTensor([spec.shape[2]]) - sid_src = torch.LongTensor([sid + 1]) - sid_tgt = torch.LongTensor([tag_s]) - - audio = net_g.voice_conversion(spec, spec_lengths, sid_src, sid_tgt=sid_tgt)[0][ - 0, 0].data.cpu().float().numpy() - i = np.random.uniform(0.12, 0.35, 1)[0] - space_time = np.zeros(int(i * 22050), dtype=np.int16) - audio = audio * 32767.0 - audio = np.concatenate((audio, space_time)) - audio = audio.astype(np.short) - return 22050, audio - - -demo = gr.Interface( - fn=vc, - inputs=[ - gr.Audio(label='Source Speaker'), - gr.components.Dropdown(label="Target Speaker", choices=['Male', 'Female']), - gr.Audio(label='Target Speaker Audio') - ], - outputs=gr.Audio(label="Output"), - cache_examples=False, - examples=[ - [os.path.join(os.path.dirname(__file__), "audio/AISHELL-3-SSB1863-0001.wav"), 'Male', - os.path.join(os.path.dirname(__file__), "audio/source_man.wav")], - [os.path.join(os.path.dirname(__file__), "audio/AISHELL3-SSB0122-0001.wav"), 'Male', - os.path.join(os.path.dirname(__file__), "audio/source_man.wav")], - [os.path.join(os.path.dirname(__file__), "audio/AISHELL-3-SSB1863-0001.wav"), 'Female', - os.path.join(os.path.dirname(__file__), "audio/source_female.wav")], - [os.path.join(os.path.dirname(__file__), "audio/AISHELL3-SSB0122-0001.wav"), 'Female', - os.path.join(os.path.dirname(__file__), "audio/source_female.wav")], - [os.path.join(os.path.dirname(__file__), "audio/baker-000001.wav"), 'Female', - os.path.join(os.path.dirname(__file__), "audio/source_female.wav")], - [os.path.join(os.path.dirname(__file__), "audio/LJSpeech-001-0001.wav"), 'Male', - os.path.join(os.path.dirname(__file__), "audio/source_man.wav")], - ], - title='Empathy-VC', - description="Note: This space is running on CPU, inference times will be higher." 
-) - -demo.launch(server_name='0.0.0.0') diff --git a/spaces/yufiofficial/MusicGenQ/audiocraft/quantization/__init__.py b/spaces/yufiofficial/MusicGenQ/audiocraft/quantization/__init__.py deleted file mode 100644 index 836d6eb518978480c6b95d6f29ce4f84a9428793..0000000000000000000000000000000000000000 --- a/spaces/yufiofficial/MusicGenQ/audiocraft/quantization/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -# flake8: noqa -from .vq import ResidualVectorQuantizer -from .base import BaseQuantizer, DummyQuantizer, QuantizedResult diff --git "a/spaces/yunfei0710/gpt-academic/crazy_functions/\350\257\273\346\226\207\347\253\240\345\206\231\346\221\230\350\246\201.py" "b/spaces/yunfei0710/gpt-academic/crazy_functions/\350\257\273\346\226\207\347\253\240\345\206\231\346\221\230\350\246\201.py" deleted file mode 100644 index 72ffe6b1a8f2a59a3c5c364e30dfb4949bd6a929..0000000000000000000000000000000000000000 --- "a/spaces/yunfei0710/gpt-academic/crazy_functions/\350\257\273\346\226\207\347\253\240\345\206\231\346\221\230\350\246\201.py" +++ /dev/null @@ -1,67 +0,0 @@ -from toolbox import update_ui -from toolbox import CatchException, report_execption, write_results_to_file -from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive -fast_debug = False - - -def 解析Paper(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt): - import time, glob, os - print('begin analysis on:', file_manifest) - for index, fp in enumerate(file_manifest): - with open(fp, 'r', encoding='utf-8', errors='replace') as f: - file_content = f.read() - - prefix = "接下来请你逐文件分析下面的论文文件,概括其内容" if index==0 else "" - i_say = prefix + f'请对下面的文章片段用中文做一个概述,文件名是{os.path.relpath(fp, project_folder)},文章内容是 ```{file_content}```' - i_say_show_user = prefix + f'[{index}/{len(file_manifest)}] 请对下面的文章片段做一个概述: {os.path.abspath(fp)}' - chatbot.append((i_say_show_user, "[Local Message] waiting gpt response.")) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - if not fast_debug: - msg = '正常' - # ** gpt request ** - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(i_say, i_say_show_user, llm_kwargs, chatbot, history=[], sys_prompt=system_prompt) # 带超时倒计时 - - chatbot[-1] = (i_say_show_user, gpt_say) - history.append(i_say_show_user); history.append(gpt_say) - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 - if not fast_debug: time.sleep(2) - - all_file = ', '.join([os.path.relpath(fp, project_folder) for index, fp in enumerate(file_manifest)]) - i_say = f'根据以上你自己的分析,对全文进行概括,用学术性语言写一段中文摘要,然后再写一段英文摘要(包括{all_file})。' - chatbot.append((i_say, "[Local Message] waiting gpt response.")) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - if not fast_debug: - msg = '正常' - # ** gpt request ** - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(i_say, i_say, llm_kwargs, chatbot, history=history, sys_prompt=system_prompt) # 带超时倒计时 - - chatbot[-1] = (i_say, gpt_say) - history.append(i_say); history.append(gpt_say) - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 - res = write_results_to_file(history) - chatbot.append(("完成了吗?", res)) - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 - - - -@CatchException -def 读文章写摘要(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, 
web_port): - history = [] # 清空历史,以免输入溢出 - import glob, os - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)] # + \ - # [f for f in glob.glob(f'{project_folder}/**/*.cpp', recursive=True)] + \ - # [f for f in glob.glob(f'{project_folder}/**/*.c', recursive=True)] - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - yield from 解析Paper(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt) diff --git a/spaces/zhang-wei-jian/docker/node_modules/simple-update-notifier/node_modules/semver/functions/gte.js b/spaces/zhang-wei-jian/docker/node_modules/simple-update-notifier/node_modules/semver/functions/gte.js deleted file mode 100644 index 5aeaa634707a0c464b55c81555779aefc36732bb..0000000000000000000000000000000000000000 --- a/spaces/zhang-wei-jian/docker/node_modules/simple-update-notifier/node_modules/semver/functions/gte.js +++ /dev/null @@ -1,3 +0,0 @@ -const compare = require('./compare') -const gte = (a, b, loose) => compare(a, b, loose) >= 0 -module.exports = gte diff --git "a/spaces/zhanghaohui/szu-gpt-academic/crazy_functions/Latex\345\205\250\346\226\207\347\277\273\350\257\221.py" "b/spaces/zhanghaohui/szu-gpt-academic/crazy_functions/Latex\345\205\250\346\226\207\347\277\273\350\257\221.py" deleted file mode 100644 index 554c485aa0891f74c57cacfcbe076febe7a11029..0000000000000000000000000000000000000000 --- "a/spaces/zhanghaohui/szu-gpt-academic/crazy_functions/Latex\345\205\250\346\226\207\347\277\273\350\257\221.py" +++ /dev/null @@ -1,175 +0,0 @@ -from toolbox import update_ui -from toolbox import CatchException, report_execption, write_results_to_file -fast_debug = False - -class PaperFileGroup(): - def __init__(self): - self.file_paths = [] - self.file_contents = [] - self.sp_file_contents = [] - self.sp_file_index = [] - self.sp_file_tag = [] - - # count_token - from request_llm.bridge_all import model_info - enc = model_info["gpt-3.5-turbo"]['tokenizer'] - def get_token_num(txt): return len(enc.encode(txt, disallowed_special=())) - self.get_token_num = get_token_num - - def run_file_split(self, max_token_limit=1900): - """ - 将长文本分离开来 - """ - for index, file_content in enumerate(self.file_contents): - if self.get_token_num(file_content) < max_token_limit: - self.sp_file_contents.append(file_content) - self.sp_file_index.append(index) - self.sp_file_tag.append(self.file_paths[index]) - else: - from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf - segments = breakdown_txt_to_satisfy_token_limit_for_pdf(file_content, self.get_token_num, max_token_limit) - for j, segment in enumerate(segments): - self.sp_file_contents.append(segment) - self.sp_file_index.append(index) - self.sp_file_tag.append(self.file_paths[index] + f".part-{j}.tex") - - print('Segmentation: done') - -def 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='en'): - import time, os, re - from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency - - # <-------- 读取Latex文件,删除其中的所有注释 ----------> - pfg = PaperFileGroup() - - for index, fp in 
enumerate(file_manifest): - with open(fp, 'r', encoding='utf-8', errors='replace') as f: - file_content = f.read() - # 定义注释的正则表达式 - comment_pattern = r'(? - pfg.run_file_split(max_token_limit=1024) - n_split = len(pfg.sp_file_contents) - - # <-------- 抽取摘要 ----------> - # if language == 'en': - # abs_extract_inputs = f"Please write an abstract for this paper" - - # # 单线,获取文章meta信息 - # paper_meta_info = yield from request_gpt_model_in_new_thread_with_ui_alive( - # inputs=abs_extract_inputs, - # inputs_show_user=f"正在抽取摘要信息。", - # llm_kwargs=llm_kwargs, - # chatbot=chatbot, history=[], - # sys_prompt="Your job is to collect information from materials。", - # ) - - # <-------- 多线程润色开始 ----------> - if language == 'en->zh': - inputs_array = ["Below is a section from an English academic paper, translate it into Chinese, do not modify any latex command such as \section, \cite and equations:" + - f"\n\n{frag}" for frag in pfg.sp_file_contents] - inputs_show_user_array = [f"翻译 {f}" for f in pfg.sp_file_tag] - sys_prompt_array = ["You are a professional academic paper translator." for _ in range(n_split)] - elif language == 'zh->en': - inputs_array = [f"Below is a section from a Chinese academic paper, translate it into English, do not modify any latex command such as \section, \cite and equations:" + - f"\n\n{frag}" for frag in pfg.sp_file_contents] - inputs_show_user_array = [f"翻译 {f}" for f in pfg.sp_file_tag] - sys_prompt_array = ["You are a professional academic paper translator." for _ in range(n_split)] - - gpt_response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency( - inputs_array=inputs_array, - inputs_show_user_array=inputs_show_user_array, - llm_kwargs=llm_kwargs, - chatbot=chatbot, - history_array=[[""] for _ in range(n_split)], - sys_prompt_array=sys_prompt_array, - # max_workers=5, # OpenAI所允许的最大并行过载 - scroller_max_len = 80 - ) - - # <-------- 整理结果,退出 ----------> - create_report_file_name = time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) + f"-chatgpt.polish.md" - res = write_results_to_file(gpt_response_collection, file_name=create_report_file_name) - history = gpt_response_collection - chatbot.append((f"{fp}完成了吗?", res)) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - - - - -@CatchException -def Latex英译中(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - # 基本信息:功能、贡献者 - chatbot.append([ - "函数插件功能?", - "对整个Latex项目进行翻译。函数插件贡献者: Binary-Husky"]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # 尝试导入依赖,如果缺少依赖,则给出安装建议 - try: - import tiktoken - except: - report_execption(chatbot, history, - a=f"解析项目: {txt}", - b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade tiktoken```。") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - history = [] # 清空历史,以免输入溢出 - import glob, os - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)] - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - yield from 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='en->zh') - - - - - -@CatchException -def 
Latex中译英(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - # 基本信息:功能、贡献者 - chatbot.append([ - "函数插件功能?", - "对整个Latex项目进行翻译。函数插件贡献者: Binary-Husky"]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # 尝试导入依赖,如果缺少依赖,则给出安装建议 - try: - import tiktoken - except: - report_execption(chatbot, history, - a=f"解析项目: {txt}", - b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade tiktoken```。") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - history = [] # 清空历史,以免输入溢出 - import glob, os - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)] - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - yield from 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='zh->en') \ No newline at end of file diff --git a/spaces/zideliu/styledrop/timm/scheduler/scheduler_factory.py b/spaces/zideliu/styledrop/timm/scheduler/scheduler_factory.py deleted file mode 100644 index 9f7748f42280b846ab159fb18d7cda09d1890123..0000000000000000000000000000000000000000 --- a/spaces/zideliu/styledrop/timm/scheduler/scheduler_factory.py +++ /dev/null @@ -1,87 +0,0 @@ -""" Scheduler Factory -Hacked together by / Copyright 2020 Ross Wightman -""" -from .cosine_lr import CosineLRScheduler -from .tanh_lr import TanhLRScheduler -from .step_lr import StepLRScheduler -from .plateau_lr import PlateauLRScheduler - - -def create_scheduler(args, optimizer): - num_epochs = args.epochs - - if getattr(args, 'lr_noise', None) is not None: - lr_noise = getattr(args, 'lr_noise') - if isinstance(lr_noise, (list, tuple)): - noise_range = [n * num_epochs for n in lr_noise] - if len(noise_range) == 1: - noise_range = noise_range[0] - else: - noise_range = lr_noise * num_epochs - else: - noise_range = None - - lr_scheduler = None - if args.sched == 'cosine': - lr_scheduler = CosineLRScheduler( - optimizer, - t_initial=num_epochs, - t_mul=getattr(args, 'lr_cycle_mul', 1.), - lr_min=args.min_lr, - decay_rate=args.decay_rate, - warmup_lr_init=args.warmup_lr, - warmup_t=args.warmup_epochs, - cycle_limit=getattr(args, 'lr_cycle_limit', 1), - t_in_epochs=True, - noise_range_t=noise_range, - noise_pct=getattr(args, 'lr_noise_pct', 0.67), - noise_std=getattr(args, 'lr_noise_std', 1.), - noise_seed=getattr(args, 'seed', 42), - ) - num_epochs = lr_scheduler.get_cycle_length() + args.cooldown_epochs - elif args.sched == 'tanh': - lr_scheduler = TanhLRScheduler( - optimizer, - t_initial=num_epochs, - t_mul=getattr(args, 'lr_cycle_mul', 1.), - lr_min=args.min_lr, - warmup_lr_init=args.warmup_lr, - warmup_t=args.warmup_epochs, - cycle_limit=getattr(args, 'lr_cycle_limit', 1), - t_in_epochs=True, - noise_range_t=noise_range, - noise_pct=getattr(args, 'lr_noise_pct', 0.67), - noise_std=getattr(args, 'lr_noise_std', 1.), - noise_seed=getattr(args, 'seed', 42), - ) - num_epochs = lr_scheduler.get_cycle_length() + args.cooldown_epochs - elif args.sched == 'step': - lr_scheduler = StepLRScheduler( - optimizer, - decay_t=args.decay_epochs, - decay_rate=args.decay_rate, - warmup_lr_init=args.warmup_lr, - warmup_t=args.warmup_epochs, - 
noise_range_t=noise_range, - noise_pct=getattr(args, 'lr_noise_pct', 0.67), - noise_std=getattr(args, 'lr_noise_std', 1.), - noise_seed=getattr(args, 'seed', 42), - ) - elif args.sched == 'plateau': - mode = 'min' if 'loss' in getattr(args, 'eval_metric', '') else 'max' - lr_scheduler = PlateauLRScheduler( - optimizer, - decay_rate=args.decay_rate, - patience_t=args.patience_epochs, - lr_min=args.min_lr, - mode=mode, - warmup_lr_init=args.warmup_lr, - warmup_t=args.warmup_epochs, - cooldown_t=0, - noise_range_t=noise_range, - noise_pct=getattr(args, 'lr_noise_pct', 0.67), - noise_std=getattr(args, 'lr_noise_std', 1.), - noise_seed=getattr(args, 'seed', 42), - ) - - return lr_scheduler, num_epochs diff --git a/spaces/zomehwh/sovits-goldship/utils.py b/spaces/zomehwh/sovits-goldship/utils.py deleted file mode 100644 index e19cac39c57f213bbf6f1435ab48fe7948a1b17b..0000000000000000000000000000000000000000 --- a/spaces/zomehwh/sovits-goldship/utils.py +++ /dev/null @@ -1,501 +0,0 @@ -import os -import glob -import re -import sys -import argparse -import logging -import json -import subprocess -import random - -import librosa -import numpy as np -from scipy.io.wavfile import read -import torch -from torch.nn import functional as F -from modules.commons import sequence_mask -from hubert import hubert_model -MATPLOTLIB_FLAG = False - -logging.basicConfig(stream=sys.stdout, level=logging.DEBUG) -logger = logging - -f0_bin = 256 -f0_max = 1100.0 -f0_min = 50.0 -f0_mel_min = 1127 * np.log(1 + f0_min / 700) -f0_mel_max = 1127 * np.log(1 + f0_max / 700) - - -# def normalize_f0(f0, random_scale=True): -# f0_norm = f0.clone() # create a copy of the input Tensor -# batch_size, _, frame_length = f0_norm.shape -# for i in range(batch_size): -# means = torch.mean(f0_norm[i, 0, :]) -# if random_scale: -# factor = random.uniform(0.8, 1.2) -# else: -# factor = 1 -# f0_norm[i, 0, :] = (f0_norm[i, 0, :] - means) * factor -# return f0_norm -# def normalize_f0(f0, random_scale=True): -# means = torch.mean(f0[:, 0, :], dim=1, keepdim=True) -# if random_scale: -# factor = torch.Tensor(f0.shape[0],1).uniform_(0.8, 1.2).to(f0.device) -# else: -# factor = torch.ones(f0.shape[0], 1, 1).to(f0.device) -# f0_norm = (f0 - means.unsqueeze(-1)) * factor.unsqueeze(-1) -# return f0_norm -def normalize_f0(f0, x_mask, uv, random_scale=True): - # calculate means based on x_mask - uv_sum = torch.sum(uv, dim=1, keepdim=True) - uv_sum[uv_sum == 0] = 9999 - means = torch.sum(f0[:, 0, :] * uv, dim=1, keepdim=True) / uv_sum - - if random_scale: - factor = torch.Tensor(f0.shape[0], 1).uniform_(0.8, 1.2).to(f0.device) - else: - factor = torch.ones(f0.shape[0], 1).to(f0.device) - # normalize f0 based on means and factor - f0_norm = (f0 - means.unsqueeze(-1)) * factor.unsqueeze(-1) - if torch.isnan(f0_norm).any(): - exit(0) - return f0_norm * x_mask - - -def plot_data_to_numpy(x, y): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10, 2)) - plt.plot(x) - plt.plot(y) - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - - -def interpolate_f0(f0): - ''' - 对F0进行插值处理 - ''' - - data = np.reshape(f0, (f0.size, 1)) - - vuv_vector = 
np.zeros((data.size, 1), dtype=np.float32) - vuv_vector[data > 0.0] = 1.0 - vuv_vector[data <= 0.0] = 0.0 - - ip_data = data - - frame_number = data.size - last_value = 0.0 - for i in range(frame_number): - if data[i] <= 0.0: - j = i + 1 - for j in range(i + 1, frame_number): - if data[j] > 0.0: - break - if j < frame_number - 1: - if last_value > 0.0: - step = (data[j] - data[i - 1]) / float(j - i) - for k in range(i, j): - ip_data[k] = data[i - 1] + step * (k - i + 1) - else: - for k in range(i, j): - ip_data[k] = data[j] - else: - for k in range(i, frame_number): - ip_data[k] = last_value - else: - ip_data[i] = data[i] - last_value = data[i] - - return ip_data[:,0], vuv_vector[:,0] - - -def compute_f0_parselmouth(wav_numpy, p_len=None, sampling_rate=44100, hop_length=512): - import parselmouth - x = wav_numpy - if p_len is None: - p_len = x.shape[0]//hop_length - else: - assert abs(p_len-x.shape[0]//hop_length) < 4, "pad length error" - time_step = hop_length / sampling_rate * 1000 - f0_min = 50 - f0_max = 1100 - f0 = parselmouth.Sound(x, sampling_rate).to_pitch_ac( - time_step=time_step / 1000, voicing_threshold=0.6, - pitch_floor=f0_min, pitch_ceiling=f0_max).selected_array['frequency'] - - pad_size=(p_len - len(f0) + 1) // 2 - if(pad_size>0 or p_len - len(f0) - pad_size>0): - f0 = np.pad(f0,[[pad_size,p_len - len(f0) - pad_size]], mode='constant') - return f0 - -def resize_f0(x, target_len): - source = np.array(x) - source[source<0.001] = np.nan - target = np.interp(np.arange(0, len(source)*target_len, len(source))/ target_len, np.arange(0, len(source)), source) - res = np.nan_to_num(target) - return res - -def compute_f0_dio(wav_numpy, p_len=None, sampling_rate=44100, hop_length=512): - import pyworld - if p_len is None: - p_len = wav_numpy.shape[0]//hop_length - f0, t = pyworld.dio( - wav_numpy.astype(np.double), - fs=sampling_rate, - f0_ceil=800, - frame_period=1000 * hop_length / sampling_rate, - ) - f0 = pyworld.stonemask(wav_numpy.astype(np.double), f0, t, sampling_rate) - for index, pitch in enumerate(f0): - f0[index] = round(pitch, 1) - return resize_f0(f0, p_len) - -def f0_to_coarse(f0): - is_torch = isinstance(f0, torch.Tensor) - f0_mel = 1127 * (1 + f0 / 700).log() if is_torch else 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * (f0_bin - 2) / (f0_mel_max - f0_mel_min) + 1 - - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > f0_bin - 1] = f0_bin - 1 - f0_coarse = (f0_mel + 0.5).long() if is_torch else np.rint(f0_mel).astype(np.int) - assert f0_coarse.max() <= 255 and f0_coarse.min() >= 1, (f0_coarse.max(), f0_coarse.min()) - return f0_coarse - - -def get_hubert_model(): - vec_path = "hubert/checkpoint_best_legacy_500.pt" - print("load model(s) from {}".format(vec_path)) - from fairseq import checkpoint_utils - models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task( - [vec_path], - suffix="", - ) - model = models[0] - model.eval() - return model - -def get_hubert_content(hmodel, wav_16k_tensor): - feats = wav_16k_tensor - if feats.dim() == 2: # double channels - feats = feats.mean(-1) - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - padding_mask = torch.BoolTensor(feats.shape).fill_(False) - inputs = { - "source": feats.to(wav_16k_tensor.device), - "padding_mask": padding_mask.to(wav_16k_tensor.device), - "output_layer": 9, # layer 9 - } - with torch.no_grad(): - logits = hmodel.extract_features(**inputs) - feats = hmodel.final_proj(logits[0]) - return feats.transpose(1, 2) - - -def get_content(cmodel, y): 
- with torch.no_grad(): - c = cmodel.extract_features(y.squeeze(1))[0] - c = c.transpose(1, 2) - return c - - - -def load_checkpoint(checkpoint_path, model, optimizer=None, skip_optimizer=False): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location='cpu') - iteration = checkpoint_dict['iteration'] - learning_rate = checkpoint_dict['learning_rate'] - if optimizer is not None and not skip_optimizer and checkpoint_dict['optimizer'] is not None: - optimizer.load_state_dict(checkpoint_dict['optimizer']) - saved_state_dict = checkpoint_dict['model'] - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict = {} - for k, v in state_dict.items(): - try: - # assert "dec" in k or "disc" in k - # print("load", k) - new_state_dict[k] = saved_state_dict[k] - assert saved_state_dict[k].shape == v.shape, (saved_state_dict[k].shape, v.shape) - except: - print("error, %s is not in the checkpoint" % k) - logger.info("%s is not in the checkpoint" % k) - new_state_dict[k] = v - if hasattr(model, 'module'): - model.module.load_state_dict(new_state_dict) - else: - model.load_state_dict(new_state_dict) - logger.info("Loaded checkpoint '{}' (iteration {})".format( - checkpoint_path, iteration)) - return model, optimizer, learning_rate, iteration - - -def save_checkpoint(model, optimizer, learning_rate, iteration, checkpoint_path): - logger.info("Saving model and optimizer state at iteration {} to {}".format( - iteration, checkpoint_path)) - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - torch.save({'model': state_dict, - 'iteration': iteration, - 'optimizer': optimizer.state_dict(), - 'learning_rate': learning_rate}, checkpoint_path) - -def clean_checkpoints(path_to_models='logs/44k/', n_ckpts_to_keep=2, sort_by_time=True): - """Freeing up space by deleting saved ckpts - - Arguments: - path_to_models -- Path to the model directory - n_ckpts_to_keep -- Number of ckpts to keep, excluding G_0.pth and D_0.pth - sort_by_time -- True -> chronologically delete ckpts - False -> lexicographically delete ckpts - """ - ckpts_files = [f for f in os.listdir(path_to_models) if os.path.isfile(os.path.join(path_to_models, f))] - name_key = (lambda _f: int(re.compile('._(\d+)\.pth').match(_f).group(1))) - time_key = (lambda _f: os.path.getmtime(os.path.join(path_to_models, _f))) - sort_key = time_key if sort_by_time else name_key - x_sorted = lambda _x: sorted([f for f in ckpts_files if f.startswith(_x) and not f.endswith('_0.pth')], key=sort_key) - to_del = [os.path.join(path_to_models, fn) for fn in - (x_sorted('G')[:-n_ckpts_to_keep] + x_sorted('D')[:-n_ckpts_to_keep])] - del_info = lambda fn: logger.info(f".. 
Free up space by deleting ckpt {fn}") - del_routine = lambda x: [os.remove(x), del_info(x)] - rs = [del_routine(fn) for fn in to_del] - -def summarize(writer, global_step, scalars={}, histograms={}, images={}, audios={}, audio_sampling_rate=22050): - for k, v in scalars.items(): - writer.add_scalar(k, v, global_step) - for k, v in histograms.items(): - writer.add_histogram(k, v, global_step) - for k, v in images.items(): - writer.add_image(k, v, global_step, dataformats='HWC') - for k, v in audios.items(): - writer.add_audio(k, v, global_step, audio_sampling_rate) - - -def latest_checkpoint_path(dir_path, regex="G_*.pth"): - f_list = glob.glob(os.path.join(dir_path, regex)) - f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f)))) - x = f_list[-1] - print(x) - return x - - -def plot_spectrogram_to_numpy(spectrogram): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10,2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - plt.xlabel("Frames") - plt.ylabel("Channels") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def plot_alignment_to_numpy(alignment, info=None): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(6, 4)) - im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower', - interpolation='none') - fig.colorbar(im, ax=ax) - xlabel = 'Decoder timestep' - if info is not None: - xlabel += '\n\n' + info - plt.xlabel(xlabel) - plt.ylabel('Encoder timestep') - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def load_wav_to_torch(full_path): - sampling_rate, data = read(full_path) - return torch.FloatTensor(data.astype(np.float32)), sampling_rate - - -def load_filepaths_and_text(filename, split="|"): - with open(filename, encoding='utf-8') as f: - filepaths_and_text = [line.strip().split(split) for line in f] - return filepaths_and_text - - -def get_hparams(init=True): - parser = argparse.ArgumentParser() - parser.add_argument('-c', '--config', type=str, default="./configs/base.json", - help='JSON file for configuration') - parser.add_argument('-m', '--model', type=str, required=True, - help='Model name') - - args = parser.parse_args() - model_dir = os.path.join("./logs", args.model) - - if not os.path.exists(model_dir): - os.makedirs(model_dir) - - config_path = args.config - config_save_path = os.path.join(model_dir, "config.json") - if init: - with open(config_path, "r") as f: - data = f.read() - with open(config_save_path, "w") as f: - f.write(data) - else: - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_dir(model_dir): - config_save_path = 
os.path.join(model_dir, "config.json") - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams =HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams =HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warn("{} is not a git repository, therefore hash value comparison will be ignored.".format( - source_dir - )) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warn("git hash values are different. {}(saved) != {}(current)".format( - saved_hash[:8], cur_hash[:8])) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.DEBUG) - - formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.DEBUG) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -def repeat_expand_2d(content, target_len): - # content : [h, t] - - src_len = content.shape[-1] - target = torch.zeros([content.shape[0], target_len], dtype=torch.float).to(content.device) - temp = torch.arange(src_len+1) * target_len / src_len - current_pos = 0 - for i in range(target_len): - if i < temp[current_pos+1]: - target[:, i] = content[:, current_pos] - else: - current_pos += 1 - target[:, i] = content[:, current_pos] - - return target - - -class HParams(): - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() -
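The deleted `utils.py` above ends with the `HParams` helper, which recursively wraps a nested configuration dict so that JSON configs can be read with attribute access (this is what `get_hparams_from_file` returns). A minimal usage sketch, assuming `HParams` is imported from that same `utils.py`; the config keys below are made up for illustration and are not taken from the repository:

    from utils import HParams

    # Hypothetical nested config, mirroring the shape of the JSON files
    # that get_hparams_from_file() above would parse with json.loads().
    config = {
        "train": {"segment_size": 8192, "learning_rate": 2e-4},
        "data": {"sampling_rate": 22050, "hop_length": 256},
    }

    hps = HParams(**config)                  # nested dicts become nested HParams
    assert hps.train.segment_size == 8192    # attribute access via __setitem__/setattr
    assert hps["data"].hop_length == 256     # item access via __getitem__/getattr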