If you are a fan of watching videos on YouTube, you might have wished for some features that are not available in the official app. For example, you might want to play videos in the background while doing other tasks, download videos for offline viewing, skip ads, or watch premium content for free. Well, there is a way to do all that and more with YouTube APK4all.
-YouTube APK4all is a modified version of the official YouTube app that offers many additional features and options that enhance your viewing experience. It is not available on the Google Play Store, but you can download it from various websites that host APK files. APK stands for Android Package Kit, which is a file format used to distribute and install applications on Android devices.
-In this article, we will show you how to download and install YouTube APK4all, what its features are, its pros and cons, the safety and legality issues involved, and more. By the end of this article, you will be able to decide whether YouTube APK4all is worth trying.
- Downloading and installing YouTube APK4all is not very difficult, but it requires some steps that are different from installing apps from the Google Play Store. Here is a step-by-step guide with screenshots and links:
-
-- First, you need to enable the installation of apps from unknown sources on your device. To do this, go to Settings > Security > Unknown sources and toggle it on. You might see a warning message that says installing apps from unknown sources can harm your device. Tap OK to proceed.
-
-- Next, you need to download the YouTube APK4all file from a reliable website. Many websites offer this file, but some of them might contain malware or viruses that can harm your device or data. We recommend using [Apk4all.io], which is a trusted source for downloading thousands of MOD APKs, Premium APKs, and MOD games. You can also find other useful information about YouTube APK4all on this website, such as its version, size, developer, rating, reviews, and screenshots. Once the file has downloaded, it is also worth verifying it, as shown in the sketch after these steps.
-
-- Once you have downloaded the file, locate it in your device's file manager and tap on it to start the installation process. You might see a pop-up window that asks you to confirm the installation. Tap Install to continue.
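As a quick safety check after the download step, you can compare the file against the checksum the download site publishes, when it provides one. The sketch below is a minimal Python example; the file name and the expected hash are placeholders, not values taken from Apk4all.io.

```python
# Minimal sketch: verify a downloaded APK against a published SHA-256 checksum.
# Both APK_PATH and EXPECTED_SHA256 are placeholders -- substitute the file you
# actually downloaded and the checksum listed on the download page, if any.
import hashlib
from pathlib import Path

APK_PATH = Path("youtube-apk4all.apk")                 # hypothetical file name
EXPECTED_SHA256 = "paste-the-published-checksum-here"  # placeholder value

digest = hashlib.sha256(APK_PATH.read_bytes()).hexdigest()
if digest == EXPECTED_SHA256.lower():
    print("Checksum matches the published value.")
else:
    print(f"Checksum mismatch ({digest}); do not install this file.")
```

If the site does not publish a checksum, scanning the downloaded file with an antivirus app before installing it is the next best option.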
-
- What are the features of YouTube APK4all?
-YouTube APK4all offers many features that are not available in the official YouTube app. Here is a comparison table that shows the main differences between the two apps:
| Feature | YouTube APK4all | Official YouTube app |
| --- | --- | --- |
| Background play | Yes | No |
| Download videos | Yes | No |
| Ad-free | Yes | No |
| Premium content | Yes | No |
| Customization | Yes | No |
| Resolution | Up to 4K | Up to 1080p |
| Speed | Up to 2x | Up to 2x |
| Theme | Dark, black, or light | Dark or light |

 Background play and download
-One of the most popular features of YouTube APK4all is the ability to play videos in the background while doing other tasks on your device. This means you can listen to music, podcasts, audiobooks, or any other audio content without having to keep the app open. You can also control the playback from the notification bar or the lock screen.
-To enable background play, you need to tap on the three-dot menu icon on the top right corner of any video and select Background. You can also enable background play for all videos by going to Settings > Vanced Settings > Layout Settings > Background Playback and choosing Always.
-
-Another feature that YouTube APK4all offers is the ability to download videos for offline viewing. This means you can save your favorite videos on your device and watch them later without an internet connection. You can also choose the quality and format of the downloaded videos.
-To download videos, you need to tap on the download icon below any video and select the quality and format you want. You can also enable download for all videos by going to Settings > Vanced Settings > Download Settings and choosing Always.
-
Ad-free and premium content
-Another feature that YouTube APK4all offers is the ability to enjoy YouTube without ads and access exclusive videos. This means you can watch videos without interruptions, distractions, or annoyances. You can also watch videos that are only available for YouTube Premium subscribers, such as YouTube Originals, documentaries, movies, shows, etc.
-To enable ad-free and premium content, you need to go to Settings > Vanced Settings > Ad Settings and toggle on the options you want. You can also choose to block specific types of ads, such as banners, overlays, sponsorships, etc.
-
- Customization and personalization
-Another feature that YouTube APK4all offers is the ability to customize and personalize the app according to your preferences. This means you can change the theme, layout, speed, resolution, and more of the app. You can also enable or disable some features, such as comments, suggestions, notifications, etc.
-To customize and personalize the app, you need to go to Settings > Vanced Settings and explore the various options available. You can also access some of these options from the three-dot menu icon on the top right corner of any video.
-
- What are the pros and cons of YouTube APK4all?
-YouTube APK4all is not a perfect app. It has its advantages and disadvantages that you should consider before using it. Here is a balanced analysis of the pros and cons of YouTube APK4all:
- Pros
-
-- You can play videos in the background while doing other tasks on your device.
-- You can download videos for offline viewing in various quality and format options.
-- You can enjoy YouTube without ads and access exclusive videos for free.
-- You can customize and personalize the app according to your preferences.
-- You can watch videos in up to 4K resolution and up to 2x speed.
-- You can choose from different themes, such as dark, black, or light.
-
- Cons
-
-- You might encounter some bugs or glitches while using the app.
-- You might not receive updates or new features from the official YouTube app.
-- You might violate YouTube's terms of service and policies by using the app.
-- You might expose your device and data to malware or viruses by downloading the app from unknown sources.
-- You might face legal issues or penalties if YouTube detects your use of the app.
-- You might lose some features or functionality of the official YouTube app, such as live chat, captions, etc.
-
Is YouTube APK4all safe and legal?
-One of the most important questions that you might have before using YouTube APK4all is whether it is safe and legal. The answer is not very straightforward, as it depends on various factors, such as the source of the app, the country you live in, the content you watch, etc. Here is a discussion of the safety and legality issues of using YouTube APK4all:
- Safety
-The safety of YouTube APK4all depends largely on the source of the app. As we mentioned earlier, YouTube APK4all is not available on the Google Play Store, which means you have to download it from other websites that host APK files. However, not all of these websites are trustworthy or secure. Some of them might contain malware or viruses that can harm your device or data. Therefore, you should be careful and cautious when downloading YouTube APK4all from unknown sources.
-One way to protect your device and data from malware or viruses is to use a reliable antivirus software that can scan and remove any potential threats. Another way is to use a VPN service that can encrypt your internet traffic and hide your IP address. This can prevent hackers or third parties from accessing your online activities or personal information.
-Additionally, you should also be aware of the permissions that YouTube APK4all requires to function properly. Some of these permissions might seem unnecessary or intrusive, such as access to your camera, microphone, contacts, location, etc. You should review these permissions carefully and decide whether you want to grant them or not. You can also revoke or modify these permissions later by going to Settings > Apps > YouTube APK4all > Permissions.
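If you are comfortable with a command line, the same permission review can also be done from a computer over ADB. The sketch below is only an illustration: it assumes ADB is installed and USB debugging is enabled, and the package name com.example.apk4all is a made-up placeholder, since the mod's real package id is not documented here. Note that pm revoke only applies to runtime permissions on Android 6.0 and later.

```python
# Sketch: inspect and revoke an app's runtime permissions over ADB.
# Assumes adb is on PATH and USB debugging is enabled on the device.
import subprocess

PACKAGE = "com.example.apk4all"  # placeholder -- replace with the app's real package name

def adb(*args: str) -> str:
    """Run an adb command and return its standard output as text."""
    result = subprocess.run(["adb", *args], capture_output=True, text=True, check=True)
    return result.stdout

# Dump the package record, which includes requested and granted permissions.
print(adb("shell", "dumpsys", "package", PACKAGE))

# Revoke a runtime permission you would rather not grant (Android 6.0+).
adb("shell", "pm", "revoke", PACKAGE, "android.permission.CAMERA")
```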
- Legality
-The legality of YouTube APK4all depends largely on the country you live in and the content you watch. As we mentioned earlier, YouTube APK4all is a modified version of the official YouTube app that offers many features that are not authorized or approved by YouTube. This means that by using YouTube APK4all, you might be violating YouTube's terms of service and policies, which state that:
-
-"You agree not to access Content through any technology or means other than the video playback pages of the Service itself, the Embeddable Player, or other explicitly authorized means YouTube may designate."
-"You agree not to use the Service for any of the following commercial uses unless you obtain YouTube's prior written approval: [...] the sale of access to the Service."
-"You agree not to circumvent, disable or otherwise interfere with any security-related features of the Service or features that prevent or restrict use or copying of any Content or enforce limitations on use of the Service or the Content therein."
-These terms of service and policies apply to all users of YouTube, regardless of their location. However, different countries have different laws and regulations regarding online streaming, downloading, and sharing of content. Some countries might have more strict or lenient rules than others. Therefore, you should be aware of the legal implications and consequences of using YouTube APK4all in your country.
-One way to avoid violating YouTube's terms of service and policies is to use YouTube APK4all only for personal and non-commercial purposes. Another way is to use a VPN service that can change your IP address and location. This can prevent YouTube from detecting your use of YouTube APK4all and taking any action against you.
- Conclusion
-Summary
-In this article, we have explained what YouTube APK4all is, how to download and install it, what its features are, its pros and cons, the safety and legality issues involved, and more. We have also provided screenshots and links to help you understand it better.
-YouTube APK4all is a modified version of the official YouTube app that offers many additional features and options that enhance your viewing experience. Some of these features are background play, download videos, ad-free, premium content, customization, resolution, speed, theme, etc.
-However, YouTube APK4all also has some drawbacks and risks that you should consider before using it. Some of these are bugs, glitches, no updates, violation of terms of service and policies, malware, viruses, legal issues, penalties, loss of features or functionality, etc.
- Call to action
-If you are interested in trying YouTube APK4all and enjoying its features for free, you can download it from [Apk4all.io], which is a trusted source for downloading thousands of MOD APKs, Premium APKs and MOD games. You can also find other useful information about YouTube APK4all on this website, such as its version, size, developer, rating, reviews, screenshots, etc.
-However, if you are concerned about the safety and legality of YouTube APK4all, you might want to stick to the official YouTube app and respect its terms of service and policies. You can also use other alternatives that are more secure and legal, such as YouTube Music, YouTube Kids, YouTube TV, etc.
-Whatever you decide, we hope you have enjoyed this article and learned something new. If you have any questions, comments, or feedback, please feel free to share them with us in the comment section below. We would love to hear from you and help you out.
-Thank you for reading and happy watching!
- FAQs
-
-- What is YouTube APK4all?
-YouTube APK4all is a modified version of the official YouTube app that offers many additional features and options that enhance your viewing experience.
-- How to download and install YouTube APK4all?
-You can download and install YouTube APK4all by following these steps: 1) Enable the installation of apps from unknown sources on your device. 2) Download the YouTube APK4all file from [Apk4all.io]. 3) Locate the file in your device's file manager and tap on it to start the installation process. 4) Open YouTube APK4all and enjoy its features.
-- What are the features of YouTube APK4all?
-Some of the features of YouTube APK4all are background play, download videos, ad-free, premium content, customization, resolution, speed, theme, etc.
-- What are the pros and cons of YouTube APK4all?
-Some of the pros of YouTube APK4all are playing videos in the background, downloading videos for offline viewing, enjoying YouTube without ads and accessing exclusive videos for free, customizing and personalizing the app according to your preferences, watching videos in up to 4K resolution and up to 2x speed, choosing from different themes, etc. Some of the cons of YouTube APK4all are encountering bugs or glitches, not receiving updates or new features from the official YouTube app, violating YouTube's terms of service and policies, exposing your device and data to malware or viruses, facing legal issues or penalties, losing some features or functionality of the official YouTube app, etc.
-- Is YouTube APK4all safe and legal?
-The safety and legality of YouTube APK4all depend largely on the source of the app, the country you live in, and the content you watch. You should be careful and cautious when downloading YouTube APK4all from unknown sources, as some of them might contain malware or viruses that can harm your device or data. You should also be aware of the legal implications and consequences of using YouTube APK4all in your country, as you might be violating YouTube's terms of service and policies by using the app.
-
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Bus Simulator Ultimate Mod Apk 1.4.9 - How to Get Unlimited Money and Free Shopping.md b/spaces/congsaPfin/Manga-OCR/logs/Bus Simulator Ultimate Mod Apk 1.4.9 - How to Get Unlimited Money and Free Shopping.md
deleted file mode 100644
index eec567e3fa6ac5c9dcb4507a0d19505c657c5516..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Bus Simulator Ultimate Mod Apk 1.4.9 - How to Get Unlimited Money and Free Shopping.md
+++ /dev/null
@@ -1,78 +0,0 @@
-
-Bus Simulator Ultimate Unlimited Money 1.4.9 Mod Apk: How to Download and Play
-If you are a fan of bus simulation games, you might have heard of Bus Simulator Ultimate, a realistic and immersive game that lets you create your own bus company and drive across different countries. But what if you want to play the game with unlimited money, without having to worry about expenses and profits? Well, there is a way to do that, and it involves downloading and installing a mod apk version of the game. In this article, we will show you how to download and play Bus Simulator Ultimate unlimited money 1.4.9 mod apk, and what benefits it can bring to your gaming experience.
-Introduction
-What is Bus Simulator Ultimate?
-Bus Simulator Ultimate is a popular bus simulation game developed by Zuuks Games, a Turkish game studio. The game was released in 2019 for Android and iOS devices, and has since gained millions of downloads and positive reviews from players. The game features realistic graphics, physics, sounds, and weather effects, as well as a variety of buses, routes, cities, and countries to choose from. You can also customize your bus with different skins, accessories, and logos.
-What is the mod apk version?
-A mod apk version is a modified version of an original app or game that has been altered by third-party developers or hackers to provide some extra features or advantages that are not available in the official version. For example, a mod apk version of Bus Simulator Ultimate can give you unlimited money, unlock all buses and routes, remove ads, and more. However, using a mod apk version also comes with some risks, such as malware infection, account ban, or legal issues.
-Why would you want to play with unlimited money?
-Playing Bus Simulator Ultimate with unlimited money can make the game more fun and enjoyable, as you can buy any bus you want, upgrade your company, hire more drivers, expand your routes, and more. You can also experiment with different settings and options without worrying about losing money or going bankrupt. You can also skip the grind and progress faster in the game.
-How to download and install the mod apk
-Step 1: Find a reliable source
-The first step to download and install the mod apk version of Bus Simulator Ultimate is to find a reliable source that provides the latest and working version of the file. You can search online for websites or forums that offer mod apk downloads, but be careful of fake or malicious links that can harm your device or steal your data. You can also check the reviews and ratings of other users before downloading anything.
-Step 2: Enable unknown sources on your device
-The next step is to enable unknown sources on your device, which will allow you to install apps or games from sources other than the official app store. To do this, go to your device settings, then security or privacy, then toggle on the option for unknown sources. You might also need to grant permission for your browser or file manager to install apps from unknown sources.
-Step 3: Download and install the mod apk file
-The third step is to download and install the mod apk file from the source you have chosen. You can either use your browser or a file manager app to locate and open the file. Then follow the instructions on the screen to install the mod apk. You might need to overwrite or uninstall the original version of the game if you have it already installed.
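As an alternative to this step, the APK can also be sideloaded from a computer over ADB rather than through the phone's file manager. This is only a sketch under the assumption that ADB is installed and USB debugging is enabled; the file name is a placeholder for whatever the download site actually gives you.

```python
# Sketch: sideload a downloaded APK over ADB instead of the on-device file manager.
import subprocess

APK_FILE = "bus-simulator-ultimate-mod.apk"  # placeholder file name

# -r tells Android to replace an already-installed copy. If the mod is signed
# differently from the official game, the install is still rejected and the
# original has to be uninstalled first, as noted in the step above.
subprocess.run(["adb", "install", "-r", APK_FILE], check=True)
```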
-Step 4: Launch the game and enjoy
-The final step is to launch the game and enjoy playing with unlimited money. You should see a mod menu or icon on the screen that lets you access the mod features and settings. You can also check your money balance and see if it has increased to a huge amount. You can now buy and upgrade anything you want in the game.
-How to play the game with unlimited money
-Create your own bus company
-One of the main features of Bus Simulator Ultimate is that you can create your own bus company and manage it. You can choose your company name, logo, color, and headquarters location. You can also hire drivers, assign them buses and routes, and monitor their performance and feedback. With unlimited money, you can hire as many drivers as you want and pay them well.
-Choose your bus and route
-Another feature of the game is that you can choose from a variety of buses and routes to drive. You can select from different bus models, such as double-decker, articulated, school, or coach buses. You can also customize your bus with different skins, accessories, and logos. You can also choose from different routes that span across different countries, such as Germany, France, Italy, Turkey, USA, and more. You can also create your own routes and share them with other players. With unlimited money, you can unlock all buses and routes without having to complete missions or earn coins.
-Drive safely and earn money
-The core gameplay of Bus Simulator Ultimate is driving your bus along the route and picking up and dropping off passengers. You have to follow the traffic rules, signals, signs, and speed limits. You also have to deal with realistic situations, such as traffic jams, accidents, weather changes, road works, and more. You have to drive safely and smoothly to avoid damaging your bus or upsetting your passengers. You also have to interact with your passengers, such as greeting them, selling tickets, answering questions, and more. You can earn money by completing your route successfully and satisfying your passengers. With unlimited money, you can earn even more money by increasing your ticket prices or driving longer routes.
-Upgrade your bus and company
-The last feature of the game is that you can upgrade your bus and company with the money you earn. You can improve your bus performance by upgrading its engine, brakes, suspension, tires, and more. You can also enhance your bus appearance by adding new skins, accessories, logos, and more. You can also upgrade your company by expanding your headquarters, buying new buses, hiring more drivers, opening new routes, and more. With unlimited money, you can upgrade everything to the max level without having to wait or save up.
-Conclusion
-Summary of the main points
-In conclusion, Bus Simulator Ultimate is a fun and realistic bus simulation game that lets you create your own bus company and drive across different countries. If you want to play the game with unlimited money, you can download and install a mod apk version of the game that gives you access to all buses, routes, upgrades, and more. To do this, you have to find a reliable source for the mod apk file, enable unknown sources on your device, download and install the mod apk file, and launch the game and enjoy. Playing with unlimited money can make the game more fun and enjoyable, as you can buy and upgrade anything you want, and experiment with different settings and options.
-Call to action
-If you are interested in trying out Bus Simulator Ultimate unlimited money 1.4.9 mod apk, you can follow the steps we have outlined in this article and download the file from a reliable source. However, we also advise you to be careful of the risks involved in using a mod apk version, such as malware infection, account ban, or legal issues. You should also respect the original developers of the game and support them by buying the official version if you like it. Bus Simulator Ultimate is a great game that deserves your attention and appreciation.
-FAQs
-What is Bus Simulator Ultimate?
-Bus Simulator Ultimate is a realistic and immersive bus simulation game that lets you create your own bus company and drive across different countries.
-What is the mod apk version of Bus Simulator Ultimate?
-The mod apk version of Bus Simulator Ultimate is a modified version of the original game that gives you unlimited money and other advantages that are not available in the official version.
-How to download and install the mod apk version of Bus Simulator Ultimate?
-To download and install the mod apk version of Bus Simulator Ultimate, you have to find a reliable source for the file, enable unknown sources on your device, download and install the file, and launch the game.
-How to play the game with unlimited money?
-To play the game with unlimited money, you can buy and upgrade any bus, route, or company feature you want, without having to worry about expenses or profits. You can also increase your ticket prices or drive longer routes to earn more money.
-What are the benefits and risks of playing with unlimited money?
-The benefits of playing with unlimited money are that you can make the game more fun and enjoyable, as you can experiment with different settings and options, and progress faster in the game. The risks of playing with unlimited money are that you might get infected by malware, banned by the game server, or sued by the game developer.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Chicken Gun 3.2.05 - The Ultimate Chicken Shooting Game for Android Devices.md b/spaces/congsaPfin/Manga-OCR/logs/Chicken Gun 3.2.05 - The Ultimate Chicken Shooting Game for Android Devices.md
deleted file mode 100644
index 069934dbdb76127957348f055c8fe43f3cb1ff37..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Chicken Gun 3.2.05 - The Ultimate Chicken Shooting Game for Android Devices.md
+++ /dev/null
@@ -1,197 +0,0 @@
-
-Chicken Gun APK 3.2.05: A Hilarious and Action-Packed Shooter Game
-If you are looking for a fun and quirky shooter game that will make you laugh and challenge you at the same time, then you should try Chicken Gun APK 3.2.05. This game is about chickens with guns who shoot and fight with each other in various modes and maps. You can customize your chicken with different weapons, beaks, sneakers, caps, and more. You can also throw explosive eggs and cause mayhem in the battlefield.
-In this article, we will tell you everything you need to know about Chicken Gun APK 3.2.05, including what it is, how to download and install it on your device, how to play it on your PC or Mac, how to master it and have more fun, and what are the pros and cons of playing it.
- What is Chicken Gun APK 3.2.05?
-Chicken Gun APK 3.2.05 is the latest version of Chicken Gun, a popular action game developed by ChaloApps for Android devices. The game has over 50 million downloads and a 4.4-star rating on Google Play Store.
- The concept and features of the game
-The concept of Chicken Gun is simple but hilarious: armed chickens shoot and fight with each other in two modes: 5 vs 5 teams or against all. You can choose from different chicken characters with different abilities and personalities.
-The game also has many features that make it more enjoyable and engaging, such as:
-
-- You can cool your rooster with various weapons, beaks, sneakers, caps, and other accessories.
-- You can throw explosive eggs that can cause massive damage to your enemies.
-- You can chat with other players in the lobby or during the match.
-- You can join clans or create your own clan with your friends.
-- You can play on different maps with different themes and obstacles.
-- You can earn coins by playing matches or watching ads.
-- You can use coins to buy new items or upgrade your existing ones.
-
- The latest version and updates of the game
-The latest version of Chicken Gun APK is 3.2.05, which was released on February 11, 2023. This version includes the following new features and improvements:
-
-- You can now play on a new map called Factory, which has a lot of pipes, crates, and machines to hide behind or use as cover.
-- You can now use a new weapon called Flamethrower, which can set your enemies on fire and deal continuous damage over time.
-- You can now buy a new accessory called Jetpack, which can let you fly in the air and dodge bullets or surprise your enemies from above.
-- You can now see your kill streaks and multi-kills on the screen, which can boost your confidence and motivation.
-- You can now enjoy better graphics, performance, and stability, as well as bug fixes and optimizations.
-
- How to download and install Chicken Gun APK 3.2.05 on your device?
-If you want to play Chicken Gun APK 3.2.05 on your Android device, you have two options: you can either download and install it from Google Play Store or from APKCombo. Here are the steps for both methods:
- The steps to download and install the game from Google Play Store
-
-- Open Google Play Store on your device and search for Chicken Gun.
-- Select the game from the search results and tap on Install.
-- Wait for the game to download and install on your device.
-- Once the installation is complete, tap on Open to launch the game.
-- Enjoy playing Chicken Gun APK 3.2.05 on your device.
-
- The steps to download and install the game from APKCombo
-
-- Open a web browser on your device and go to APKCombo.com.
-- Search for Chicken Gun in the search bar and select the game from the search results.
-- Tap on Download APK (288 MB) to download the game file on your device.
-- Once the download is complete, locate the file in your device's file manager and tap on it to install it.
-- If you see a warning message that says "For your security, your phone is not allowed to install unknown apps from this source", go to your device's settings and enable the option to allow installation from unknown sources.
-- After enabling the option, go back to the file manager and tap on the file again to install it.
-- Once the installation is complete, tap on Open to launch the game.
-- Enjoy playing Chicken Gun APK 3.2.05 on your device.
-
- How to play Chicken Gun APK 3.2.05 on your PC or Mac?
-If you want to play Chicken Gun APK 3.2.05 on your PC or Mac, you need to use an Android emulator that can run Android apps and games on your computer. One of the best Android emulators that you can use is BlueStacks, which is free, fast, and easy to use. Here are the steps to play Chicken Gun APK 3.2.05 on your PC or Mac using BlueStacks:
- The benefits of playing the game on PC or Mac
-Playing Chicken Gun APK 3.2.05 on your PC or Mac has some advantages over playing it on your mobile device, such as:
-
-- You can enjoy a bigger screen and better graphics quality.
-- You can use a keyboard and mouse for more precise and comfortable controls.
-- You can save battery life and storage space on your mobile device.
-- You can play with multiple accounts or switch between different devices easily.
-
- The steps to play the game on PC or Mac using BlueStacks emulator
-
-- Go to BlueStacks.com and download the latest version of BlueStacks for your PC or Mac.
-- Install BlueStacks on your computer by following the instructions on the screen.
-- Launch BlueStacks and sign in with your Google account or create a new one if you don't have one.
-- In BlueStacks, go to Google Play Store and search for Chicken Gun.
-- Select the game from the search results and tap on Install.
-- Wait for the game to download and install on BlueStacks.
-- Once the installation is complete, tap on Open to launch the game.
-- Enjoy playing Chicken Gun APK 3.2.05 on your PC or Mac.
-
- How to master Chicken Gun APK 3.2.05 and have more fun?
-If you want to master Chicken Gun APK 3.2.05 and have more fun, you need to improve your skills and strategy in the game. You also need to know the best weapons, accessories, and maps to use in the game. Here are some tips and tricks that can help you:
- The tips and tricks to improve your skills and strategy in the game
-Some of the tips and tricks that can help you improve your skills and strategy in Chicken Gun APK 3.2.05 are:
-
-- Aim for the head of your enemies to deal more damage and get headshots.
-- Use the explosive eggs wisely, as they can damage both your enemies and yourself.
-- Move around and avoid staying in one spot for too long, as you can become an easy target.
-- Cover behind objects and walls to protect yourself from enemy fire.
-- Use the jetpack to fly over obstacles and surprise your enemies from above.
-- Switch between different weapons depending on the situation and your preference.
-- Work with your teammates and communicate with them using the chat feature.
-- Join or create a clan to play with other players who share your interests and goals.
-
- The best weapons, accessories, and maps to use in the game
-Some of the best weapons, accessories, and maps to use in Chicken Gun APK 3.2.05 are:
-
| Weapon | Description |
| --- | --- |
| Flamethrower | This weapon can set your enemies on fire and deal continuous damage over time. It is effective at close range and can spread fire to multiple targets. |
| Rocket Launcher | This weapon can launch rockets that explode on impact and deal splash damage to nearby enemies. It is effective at long range and can destroy objects and walls. |
| Sniper Rifle | This weapon can shoot bullets that pierce through enemies and deal high damage. It is effective at long range and can zoom in for better accuracy. |
| Shotgun | This weapon can shoot pellets that spread out and deal moderate damage. It is effective at close range and can hit multiple targets at once. |
| Pistol | This weapon can shoot bullets that deal low damage but have a high fire rate. It is effective at medium range and can be used as a backup weapon. |
-
-
| Accessory | Description |
| --- | --- |
| Jetpack | This accessory can let you fly in the air and dodge bullets or surprise your enemies from above. It has a limited fuel capacity that recharges over time. |
| Grenade Belt | This accessory can let you carry more explosive eggs that you can throw at your enemies. It increases your egg capacity by 50%. |
| Bulletproof Vest | This accessory can protect you from enemy fire and reduce the damage you take by 25%. It also increases your health by 25%. |
| Night Vision Goggles | This accessory can help you see better in dark environments and spot hidden enemies. It also increases your accuracy by 10%. |
| Camo Cap | This accessory can help you blend in with the environment and avoid detection by enemies. It also increases your stealth by 10%. |
-
-
| Map | Description |
| --- | --- |
| Factory | This map has a lot of pipes, crates, and machines to hide behind or use as cover. It also has some conveyor belts that can move you or your enemies around. |
| Farm | This map has a lot of crops, animals, and barns to explore or use as cover. It also has some tractors and hay bales that can move you or your enemies around. |
| City | This map has a lot of buildings, cars, and streets to navigate or use as cover. It also has some bridges and tunnels that can connect you or your enemies to different areas. |
| Desert | This map has a lot of sand, rocks, and cacti to hide behind or use as cover. It also has some oases and wells that can provide you or your enemies with water. |
| Forest | This map has a lot of trees, bushes, and flowers to blend in with or use as cover. It also has some rivers and lakes that can drown you or your enemies. |
-
- What are the pros and cons of Chicken Gun APK 3.2.05?
-Chicken Gun APK 3.2.05 is a fun and quirky shooter game that can make you laugh and challenge you at the same time. However, like any game, it also has some pros and cons that you should be aware of before playing it. Here are some of them:
- The advantages of playing the game
-Some of the advantages of playing Chicken Gun APK 3.2.05 are:
-
-- You can enjoy a hilarious and action-packed gameplay with chickens with guns.
-- You can customize your chicken with various weapons, beaks, sneakers, caps, and other accessories.
-- You can throw explosive eggs that can cause massive damage to your enemies.
-- You can chat with other players in the lobby or during the match.
-- You can join clans or create your own clan with your friends.
-- You can play on different maps with different themes and obstacles.
-- You can earn coins by playing matches or watching ads.
-- You can use coins to buy new items or upgrade your existing ones.
-- You can play the game on your mobile device or on your PC or Mac using an emulator.
-
- The disadvantages of playing the game
-Some of the disadvantages of playing Chicken Gun APK 3.2.05 are:
-
-- You may encounter some bugs, glitches, or crashes while playing the game.
-- You may face some lag or latency issues while playing online matches.
-- You may find some ads annoying or intrusive while playing the game.
-- You may need a lot of coins to unlock or upgrade all the items in the game.
-- You may get addicted to the game and spend too much time or money on it.
-
- Conclusion
-Chicken Gun APK 3.2.05 is a hilarious and action-packed shooter game that will make you laugh and challenge you at the same time. You can play as chickens with guns who shoot and fight with each other in various modes and maps. You can customize your chicken with different weapons, beaks, sneakers, caps, and more. You can also throw explosive eggs and cause mayhem in the battlefield.
-If you want to play Chicken Gun APK 3.2.05 on your device, you can download and install it from Google Play Store or from APKCombo. If you want to play it on your PC or Mac, you can use BlueStacks emulator to run it on your computer. If you want to master it and have more fun, you can follow our tips and tricks to improve your skills and strategy in the game. You can also check out our table of the best weapons, accessories, and maps to use in the game.
-However, before you play Chicken Gun APK 3.2.05, you should also be aware of its pros and cons. The game has many advantages that make it enjoyable and engaging, but it also has some disadvantages that may affect your experience or satisfaction. You should weigh them carefully before deciding whether to play the game or not.
- FAQs
-Here are some frequently asked questions about Chicken Gun APK 3.2.05:
- Q: Is Chicken Gun APK 3.2.05 free to play?
-A: Yes, Chicken Gun APK 3.2.05 is free to play on Android devices. However, it contains some in-app purchases that can enhance your gameplay or remove ads.
- Q: Is Chicken Gun APK 3.2.05 safe to download and install?
-A: Yes, Chicken Gun APK 3.2.05 is safe to download and install on your device. The game has been verified by Google Play Protect and APKCombo, which are trusted sources for Android apps and games. However, you should always be careful when downloading and installing apps from unknown sources, as they may contain malware or viruses.
- Q: How can I get more coins in Chicken Gun APK 3.2.05?
-A: You can get more coins in Chicken Gun APK 3.2.05 by playing matches or watching ads. You can also buy coins with real money through in-app purchases.
- Q: How can I contact the developer of Chicken Gun APK 3.2.05?
-A: You can contact the developer of Chicken Gun APK 3.2.05 by sending an email to chaloapps@gmail.com or by visiting their Facebook page.
- Q: What are the minimum requirements to play Chicken Gun APK 3.2.05?
-A: The minimum requirements to play Chicken Gun APK 3.2.05 are:
-
-- An Android device with Android 5.0 or higher.
-- At least 288 MB of free storage space on your device.
-- A stable internet connection.
-
- Q: Can I play Chicken Gun APK 3.2.05 offline?
-A: No, you cannot play Chicken Gun APK 3.2.05 offline, as the game requires an internet connection to connect to the servers and other players.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Reward FF Garena APK and Enjoy Free Fire Rewards Redemption.md b/spaces/congsaPfin/Manga-OCR/logs/Download Reward FF Garena APK and Enjoy Free Fire Rewards Redemption.md
deleted file mode 100644
index dae6914d9547fc98e071b032d3e345544e8f1f41..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Download Reward FF Garena APK and Enjoy Free Fire Rewards Redemption.md
+++ /dev/null
@@ -1,132 +0,0 @@
-
-Reward FF Garena APK Download: How to Get Free Rewards in Free Fire
-If you are a fan of Free Fire, the world-famous survival shooter game, you might be interested in getting some free rewards like diamonds, characters, skins, and more. One way to do that is by using Reward FF Garena APK, an app that allows you to redeem codes for various items. In this article, we will tell you everything you need to know about Reward FF Garena APK, how to download and install it, how to use it, and how to get more rewards in Free Fire.
- What is Free Fire?
-A world-famous survival shooter game
-Free Fire is a mobile game that has over 1 billion downloads on Google Play Store. It is a survival shooter game where you are placed on a remote island with 49 other players, and you have to fight for your survival. You can choose your starting point with your parachute, explore the map, loot weapons and items, hide or ambush your enemies, and try to be the last one standing. You can also drive vehicles, use pets, and customize your character with different outfits and accessories.
- Different game modes and features
-Free Fire offers a variety of game modes for different preferences and play styles. You can play solo, duo, or squad mode in Battle Royale, where you have to survive against other players in a shrinking safe zone. You can also play Clash Squad, a fast-paced 4v4 mode where you have to manage your economy and defeat the enemy team. There are also other modes like Bomb Squad, Gun King, Rampage, etc. that offer different challenges and fun. Free Fire also has many features like Firelink technology that lets you play with all Free Fire players across devices, realistic graphics and animations, in-game voice chat, and more.
- What is Reward FF Garena APK?
-An app that allows users to redeem codes for free rewards
-Reward FF Garena APK is an app that lets you redeem codes for free rewards in Free Fire. These codes are usually given away by Garena, the developer of Free Fire, during live or online events such as live streams, tournaments, collaborations, etc. The rewards vary depending on the code, but they can include diamonds, golds, characters, skins, emotes, vouchers, etc. The codes have 12 or 16 characters consisting of capital letters and numbers. They also have an expiration date, so you have to redeem them before they expire.
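As a small illustration of the code format described above (12 or 16 characters made up of capital letters and digits), the sketch below checks whether a string has that shape before you try to redeem it. It was written for this article and is not a feature of the app, and a format match cannot tell you whether a code is genuine or still unexpired.

```python
# Sketch: check that a string matches the redemption-code format described
# above: 12 or 16 characters, capital letters and digits only. A format match
# says nothing about whether Garena's servers will actually accept the code.
import re

CODE_PATTERN = re.compile(r"[A-Z0-9]{12}(?:[A-Z0-9]{4})?")

def looks_like_redeem_code(code: str) -> bool:
    """Return True if the string has the shape of a Free Fire redemption code."""
    return CODE_PATTERN.fullmatch(code.strip()) is not None

print(looks_like_redeem_code("ABCD1234EFGH"))      # 12 characters -> True (made-up example)
print(looks_like_redeem_code("abcd1234efgh"))      # lowercase -> False
print(looks_like_redeem_code("ABCD1234EFGH5678"))  # 16 characters -> True (made-up example)
```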
- How to download and install the app
-To download and install Reward FF Garena APK on your Android device, you need to follow these steps:
-
-- Go to [this link] and download the APK file.
-- Enable unknown sources on your device settings if you haven't done so already.
-- Locate the downloaded file on your device and tap on it.
-- Follow the instructions on the screen to install the app.
-- Launch the app and log in with your Free Fire account. You can use Facebook or VK as your login method.
-
- How to use Reward FF Garena APK?
-How to find and enter redemption codes
-To use Reward FF Garena APK to redeem codes for free rewards in Free Fire, you need to follow these steps:
-
-- Find a valid redemption code for Free Fire. You can find them on the official social media accounts of Free Fire, such as Facebook, Instagram, Twitter, YouTube, etc. You can also check out some websites or blogs that share the latest codes, such as [this one].
-- Open the Reward FF Garena APK app and tap on the "Redeem" button.
-- Enter the code in the text box and tap on the "Confirm" button.
-- Wait for a few seconds and you will see a message that says "Redeemed successfully".
-- Open your Free Fire game and check your in-game mail. You will find your rewards there.
-
- What kind of rewards can you get
-The rewards that you can get from redeeming codes using Reward FF Garena APK vary depending on the code. Some of the common rewards are:
-
-- Diamonds: The premium currency of Free Fire that can be used to buy various items and features.
-- Golds: The basic currency of Free Fire that can be used to buy some items and features.
-- Characters: The playable characters in Free Fire that have different skills and abilities.
-- Skins: The cosmetic items that can change the appearance of your characters, weapons, vehicles, pets, etc.
-- Emotes: The gestures and expressions that your characters can perform in the game.
-- Vouchers: The coupons that can be used to get discounts or free spins on some features like Gold Royale and Diamond Royale.
-
- Tips and tricks for getting more rewards in Free Fire
-Follow official social media accounts and live streams
-One of the best ways to get more rewards in Free Fire is to follow the official social media accounts of Free Fire, such as Facebook, Instagram, Twitter, YouTube, etc. These accounts often post updates, news, events, and giveaways that can give you free rewards. You can also watch the live streams of Free Fire on platforms like YouTube, Facebook Gaming, Booyah, etc. These live streams often feature redemption codes, quizzes, lucky draws, and other activities that can give you free rewards.
- Participate in events and challenges
-Another way to get more rewards in Free Fire is to participate in events and challenges that are regularly held in the game. These events and challenges can be found on the main menu or the calendar icon of the game. They usually have different themes, durations, and requirements. Some examples of events and challenges are:
-
-- New Year Event: An event that celebrates the new year with various missions, rewards, and features.
-- Rampage Event: An event that features a new game mode where you can transform into powerful beasts with special abilities.
-- Elite Pass: A monthly pass that gives you access to exclusive rewards by completing daily and weekly missions.
-- Daily Login: A simple challenge that gives you free rewards by logging in to the game every day.
-
- Use in-game features like Gold Royale and Diamond Royale
-A third way to get more rewards in Free Fire is to use some of the in-game features that offer you a chance to win various items. Some of these features are:
-- Gold Royale: A feature that lets you spin a wheel using golds to win random items like skins, vouchers, etc.
-- Diamond Royale: A feature that lets you spin a wheel using diamonds to win random items like skins, characters, emotes, etc.
-- Luck Royale: A feature that lets you spin different wheels using tickets or diamonds to win random items like skins, characters, emotes, etc.
-- Mystery Shop: A feature that lets you buy items with discounts up to 90% using diamonds.
-
- Conclusion
-In conclusion, Reward FF Garena APK is an app that allows you to redeem codes for free rewards in Free Fire. You can download and install the app on your Android device and use it to enter valid redemption codes. You can also get more rewards in Free Fire by following official social media accounts and live streams, participating in events and challenges, and using in-game features like Gold Royale and Diamond Royale. We hope this article has helped you learn more about Reward FF Garena APK and how to get free rewards in Free Fire. Happy gaming!
- FAQs
-Here are some frequently asked questions about Reward FF Garena APK:
- Q: Is Reward FF Garena APK safe to use?
-A: Reward FF Garena APK is safe to use as long as you download it from a trusted source and follow the instructions carefully. However, you should always be careful when using third-party apps that are not affiliated with Garena or Free Fire. You should also avoid using any hacks, cheats, or mods that can harm your device or account.
- Q: How often are new codes released for Reward FF Garena APK?
-A: There is no fixed schedule for the release of new codes for Reward FF Garena APK. The codes are usually given away by Garena during special occasions, events, or promotions. You should always keep an eye on the official social media accounts and live streams of Free Fire to get the latest codes as soon as possible.
- Q: Can I use the same code more than once?
-A: No, you cannot use the same code more than once. Each code can only be redeemed by one account and one device. If you try to use a code that has already been used or expired, you will get an error message.
- Q: What should I do if I encounter a problem with Reward FF Garena APK?
-A: If you encounter a problem with Reward FF Garena APK, such as the app not working, the code not being accepted, the reward not being delivered, etc., you should try the following steps:
-
-- Check your internet connection and make sure it is stable and fast.
-- Check your device storage and make sure it has enough space for the app and the game.
-- Check your Free Fire account and make sure it is linked to Facebook or VK.
-- Check the code and make sure it is valid, not expired, and entered correctly.
-- Restart the app and the game and try again.
-- Contact the customer service of Free Fire or Reward FF Garena APK if the problem persists.
-
- Q: Can I share Reward FF Garena APK with my friends?
-A: Yes, you can share Reward FF Garena APK with your friends who also play Free Fire. However, you should only share it from a trusted source and not from any unknown or suspicious links. You should also respect the terms and conditions of Free Fire and Reward FF Garena APK and not abuse or exploit the app for unfair advantages.
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Feng Shui Uzmanndan Evinize ve Hayatnza Bolluk Getirecek 7 pucu.md b/spaces/congsaPfin/Manga-OCR/logs/Feng Shui Uzmanndan Evinize ve Hayatnza Bolluk Getirecek 7 pucu.md
deleted file mode 100644
index 89e90e01fe325e81fcea4749173408bda7fbe0cc..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Feng Shui Uzmanndan Evinize ve Hayatnza Bolluk Getirecek 7 pucu.md
+++ /dev/null
@@ -1,148 +0,0 @@
-
-Feng Shui Rules: Ways to Bring Order to Your Home and Your Life
-Feng shui is an ancient Chinese art that helps us live in harmony with the energy of the spaces we inhabit. Connected to Taoism, it studies the cycles of nature and how they work together in balance. Feng shui offers tips and rules for making our home and our life more peaceful, healthy, and happy. In this article, you will learn what feng shui is, what its basic principles are, and which feng shui rules you can apply to every room of your home.
- What Is Feng Shui?
-Feng shui means wind and water. It concerns the energy of our spaces, such as the front door, which acts as the gateway through which qi, the life-force energy, enters our home and our life. That is why it is important to bring awareness to our spaces. If you are ready to add better flow and vitality to your daily life, the feng shui suggestions compiled below will help you make targeted changes to every room of your home.
-feng shui kuralları
Download Zip ✦✦✦ https://urlca.com/2uOeEt
- The Basic Principles of Feng Shui
-Feng shui uses a five-element system. This system studies the cycles of nature and how they work together in balance. The five elements are earth, metal, water, wood, and fire. Each element is associated with particular qualities we want to cultivate in our lives, as well as with certain colors and shapes. By using these elements appropriately when decorating, we can balance the energy of our spaces.
- The Five Elements
-
-- The earth element relates to self-care, boundaries, and nourishment. The earth colors yellow, orange, and brown, square shapes, and heavy objects represent it. To add the earth element to your home, you might choose a square yellow rug or a sturdy rectangular table.
-- The metal element relates to joy, clarity, and communication. The metal colors white, gray, and silver, round shapes, and shiny objects represent it. To add the metal element to your home, you might choose a round white vase or a shiny silver mirror.
-- The water element relates to intuition, fluidity, and emotion. The water colors blue, black, and purple, wavy shapes, and liquid objects represent it. To add the water element to your home, you might choose a wavy blue curtain or a bowl filled with water.
-- The wood element relates to growth, creativity, and abundance. The wood colors green, turquoise, and cream, rectangular shapes, and plant life represent it. To add the wood element to your home, you might choose a rectangular green cushion or living plants.
-- The fire element relates to passion, courage, and inspiration. The fire colors red, pink, and gold, triangular shapes, and glowing objects represent it. To add the fire element to your home, you might choose a triangular red candle or a chandelier with golden light.
-
- The Commanding Position
-The commanding position is the spot in each room where the most important piece of furniture should be placed, usually the bed, the desk, or the sofa. It should be against the wall opposite the door as you enter the room, with the door within your field of view, so that you can keep an eye on any energy entering the room.
- The Bagua Map
-The bagua map is a diagram in which each area of your home corresponds to a different aspect of your life. By laying the bagua map over your floor plan, you can identify which areas need improvement. The bagua map has nine areas: fame and reputation, relationships and love, creativity and children, helpful people and travel, career and life purpose, wisdom and knowledge, family and health, wealth and abundance, and center and balance.
- Feng Shui Rules for Every Room of Your Home
-Each room of your home carries a different energy, so each room calls for different feng shui rules. Below you will find the feng shui rules for every room of your home.
- Entryway
-The entryway creates the first impression that sets the energy of your home, so it is important to keep it clean, bright, and inviting. In the entryway, pay attention to the following:
-
-- Avoid items that block the entryway. Store your shoes neatly or move them somewhere else.
-- Light the entryway. Open the curtains to let in natural light, or use artificial lighting.
-- Add color to the entryway. Liven it up with vivid colors or pictures. To raise its energy, you can use fire-element colors such as red, orange, or yellow.
-- Add a mirror to the entryway. A mirror makes the space look larger and airier, and it reflects and distributes the qi entering your home. Do not place it directly opposite the door, however, because that sends the qi straight back out.
-- Add a plant to the entryway. A plant brings liveliness and freshness, and through the wood element it provides the energy of abundance and growth. Just make sure the plant is healthy and well cared for.
-
- Living Room
-The living room is the social space of your home, where you spend time with your family and guests, so it is important to make it comfortable, warm, and harmonious. In the living room, pay attention to the following:
-
-- Declutter the living room. Get rid of unnecessary items, keep only what you need, arrange your belongings neatly, and dust regularly.
-- Light the living room. Open the curtains to let in natural light, or use artificial lighting. Light sources at different levels let you change the room's atmosphere.
-- Add color to the living room. For calm and harmony, use pastel earth-element tones such as yellow, orange, and brown; for liveliness and joy, use bright fire-element tones such as red, pink, and gold.
-- Add a focal point to the living room. Create something to rest your eyes on, such as a painting, a sculpture, a fireplace, or an aquarium, and make sure it reflects the energy of the room.
-- Add a plant to the living room. A plant brings liveliness and freshness, and through the wood element it provides the energy of abundance and growth. Just make sure the plant is healthy and well cared for.
-
- Dining Room
-The dining room is the nourishment space of your home, where you share meals with your family and guests, so it is important to make it appetizing, enjoyable, and abundant. In the dining room, pay attention to the following:
-
-- Declutter the dining room. Get rid of unnecessary items, keep only what you need, arrange your belongings neatly, and dust regularly.
-- Light the dining room. Open the curtains to let in natural light, or use artificial lighting. Light sources at different levels let you change the room's atmosphere.
-- Add color to the dining room. To awaken appetite and enjoyment, use bright fire-element tones such as red, pink, and gold; for calm and harmony, use pastel earth-element tones such as yellow, orange, and brown.
-- Add a tablecloth to the dining room. A tablecloth can change the look of the room, and it protects the energy of the dining table and keeps it clean. Choose a color and pattern that suit the style of the room.
-- Add flowers to the dining room. Flowers bring liveliness and freshness, and through the wood element they provide the energy of abundance and growth. Just make sure they are healthy and well cared for.
-
- Kitchen
-The kitchen is the nourishment space of your home, where you cook, prepare, and store food, so it is important to keep it clean, airy, and functional. In the kitchen, pay attention to the following:
-
-- Declutter the kitchen. Get rid of unnecessary items, keep only what you need, arrange your belongings neatly, and dust regularly.
-- Light the kitchen. Open the curtains to let in natural light, or use artificial lighting. Light sources at different levels let you change the room's atmosphere.
-- Add color to the kitchen. To awaken appetite and enjoyment, use bright fire-element tones such as red, pink, and gold; for calm and harmony, use pastel earth-element tones such as yellow, orange, and brown.
-- Add a plant to the kitchen. A plant brings liveliness and freshness, and through the wood element it provides the energy of abundance and growth. Just make sure the plant is healthy and well cared for.
-- Add a spice rack to the kitchen. A spice rack brings flavor and variety, and through the metal element it provides the energy of joy and communication. Just make sure your spices are fresh and of good quality.
-
- Sitting Room
-The sitting room is the relaxation space of your home, where you read, watch films, listen to music, or enjoy your hobbies, so it is important to make it comfortable, soothing, and engaging. In the sitting room, pay attention to the following:
-
-- Declutter the sitting room. Get rid of unnecessary items, keep only what you need, arrange your belongings neatly, and dust regularly.
-- Light the sitting room. Open the curtains to let in natural light, or use artificial lighting. Light sources at different levels let you change the room's atmosphere.
-- Add color to the sitting room. For rest and peace, use cool water-element tones such as blue, black, and purple; for inspiration and passion, use warm fire-element tones such as red, pink, and gold.
-- Add a sofa to the sitting room. The sofa is the most important piece of furniture here. Place it in the commanding position, with the door within your field of view, and choose a color and texture that suit the style of the room.
-- Add a bookcase to the sitting room. A bookcase brings wisdom and knowledge, and through the metal element it provides the energy of joy and communication. Keep the books arranged neatly and dust them regularly.
-- Add a music system to the sitting room. A music system brings fluidity and emotion, and through the water element it provides the energy of intuition and flow. Keep the volume reasonable so you do not disturb your neighbors.
-
- Bedroom
-The bedroom is the romantic space of your home, where you sleep, make love, and rest, so it is important to make it calm, romantic, and private. In the bedroom, pay attention to the following:
-
-- Declutter the bedroom. Get rid of unnecessary items, keep only what you need, arrange your belongings neatly, and dust regularly.
-- Light the bedroom. Open the curtains to let in natural light, or use artificial lighting. Light sources at different levels let you change the room's atmosphere.
-- Add color to the bedroom. For romance and love, use warm fire-element tones such as red, pink, and gold; for calm and peace, use pastel earth-element tones such as yellow, orange, and brown.
-- Place the bed with care. The bed is the most important piece of furniture in the bedroom. Put it in the commanding position, with the door within your field of view, and choose a color and texture that suit the style of the room.
-- Add a bedding set to the bedroom. Bedding can change the look of the room, and it protects the energy of the bed and keeps it clean. Choose a color and pattern that suit the style of the room.
-- Add flowers to the bedroom. Flowers bring liveliness and freshness, and through the wood element they provide the energy of abundance and growth. Just make sure they are healthy and well cared for.
-
- Bathroom
-The bathroom is the cleansing space of your home, where you shower, brush your teeth, and take care of yourself, so it is important to keep it clean, airy, and relaxing. In the bathroom, pay attention to the following:
-
-- Declutter the bathroom. Get rid of unnecessary items, keep only what you need, arrange your belongings neatly, and dust regularly.
-- Light the bathroom. Open the curtains to let in natural light, or use artificial lighting. Light sources at different levels let you change the room's atmosphere.
-- Add color to the bathroom. For cleanliness and freshness, use cool metal-element tones such as white, gray, and silver; for relaxation and intuition, use cool water-element tones such as blue, black, and purple.
-- Add a shower curtain to the bathroom. A shower curtain can change the look of the room, and it protects the energy of the shower and keeps it clean. Choose a color and pattern that suit the style of the room.
-- Add a plant to the bathroom. A plant brings liveliness and freshness, and through the wood element it provides the energy of abundance and growth. Just make sure the plant is healthy and well cared for.
-
- Conclusion
-Feng shui offers tips and rules for making our home and our life more peaceful, healthy, and happy. Once you have learned its basic principles, you can apply the feng shui rules to every room of your home. In this way you can balance the energy in your home, attract the life-force qi, and improve different aspects of your life.
- Frequently Asked Questions
-
-- How is feng shui pronounced?
Feng shui is pronounced "fung shway".
-- How can I learn feng shui?
You can learn feng shui from different sources such as books, videos, courses, or consultants.
-- How much money does it take to apply feng shui?
You do not need to spend much money to apply feng shui. You can practice it by rearranging the items you already own or by making small changes.
-- Which culture does feng shui come from?
Feng shui is an ancient Chinese art. Connected with Taoism, it studies the cycles of nature and how they work together in balance.
-- Does feng shui apply only to the home?
No, feng shui is not limited to the home. It can also be applied to your workplace, your car, your garden, or any other space.
-
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Get MiniStrike 3.5 Mod APK - The Ultimate Shooter Experience.md b/spaces/congsaPfin/Manga-OCR/logs/Get MiniStrike 3.5 Mod APK - The Ultimate Shooter Experience.md
deleted file mode 100644
index 76a3397ed0079b6b88207bdf4d657f50f2b844bb..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Get MiniStrike 3.5 Mod APK - The Ultimate Shooter Experience.md
+++ /dev/null
@@ -1,109 +0,0 @@
-
-MiniStrike 3.5 APK: A Funny Tribute to Counter-Strike for Android
-If you are a fan of the classic first-person shooter game Counter-Strike, you might want to check out MiniStrike, a funny tribute to the game for Android devices. MiniStrike is a multiplayer online game that lets you play with up to 16 players in different modes and maps. You can choose from various weapons, skins, and accessories to customize your character and show off your skills. In this article, we will tell you what MiniStrike is, how to download and install it, why you should play it, and some tips and tricks for playing it.
- What is MiniStrike?
-MiniStrike is a game developed by Malo The Toad, an independent game developer from France. It is inspired by Counter-Strike, one of the most popular and influential games of all time. MiniStrike aims to recreate the gameplay and atmosphere of Counter-Strike in a simplified and humorous way. You can play as either a terrorist or a counter-terrorist, and try to complete objectives such as planting or defusing bombs, rescuing hostages, or eliminating the enemy team. You can also play in deathmatch mode, where the only goal is to kill as many opponents as possible.
-ministrike 3.5 apk
Download Zip ––– https://urlca.com/2uO7V5
- Features of MiniStrike
-Some of the features of MiniStrike are:
-
-- It is free to play and does not require any registration or account.
-- It has low system requirements and can run on most Android devices.
-- It has simple and intuitive controls that are easy to learn and use.
-- It has colorful and cartoonish graphics that give it a unique charm.
-- It has a variety of weapons, skins, and accessories that you can unlock and use.
-- It has multiple game modes and maps that offer different challenges and experiences.
-- It has online multiplayer support that lets you play with up to 16 players from around the world.
-- It has a chat system that lets you communicate with your teammates and opponents.
-
- How to download and install MiniStrike 3.5 APK
-To download and install MiniStrike 3.5 APK on your Android device, follow these steps:
-
-- Go to https://apkpure.com/ministrike/com.malothetoad.ministrike/download/36-XAPK and click on the "Download APK" button.
-- Wait for the download to finish and then open the file.
-- If prompted, enable the installation of apps from unknown sources in your device settings.
-- Follow the instructions on the screen to install the app.
-- Launch the app and enjoy playing MiniStrike.
-
- Why play MiniStrike?
-MiniStrike is a fun and addictive game that will appeal to both casual and hardcore gamers. Here are some of the reasons why you should play it:
- Pros of MiniStrike
-
-- It is a great way to kill some time and have some fun with your friends or strangers online.
-- It is a good way to practice your reflexes, strategy, and teamwork skills.
-- It is a homage to Counter-Strike that will make you nostalgic and appreciate the original game more.
-- It is constantly updated with new features, improvements, and bug fixes.
-
- Cons of MiniStrike
-
-- It can be frustrating and unfair at times due to lag, hackers, or unbalanced teams.
-- It can be repetitive and boring after a while if you play the same mode and map over and over again.
-- It can be hard to find a suitable server or match depending on your region and time zone.
-- It can be annoying and distracting to deal with toxic or spamming players in the chat.
-
- Tips and tricks for playing MiniStrike
-If you want to improve your performance and enjoyment of MiniStrike, here are some tips and tricks that you can use:
- Choose your weapon wisely
-MiniStrike has a wide range of weapons to choose from, each with its own advantages and disadvantages. Pick a weapon that suits your playstyle, preference, and situation. For example, if you like to play aggressively and rush into enemy territory, you might want a shotgun or a submachine gun, which have a high fire rate and damage. If you prefer to play defensively and snipe from a distance, you might want a rifle or a sniper rifle, which have high accuracy and range. You should also consider the cost, recoil, and reload time of each weapon.
- Use the map to your advantage
-MiniStrike has various maps that have different layouts, features, and secrets. You should familiarize yourself with the map that you are playing on and use it to your advantage. For example, you should know where the bomb sites, hostages, weapons, health packs, and ammo are located. You should also know where the best spots, hiding places, shortcuts, and ambush points are. You should also pay attention to the sound cues and visual indicators that can help you locate your enemies or allies.
- Communicate with your teammates
-MiniStrike is a team-based game that requires coordination and cooperation among your teammates. You should communicate with your teammates using the chat system or voice chat if available. You should share information, plans, strategies, and warnings with your teammates. You should also support, assist, and cover your teammates when needed. You should also respect, encourage, and compliment your teammates when appropriate. You should avoid flaming, blaming, or insulting your teammates when things go wrong.
- Conclusion
-MiniStrike is a fun and funny tribute to Counter-Strike that you can play on your Android device. It has simple and colorful graphics, easy and intuitive controls, various weapons, skins, and accessories, multiple game modes and maps, online multiplayer support, and chat system. It is free to play and does not require any registration or account. It is a great way to kill some time and have some fun with your friends or strangers online. It is also a good way to practice your reflexes, strategy, and teamwork skills. It is also a homage to Counter-Strike that will make you nostalgic and appreciate the original game more.
- FAQs
-Here are some frequently asked questions about MiniStrike:
-
-- Q: Is MiniStrike safe to download and install?
-- A: Yes, MiniStrike is safe to download and install from the official source or trusted third-party websites. It does not contain any viruses, malware, or spyware that can harm your device or data.
-- Q: Is MiniStrike compatible with my device?
-- A: MiniStrike is compatible with most Android devices that have Android 4.1 or higher. However, some devices may experience performance issues or crashes due to low memory or storage space.
-- Q: How can I update MiniStrike?
-- A: You can update MiniStrike by downloading and installing the latest version from the official source or trusted third-party websites. You can also enable the auto-update feature in your device settings to get notified when a new version is available.
-- Q: How can I report a bug or a problem in MiniStrike?
-- A: You can report a bug or a problem in MiniStrike by contacting the developer via email at malothetoad@gmail.com or via Facebook at https://www.facebook.com/ministrikegame/. You can also leave a comment or a review on the app store or website where you downloaded the game.
-- Q: How can I support the development of MiniStrike?
-- A: You can support the development of MiniStrike by rating and reviewing the game on the app store or website where you downloaded it. You can also share the game with your friends and family via social media or word of mouth. You can also donate to the developer via PayPal at https://www.paypal.me/malothetoad/.
-
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Sniper 3DGun Shooting Games - Experience the thrill of sniping on PC.md b/spaces/congsaPfin/Manga-OCR/logs/Sniper 3DGun Shooting Games - Experience the thrill of sniping on PC.md
deleted file mode 100644
index 78fb6f51ba54a2573c94c467380661b2fa989f19..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Sniper 3DGun Shooting Games - Experience the thrill of sniping on PC.md
+++ /dev/null
@@ -1,142 +0,0 @@
-
-Sniper 3D Game Download for Computer: How to Play and Enjoy this Amazing Shooting Game
-If you are a fan of shooting games, you might have heard of Sniper 3D, one of the most popular and addictive sniper games on mobile devices. But did you know that you can also play this game on your computer? In this article, we will show you how to download and install Sniper 3D game on your PC, as well as how to play and enjoy this amazing shooting game with better graphics, performance, and controls.
-sniper 3d game download for computer
Download File ⚡ https://urlca.com/2uObIG
-Introduction
-Sniper 3D is an action game developed by Fun Games For Free, a studio that also created other hit games like Block Craft 3D, Castle Crush, and Colorfy. In Sniper 3D, you play as a contract assassin who has to complete various missions using your sniper skills. You can choose from hundreds of weapons, customize them, and upgrade them as you progress. You can also explore different locations and scenarios, from urban cities to tropical islands, from helicopters to humvees. The game has realistic graphics, easy-to-use controls, and immersive sound effects that will make you feel like a real sniper.
-What is Sniper 3D Game?
-Sniper 3D is a game that combines elements of shooting, simulation, and strategy. You have to rely on your aim, precision, and timing to take down your targets in one shot, and you have to plan your moves carefully, as some missions require stealth, speed, or accuracy. You can choose from over 850 thrilling missions across 13 different worlds as you dive into this addictive game. You can also challenge other players online in PvP mode, or join a clan and compete with other clans for rewards and glory.
-Why play Sniper 3D Game on PC?
-While Sniper 3D is a great game to play on your mobile device, playing it on your PC can give you some advantages. Here are some reasons why you should play Sniper 3D game on your computer:
-
-- You can enjoy better graphics and sound quality on a bigger screen.
-- You can use your mouse and keyboard or a gamepad for more precise and comfortable controls.
-- You can avoid battery drain, overheating, or lag issues that might affect your mobile device.
-- You can access more features and enhancements that are available only on PC platforms.
-
-How to Download and Install Sniper 3D Game on PC
-There are two main methods that you can use to download and install Sniper 3D game on your PC. The first one is using an Android emulator, which is a software that allows you to run Android apps on your computer. The second one is using a gaming platform, which is a service that offers a variety of games for PC users. We will explain both methods in detail below.
-Method 1: Using BlueStacks Emulator
-BlueStacks is one of the most popular and trusted Android emulators that you can use to play Sniper 3D game on your PC. It has over 500 million users and supports thousands of Android games and apps. It also has some features and enhancements that can improve your gaming experience, such as shooting mode, high FPS, script, free look, and more. Here are the steps to download and install Sniper 3D game on your PC using BlueStacks emulator:
-Step 1: Download and install BlueStacks on your PC
-You can download BlueStacks from its official website. The installation process is simple and straightforward. Just follow the instructions on the screen and wait for the installation to finish.
-Step 2: Complete Google sign-in to access the Play Store, or do it later
-After installing BlueStacks, you need to sign in with your Google account to access the Google Play Store. You can do this by clicking on the Google icon on the home screen of BlueStacks. If you don't have a Google account, you can create one for free. You can also skip this step and do it later.
-Step 3: Look for Sniper 3D:Gun Shooting Games in the search bar at the top right corner
-Once you have signed in with your Google account, you can search for Sniper 3D game in the Play Store. You can do this by typing "Sniper 3D:Gun Shooting Games" in the search bar at the top right corner of the BlueStacks home screen. You will see a list of results that match your query.
-Step 4: Click to install Sniper 3D:Gun Shooting Games from the search results
-From the list of results, look for the listing with the exact name Sniper 3D:Gun Shooting Games, published by Fun Games For Free.
-
-Click on the game to open its page in the Play Store. Then, click on the green Install button to start downloading and installing the game on your PC.
-Step 5: Complete Google sign-in (if you skipped step 2) to install Sniper 3D:Gun Shooting Games
-If you skipped step 2 earlier, you will need to complete Google sign-in before you can install Sniper 3D game on your PC. You will see a pop-up window asking you to sign in with your Google account. Just follow the instructions on the screen and enter your Google credentials.
-Step 6: Click the Sniper 3D:Gun Shooting Games icon on the home screen to start playing
-After installing Sniper 3D game on your PC, you can launch it by clicking on its icon on the home screen of BlueStacks. You will see a brief tutorial on how to play the game using your mouse and keyboard or a gamepad. You can also customize your controls by clicking on the keyboard icon at the bottom right corner of the screen. Enjoy playing Sniper 3D game on your PC with BlueStacks!
-Method 2: Using Steam Platform
-Steam is another option that you can use to play Sniper 3D game on your PC. Steam is a gaming platform that offers a variety of games for PC users, including some free-to-play titles like Sniper 3D Assassin: Free to Play. This is a different version of Sniper 3D game that has some differences from the mobile version, such as graphics, gameplay, and content. However, it still offers a fun and exciting shooting experience that you can enjoy on your computer. Here are the steps to download and install Sniper 3D Assassin: Free to Play on your PC using Steam platform:
-Step 1 : Download and install Steam on your PC
-You can download Steam from its official website. The installation process is simple and straightforward. Just follow the instructions on the screen and wait for the installation to finish.
-Step 2: Create a Steam account or sign in with your existing one
-After installing Steam, you need to create a Steam account or sign in with your existing one to access the Steam store. You can do this by clicking on the Steam icon on your desktop or taskbar. If you don't have a Steam account, you can create one for free. You can also skip this step and do it later.
-Step 3: Search for Sniper 3D Assassin: Free to Play in the Steam store
-Once you have signed in with your Steam account, you can search for Sniper 3D Assassin: Free to Play in the Steam store. You can do this by typing "Sniper 3D Assassin: Free to Play" in the search bar at the top of the Steam window. You will see a list of results that match your query.
-Step 4: Click on the Play Game button to download and install the game for free
-From the list of results, look for the entry with the exact name Sniper 3D Assassin: Free to Play.
-
-Click on the game to open its page in the Steam store. Then, click on the green Play Game button to download and install the game for free on your PC.
-Step 5: Launch the game from your Steam library and enjoy
-After downloading and installing Sniper 3D Assassin: Free to Play on your PC, you can launch it by clicking on its icon in your Steam library. You will see a brief tutorial on how to play the game using your mouse and keyboard or a gamepad. You can also customize your settings by clicking on the gear icon at the top right corner of the screen. Enjoy playing Sniper 3D Assassin: Free to Play on your PC with Steam!
-How to Play and Enjoy Sniper 3D Game on PC
-Now that you have downloaded and installed Sniper 3D game on your PC, you might be wondering how to play and enjoy this amazing shooting game. In this section, we will give you some tips and tricks on how to make the most out of your gaming experience.
-Game Features and Enhancements
-One of the benefits of playing Sniper 3D game on PC is that you can access more features and enhancements that are not available on mobile devices. Here are some of them:
-Shooting Mode
-If you are using BlueStacks emulator, you can activate shooting mode by pressing F1 on your keyboard. This will allow you to aim and shoot with your mouse, just like in a PC shooter game. You can also adjust the sensitivity and speed of your mouse cursor by clicking on the gear icon at the bottom right corner of the screen.
-High FPS
-If you want to enjoy smoother and faster gameplay, you can enable high FPS mode by clicking on the menu icon at the top right corner of BlueStacks home screen. Then, go to Settings > Engine > FPS and select 60 or higher. This will increase the frame rate of Sniper 3D game and make it more responsive and realistic.
-Script
-If you want to automate some actions or tasks in Sniper 3D game, you can use script mode by pressing Ctrl + Shift + A on your keyboard. This will open a window where you can write or record a script that will execute commands for you. For example, you can create a script that will automatically reload your weapon, switch between weapons, or zoom in and out.
-Free Look
-If you want to explore your surroundings more freely, you can use free look mode by pressing Alt + F1 on your keyboard. This will allow you to move your camera around with your mouse, without affecting your aim or position. You can also zoom in and out with your mouse wheel.
-Game Tips and Tricks
-Besides using these features and enhancements, you can also improve your skills and performance in Sniper 3D game by following these tips and tricks:
-How to aim and shoot
-The most important skill in Sniper 3D game is aiming and shooting. You have to be accurate and quick to take down your targets in one shot. You can use your mouse or gamepad to aim and shoot, depending on your preference. Here are some tips on how to aim and shoot better:
-
-- Use the scope to zoom in and out and adjust your aim. You can also use the mouse wheel or the right trigger on your gamepad to zoom in and out.
-- Pay attention to the wind direction and speed, as they will affect the trajectory of your bullet. You can see the wind indicator at the top of the screen. You can also use the bullet drop indicator to compensate for the gravity effect.
-- Use the breath button to steady your aim and slow down time. You can press the spacebar on your keyboard or the left trigger on your gamepad to activate this feature. You can also upgrade your skills to increase the duration and effectiveness of this feature.
-- Use the silencer to reduce the noise and flash of your shots. This will help you avoid detection and alerting other enemies. You can equip a silencer on your weapon by clicking on the gear icon at the bottom left corner of the screen.
-
-How to upgrade weapons
-Another important aspect of Sniper 3D game is upgrading your weapons. You have to upgrade your weapons to increase their damage, range, stability, capacity, and fire rate. You can also unlock new weapons and customize them with different skins, scopes, muzzles, magazines, and grips. Here are some tips on how to upgrade weapons:
-
-- Use coins and diamonds to buy and upgrade weapons. You can earn coins and diamonds by completing missions, watching ads, or buying them with real money.
-- Use blueprints to unlock new weapons. You can collect blueprints by completing missions, participating in events, or opening crates.
-- Use parts to customize your weapons. You can obtain parts by dismantling unwanted weapons, opening crates, or buying them with coins or diamonds.
-- Use weapon tiers to compare and choose the best weapon for each mission. You can see the weapon tier at the top of the weapon card. The higher the tier, the better the weapon.
-
-How to complete missions
-The main mode of Sniper 3D game is completing missions. You have to complete various missions using your sniper skills and strategy. You can choose from different types of missions, such as primary, wanted, spec ops, daily, or PvP. Here are some tips on how to complete missions:
-
-- Read the mission briefing carefully and follow the instructions. You will see the mission briefing at the start of each mission. It will tell you the objective, target, location, reward, and other details of the mission.
-- Choose the right weapon for each mission. You will see a recommended weapon for each mission at the bottom of the mission briefing. You can also see the weapon stats and compare them with other weapons by clicking on them.
-- Use hints and clues to find and identify your target. You will see hints and clues on the screen during some missions. They will help you locate and recognize your target among other people or objects.
-- Use cover and stealth to avoid detection and enemy fire. You will see a cover indicator at the bottom of the screen during some missions. It will show you how much cover you have from enemy sight and bullets.
-
-Conclusion
-Sniper 3D is an amazing shooting game that you can play on your computer with better graphics, performance, and controls. You can download and install Sniper 3D game on your PC using either BlueStacks emulator or Steam platform. You can also play and enjoy Sniper 3D game on your PC with more features and enhancements that are not available on mobile devices. You can also improve your skills and performance in Sniper 3D game by following some tips and tricks that we have shared in this article.
-We hope that this article has helped you learn how to download, install, play, and enjoy Sniper 3D game on your PC. If you have any questions or feedback, please feel free to leave a comment below. Happy sniping!
- Frequently Asked Questions
- Q: Is Sniper 3D game free to play?
-A: Yes, Sniper 3D game is free to play on both mobile devices and PC platforms. However, it also offers some in-app purchases that can enhance your gaming experience.
- Q: Is Sniper 3D game online or offline?
-A: Sniper 3D game is both online and offline. You can play most of the missions offline without an internet connection. However, you need an internet connection to access some features such as PvP mode, clan wars, events, leaderboards, and updates.
- Q: How can I get more coins and diamonds in Sniper 3D game?
-A: You can get more coins and diamonds in Sniper 3D game by completing missions, watching ads, or buying them with real money. You can also get some coins and diamonds for free by logging in daily, participating in events, or joining a clan.
- Q: How can I change the language of Sniper 3D game?
-A: You can change the language of Sniper 3D game by going to the settings menu and selecting the language option. You can choose from over 20 languages, including English, Spanish, French, German, Italian, Portuguese, Russian, Turkish, Arabic, Chinese, Japanese, Korean, and more.
- Q: How can I contact the support team of Sniper 3D game?
-A: You can contact the support team of Sniper 3D game by going to the settings menu and selecting the help option. You can also visit the official website or the Facebook page of Sniper 3D game for more information and assistance.
-
-
\ No newline at end of file
diff --git a/spaces/contluForse/HuggingGPT/assets/Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free NEW Download.md b/spaces/contluForse/HuggingGPT/assets/Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free NEW Download.md
deleted file mode 100644
index 0cba1fa9db745e0f37ecf08f06f765d3ccbd9274..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free NEW Download.md
+++ /dev/null
@@ -1,92 +0,0 @@
-
-How to Get Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download
-Adobe Photoshop Lightroom CC is one of the most popular and powerful photo editing software that allows you to organize, edit, and share your photos with ease. However, it is not a cheap software and requires a monthly or yearly subscription fee to use it. If you are looking for a way to get Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download, you have come to the right place.
-Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] free download
Download ✯✯✯ https://ssurll.com/2uzyhd
-In this article, we will show you how to download and install Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download, which is a version of Adobe Photoshop Lightroom CC that has been patched by SadeemPC, a website that provides free downloads of various software and games. The patch is a program that modifies the original software to bypass the license verification and activation process, and to enable some additional features and functions.
-By using Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download, you can enjoy the full functionality of Adobe Photoshop Lightroom CC without paying any subscription fee or registration fee. However, before you proceed, you should be aware of the risks and disadvantages of using pirated software, such as legal issues, security issues, lack of updates, and lack of support.
-If you want to use Adobe Photoshop Lightroom CC legally and safely, you should consider getting the official version from Adobe's website or using the free trial version for 30 days. You can also check out some alternatives to Adobe Photoshop Lightroom CC that are free or cheaper.
-What are the features and benefits of Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download?
-Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download has many features and benefits that make it a great choice for photo editing enthusiasts and professionals alike. Some of them are:
-
-- It has a user-friendly and intuitive interface that allows you to easily navigate and access your photos, tools, and settings.
-- It has a powerful and comprehensive photo editing engine that allows you to adjust various aspects of your photos, such as exposure, contrast, color, tone, sharpness, noise, lens correction, and more.
-- It has a smart and flexible photo organization system that allows you to import, sort, filter, rate, tag, and manage your photos in various ways.
-- It has a seamless and tight integration with Adobe Photoshop that allows you to edit your photos in both programs with ease.
-- It has a creative and versatile photo sharing system that allows you to export, print, publish, and share your photos in various formats and platforms.
-- It has a patch that allows you to use the software for free without any limitations or restrictions.
-
-How to download and install Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download?
-If you want to download and install Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download, you can follow these steps:
-
-
-- Go to the website of SadeemPC by following this link.
-- Search for Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download or click on this link.
-- Click on the download button or the magnet link to start downloading the file.
-- Unzip the file using WinRAR or any other program that can extract compressed files.
-- Run the setup file named Setup.run this.(ask4pc).exe inside the folder Lightroom.setup (ask4pc).
-- Follow the instructions on the screen to install the software.
-- Run the patch file named Patch.Lightroom.6.(ask4pc).exe inside the folder Updates (ask4pc).
-- Select Adobe Photoshop Lightroom CC from the list of programs and click on the patch button.
-- Wait for the patching process to finish.
-- Congratulations! You have successfully installed Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download.
-
-How to use Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download effectively?
-If you want to use Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download effectively, you can follow these tips and tricks:
-
-- Learn the basics of photo editing by watching some tutorials or reading some guides online.
-- Explore the different modules and tools of Adobe Photoshop Lightroom CC by clicking on them and seeing what they do.
-- Create a workflow that suits your needs and preferences by customizing your settings and preferences.
-- Use presets and profiles to apply some ready-made effects and adjustments to your photos.
-- Use keywords and collections to organize your photos in a logical and convenient way.
-- Use sync and cloud services to access your photos across different devices and platforms.
-- Use plugins and extensions to enhance the functionality and compatibility of Adobe Photoshop Lightroom CC with other programs and services.
-
-
-Conclusion
-
-In conclusion, Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download is a version of Adobe Photoshop Lightroom CC that has been patched by SadeemPC, a website that provides free downloads of various software and games. The patch is a program that modifies the original software to bypass the license verification and activation process, and to enable some additional features and functions.
-
-By using Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download, you can enjoy the full functionality of Adobe Photoshop Lightroom CC without paying any subscription fee or registration fee. However, before you proceed, you should be aware of the risks and disadvantages of using pirated software, such as legal issues, security issues, lack of updates, and lack of support.
-
-If you want to use Adobe Photoshop Lightroom CC legally and safely, you should consider getting the official version from Adobe's website or using the free trial version for 30 days. You can also check out some alternatives to Adobe Photoshop Lightroom CC that are free or cheaper.
-
-We hope this article helped you learn more about Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download
-and how to get it for free.
-What are some alternatives to Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download?
-If you are not satisfied with Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download, or you want to try some other options, you can check out some alternatives to Adobe Photoshop Lightroom CC that are free or cheaper. Some of them are:
-
-- Skylum Luminar: This is a powerful and versatile photo editing software that uses artificial intelligence (AI) to enhance your photos. It has a user-friendly interface, a smart photo organization system, and a creative photo sharing system. It also has some unique features, such as sky replacement, portrait enhancers, and augmented sky. You can buy it for a one-time fee of $79, or get a free trial for 7 days.
-- ON1 Photo RAW: This is a comprehensive and fast photo editing software that offers raw processing, photo organization, and photo effects. It has a customizable interface, a flexible photo management system, and a seamless integration with Adobe Photoshop and Lightroom. It also has some advanced features, such as HDR merging, focus stacking, and panorama stitching. You can buy it for a one-time fee of $99, or get a free trial for 14 days.
-- DxO PhotoLab: This is a professional and precise photo editing software that offers raw conversion, photo correction, and photo enhancement. It has a clear interface, a powerful photo editing engine, and a superior noise reduction technology. It also has some exclusive features, such as optical corrections, haze removal, and local adjustments. You can buy it for a one-time fee of $129, or get a free trial for 30 days.
-- Capture One: This is a premium and sophisticated photo editing software that offers raw processing, photo organization, and tethered shooting. It has a feature-rich interface, a customizable workflow system, and an excellent raw file conversion quality. It also has some professional features, such as color grading, layer editing, and annotations. You can buy it for a one-time fee of $299, or get a free trial for 30 days.
-- Darktable: This is a free and open-source photo editing software that offers raw processing, photo management, and non-destructive editing. It has a modular interface, a comprehensive photo editing toolkit, and a robust performance system. It also has some unique features, such as multiple exposure blending, perspective correction, and watermarking.
-
-How to compare Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download with Skylum Luminar?
-If you want to compare Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download with Skylum Luminar, you can consider some of the following aspects:
-
-- Price: Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download is a pirated version of Adobe Photoshop Lightroom CC that can be downloaded for free from SadeemPC website. However, this is illegal and risky, as you may face legal issues, security issues, lack of updates, and lack of support. Skylum Luminar is a legitimate version of Skylum Luminar that can be bought for a one-time fee of $79, or get a free trial for 7 days. This is legal and safe, as you will get updates, support, and a 30-day money-back guarantee.
-- Features: Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download has a user-friendly and intuitive interface, a powerful and comprehensive photo editing engine, a smart and flexible photo organization system, a seamless and tight integration with Adobe Photoshop, and a creative and versatile photo sharing system. Skylum Luminar has similar features, but also has some unique features, such as sky replacement, portrait enhancers, augmented sky, AI tools, and presets.
-- Performance: Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download has a fast and smooth performance, as it can import, edit, and export photos quickly and efficiently. However, it may also have some bugs, errors, or crashes, as it is not an official version and may not be compatible with your system or other programs. Skylum Luminar has a fast and smooth performance as well, as it can import, edit, and export photos quickly and efficiently. It also has a stable and reliable performance, as it is an official version and is regularly updated and optimized.
-
-Conclusion
-In conclusion, Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download and Skylum Luminar are both powerful and versatile photo editing software that have similar features and benefits. However, they also have some differences in terms of price, legality, safety, uniqueness, and stability.
-
-Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download is a pirated version of Adobe Photoshop Lightroom CC that can be downloaded for free from SadeemPC website. However, this is illegal and risky, as you may face legal issues, security issues, lack of updates, and lack of support.
-
-Skylum Luminar is a legitimate version of Skylum Luminar that can be bought for a one-time fee of $79, or get a free trial for 7 days. This is legal and safe, as you will get updates, support, and a 30-day money-back guarantee.
-
-We hope this article helped you compare Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download with Skylum Luminar
-and decide which one is better for you.
-
In this article, we have shown you how to get Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download, which is a version of Adobe Photoshop Lightroom CC that has been patched by SadeemPC, a website that provides free downloads of various software and games. The patch is a program that modifies the original software to bypass the license verification and activation process, and to enable some additional features and functions.
-
-By using Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download, you can enjoy the full functionality of Adobe Photoshop Lightroom CC without paying any subscription fee or registration fee. However, before you proceed, you should be aware of the risks and disadvantages of using pirated software, such as legal issues, security issues, lack of updates, and lack of support.
-
-If you want to use Adobe Photoshop Lightroom CC legally and safely, you should consider getting the official version from Adobe's website or using the free trial version for 30 days. You can also check out some alternatives to Adobe Photoshop Lightroom CC that are free or cheaper.
-
-We have also compared Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download with Skylum Luminar, which is another powerful and versatile photo editing software that has similar features and benefits. However, they also have some differences in terms of price, legality, safety, uniqueness, and stability.
-
-Skylum Luminar is a legitimate version of Skylum Luminar that can be bought for a one-time fee of $79, or get a free trial for 7 days. This is legal and safe, as you will get updates, support, and a 30-day money-back guarantee.
-
-We hope this article helped you learn more about Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download and how to get it for free. We also hope it helped you compare it with Skylum Luminar and decide which one is better for you.
-
-
\ No newline at end of file
diff --git a/spaces/contluForse/HuggingGPT/assets/Defection Full Movie In Italian Free Download Mp4.md b/spaces/contluForse/HuggingGPT/assets/Defection Full Movie In Italian Free Download Mp4.md
deleted file mode 100644
index 19e9678fd83923eba16d147a5136e973f3ae8dcf..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/Defection Full Movie In Italian Free Download Mp4.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Defection full movie in italian free download mp4
-Download Zip ✪✪✪ https://ssurll.com/2uzxba
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/contluForse/HuggingGPT/assets/Ein Buch von Amazon auf das iPad herunterladen So lesen Sie Ihre eBooks offline.md b/spaces/contluForse/HuggingGPT/assets/Ein Buch von Amazon auf das iPad herunterladen So lesen Sie Ihre eBooks offline.md
deleted file mode 100644
index 15cbfb7c11ec94f9edf7979574116af65b829e95..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/Ein Buch von Amazon auf das iPad herunterladen So lesen Sie Ihre eBooks offline.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Ein Buch von Amazon auf das iPad herunterladen
-Download File ✫ https://ssurll.com/2uzxNs
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/core/seg/__init__.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/core/seg/__init__.py
deleted file mode 100644
index 93bc129b685e4a3efca2cc891729981b2865900d..0000000000000000000000000000000000000000
--- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/core/seg/__init__.py
+++ /dev/null
@@ -1,4 +0,0 @@
-from .builder import build_pixel_sampler
-from .sampler import BasePixelSampler, OHEMPixelSampler
-
-__all__ = ['build_pixel_sampler', 'BasePixelSampler', 'OHEMPixelSampler']
diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/normalbae/__init__.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/normalbae/__init__.py
deleted file mode 100644
index 3d15d1ee534d35f80525d5a9c8a7437dad5c7463..0000000000000000000000000000000000000000
--- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/normalbae/__init__.py
+++ /dev/null
@@ -1,57 +0,0 @@
-# Estimating and Exploiting the Aleatoric Uncertainty in Surface Normal Estimation
-# https://github.com/baegwangbin/surface_normal_uncertainty
-
-import os
-import types
-import torch
-import numpy as np
-
-from einops import rearrange
-from .models.NNET import NNET
-from .utils import utils
-from annotator.util import annotator_ckpts_path
-import torchvision.transforms as transforms
-
-
-class NormalBaeDetector:
- def __init__(self):
- remote_model_path = "https://huggingface.co/lllyasviel/Annotators/resolve/main/scannet.pt"
- modelpath = os.path.join(annotator_ckpts_path, "scannet.pt")
- if not os.path.exists(modelpath):
- from basicsr.utils.download_util import load_file_from_url
- load_file_from_url(remote_model_path, model_dir=annotator_ckpts_path)
- args = types.SimpleNamespace()
- args.mode = 'client'
- args.architecture = 'BN'
- args.pretrained = 'scannet'
- args.sampling_ratio = 0.4
- args.importance_ratio = 0.7
- model = NNET(args)
- model = utils.load_checkpoint(modelpath, model)
-# model = model.cuda()
- model = model.cpu()
- model.eval()
- self.model = model
- self.norm = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
-
- def __call__(self, input_image):
- assert input_image.ndim == 3
- image_normal = input_image
- with torch.no_grad():
-# image_normal = torch.from_numpy(image_normal).float().cuda()
- image_normal = torch.from_numpy(image_normal).float().cpu()
- image_normal = image_normal / 255.0
- image_normal = rearrange(image_normal, 'h w c -> 1 c h w')
- image_normal = self.norm(image_normal)
-
- normal = self.model(image_normal)
- normal = normal[0][-1][:, :3]
- # d = torch.sum(normal ** 2.0, dim=1, keepdim=True) ** 0.5
- # d = torch.maximum(d, torch.ones_like(d) * 1e-5)
- # normal /= d
- normal = ((normal + 1) * 0.5).clip(0, 1)
-
- normal = rearrange(normal[0], 'c h w -> h w c').cpu().numpy()
- normal_image = (normal * 255.0).clip(0, 255).astype(np.uint8)
-
- return normal_image
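For reference, here is a minimal usage sketch of the NormalBae annotator deleted above. It is a sketch only: the annotator.normalbae import path follows this space's layout, the input array is a placeholder, and the constructor is expected to fetch the scannet.pt checkpoint on first use.

import numpy as np
from annotator.normalbae import NormalBaeDetector  # assumed import path

detector = NormalBaeDetector()                   # builds NNET on the CPU and loads the scannet checkpoint
image = np.zeros((480, 640, 3), dtype=np.uint8)  # any H x W x 3 RGB frame as a uint8 array
normal_map = detector(image)                     # 3-channel uint8 surface-normal visualization
print(normal_map.shape, normal_map.dtype)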
diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/data/sun_rgbd_loader.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/data/sun_rgbd_loader.py
deleted file mode 100644
index 9e2bdb9aefe68ca4439f41eff3bba722c49fb976..0000000000000000000000000000000000000000
--- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/data/sun_rgbd_loader.py
+++ /dev/null
@@ -1,106 +0,0 @@
-# MIT License
-
-# Copyright (c) 2022 Intelligent Systems Lab Org
-
-# Permission is hereby granted, free of charge, to any person obtaining a copy
-# of this software and associated documentation files (the "Software"), to deal
-# in the Software without restriction, including without limitation the rights
-# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-# copies of the Software, and to permit persons to whom the Software is
-# furnished to do so, subject to the following conditions:
-
-# The above copyright notice and this permission notice shall be included in all
-# copies or substantial portions of the Software.
-
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-# SOFTWARE.
-
-# File author: Shariq Farooq Bhat
-
-import os
-
-import numpy as np
-import torch
-from PIL import Image
-from torch.utils.data import DataLoader, Dataset
-from torchvision import transforms
-
-
-class ToTensor(object):
- def __init__(self):
- # self.normalize = transforms.Normalize(
- # mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
- self.normalize = lambda x : x
-
- def __call__(self, sample):
- image, depth = sample['image'], sample['depth']
- image = self.to_tensor(image)
- image = self.normalize(image)
- depth = self.to_tensor(depth)
-
- return {'image': image, 'depth': depth, 'dataset': "sunrgbd"}
-
- def to_tensor(self, pic):
-
- if isinstance(pic, np.ndarray):
- img = torch.from_numpy(pic.transpose((2, 0, 1)))
- return img
-
- # # handle PIL Image
- if pic.mode == 'I':
- img = torch.from_numpy(np.array(pic, np.int32, copy=False))
- elif pic.mode == 'I;16':
- img = torch.from_numpy(np.array(pic, np.int16, copy=False))
- else:
- img = torch.ByteTensor(
- torch.ByteStorage.from_buffer(pic.tobytes()))
- # PIL image mode: 1, L, P, I, F, RGB, YCbCr, RGBA, CMYK
- if pic.mode == 'YCbCr':
- nchannel = 3
- elif pic.mode == 'I;16':
- nchannel = 1
- else:
- nchannel = len(pic.mode)
- img = img.view(pic.size[1], pic.size[0], nchannel)
-
- img = img.transpose(0, 1).transpose(0, 2).contiguous()
- if isinstance(img, torch.ByteTensor):
- return img.float()
- else:
- return img
-
-
-class SunRGBD(Dataset):
- def __init__(self, data_dir_root):
- # test_file_dirs = loadmat(train_test_file)['alltest'].squeeze()
- # all_test = [t[0].replace("/n/fs/sun3d/data/", "") for t in test_file_dirs]
- # self.all_test = [os.path.join(data_dir_root, t) for t in all_test]
- import glob
- self.image_files = glob.glob(
- os.path.join(data_dir_root, 'rgb', 'rgb', '*'))
- self.depth_files = [
- r.replace("rgb/rgb", "gt/gt").replace("jpg", "png") for r in self.image_files]
- self.transform = ToTensor()
-
- def __getitem__(self, idx):
- image_path = self.image_files[idx]
- depth_path = self.depth_files[idx]
-
- image = np.asarray(Image.open(image_path), dtype=np.float32) / 255.0
- depth = np.asarray(Image.open(depth_path), dtype='uint16') / 1000.0
- depth[depth > 8] = -1
- depth = depth[..., None]
- return self.transform(dict(image=image, depth=depth))
-
- def __len__(self):
- return len(self.image_files)
-
-
-def get_sunrgbd_loader(data_dir_root, batch_size=1, **kwargs):
- dataset = SunRGBD(data_dir_root)
- return DataLoader(dataset, batch_size, **kwargs)
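For context, a short sketch of how the SUN RGB-D loader deleted above would be driven. The import path and the /data/sun_rgbd directory are assumptions; the directory must contain rgb/rgb/*.jpg images with matching gt/gt/*.png depth maps, as the glob pattern in SunRGBD expects.

from annotator.zoe.zoedepth.data.sun_rgbd_loader import get_sunrgbd_loader  # assumed import path

loader = get_sunrgbd_loader("/data/sun_rgbd", batch_size=1)  # hypothetical dataset root
batch = next(iter(loader))
# images arrive as [B, 3, H, W] floats in [0, 1]; depths as [B, 1, H, W] metres with far pixels set to -1
print(batch["image"].shape, batch["depth"].shape, batch["dataset"][0])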
diff --git a/spaces/daddyjin/TalkingFaceGeneration/FONT/modules/function.py b/spaces/daddyjin/TalkingFaceGeneration/FONT/modules/function.py
deleted file mode 100644
index d7ce0f4c6d21660bc22e318687015cfd26c36be5..0000000000000000000000000000000000000000
--- a/spaces/daddyjin/TalkingFaceGeneration/FONT/modules/function.py
+++ /dev/null
@@ -1,75 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding: utf-8 -*-
-"""
-Created on Thu Sep 30 17:45:24 2021
-
-@author: SENSETIME\jixinya1
-"""
-
-import torch
-
-
-def calc_mean_std(feat, eps=1e-5):
- # eps is a small value added to the variance to avoid divide-by-zero.
- size = feat.size()
- assert (len(size) == 4)
- N, C = size[:2]
- feat_var = feat.view(N, C, -1).var(dim=2) + eps
- feat_std = feat_var.sqrt().view(N, C, 1, 1)
- feat_mean = feat.view(N, C, -1).mean(dim=2).view(N, C, 1, 1)
- return feat_mean, feat_std
-
-
-def adaptive_instance_normalization(content_feat, style_feat):
- assert (content_feat.size()[:2] == style_feat.size()[:2])
- size = content_feat.size()
- style_mean, style_std = calc_mean_std(style_feat)
- content_mean, content_std = calc_mean_std(content_feat)
-
- normalized_feat = (content_feat - content_mean.expand(
- size)) / content_std.expand(size)
- return normalized_feat * style_std.expand(size) + style_mean.expand(size)
-
-
-def _calc_feat_flatten_mean_std(feat):
- # takes 3D feat (C, H, W), return mean and std of array within channels
- assert (feat.size()[0] == 3)
- assert (isinstance(feat, torch.FloatTensor))
- feat_flatten = feat.view(3, -1)
- mean = feat_flatten.mean(dim=-1, keepdim=True)
- std = feat_flatten.std(dim=-1, keepdim=True)
- return feat_flatten, mean, std
-
-
-def _mat_sqrt(x):
- U, D, V = torch.svd(x)
- return torch.mm(torch.mm(U, D.pow(0.5).diag()), V.t())
-
-
-def coral(source, target):
- # assume both source and target are 3D array (C, H, W)
- # Note: flatten -> f
-
- source_f, source_f_mean, source_f_std = _calc_feat_flatten_mean_std(source)
- source_f_norm = (source_f - source_f_mean.expand_as(
- source_f)) / source_f_std.expand_as(source_f)
- source_f_cov_eye = \
- torch.mm(source_f_norm, source_f_norm.t()) + torch.eye(3)
-
- target_f, target_f_mean, target_f_std = _calc_feat_flatten_mean_std(target)
- target_f_norm = (target_f - target_f_mean.expand_as(
- target_f)) / target_f_std.expand_as(target_f)
- target_f_cov_eye = \
- torch.mm(target_f_norm, target_f_norm.t()) + torch.eye(3)
-
- source_f_norm_transfer = torch.mm(
- _mat_sqrt(target_f_cov_eye),
- torch.mm(torch.inverse(_mat_sqrt(source_f_cov_eye)),
- source_f_norm)
- )
-
- source_f_transfer = source_f_norm_transfer * \
- target_f_std.expand_as(source_f_norm) + \
- target_f_mean.expand_as(source_f_norm)
-
- return source_f_transfer.view(source.size())
\ No newline at end of file
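The AdaIN helpers above are self-contained, so a brief sketch of the core call follows (the FONT.modules.function import path is assumed from this space's layout):

import torch
from FONT.modules.function import adaptive_instance_normalization  # assumed import path

content = torch.randn(2, 512, 32, 32)  # N x C x H x W content features
style = torch.randn(2, 512, 32, 32)    # style features with the same N and C as the content
out = adaptive_instance_normalization(content, style)
# the output keeps the content's spatial layout but takes on the style's per-channel mean and std
print(out.shape)  # torch.Size([2, 512, 32, 32])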
diff --git a/spaces/datien228/text-summarizer/static/js/particles.js b/spaces/datien228/text-summarizer/static/js/particles.js
deleted file mode 100644
index 325d8349960022a3a6aaef3d1ca94938be622a68..0000000000000000000000000000000000000000
--- a/spaces/datien228/text-summarizer/static/js/particles.js
+++ /dev/null
@@ -1,1541 +0,0 @@
-/* -----------------------------------------------
-/* Author : Vincent Garreau - vincentgarreau.com
-/* MIT license: http://opensource.org/licenses/MIT
-/* Demo / Generator : vincentgarreau.com/particles.js
-/* GitHub : github.com/VincentGarreau/particles.js
-/* How to use? : Check the GitHub README
-/* v2.0.0
-/* ----------------------------------------------- */
-
-var pJS = function(tag_id, params){
-
- var canvas_el = document.querySelector('#'+tag_id+' > .particles-js-canvas-el');
-
- /* particles.js variables with default values */
- this.pJS = {
- canvas: {
- el: canvas_el,
- w: canvas_el.offsetWidth,
- h: canvas_el.offsetHeight
- },
- particles: {
- number: {
- value: 400,
- density: {
- enable: true,
- value_area: 800
- }
- },
- color: {
- value: '#fff'
- },
- shape: {
- type: 'circle',
- stroke: {
- width: 0,
- color: '#ff0000'
- },
- polygon: {
- nb_sides: 5
- },
- image: {
- src: '',
- width: 100,
- height: 100
- }
- },
- opacity: {
- value: 1,
- random: false,
- anim: {
- enable: false,
- speed: 2,
- opacity_min: 0,
- sync: false
- }
- },
- size: {
- value: 20,
- random: false,
- anim: {
- enable: false,
- speed: 20,
- size_min: 0,
- sync: false
- }
- },
- line_linked: {
- enable: true,
- distance: 100,
- color: '#fff',
- opacity: 1,
- width: 1
- },
- move: {
- enable: true,
- speed: 2,
- direction: 'none',
- random: false,
- straight: false,
- out_mode: 'out',
- bounce: false,
- attract: {
- enable: false,
- rotateX: 3000,
- rotateY: 3000
- }
- },
- array: []
- },
- interactivity: {
- detect_on: 'canvas',
- events: {
- onhover: {
- enable: true,
- mode: 'grab'
- },
- onclick: {
- enable: true,
- mode: 'push'
- },
- resize: true
- },
- modes: {
- grab:{
- distance: 100,
- line_linked:{
- opacity: 1
- }
- },
- bubble:{
- distance: 200,
- size: 80,
- duration: 0.4
- },
- repulse:{
- distance: 200,
- duration: 0.4
- },
- push:{
- particles_nb: 4
- },
- remove:{
- particles_nb: 2
- }
- },
- mouse:{}
- },
- retina_detect: false,
- fn: {
- interact: {},
- modes: {},
- vendors:{}
- },
- tmp: {}
- };
-
- var pJS = this.pJS;
-
- /* params settings */
- if(params){
- Object.deepExtend(pJS, params);
- }
-
- pJS.tmp.obj = {
- size_value: pJS.particles.size.value,
- size_anim_speed: pJS.particles.size.anim.speed,
- move_speed: pJS.particles.move.speed,
- line_linked_distance: pJS.particles.line_linked.distance,
- line_linked_width: pJS.particles.line_linked.width,
- mode_grab_distance: pJS.interactivity.modes.grab.distance,
- mode_bubble_distance: pJS.interactivity.modes.bubble.distance,
- mode_bubble_size: pJS.interactivity.modes.bubble.size,
- mode_repulse_distance: pJS.interactivity.modes.repulse.distance
- };
-
-
- pJS.fn.retinaInit = function(){
-
- if(pJS.retina_detect && window.devicePixelRatio > 1){
- pJS.canvas.pxratio = window.devicePixelRatio;
- pJS.tmp.retina = true;
- }
- else{
- pJS.canvas.pxratio = 1;
- pJS.tmp.retina = false;
- }
-
- pJS.canvas.w = pJS.canvas.el.offsetWidth * pJS.canvas.pxratio;
- pJS.canvas.h = pJS.canvas.el.offsetHeight * pJS.canvas.pxratio;
-
- pJS.particles.size.value = pJS.tmp.obj.size_value * pJS.canvas.pxratio;
- pJS.particles.size.anim.speed = pJS.tmp.obj.size_anim_speed * pJS.canvas.pxratio;
- pJS.particles.move.speed = pJS.tmp.obj.move_speed * pJS.canvas.pxratio;
- pJS.particles.line_linked.distance = pJS.tmp.obj.line_linked_distance * pJS.canvas.pxratio;
- pJS.interactivity.modes.grab.distance = pJS.tmp.obj.mode_grab_distance * pJS.canvas.pxratio;
- pJS.interactivity.modes.bubble.distance = pJS.tmp.obj.mode_bubble_distance * pJS.canvas.pxratio;
- pJS.particles.line_linked.width = pJS.tmp.obj.line_linked_width * pJS.canvas.pxratio;
- pJS.interactivity.modes.bubble.size = pJS.tmp.obj.mode_bubble_size * pJS.canvas.pxratio;
- pJS.interactivity.modes.repulse.distance = pJS.tmp.obj.mode_repulse_distance * pJS.canvas.pxratio;
-
- };
-
-
-
- /* ---------- pJS functions - canvas ------------ */
-
- pJS.fn.canvasInit = function(){
- pJS.canvas.ctx = pJS.canvas.el.getContext('2d');
- };
-
- pJS.fn.canvasSize = function(){
-
- pJS.canvas.el.width = pJS.canvas.w;
- pJS.canvas.el.height = pJS.canvas.h;
-
- if(pJS && pJS.interactivity.events.resize){
-
- window.addEventListener('resize', function(){
-
- pJS.canvas.w = pJS.canvas.el.offsetWidth;
- pJS.canvas.h = pJS.canvas.el.offsetHeight;
-
- /* resize canvas */
- if(pJS.tmp.retina){
- pJS.canvas.w *= pJS.canvas.pxratio;
- pJS.canvas.h *= pJS.canvas.pxratio;
- }
-
- pJS.canvas.el.width = pJS.canvas.w;
- pJS.canvas.el.height = pJS.canvas.h;
-
- /* repaint canvas on anim disabled */
- if(!pJS.particles.move.enable){
- pJS.fn.particlesEmpty();
- pJS.fn.particlesCreate();
- pJS.fn.particlesDraw();
- pJS.fn.vendors.densityAutoParticles();
- }
-
- /* density particles enabled */
- pJS.fn.vendors.densityAutoParticles();
-
- });
-
- }
-
- };
-
-
- pJS.fn.canvasPaint = function(){
- pJS.canvas.ctx.fillRect(0, 0, pJS.canvas.w, pJS.canvas.h);
- };
-
- pJS.fn.canvasClear = function(){
- pJS.canvas.ctx.clearRect(0, 0, pJS.canvas.w, pJS.canvas.h);
- };
-
-
- /* --------- pJS functions - particles ----------- */
-
- pJS.fn.particle = function(color, opacity, position){
-
- /* size */
- this.radius = (pJS.particles.size.random ? Math.random() : 1) * pJS.particles.size.value;
- if(pJS.particles.size.anim.enable){
- this.size_status = false;
- this.vs = pJS.particles.size.anim.speed / 100;
- if(!pJS.particles.size.anim.sync){
- this.vs = this.vs * Math.random();
- }
- }
-
- /* position */
- this.x = position ? position.x : Math.random() * pJS.canvas.w;
- this.y = position ? position.y : Math.random() * pJS.canvas.h;
-
- /* check position - into the canvas */
- if(this.x > pJS.canvas.w - this.radius*2) this.x = this.x - this.radius;
- else if(this.x < this.radius*2) this.x = this.x + this.radius;
- if(this.y > pJS.canvas.h - this.radius*2) this.y = this.y - this.radius;
- else if(this.y < this.radius*2) this.y = this.y + this.radius;
-
- /* check position - avoid overlap */
- if(pJS.particles.move.bounce){
- pJS.fn.vendors.checkOverlap(this, position);
- }
-
- /* color */
- this.color = {};
- if(typeof(color.value) == 'object'){
-
- if(color.value instanceof Array){
- var color_selected = color.value[Math.floor(Math.random() * pJS.particles.color.value.length)];
- this.color.rgb = hexToRgb(color_selected);
- }else{
- if(color.value.r != undefined && color.value.g != undefined && color.value.b != undefined){
- this.color.rgb = {
- r: color.value.r,
- g: color.value.g,
- b: color.value.b
- }
- }
- if(color.value.h != undefined && color.value.s != undefined && color.value.l != undefined){
- this.color.hsl = {
- h: color.value.h,
- s: color.value.s,
- l: color.value.l
- }
- }
- }
-
- }
- else if(color.value == 'random'){
- this.color.rgb = {
- r: (Math.floor(Math.random() * (255 - 0 + 1)) + 0),
- g: (Math.floor(Math.random() * (255 - 0 + 1)) + 0),
- b: (Math.floor(Math.random() * (255 - 0 + 1)) + 0)
- }
- }
- else if(typeof(color.value) == 'string'){
- this.color = color;
- this.color.rgb = hexToRgb(this.color.value);
- }
-
- /* opacity */
- this.opacity = (pJS.particles.opacity.random ? Math.random() : 1) * pJS.particles.opacity.value;
- if(pJS.particles.opacity.anim.enable){
- this.opacity_status = false;
- this.vo = pJS.particles.opacity.anim.speed / 100;
- if(!pJS.particles.opacity.anim.sync){
- this.vo = this.vo * Math.random();
- }
- }
-
- /* animation - velocity for speed */
- var velbase = {}
- switch(pJS.particles.move.direction){
- case 'top':
- velbase = { x:0, y:-1 };
- break;
- case 'top-right':
- velbase = { x:0.5, y:-0.5 };
- break;
- case 'right':
- velbase = { x:1, y:-0 };
- break;
- case 'bottom-right':
- velbase = { x:0.5, y:0.5 };
- break;
- case 'bottom':
- velbase = { x:0, y:1 };
- break;
- case 'bottom-left':
- velbase = { x:-0.5, y:1 };
- break;
- case 'left':
- velbase = { x:-1, y:0 };
- break;
- case 'top-left':
- velbase = { x:-0.5, y:-0.5 };
- break;
- default:
- velbase = { x:0, y:0 };
- break;
- }
-
- if(pJS.particles.move.straight){
- this.vx = velbase.x;
- this.vy = velbase.y;
- if(pJS.particles.move.random){
- this.vx = this.vx * (Math.random());
- this.vy = this.vy * (Math.random());
- }
- }else{
- this.vx = velbase.x + Math.random()-0.5;
- this.vy = velbase.y + Math.random()-0.5;
- }
-
- // var theta = 2.0 * Math.PI * Math.random();
- // this.vx = Math.cos(theta);
- // this.vy = Math.sin(theta);
-
- this.vx_i = this.vx;
- this.vy_i = this.vy;
-
-
-
- /* if shape is image */
-
- var shape_type = pJS.particles.shape.type;
- if(typeof(shape_type) == 'object'){
- if(shape_type instanceof Array){
- var shape_selected = shape_type[Math.floor(Math.random() * shape_type.length)];
- this.shape = shape_selected;
- }
- }else{
- this.shape = shape_type;
- }
-
- if(this.shape == 'image'){
- var sh = pJS.particles.shape;
- this.img = {
- src: sh.image.src,
- ratio: sh.image.width / sh.image.height
- }
- if(!this.img.ratio) this.img.ratio = 1;
- if(pJS.tmp.img_type == 'svg' && pJS.tmp.source_svg != undefined){
- pJS.fn.vendors.createSvgImg(this);
- if(pJS.tmp.pushing){
- this.img.loaded = false;
- }
- }
- }
-
-
-
- };
-
-
- pJS.fn.particle.prototype.draw = function() {
-
- var p = this;
-
- if(p.radius_bubble != undefined){
- var radius = p.radius_bubble;
- }else{
- var radius = p.radius;
- }
-
- if(p.opacity_bubble != undefined){
- var opacity = p.opacity_bubble;
- }else{
- var opacity = p.opacity;
- }
-
- if(p.color.rgb){
- var color_value = 'rgba('+p.color.rgb.r+','+p.color.rgb.g+','+p.color.rgb.b+','+opacity+')';
- }else{
- var color_value = 'hsla('+p.color.hsl.h+','+p.color.hsl.s+'%,'+p.color.hsl.l+'%,'+opacity+')';
- }
-
- pJS.canvas.ctx.fillStyle = color_value;
- pJS.canvas.ctx.beginPath();
-
- switch(p.shape){
-
- case 'circle':
- pJS.canvas.ctx.arc(p.x, p.y, radius, 0, Math.PI * 2, false);
- break;
-
- case 'edge':
- pJS.canvas.ctx.rect(p.x-radius, p.y-radius, radius*2, radius*2);
- break;
-
- case 'triangle':
- pJS.fn.vendors.drawShape(pJS.canvas.ctx, p.x-radius, p.y+radius / 1.66, radius*2, 3, 2);
- break;
-
- case 'polygon':
- pJS.fn.vendors.drawShape(
- pJS.canvas.ctx,
- p.x - radius / (pJS.particles.shape.polygon.nb_sides/3.5), // startX
- p.y - radius / (2.66/3.5), // startY
- radius*2.66 / (pJS.particles.shape.polygon.nb_sides/3), // sideLength
- pJS.particles.shape.polygon.nb_sides, // sideCountNumerator
- 1 // sideCountDenominator
- );
- break;
-
- case 'star':
- pJS.fn.vendors.drawShape(
- pJS.canvas.ctx,
- p.x - radius*2 / (pJS.particles.shape.polygon.nb_sides/4), // startX
- p.y - radius / (2*2.66/3.5), // startY
- radius*2*2.66 / (pJS.particles.shape.polygon.nb_sides/3), // sideLength
- pJS.particles.shape.polygon.nb_sides, // sideCountNumerator
- 2 // sideCountDenominator
- );
- break;
-
- case 'image':
-
- function draw(){
- pJS.canvas.ctx.drawImage(
- img_obj,
- p.x-radius,
- p.y-radius,
- radius*2,
- radius*2 / p.img.ratio
- );
- }
-
- if(pJS.tmp.img_type == 'svg'){
- var img_obj = p.img.obj;
- }else{
- var img_obj = pJS.tmp.img_obj;
- }
-
- if(img_obj){
- draw();
- }
-
- break;
-
- }
-
- pJS.canvas.ctx.closePath();
-
- if(pJS.particles.shape.stroke.width > 0){
- pJS.canvas.ctx.strokeStyle = pJS.particles.shape.stroke.color;
- pJS.canvas.ctx.lineWidth = pJS.particles.shape.stroke.width;
- pJS.canvas.ctx.stroke();
- }
-
- pJS.canvas.ctx.fill();
-
- };
-
-
- pJS.fn.particlesCreate = function(){
- for(var i = 0; i < pJS.particles.number.value; i++) {
- pJS.particles.array.push(new pJS.fn.particle(pJS.particles.color, pJS.particles.opacity.value));
- }
- };
-
- pJS.fn.particlesUpdate = function(){
-
- for(var i = 0; i < pJS.particles.array.length; i++){
-
- /* the particle */
- var p = pJS.particles.array[i];
-
- // var d = ( dx = pJS.interactivity.mouse.click_pos_x - p.x ) * dx + ( dy = pJS.interactivity.mouse.click_pos_y - p.y ) * dy;
- // var f = -BANG_SIZE / d;
- // if ( d < BANG_SIZE ) {
- // var t = Math.atan2( dy, dx );
- // p.vx = f * Math.cos(t);
- // p.vy = f * Math.sin(t);
- // }
-
- /* move the particle */
- if(pJS.particles.move.enable){
- var ms = pJS.particles.move.speed/2;
- p.x += p.vx * ms;
- p.y += p.vy * ms;
- }
-
- /* change opacity status */
- if(pJS.particles.opacity.anim.enable) {
- if(p.opacity_status == true) {
- if(p.opacity >= pJS.particles.opacity.value) p.opacity_status = false;
- p.opacity += p.vo;
- }else {
- if(p.opacity <= pJS.particles.opacity.anim.opacity_min) p.opacity_status = true;
- p.opacity -= p.vo;
- }
- if(p.opacity < 0) p.opacity = 0;
- }
-
- /* change size */
- if(pJS.particles.size.anim.enable){
- if(p.size_status == true){
- if(p.radius >= pJS.particles.size.value) p.size_status = false;
- p.radius += p.vs;
- }else{
- if(p.radius <= pJS.particles.size.anim.size_min) p.size_status = true;
- p.radius -= p.vs;
- }
- if(p.radius < 0) p.radius = 0;
- }
-
- /* change particle position if it is out of canvas */
- if(pJS.particles.move.out_mode == 'bounce'){
- var new_pos = {
- x_left: p.radius,
- x_right: pJS.canvas.w,
- y_top: p.radius,
- y_bottom: pJS.canvas.h
- }
- }else{
- var new_pos = {
- x_left: -p.radius,
- x_right: pJS.canvas.w + p.radius,
- y_top: -p.radius,
- y_bottom: pJS.canvas.h + p.radius
- }
- }
-
- if(p.x - p.radius > pJS.canvas.w){
- p.x = new_pos.x_left;
- p.y = Math.random() * pJS.canvas.h;
- }
- else if(p.x + p.radius < 0){
- p.x = new_pos.x_right;
- p.y = Math.random() * pJS.canvas.h;
- }
- if(p.y - p.radius > pJS.canvas.h){
- p.y = new_pos.y_top;
- p.x = Math.random() * pJS.canvas.w;
- }
- else if(p.y + p.radius < 0){
- p.y = new_pos.y_bottom;
- p.x = Math.random() * pJS.canvas.w;
- }
-
- /* out of canvas modes */
- switch(pJS.particles.move.out_mode){
- case 'bounce':
- if (p.x + p.radius > pJS.canvas.w) p.vx = -p.vx;
- else if (p.x - p.radius < 0) p.vx = -p.vx;
- if (p.y + p.radius > pJS.canvas.h) p.vy = -p.vy;
- else if (p.y - p.radius < 0) p.vy = -p.vy;
- break;
- }
-
- /* events */
- if(isInArray('grab', pJS.interactivity.events.onhover.mode)){
- pJS.fn.modes.grabParticle(p);
- }
-
- if(isInArray('bubble', pJS.interactivity.events.onhover.mode) || isInArray('bubble', pJS.interactivity.events.onclick.mode)){
- pJS.fn.modes.bubbleParticle(p);
- }
-
- if(isInArray('repulse', pJS.interactivity.events.onhover.mode) || isInArray('repulse', pJS.interactivity.events.onclick.mode)){
- pJS.fn.modes.repulseParticle(p);
- }
-
- /* interaction auto between particles */
- if(pJS.particles.line_linked.enable || pJS.particles.move.attract.enable){
- for(var j = i + 1; j < pJS.particles.array.length; j++){
- var p2 = pJS.particles.array[j];
-
- /* link particles */
- if(pJS.particles.line_linked.enable){
- pJS.fn.interact.linkParticles(p,p2);
- }
-
- /* attract particles */
- if(pJS.particles.move.attract.enable){
- pJS.fn.interact.attractParticles(p,p2);
- }
-
- /* bounce particles */
- if(pJS.particles.move.bounce){
- pJS.fn.interact.bounceParticles(p,p2);
- }
-
- }
- }
-
-
- }
-
- };
-
- pJS.fn.particlesDraw = function(){
-
- /* clear canvas */
- pJS.canvas.ctx.clearRect(0, 0, pJS.canvas.w, pJS.canvas.h);
-
- /* update each particles param */
- pJS.fn.particlesUpdate();
-
- /* draw each particle */
- for(var i = 0; i < pJS.particles.array.length; i++){
- var p = pJS.particles.array[i];
- p.draw();
- }
-
- };
-
- pJS.fn.particlesEmpty = function(){
- pJS.particles.array = [];
- };
-
- pJS.fn.particlesRefresh = function(){
-
- /* init all */
- cancelRequestAnimFrame(pJS.fn.checkAnimFrame);
- cancelRequestAnimFrame(pJS.fn.drawAnimFrame);
- pJS.tmp.source_svg = undefined;
- pJS.tmp.img_obj = undefined;
- pJS.tmp.count_svg = 0;
- pJS.fn.particlesEmpty();
- pJS.fn.canvasClear();
-
- /* restart */
- pJS.fn.vendors.start();
-
- };
-
-
- /* ---------- pJS functions - particles interaction ------------ */
-
- pJS.fn.interact.linkParticles = function(p1, p2){
-
- var dx = p1.x - p2.x,
- dy = p1.y - p2.y,
- dist = Math.sqrt(dx*dx + dy*dy);
-
- /* draw a line between p1 and p2 if the distance between them is under the config distance */
- if(dist <= pJS.particles.line_linked.distance){
-
- var opacity_line = pJS.particles.line_linked.opacity - (dist / (1/pJS.particles.line_linked.opacity)) / pJS.particles.line_linked.distance;
-
- if(opacity_line > 0){
-
- /* style */
- var color_line = pJS.particles.line_linked.color_rgb_line;
- pJS.canvas.ctx.strokeStyle = 'rgba('+color_line.r+','+color_line.g+','+color_line.b+','+opacity_line+')';
- pJS.canvas.ctx.lineWidth = pJS.particles.line_linked.width;
- //pJS.canvas.ctx.lineCap = 'round'; /* performance issue */
-
- /* path */
- pJS.canvas.ctx.beginPath();
- pJS.canvas.ctx.moveTo(p1.x, p1.y);
- pJS.canvas.ctx.lineTo(p2.x, p2.y);
- pJS.canvas.ctx.stroke();
- pJS.canvas.ctx.closePath();
-
- }
-
- }
-
- };
-
-
- pJS.fn.interact.attractParticles = function(p1, p2){
-
- /* condensed particles */
- var dx = p1.x - p2.x,
- dy = p1.y - p2.y,
- dist = Math.sqrt(dx*dx + dy*dy);
-
- if(dist <= pJS.particles.line_linked.distance){
-
- var ax = dx/(pJS.particles.move.attract.rotateX*1000),
- ay = dy/(pJS.particles.move.attract.rotateY*1000);
-
- p1.vx -= ax;
- p1.vy -= ay;
-
- p2.vx += ax;
- p2.vy += ay;
-
- }
-
-
- }
-
-
- pJS.fn.interact.bounceParticles = function(p1, p2){
-
- var dx = p1.x - p2.x,
- dy = p1.y - p2.y,
- dist = Math.sqrt(dx*dx + dy*dy),
- dist_p = p1.radius+p2.radius;
-
- if(dist <= dist_p){
- p1.vx = -p1.vx;
- p1.vy = -p1.vy;
-
- p2.vx = -p2.vx;
- p2.vy = -p2.vy;
- }
-
- }
-
-
- /* ---------- pJS functions - modes events ------------ */
-
- pJS.fn.modes.pushParticles = function(nb, pos){
-
- pJS.tmp.pushing = true;
-
- for(var i = 0; i < nb; i++){
- pJS.particles.array.push(
- new pJS.fn.particle(
- pJS.particles.color,
- pJS.particles.opacity.value,
- {
- 'x': pos ? pos.pos_x : Math.random() * pJS.canvas.w,
- 'y': pos ? pos.pos_y : Math.random() * pJS.canvas.h
- }
- )
- )
- if(i == nb-1){
- if(!pJS.particles.move.enable){
- pJS.fn.particlesDraw();
- }
- pJS.tmp.pushing = false;
- }
- }
-
- };
-
-
- pJS.fn.modes.removeParticles = function(nb){
-
- pJS.particles.array.splice(0, nb);
- if(!pJS.particles.move.enable){
- pJS.fn.particlesDraw();
- }
-
- };
-
-
- pJS.fn.modes.bubbleParticle = function(p){
-
- /* on hover event */
- if(pJS.interactivity.events.onhover.enable && isInArray('bubble', pJS.interactivity.events.onhover.mode)){
-
- var dx_mouse = p.x - pJS.interactivity.mouse.pos_x,
- dy_mouse = p.y - pJS.interactivity.mouse.pos_y,
- dist_mouse = Math.sqrt(dx_mouse*dx_mouse + dy_mouse*dy_mouse),
- ratio = 1 - dist_mouse / pJS.interactivity.modes.bubble.distance;
-
- function init(){
- p.opacity_bubble = p.opacity;
- p.radius_bubble = p.radius;
- }
-
- /* mousemove - check ratio */
- if(dist_mouse <= pJS.interactivity.modes.bubble.distance){
-
- if(ratio >= 0 && pJS.interactivity.status == 'mousemove'){
-
- /* size */
- if(pJS.interactivity.modes.bubble.size != pJS.particles.size.value){
-
- if(pJS.interactivity.modes.bubble.size > pJS.particles.size.value){
- var size = p.radius + (pJS.interactivity.modes.bubble.size*ratio);
- if(size >= 0){
- p.radius_bubble = size;
- }
- }else{
- var dif = p.radius - pJS.interactivity.modes.bubble.size,
- size = p.radius - (dif*ratio);
- if(size > 0){
- p.radius_bubble = size;
- }else{
- p.radius_bubble = 0;
- }
- }
-
- }
-
- /* opacity */
- if(pJS.interactivity.modes.bubble.opacity != pJS.particles.opacity.value){
-
- if(pJS.interactivity.modes.bubble.opacity > pJS.particles.opacity.value){
- var opacity = pJS.interactivity.modes.bubble.opacity*ratio;
- if(opacity > p.opacity && opacity <= pJS.interactivity.modes.bubble.opacity){
- p.opacity_bubble = opacity;
- }
- }else{
- var opacity = p.opacity - (pJS.particles.opacity.value-pJS.interactivity.modes.bubble.opacity)*ratio;
- if(opacity < p.opacity && opacity >= pJS.interactivity.modes.bubble.opacity){
- p.opacity_bubble = opacity;
- }
- }
-
- }
-
- }
-
- }else{
- init();
- }
-
-
- /* mouseleave */
- if(pJS.interactivity.status == 'mouseleave'){
- init();
- }
-
- }
-
- /* on click event */
- else if(pJS.interactivity.events.onclick.enable && isInArray('bubble', pJS.interactivity.events.onclick.mode)){
-
-
- if(pJS.tmp.bubble_clicking){
- var dx_mouse = p.x - pJS.interactivity.mouse.click_pos_x,
- dy_mouse = p.y - pJS.interactivity.mouse.click_pos_y,
- dist_mouse = Math.sqrt(dx_mouse*dx_mouse + dy_mouse*dy_mouse),
- time_spent = (new Date().getTime() - pJS.interactivity.mouse.click_time)/1000;
-
- if(time_spent > pJS.interactivity.modes.bubble.duration){
- pJS.tmp.bubble_duration_end = true;
- }
-
- if(time_spent > pJS.interactivity.modes.bubble.duration*2){
- pJS.tmp.bubble_clicking = false;
- pJS.tmp.bubble_duration_end = false;
- }
- }
-
-
- function process(bubble_param, particles_param, p_obj_bubble, p_obj, id){
-
- if(bubble_param != particles_param){
-
- if(!pJS.tmp.bubble_duration_end){
- if(dist_mouse <= pJS.interactivity.modes.bubble.distance){
- if(p_obj_bubble != undefined) var obj = p_obj_bubble;
- else var obj = p_obj;
- if(obj != bubble_param){
- var value = p_obj - (time_spent * (p_obj - bubble_param) / pJS.interactivity.modes.bubble.duration);
- if(id == 'size') p.radius_bubble = value;
- if(id == 'opacity') p.opacity_bubble = value;
- }
- }else{
- if(id == 'size') p.radius_bubble = undefined;
- if(id == 'opacity') p.opacity_bubble = undefined;
- }
- }else{
- if(p_obj_bubble != undefined){
- var value_tmp = p_obj - (time_spent * (p_obj - bubble_param) / pJS.interactivity.modes.bubble.duration),
- dif = bubble_param - value_tmp;
- value = bubble_param + dif;
- if(id == 'size') p.radius_bubble = value;
- if(id == 'opacity') p.opacity_bubble = value;
- }
- }
-
- }
-
- }
-
- if(pJS.tmp.bubble_clicking){
- /* size */
- process(pJS.interactivity.modes.bubble.size, pJS.particles.size.value, p.radius_bubble, p.radius, 'size');
- /* opacity */
- process(pJS.interactivity.modes.bubble.opacity, pJS.particles.opacity.value, p.opacity_bubble, p.opacity, 'opacity');
- }
-
- }
-
- };
-
-
- pJS.fn.modes.repulseParticle = function(p){
-
- if(pJS.interactivity.events.onhover.enable && isInArray('repulse', pJS.interactivity.events.onhover.mode) && pJS.interactivity.status == 'mousemove') {
-
- var dx_mouse = p.x - pJS.interactivity.mouse.pos_x,
- dy_mouse = p.y - pJS.interactivity.mouse.pos_y,
- dist_mouse = Math.sqrt(dx_mouse*dx_mouse + dy_mouse*dy_mouse);
-
- var normVec = {x: dx_mouse/dist_mouse, y: dy_mouse/dist_mouse},
- repulseRadius = pJS.interactivity.modes.repulse.distance,
- velocity = 100,
- repulseFactor = clamp((1/repulseRadius)*(-1*Math.pow(dist_mouse/repulseRadius,2)+1)*repulseRadius*velocity, 0, 50);
-
- var pos = {
- x: p.x + normVec.x * repulseFactor,
- y: p.y + normVec.y * repulseFactor
- }
-
- if(pJS.particles.move.out_mode == 'bounce'){
- if(pos.x - p.radius > 0 && pos.x + p.radius < pJS.canvas.w) p.x = pos.x;
- if(pos.y - p.radius > 0 && pos.y + p.radius < pJS.canvas.h) p.y = pos.y;
- }else{
- p.x = pos.x;
- p.y = pos.y;
- }
-
- }
-
-
- else if(pJS.interactivity.events.onclick.enable && isInArray('repulse', pJS.interactivity.events.onclick.mode)) {
-
- if(!pJS.tmp.repulse_finish){
- pJS.tmp.repulse_count++;
- if(pJS.tmp.repulse_count == pJS.particles.array.length){
- pJS.tmp.repulse_finish = true;
- }
- }
-
- if(pJS.tmp.repulse_clicking){
-
- var repulseRadius = Math.pow(pJS.interactivity.modes.repulse.distance/6, 3);
-
- var dx = pJS.interactivity.mouse.click_pos_x - p.x,
- dy = pJS.interactivity.mouse.click_pos_y - p.y,
- d = dx*dx + dy*dy;
-
- var force = -repulseRadius / d * 1;
-
- function process(){
-
- var f = Math.atan2(dy,dx);
- p.vx = force * Math.cos(f);
- p.vy = force * Math.sin(f);
-
- if(pJS.particles.move.out_mode == 'bounce'){
- var pos = {
- x: p.x + p.vx,
- y: p.y + p.vy
- }
- if (pos.x + p.radius > pJS.canvas.w) p.vx = -p.vx;
- else if (pos.x - p.radius < 0) p.vx = -p.vx;
- if (pos.y + p.radius > pJS.canvas.h) p.vy = -p.vy;
- else if (pos.y - p.radius < 0) p.vy = -p.vy;
- }
-
- }
-
- // default
- if(d <= repulseRadius){
- process();
- }
-
- // bang - slow motion mode
- // if(!pJS.tmp.repulse_finish){
- // if(d <= repulseRadius){
- // process();
- // }
- // }else{
- // process();
- // }
-
-
- }else{
-
- if(pJS.tmp.repulse_clicking == false){
-
- p.vx = p.vx_i;
- p.vy = p.vy_i;
-
- }
-
- }
-
- }
-
- }
-
-
- pJS.fn.modes.grabParticle = function(p){
-
- if(pJS.interactivity.events.onhover.enable && pJS.interactivity.status == 'mousemove'){
-
- var dx_mouse = p.x - pJS.interactivity.mouse.pos_x,
- dy_mouse = p.y - pJS.interactivity.mouse.pos_y,
- dist_mouse = Math.sqrt(dx_mouse*dx_mouse + dy_mouse*dy_mouse);
-
- /* draw a line between the cursor and the particle if the distance between them is under the config distance */
- if(dist_mouse <= pJS.interactivity.modes.grab.distance){
-
- var opacity_line = pJS.interactivity.modes.grab.line_linked.opacity - (dist_mouse / (1/pJS.interactivity.modes.grab.line_linked.opacity)) / pJS.interactivity.modes.grab.distance;
-
- if(opacity_line > 0){
-
- /* style */
- var color_line = pJS.particles.line_linked.color_rgb_line;
- pJS.canvas.ctx.strokeStyle = 'rgba('+color_line.r+','+color_line.g+','+color_line.b+','+opacity_line+')';
- pJS.canvas.ctx.lineWidth = pJS.particles.line_linked.width;
- //pJS.canvas.ctx.lineCap = 'round'; /* performance issue */
-
- /* path */
- pJS.canvas.ctx.beginPath();
- pJS.canvas.ctx.moveTo(p.x, p.y);
- pJS.canvas.ctx.lineTo(pJS.interactivity.mouse.pos_x, pJS.interactivity.mouse.pos_y);
- pJS.canvas.ctx.stroke();
- pJS.canvas.ctx.closePath();
-
- }
-
- }
-
- }
-
- };
-
-
-
- /* ---------- pJS functions - vendors ------------ */
-
- pJS.fn.vendors.eventsListeners = function(){
-
- /* events target element */
- if(pJS.interactivity.detect_on == 'window'){
- pJS.interactivity.el = window;
- }else{
- pJS.interactivity.el = pJS.canvas.el;
- }
-
-
- /* detect mouse pos - on hover / click event */
- if(pJS.interactivity.events.onhover.enable || pJS.interactivity.events.onclick.enable){
-
- /* el on mousemove */
- pJS.interactivity.el.addEventListener('mousemove', function(e){
-
- if(pJS.interactivity.el == window){
- var pos_x = e.clientX,
- pos_y = e.clientY;
- }
- else{
- var pos_x = e.offsetX || e.clientX,
- pos_y = e.offsetY || e.clientY;
- }
-
- pJS.interactivity.mouse.pos_x = pos_x;
- pJS.interactivity.mouse.pos_y = pos_y;
-
- if(pJS.tmp.retina){
- pJS.interactivity.mouse.pos_x *= pJS.canvas.pxratio;
- pJS.interactivity.mouse.pos_y *= pJS.canvas.pxratio;
- }
-
- pJS.interactivity.status = 'mousemove';
-
- });
-
- /* el on onmouseleave */
- pJS.interactivity.el.addEventListener('mouseleave', function(e){
-
- pJS.interactivity.mouse.pos_x = null;
- pJS.interactivity.mouse.pos_y = null;
- pJS.interactivity.status = 'mouseleave';
-
- });
-
- }
-
- /* on click event */
- if(pJS.interactivity.events.onclick.enable){
-
- pJS.interactivity.el.addEventListener('click', function(){
-
- pJS.interactivity.mouse.click_pos_x = pJS.interactivity.mouse.pos_x;
- pJS.interactivity.mouse.click_pos_y = pJS.interactivity.mouse.pos_y;
- pJS.interactivity.mouse.click_time = new Date().getTime();
-
- if(pJS.interactivity.events.onclick.enable){
-
- switch(pJS.interactivity.events.onclick.mode){
-
- case 'push':
- if(pJS.particles.move.enable){
- pJS.fn.modes.pushParticles(pJS.interactivity.modes.push.particles_nb, pJS.interactivity.mouse);
- }else{
- if(pJS.interactivity.modes.push.particles_nb == 1){
- pJS.fn.modes.pushParticles(pJS.interactivity.modes.push.particles_nb, pJS.interactivity.mouse);
- }
- else if(pJS.interactivity.modes.push.particles_nb > 1){
- pJS.fn.modes.pushParticles(pJS.interactivity.modes.push.particles_nb);
- }
- }
- break;
-
- case 'remove':
- pJS.fn.modes.removeParticles(pJS.interactivity.modes.remove.particles_nb);
- break;
-
- case 'bubble':
- pJS.tmp.bubble_clicking = true;
- break;
-
- case 'repulse':
- pJS.tmp.repulse_clicking = true;
- pJS.tmp.repulse_count = 0;
- pJS.tmp.repulse_finish = false;
- setTimeout(function(){
- pJS.tmp.repulse_clicking = false;
- }, pJS.interactivity.modes.repulse.duration*1000)
- break;
-
- }
-
- }
-
- });
-
- }
-
-
- };
-
- pJS.fn.vendors.densityAutoParticles = function(){
-
- if(pJS.particles.number.density.enable){
-
- /* calc area */
- var area = pJS.canvas.el.width * pJS.canvas.el.height / 1000;
- if(pJS.tmp.retina){
- area = area/(pJS.canvas.pxratio*2);
- }
-
- /* calc number of particles based on density area */
- var nb_particles = area * pJS.particles.number.value / pJS.particles.number.density.value_area;
-
- /* add or remove X particles */
- var missing_particles = pJS.particles.array.length - nb_particles;
- if(missing_particles < 0) pJS.fn.modes.pushParticles(Math.abs(missing_particles));
- else pJS.fn.modes.removeParticles(missing_particles);
-
- }
-
- };
-
-
- pJS.fn.vendors.checkOverlap = function(p1, position){
- for(var i = 0; i < pJS.particles.array.length; i++){
- var p2 = pJS.particles.array[i];
-
- var dx = p1.x - p2.x,
- dy = p1.y - p2.y,
- dist = Math.sqrt(dx*dx + dy*dy);
-
- if(dist <= p1.radius + p2.radius){
- p1.x = position ? position.x : Math.random() * pJS.canvas.w;
- p1.y = position ? position.y : Math.random() * pJS.canvas.h;
- pJS.fn.vendors.checkOverlap(p1);
- }
- }
- };
-
-
- pJS.fn.vendors.createSvgImg = function(p){
-
- /* set color to svg element */
- var svgXml = pJS.tmp.source_svg,
- rgbHex = /#([0-9A-F]{3,6})/gi,
- coloredSvgXml = svgXml.replace(rgbHex, function (m, r, g, b) {
- if(p.color.rgb){
- var color_value = 'rgba('+p.color.rgb.r+','+p.color.rgb.g+','+p.color.rgb.b+','+p.opacity+')';
- }else{
- var color_value = 'hsla('+p.color.hsl.h+','+p.color.hsl.s+'%,'+p.color.hsl.l+'%,'+p.opacity+')';
- }
- return color_value;
- });
-
- /* prepare to create img with colored svg */
- var svg = new Blob([coloredSvgXml], {type: 'image/svg+xml;charset=utf-8'}),
- DOMURL = window.URL || window.webkitURL || window,
- url = DOMURL.createObjectURL(svg);
-
- /* create particle img obj */
- var img = new Image();
- img.addEventListener('load', function(){
- p.img.obj = img;
- p.img.loaded = true;
- DOMURL.revokeObjectURL(url);
- pJS.tmp.count_svg++;
- });
- img.src = url;
-
- };
-
-
- pJS.fn.vendors.destroypJS = function(){
- cancelAnimationFrame(pJS.fn.drawAnimFrame);
- canvas_el.remove();
- pJSDom = null;
- };
-
-
- pJS.fn.vendors.drawShape = function(c, startX, startY, sideLength, sideCountNumerator, sideCountDenominator){
-
- // By Programming Thomas - https://programmingthomas.wordpress.com/2013/04/03/n-sided-shapes/
- var sideCount = sideCountNumerator * sideCountDenominator;
- var decimalSides = sideCountNumerator / sideCountDenominator;
- var interiorAngleDegrees = (180 * (decimalSides - 2)) / decimalSides;
- var interiorAngle = Math.PI - Math.PI * interiorAngleDegrees / 180; // convert to radians
- c.save();
- c.beginPath();
- c.translate(startX, startY);
- c.moveTo(0,0);
- for (var i = 0; i < sideCount; i++) {
- c.lineTo(sideLength,0);
- c.translate(sideLength,0);
- c.rotate(interiorAngle);
- }
- //c.stroke();
- c.fill();
- c.restore();
-
- };
-
- pJS.fn.vendors.exportImg = function(){
- window.open(pJS.canvas.el.toDataURL('image/png'), '_blank');
- };
-
-
- pJS.fn.vendors.loadImg = function(type){
-
- pJS.tmp.img_error = undefined;
-
- if(pJS.particles.shape.image.src != ''){
-
- if(type == 'svg'){
-
- var xhr = new XMLHttpRequest();
- xhr.open('GET', pJS.particles.shape.image.src);
- xhr.onreadystatechange = function (data) {
- if(xhr.readyState == 4){
- if(xhr.status == 200){
- pJS.tmp.source_svg = data.currentTarget.response;
- pJS.fn.vendors.checkBeforeDraw();
- }else{
- console.log('Error pJS - Image not found');
- pJS.tmp.img_error = true;
- }
- }
- }
- xhr.send();
-
- }else{
-
- var img = new Image();
- img.addEventListener('load', function(){
- pJS.tmp.img_obj = img;
- pJS.fn.vendors.checkBeforeDraw();
- });
- img.src = pJS.particles.shape.image.src;
-
- }
-
- }else{
- console.log('Error pJS - No image.src');
- pJS.tmp.img_error = true;
- }
-
- };
-
-
- pJS.fn.vendors.draw = function(){
-
- if(pJS.particles.shape.type == 'image'){
-
- if(pJS.tmp.img_type == 'svg'){
-
- if(pJS.tmp.count_svg >= pJS.particles.number.value){
- pJS.fn.particlesDraw();
- if(!pJS.particles.move.enable) cancelRequestAnimFrame(pJS.fn.drawAnimFrame);
- else pJS.fn.drawAnimFrame = requestAnimFrame(pJS.fn.vendors.draw);
- }else{
- //console.log('still loading...');
- if(!pJS.tmp.img_error) pJS.fn.drawAnimFrame = requestAnimFrame(pJS.fn.vendors.draw);
- }
-
- }else{
-
- if(pJS.tmp.img_obj != undefined){
- pJS.fn.particlesDraw();
- if(!pJS.particles.move.enable) cancelRequestAnimFrame(pJS.fn.drawAnimFrame);
- else pJS.fn.drawAnimFrame = requestAnimFrame(pJS.fn.vendors.draw);
- }else{
- if(!pJS.tmp.img_error) pJS.fn.drawAnimFrame = requestAnimFrame(pJS.fn.vendors.draw);
- }
-
- }
-
- }else{
- pJS.fn.particlesDraw();
- if(!pJS.particles.move.enable) cancelRequestAnimFrame(pJS.fn.drawAnimFrame);
- else pJS.fn.drawAnimFrame = requestAnimFrame(pJS.fn.vendors.draw);
- }
-
- };
-
-
- pJS.fn.vendors.checkBeforeDraw = function(){
-
- // if shape is image
- if(pJS.particles.shape.type == 'image'){
-
- if(pJS.tmp.img_type == 'svg' && pJS.tmp.source_svg == undefined){
- pJS.tmp.checkAnimFrame = requestAnimFrame(check);
- }else{
- //console.log('images loaded! cancel check');
- cancelRequestAnimFrame(pJS.tmp.checkAnimFrame);
- if(!pJS.tmp.img_error){
- pJS.fn.vendors.init();
- pJS.fn.vendors.draw();
- }
-
- }
-
- }else{
- pJS.fn.vendors.init();
- pJS.fn.vendors.draw();
- }
-
- };
-
-
- pJS.fn.vendors.init = function(){
-
- /* init canvas + particles */
- pJS.fn.retinaInit();
- pJS.fn.canvasInit();
- pJS.fn.canvasSize();
- pJS.fn.canvasPaint();
- pJS.fn.particlesCreate();
- pJS.fn.vendors.densityAutoParticles();
-
- /* particles.line_linked - convert hex colors to rgb */
- pJS.particles.line_linked.color_rgb_line = hexToRgb(pJS.particles.line_linked.color);
-
- };
-
-
- pJS.fn.vendors.start = function(){
-
- if(isInArray('image', pJS.particles.shape.type)){
- pJS.tmp.img_type = pJS.particles.shape.image.src.substr(pJS.particles.shape.image.src.length - 3);
- pJS.fn.vendors.loadImg(pJS.tmp.img_type);
- }else{
- pJS.fn.vendors.checkBeforeDraw();
- }
-
- };
-
-
-
-
- /* ---------- pJS - start ------------ */
-
-
- pJS.fn.vendors.eventsListeners();
-
- pJS.fn.vendors.start();
-
-
-
-};
-
-/* ---------- global functions - vendors ------------ */
-
-Object.deepExtend = function(destination, source) {
- for (var property in source) {
- if (source[property] && source[property].constructor &&
- source[property].constructor === Object) {
- destination[property] = destination[property] || {};
- arguments.callee(destination[property], source[property]);
- } else {
- destination[property] = source[property];
- }
- }
- return destination;
-};
-
-window.requestAnimFrame = (function(){
- return window.requestAnimationFrame ||
- window.webkitRequestAnimationFrame ||
- window.mozRequestAnimationFrame ||
- window.oRequestAnimationFrame ||
- window.msRequestAnimationFrame ||
- function(callback){
- window.setTimeout(callback, 1000 / 60);
- };
-})();
-
-window.cancelRequestAnimFrame = ( function() {
- return window.cancelAnimationFrame ||
- window.webkitCancelRequestAnimationFrame ||
- window.mozCancelRequestAnimationFrame ||
- window.oCancelRequestAnimationFrame ||
- window.msCancelRequestAnimationFrame ||
- clearTimeout
-} )();
-
-function hexToRgb(hex){
- // By Tim Down - http://stackoverflow.com/a/5624139/3493650
- // Expand shorthand form (e.g. "03F") to full form (e.g. "0033FF")
- var shorthandRegex = /^#?([a-f\d])([a-f\d])([a-f\d])$/i;
- hex = hex.replace(shorthandRegex, function(m, r, g, b) {
- return r + r + g + g + b + b;
- });
- var result = /^#?([a-f\d]{2})([a-f\d]{2})([a-f\d]{2})$/i.exec(hex);
- return result ? {
- r: parseInt(result[1], 16),
- g: parseInt(result[2], 16),
- b: parseInt(result[3], 16)
- } : null;
-};
-
-function clamp(number, min, max) {
- return Math.min(Math.max(number, min), max);
-};
-
-function isInArray(value, array) {
- return array.indexOf(value) > -1;
-}
-
-
-/* ---------- particles.js functions - start ------------ */
-
-window.pJSDom = [];
-
-window.particlesJS = function(tag_id, params){
-
- //console.log(params);
-
- /* no string id? so it's object params, and set the id with default id */
- if(typeof(tag_id) != 'string'){
- params = tag_id;
- tag_id = 'particles-js';
- }
-
- /* no id? set the id to default id */
- if(!tag_id){
- tag_id = 'particles-js';
- }
-
- /* pJS elements */
- var pJS_tag = document.getElementById(tag_id),
- pJS_canvas_class = 'particles-js-canvas-el',
- exist_canvas = pJS_tag.getElementsByClassName(pJS_canvas_class);
-
- /* remove canvas if exists into the pJS target tag */
- if(exist_canvas.length){
- while(exist_canvas.length > 0){
- pJS_tag.removeChild(exist_canvas[0]);
- }
- }
-
- /* create canvas element */
- var canvas_el = document.createElement('canvas');
- canvas_el.className = pJS_canvas_class;
-
- /* set size canvas */
- canvas_el.style.width = "100%";
- canvas_el.style.height = "100%";
-
- /* append canvas */
- var canvas = document.getElementById(tag_id).appendChild(canvas_el);
-
- /* launch particle.js */
- if(canvas != null){
- pJSDom.push(new pJS(tag_id, params));
- }
-
-};
-
-window.particlesJS.load = function(tag_id, path_config_json, callback){
-
- /* load json config */
- var xhr = new XMLHttpRequest();
- xhr.open('GET', path_config_json);
- xhr.onreadystatechange = function (data) {
- if(xhr.readyState == 4){
- if(xhr.status == 200){
- var params = JSON.parse(data.currentTarget.response);
- window.particlesJS(tag_id, params);
- if(callback) callback();
- }else{
- console.log('Error pJS - XMLHttpRequest status: '+xhr.status);
- console.log('Error pJS - File config not found');
- }
- }
- };
- xhr.send();
-
-};
\ No newline at end of file
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/C_F_F__2.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/C_F_F__2.py
deleted file mode 100644
index edbb0b92f77e3198b55920879271f481082131ea..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/C_F_F__2.py
+++ /dev/null
@@ -1,13 +0,0 @@
-from io import BytesIO
-from fontTools.ttLib.tables.C_F_F_ import table_C_F_F_
-
-
-class table_C_F_F__2(table_C_F_F_):
- def decompile(self, data, otFont):
- self.cff.decompile(BytesIO(data), otFont, isCFF2=True)
- assert len(self.cff) == 1, "can't deal with multi-font CFF tables."
-
- def compile(self, otFont):
- f = BytesIO()
- self.cff.compile(f, otFont, isCFF2=True)
- return f.getvalue()
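A brief sketch of how this table class is reached through fontTools in practice; the font path is a placeholder, and the snippet assumes a CFF2-flavoured (variable) OpenType font:

from fontTools.ttLib import TTFont

font = TTFont("MyVariableFont.otf")  # placeholder path
if "CFF2" in font:
    cff2_table = font["CFF2"]        # an instance of the table_C_F_F__2 class above
    print(cff2_table.cff.fontNames)  # a CFF2 table holds exactly one font, per the assert above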
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/F__e_a_t.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/F__e_a_t.py
deleted file mode 100644
index fbcd6ca6e7bc0640263ddab74e1e1c89ea61bbfb..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/F__e_a_t.py
+++ /dev/null
@@ -1,144 +0,0 @@
-from fontTools.misc import sstruct
-from fontTools.misc.fixedTools import floatToFixedToStr
-from fontTools.misc.textTools import safeEval
-from . import DefaultTable
-from . import grUtils
-import struct
-
-Feat_hdr_format = """
- >
- version: 16.16F
-"""
-
-
-class table_F__e_a_t(DefaultTable.DefaultTable):
- """The ``Feat`` table is used exclusively by the Graphite shaping engine
- to store features and possible settings specified in GDL. Graphite features
- determine what rules are applied to transform a glyph stream.
-
- Not to be confused with ``feat``, or the OpenType Layout tables
- ``GSUB``/``GPOS``."""
-
- def __init__(self, tag=None):
- DefaultTable.DefaultTable.__init__(self, tag)
- self.features = {}
-
- def decompile(self, data, ttFont):
- (_, data) = sstruct.unpack2(Feat_hdr_format, data, self)
- self.version = float(floatToFixedToStr(self.version, precisionBits=16))
- (numFeats,) = struct.unpack(">H", data[:2])
- data = data[8:]
- allfeats = []
- maxsetting = 0
- for i in range(numFeats):
- if self.version >= 2.0:
- (fid, nums, _, offset, flags, lid) = struct.unpack(
- ">LHHLHH", data[16 * i : 16 * (i + 1)]
- )
- offset = int((offset - 12 - 16 * numFeats) / 4)
- else:
- (fid, nums, offset, flags, lid) = struct.unpack(
- ">HHLHH", data[12 * i : 12 * (i + 1)]
- )
- offset = int((offset - 12 - 12 * numFeats) / 4)
- allfeats.append((fid, nums, offset, flags, lid))
- maxsetting = max(maxsetting, offset + nums)
- data = data[16 * numFeats :]
- allsettings = []
- for i in range(maxsetting):
- if len(data) >= 4 * (i + 1):
- (val, lid) = struct.unpack(">HH", data[4 * i : 4 * (i + 1)])
- allsettings.append((val, lid))
- for i, f in enumerate(allfeats):
- (fid, nums, offset, flags, lid) = f
- fobj = Feature()
- fobj.flags = flags
- fobj.label = lid
- self.features[grUtils.num2tag(fid)] = fobj
- fobj.settings = {}
- fobj.default = None
- fobj.index = i
- for i in range(offset, offset + nums):
- if i >= len(allsettings):
- continue
- (vid, vlid) = allsettings[i]
- fobj.settings[vid] = vlid
- if fobj.default is None:
- fobj.default = vid
-
- def compile(self, ttFont):
- fdat = b""
- vdat = b""
- offset = 0
- for f, v in sorted(self.features.items(), key=lambda x: x[1].index):
- fnum = grUtils.tag2num(f)
- if self.version >= 2.0:
- fdat += struct.pack(
- ">LHHLHH",
- grUtils.tag2num(f),
- len(v.settings),
- 0,
- offset * 4 + 12 + 16 * len(self.features),
- v.flags,
- v.label,
- )
- elif fnum > 65535: # self healing for alphabetic ids
- self.version = 2.0
- return self.compile(ttFont)
- else:
- fdat += struct.pack(
- ">HHLHH",
- grUtils.tag2num(f),
- len(v.settings),
- offset * 4 + 12 + 12 * len(self.features),
- v.flags,
- v.label,
- )
- for s, l in sorted(
- v.settings.items(), key=lambda x: (-1, x[1]) if x[0] == v.default else x
- ):
- vdat += struct.pack(">HH", s, l)
- offset += len(v.settings)
- hdr = sstruct.pack(Feat_hdr_format, self)
- return hdr + struct.pack(">HHL", len(self.features), 0, 0) + fdat + vdat
-
- def toXML(self, writer, ttFont):
- writer.simpletag("version", version=self.version)
- writer.newline()
- for f, v in sorted(self.features.items(), key=lambda x: x[1].index):
- writer.begintag(
- "feature",
- fid=f,
- label=v.label,
- flags=v.flags,
- default=(v.default if v.default else 0),
- )
- writer.newline()
- for s, l in sorted(v.settings.items()):
- writer.simpletag("setting", value=s, label=l)
- writer.newline()
- writer.endtag("feature")
- writer.newline()
-
- def fromXML(self, name, attrs, content, ttFont):
- if name == "version":
- self.version = float(safeEval(attrs["version"]))
- elif name == "feature":
- fid = attrs["fid"]
- fobj = Feature()
- fobj.flags = int(safeEval(attrs["flags"]))
- fobj.label = int(safeEval(attrs["label"]))
- fobj.default = int(safeEval(attrs.get("default", "0")))
- fobj.index = len(self.features)
- self.features[fid] = fobj
- fobj.settings = {}
- for element in content:
- if not isinstance(element, tuple):
- continue
- tag, a, c = element
- if tag == "setting":
- fobj.settings[int(safeEval(a["value"]))] = int(safeEval(a["label"]))
-
-
-class Feature(object):
- pass
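To make the data model above concrete, a small inspection sketch; the font path is a placeholder and must point at a font that actually carries Graphite tables:

from fontTools.ttLib import TTFont

font = TTFont("MyGraphiteFont.ttf")  # placeholder path
if "Feat" in font:
    feat = font["Feat"]              # an instance of the table_F__e_a_t class above
    for tag, feature in feat.features.items():
        # each feature maps setting ids to name-table label ids
        print(tag, feature.default, sorted(feature.settings))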
diff --git a/spaces/declare-lab/tango/diffusers/src/diffusers/pipeline_utils.py b/spaces/declare-lab/tango/diffusers/src/diffusers/pipeline_utils.py
deleted file mode 100644
index 5c0c2337dc048dd9ef164ac5cb92e4bf5e62d764..0000000000000000000000000000000000000000
--- a/spaces/declare-lab/tango/diffusers/src/diffusers/pipeline_utils.py
+++ /dev/null
@@ -1,19 +0,0 @@
-# Copyright 2023 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-
-# limitations under the License.
-
-# NOTE: This file is deprecated and will be removed in a future version.
-# It only exists so that temporarily `from diffusers.pipelines import DiffusionPipeline` works
-
-from .pipelines import DiffusionPipeline, ImagePipelineOutput # noqa: F401
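The NOTE in the deleted shim tells the whole story: the module only re-exports two names so the old import path keeps resolving. A quick sketch of the equivalent, supported imports, assuming the diffusers package is installed:

from diffusers import DiffusionPipeline, ImagePipelineOutput       # preferred top-level imports
from diffusers.pipelines import DiffusionPipeline as SamePipeline  # the path the shim forwards to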
diff --git a/spaces/deepwisdom/MetaGPT/metagpt/learn/google_search.py b/spaces/deepwisdom/MetaGPT/metagpt/learn/google_search.py
deleted file mode 100644
index ef099fe948c42b6ccfd8cbacdda0a7efa255de59..0000000000000000000000000000000000000000
--- a/spaces/deepwisdom/MetaGPT/metagpt/learn/google_search.py
+++ /dev/null
@@ -1,12 +0,0 @@
-from metagpt.tools.search_engine import SearchEngine
-
-
-async def google_search(query: str, max_results: int = 6, **kwargs):
- """Perform a web search and retrieve search results.
-
- :param query: The search query.
- :param max_results: The number of search results to retrieve
- :return: The web search results in markdown format.
- """
-    results = await SearchEngine().run(query, max_results=max_results, as_string=False)
-    return "\n".join(f"{i}. [{j['title']}]({j['link']}): {j['snippet']}" for i, j in enumerate(results, 1))
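Since the helper above is an async coroutine, a minimal calling sketch follows; the import path mirrors the deleted module, and a search-engine backend must already be configured for MetaGPT (for example an API key in its config):

import asyncio

from metagpt.learn.google_search import google_search  # assumed to match the deleted module path

async def main():
    markdown = await google_search("open source LLM agents", max_results=3)
    print(markdown)  # numbered markdown list: "1. [title](link): snippet"

asyncio.run(main())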
diff --git a/spaces/deepwisdom/MetaGPT/tests/metagpt/tools/test_search_engine_meilisearch.py b/spaces/deepwisdom/MetaGPT/tests/metagpt/tools/test_search_engine_meilisearch.py
deleted file mode 100644
index 8d2bb64942f521af45edf60df2c4e6e9d9d36fab..0000000000000000000000000000000000000000
--- a/spaces/deepwisdom/MetaGPT/tests/metagpt/tools/test_search_engine_meilisearch.py
+++ /dev/null
@@ -1,46 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
-"""
-@Time : 2023/5/27 22:18
-@Author : alexanderwu
-@File : test_search_engine_meilisearch.py
-"""
-import subprocess
-import time
-
-import pytest
-
-from metagpt.logs import logger
-from metagpt.tools.search_engine_meilisearch import DataSource, MeilisearchEngine
-
-MASTER_KEY = '116Qavl2qpCYNEJNv5-e0RC9kncev1nr1gt7ybEGVLk'
-
-
-@pytest.fixture()
-def search_engine_server():
- meilisearch_process = subprocess.Popen(["meilisearch", "--master-key", f"{MASTER_KEY}"], stdout=subprocess.PIPE)
- time.sleep(3)
- yield
- meilisearch_process.terminate()
- meilisearch_process.wait()
-
-
-def test_meilisearch(search_engine_server):
- search_engine = MeilisearchEngine(url="http://localhost:7700", token=MASTER_KEY)
-
-    # Assume a data source named "books" that holds the document collection to be added
- books_data_source = DataSource(name='books', url='https://example.com/books')
-
-    # Assume a document collection named "documents" containing the documents to be added
- documents = [
- {"id": 1, "title": "Book 1", "content": "This is the content of Book 1."},
- {"id": 2, "title": "Book 2", "content": "This is the content of Book 2."},
- {"id": 3, "title": "Book 1", "content": "This is the content of Book 1."},
- {"id": 4, "title": "Book 2", "content": "This is the content of Book 2."},
- {"id": 5, "title": "Book 1", "content": "This is the content of Book 1."},
- {"id": 6, "title": "Book 2", "content": "This is the content of Book 2."},
- ]
-
-    # Add the documents to the search engine
- search_engine.add_documents(books_data_source, documents)
- logger.info(search_engine.search('Book 1'))
diff --git a/spaces/diacanFperku/AutoGPT/HD Online Player (Modiac Video Converter 2.5.0.4164 Ke) !FREE!.md b/spaces/diacanFperku/AutoGPT/HD Online Player (Modiac Video Converter 2.5.0.4164 Ke) !FREE!.md
deleted file mode 100644
index f25fb3d03ceca0d043eb013aad2d7cf85a890618..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/HD Online Player (Modiac Video Converter 2.5.0.4164 Ke) !FREE!.md
+++ /dev/null
@@ -1,6 +0,0 @@
-HD Online Player (Modiac Video Converter 2.5.0.4164 ke)
-Download ⇔ https://gohhs.com/2uFTaP
-
-AMV format, so my son could watch them on his inexpensive MP3/Video player. Many online conversion tools I checked out didn't have the right ... 4d29de3e1b
-
-
-
diff --git a/spaces/diacanFperku/AutoGPT/Mikra Anglia (2013) DVDRip Greek X264 AC3-N3kr4 Amantes Marchas Prel !EXCLUSIVE!.md b/spaces/diacanFperku/AutoGPT/Mikra Anglia (2013) DVDRip Greek X264 AC3-N3kr4 Amantes Marchas Prel !EXCLUSIVE!.md
deleted file mode 100644
index 82f3bc97c9ddb3c9eb68edba63b94d9f2ed91ff3..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Mikra Anglia (2013) DVDRip Greek X264 AC3-N3kr4 Amantes Marchas Prel !EXCLUSIVE!.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Mikra Anglia (2013) DVDRip Greek X264 AC3-N3kr4 amantes marchas prel
-Download File 🗹 https://gohhs.com/2uFUS4
-
-Set during the 1930's on the Greek island of Andros, a Cyclades archipelago with a long history of military embroilment and seafaring turmoil, Little England is ... 4d29de3e1b
-
-
-
diff --git a/spaces/digitalxingtong/Nanami-Bert-VITS2/attentions.py b/spaces/digitalxingtong/Nanami-Bert-VITS2/attentions.py
deleted file mode 100644
index ecbdbc8be941a962046fc11fd6739b093112123e..0000000000000000000000000000000000000000
--- a/spaces/digitalxingtong/Nanami-Bert-VITS2/attentions.py
+++ /dev/null
@@ -1,343 +0,0 @@
-import copy
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import commons
-import modules
-from torch.nn.utils import weight_norm, remove_weight_norm
-class LayerNorm(nn.Module):
- def __init__(self, channels, eps=1e-5):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- x = x.transpose(1, -1)
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
- return x.transpose(1, -1)
-
-
-
-@torch.jit.script
-def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
- n_channels_int = n_channels[0]
- in_act = input_a + input_b
- t_act = torch.tanh(in_act[:, :n_channels_int, :])
- s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
- acts = t_act * s_act
- return acts
-
-class Encoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, isflow = True, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.window_size = window_size
- if isflow:
- cond_layer = torch.nn.Conv1d(256, 2*hidden_channels*n_layers, 1)
- self.cond_pre = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, 1)
- self.cond_layer = weight_norm(cond_layer, name='weight')
- self.gin_channels = 256
- self.cond_layer_idx = self.n_layers
- if 'gin_channels' in kwargs:
- self.gin_channels = kwargs['gin_channels']
- if self.gin_channels != 0:
- self.spk_emb_linear = nn.Linear(self.gin_channels, self.hidden_channels)
- # vits2 says 3rd block, so idx is 2 by default
- self.cond_layer_idx = kwargs['cond_layer_idx'] if 'cond_layer_idx' in kwargs else 2
- print(self.gin_channels, self.cond_layer_idx)
- assert self.cond_layer_idx < self.n_layers, 'cond_layer_idx should be less than n_layers'
- self.drop = nn.Dropout(p_dropout)
- self.attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
- def forward(self, x, x_mask, g=None):
- attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- if i == self.cond_layer_idx and g is not None:
- g = self.spk_emb_linear(g.transpose(1, 2))
- g = g.transpose(1, 2)
- x = x + g
- x = x * x_mask
- y = self.attn_layers[i](x, x, attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class Decoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
-
- self.drop = nn.Dropout(p_dropout)
- self.self_attn_layers = nn.ModuleList()
- self.norm_layers_0 = nn.ModuleList()
- self.encdec_attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init))
- self.norm_layers_0.append(LayerNorm(hidden_channels))
- self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask, h, h_mask):
- """
- x: decoder input
- h: encoder output
- """
- self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype)
- encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.self_attn_layers[i](x, x, self_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_0[i](x + y)
-
- y = self.encdec_attn_layers[i](x, h, encdec_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class MultiHeadAttention(nn.Module):
- def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False):
- super().__init__()
- assert channels % n_heads == 0
-
- self.channels = channels
- self.out_channels = out_channels
- self.n_heads = n_heads
- self.p_dropout = p_dropout
- self.window_size = window_size
- self.heads_share = heads_share
- self.block_length = block_length
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
- self.attn = None
-
- self.k_channels = channels // n_heads
- self.conv_q = nn.Conv1d(channels, channels, 1)
- self.conv_k = nn.Conv1d(channels, channels, 1)
- self.conv_v = nn.Conv1d(channels, channels, 1)
- self.conv_o = nn.Conv1d(channels, out_channels, 1)
- self.drop = nn.Dropout(p_dropout)
-
- if window_size is not None:
- n_heads_rel = 1 if heads_share else n_heads
- rel_stddev = self.k_channels**-0.5
- self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
- self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
-
- nn.init.xavier_uniform_(self.conv_q.weight)
- nn.init.xavier_uniform_(self.conv_k.weight)
- nn.init.xavier_uniform_(self.conv_v.weight)
- if proximal_init:
- with torch.no_grad():
- self.conv_k.weight.copy_(self.conv_q.weight)
- self.conv_k.bias.copy_(self.conv_q.bias)
-
- def forward(self, x, c, attn_mask=None):
- q = self.conv_q(x)
- k = self.conv_k(c)
- v = self.conv_v(c)
-
- x, self.attn = self.attention(q, k, v, mask=attn_mask)
-
- x = self.conv_o(x)
- return x
-
- def attention(self, query, key, value, mask=None):
- # reshape [b, d, t] -> [b, n_h, t, d_k]
- b, d, t_s, t_t = (*key.size(), query.size(2))
- query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
- key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
- value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
-
- scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1))
- if self.window_size is not None:
- assert t_s == t_t, "Relative attention is only available for self-attention."
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
- rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings)
- scores_local = self._relative_position_to_absolute_position(rel_logits)
- scores = scores + scores_local
- if self.proximal_bias:
- assert t_s == t_t, "Proximal bias is only available for self-attention."
- scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype)
- if mask is not None:
- scores = scores.masked_fill(mask == 0, -1e4)
- if self.block_length is not None:
- assert t_s == t_t, "Local attention is only available for self-attention."
- block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length)
- scores = scores.masked_fill(block_mask == 0, -1e4)
- p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s]
- p_attn = self.drop(p_attn)
- output = torch.matmul(p_attn, value)
- if self.window_size is not None:
- relative_weights = self._absolute_position_to_relative_position(p_attn)
- value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s)
- output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings)
- output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t]
- return output, p_attn
-
- def _matmul_with_relative_values(self, x, y):
- """
- x: [b, h, l, m]
- y: [h or 1, m, d]
- ret: [b, h, l, d]
- """
- ret = torch.matmul(x, y.unsqueeze(0))
- return ret
-
- def _matmul_with_relative_keys(self, x, y):
- """
- x: [b, h, l, d]
- y: [h or 1, m, d]
- ret: [b, h, l, m]
- """
- ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
- return ret
-
- def _get_relative_embeddings(self, relative_embeddings, length):
- max_relative_position = 2 * self.window_size + 1
- # Pad first before slice to avoid using cond ops.
- pad_length = max(length - (self.window_size + 1), 0)
- slice_start_position = max((self.window_size + 1) - length, 0)
- slice_end_position = slice_start_position + 2 * length - 1
- if pad_length > 0:
- padded_relative_embeddings = F.pad(
- relative_embeddings,
- commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]))
- else:
- padded_relative_embeddings = relative_embeddings
- used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position]
- return used_relative_embeddings
-
- def _relative_position_to_absolute_position(self, x):
- """
- x: [b, h, l, 2*l-1]
- ret: [b, h, l, l]
- """
- batch, heads, length, _ = x.size()
- # Concat columns of pad to shift from relative to absolute indexing.
- x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]]))
-
- # Concat extra elements so to add up to shape (len+1, 2*len-1).
- x_flat = x.view([batch, heads, length * 2 * length])
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]]))
-
- # Reshape and slice out the padded elements.
- x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:]
- return x_final
-
- def _absolute_position_to_relative_position(self, x):
- """
- x: [b, h, l, l]
- ret: [b, h, l, 2*l-1]
- """
- batch, heads, length, _ = x.size()
-        # pad along the column dimension
- x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]]))
- x_flat = x.view([batch, heads, length**2 + length*(length -1)])
- # add 0's in the beginning that will skew the elements after reshape
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
- x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:]
- return x_final
-
- def _attention_bias_proximal(self, length):
- """Bias for self-attention to encourage attention to close positions.
- Args:
- length: an integer scalar.
- Returns:
- a Tensor with shape [1, 1, length, length]
- """
- r = torch.arange(length, dtype=torch.float32)
- diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
- return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)
-
-
-class FFN(nn.Module):
- def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.activation = activation
- self.causal = causal
-
- if causal:
- self.padding = self._causal_padding
- else:
- self.padding = self._same_padding
-
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size)
- self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size)
- self.drop = nn.Dropout(p_dropout)
-
- def forward(self, x, x_mask):
- x = self.conv_1(self.padding(x * x_mask))
- if self.activation == "gelu":
- x = x * torch.sigmoid(1.702 * x)
- else:
- x = torch.relu(x)
- x = self.drop(x)
- x = self.conv_2(self.padding(x * x_mask))
- return x * x_mask
-
- def _causal_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = self.kernel_size - 1
- pad_r = 0
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
-
- def _same_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = (self.kernel_size - 1) // 2
- pad_r = self.kernel_size // 2
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
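As a quick, hedged smoke test of the deleted `MultiHeadAttention` block: with `window_size=None` its forward pass does not touch the `commons`/`modules` helpers (importing the module still requires them), so the sketch below only needs PyTorch plus the class definition above; all sizes are arbitrary.

```python
import torch

# channels must be divisible by n_heads; shapes are [batch, channels, frames].
attn = MultiHeadAttention(channels=192, out_channels=192, n_heads=2)
x = torch.randn(4, 192, 50)
y = attn(x, x)        # self-attention, no mask
print(y.shape)        # torch.Size([4, 192, 50])
```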
diff --git a/spaces/digitalxingtong/Nanami-Bert-VITS2/text/chinese_bert.py b/spaces/digitalxingtong/Nanami-Bert-VITS2/text/chinese_bert.py
deleted file mode 100644
index cb84ce0b426cd0a1c7954ddcdf41322c10ed14fa..0000000000000000000000000000000000000000
--- a/spaces/digitalxingtong/Nanami-Bert-VITS2/text/chinese_bert.py
+++ /dev/null
@@ -1,50 +0,0 @@
-import torch
-from transformers import AutoTokenizer, AutoModelForMaskedLM
-
-device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
-
-tokenizer = AutoTokenizer.from_pretrained("./bert/chinese-roberta-wwm-ext-large")
-model = AutoModelForMaskedLM.from_pretrained("./bert/chinese-roberta-wwm-ext-large").to(device)
-
-def get_bert_feature(text, word2ph):
- with torch.no_grad():
- inputs = tokenizer(text, return_tensors='pt')
- for i in inputs:
- inputs[i] = inputs[i].to(device)
- res = model(**inputs, output_hidden_states=True)
- res = torch.cat(res['hidden_states'][-3:-2], -1)[0].cpu()
-
- assert len(word2ph) == len(text)+2
- word2phone = word2ph
- phone_level_feature = []
- for i in range(len(word2phone)):
- repeat_feature = res[i].repeat(word2phone[i], 1)
- phone_level_feature.append(repeat_feature)
-
- phone_level_feature = torch.cat(phone_level_feature, dim=0)
-
-
- return phone_level_feature.T
-
-if __name__ == '__main__':
- # feature = get_bert_feature('你好,我是说的道理。')
- import torch
-
-    word_level_feature = torch.rand(38, 1024)  # 38 words, each with a 1024-dim feature
- word2phone = [1, 2, 1, 2, 2, 1, 2, 2, 1, 2, 2, 1, 2, 2, 2, 2, 2, 1, 1, 2, 2, 1, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 1]
-
-    # Compute the total number of frames
- total_frames = sum(word2phone)
- print(word_level_feature.shape)
- print(word2phone)
- phone_level_feature = []
- for i in range(len(word2phone)):
- print(word_level_feature[i].shape)
-
-        # Repeat each word's feature word2phone[i] times
- repeat_feature = word_level_feature[i].repeat(word2phone[i], 1)
- phone_level_feature.append(repeat_feature)
-
- phone_level_feature = torch.cat(phone_level_feature, dim=0)
-    print(phone_level_feature.shape)  # torch.Size([sum(word2phone), 1024])
-
diff --git a/spaces/docs-demos/pegasus_paraphrase/app.py b/spaces/docs-demos/pegasus_paraphrase/app.py
deleted file mode 100644
index 998642b0cb36411262c3c5aac9d58b3892edb9a7..0000000000000000000000000000000000000000
--- a/spaces/docs-demos/pegasus_paraphrase/app.py
+++ /dev/null
@@ -1,32 +0,0 @@
-import gradio as gr
-
-title = "Pegasus"
-
-description = "Gradio Demo for Pegasus. To use it, simply add your text, or click one of the examples to load them. Read more at the links below."
-
-article = "PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization"
-
-
-examples = [
- ['The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure to reach a height of 300 metres. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct.',"pegasus-xsum"]
-]
-
-io1 = gr.Interface.load("huggingface/google/pegasus-xsum")
-
-io2 = gr.Interface.load("huggingface/google/pegasus-large")
-
-def inference(text, model):
- if model == "pegasus-xsum":
- outtext = io1(text)
- else:
- outtext = io2(text)
- return outtext
-
-
-gr.Interface(
- inference,
- [gr.inputs.Textbox(label="Input",lines=10),gr.inputs.Dropdown(choices=["pegasus-xsum","pegasus-large"], type="value", default="pegasus-xsum", label="model")],
- [gr.outputs.Textbox(label="Output")],
- examples=examples,
- article=article,
- title=title,
- description=description).launch(enable_queue=True)
\ No newline at end of file
diff --git a/spaces/dorkai/text-generation-webui-main/modules/monkey_patch_gptq_lora.py b/spaces/dorkai/text-generation-webui-main/modules/monkey_patch_gptq_lora.py
deleted file mode 100644
index a37e790671f513b6a5744cc469424a967a75d43b..0000000000000000000000000000000000000000
--- a/spaces/dorkai/text-generation-webui-main/modules/monkey_patch_gptq_lora.py
+++ /dev/null
@@ -1,39 +0,0 @@
-# Copied from https://github.com/johnsmith0031/alpaca_lora_4bit
-
-import sys
-from pathlib import Path
-
-sys.path.insert(0, str(Path("repositories/alpaca_lora_4bit")))
-
-import autograd_4bit
-from amp_wrapper import AMPWrapper
-from autograd_4bit import (Autograd4bitQuantLinear,
- load_llama_model_4bit_low_ram)
-from monkeypatch.peft_tuners_lora_monkey_patch import (
- Linear4bitLt, replace_peft_model_with_gptq_lora_model)
-
-from modules import shared
-from modules.GPTQ_loader import find_quantized_model_file
-
-replace_peft_model_with_gptq_lora_model()
-
-
-def load_model_llama(model_name):
- config_path = str(Path(f'{shared.args.model_dir}/{model_name}'))
- model_path = str(find_quantized_model_file(model_name))
- model, tokenizer = load_llama_model_4bit_low_ram(config_path, model_path, groupsize=shared.args.groupsize, is_v1_model=False)
- for n, m in model.named_modules():
- if isinstance(m, Autograd4bitQuantLinear) or isinstance(m, Linear4bitLt):
- if m.is_v1_model:
- m.zeros = m.zeros.half()
- m.scales = m.scales.half()
- m.bias = m.bias.half()
-
- autograd_4bit.use_new = True
- autograd_4bit.auto_switch = True
-
- model.half()
- wrapper = AMPWrapper(model)
- wrapper.apply_generate()
-
- return model, tokenizer
diff --git a/spaces/ds520/bingo/src/lib/bots/bing/utils.ts b/spaces/ds520/bingo/src/lib/bots/bing/utils.ts
deleted file mode 100644
index 64b4b96452d125346b0fc4436b5f7c18c962df0b..0000000000000000000000000000000000000000
--- a/spaces/ds520/bingo/src/lib/bots/bing/utils.ts
+++ /dev/null
@@ -1,87 +0,0 @@
-import { ChatResponseMessage, BingChatResponse } from './types'
-
-export function convertMessageToMarkdown(message: ChatResponseMessage): string {
- if (message.messageType === 'InternalSearchQuery') {
- return message.text
- }
- for (const card of message.adaptiveCards??[]) {
- for (const block of card.body) {
- if (block.type === 'TextBlock') {
- return block.text
- }
- }
- }
- return ''
-}
-
-const RecordSeparator = String.fromCharCode(30)
-
-export const websocketUtils = {
- packMessage(data: any) {
- return `${JSON.stringify(data)}${RecordSeparator}`
- },
- unpackMessage(data: string | ArrayBuffer | Blob) {
- if (!data) return {}
- return data
- .toString()
- .split(RecordSeparator)
- .filter(Boolean)
- .map((s) => {
- try {
- return JSON.parse(s)
- } catch (e) {
- return {}
- }
- })
- },
-}
-
-export async function createImage(prompt: string, id: string, headers: HeadersInit): Promise<string | undefined> {
- const { headers: responseHeaders } = await fetch(`https://www.bing.com/images/create?partner=sydney&re=1&showselective=1&sude=1&kseed=7000&SFX=&q=${encodeURIComponent(prompt)}&iframeid=${id}`,
- {
- method: 'HEAD',
- headers,
- redirect: 'manual'
- },
- );
-
- if (!/&id=([^&]+)$/.test(responseHeaders.get('location') || '')) {
- throw new Error('请求异常,请检查 cookie 是否有效')
- }
-
- const resultId = RegExp.$1;
- let count = 0
- const imageThumbUrl = `https://www.bing.com/images/create/async/results/${resultId}?q=${encodeURIComponent(prompt)}&partner=sydney&showselective=1&IID=images.as`;
-
- do {
- await sleep(3000);
- const content = await fetch(imageThumbUrl, { headers, method: 'GET' })
-
- // @ts-ignore
- if (content.headers.get('content-length') > 1) {
- const text = await content.text()
-      return (text?.match(/<img class="mimg"((?!src).)+src="[^"]+/mg) || [])
-        .map(target => target?.split('src="').pop()?.replace(/&amp;/g, '&'))
-        .map(img => `<img src="${img}" alt="${prompt}" />`).join(' ') // regex & image template reconstructed; originals lost to tag stripping
- }
- } while(count ++ < 10);
-}
-
-
-export async function* streamAsyncIterable(stream: ReadableStream) {
- const reader = stream.getReader()
- try {
- while (true) {
- const { done, value } = await reader.read()
- if (done) {
- return
- }
- yield value
- }
- } finally {
- reader.releaseLock()
- }
-}
-
-export const sleep = (ms: number) => new Promise(resolve => setTimeout(resolve, ms))
-
diff --git a/spaces/dylanebert/gaussian-viewer/LICENSE.md b/spaces/dylanebert/gaussian-viewer/LICENSE.md
deleted file mode 100644
index 212de3c72ef023c30539016f33482fe59dbd24f7..0000000000000000000000000000000000000000
--- a/spaces/dylanebert/gaussian-viewer/LICENSE.md
+++ /dev/null
@@ -1,18 +0,0 @@
-Permission is hereby granted, free of charge, to any person obtaining
-a copy of this software and associated documentation files (the
-"Software"), to deal in the Software without restriction, including
-without limitation the rights to use, copy, modify, merge, publish,
-distribute, sublicense, and/or sell copies of the Software, and to
-permit persons to whom the Software is furnished to do so, subject to
-the following conditions:
-
-The above copyright notice and this permission notice shall be
-included in all copies or substantial portions of the Software.
-
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
-EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
-MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
-NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
-LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
-OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
-WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
\ No newline at end of file
diff --git a/spaces/eaglelandsonce/weatherQnA/app.py b/spaces/eaglelandsonce/weatherQnA/app.py
deleted file mode 100644
index 6fa2a98bc1de8df1f6db2f54766e497b7a8b3409..0000000000000000000000000000000000000000
--- a/spaces/eaglelandsonce/weatherQnA/app.py
+++ /dev/null
@@ -1,40 +0,0 @@
-import streamlit as st
-from langchain.llms import OpenAI
-from langchain.agents import load_tools, initialize_agent, AgentType
-import os
-
-# Set up Streamlit interface
-st.title('Weather Q&A using Langchain')
-# Adding the markdown message
-st.markdown("""
-I'm genuinely impressed. Leveraging prompt engineering, I was able to craft this program in just 5 minutes, and it's fully functional! All I did was instruct ChatGPT to integrate langchain and streamlit, set up inputs for the API keys, pose a weather-related question, and use the details from the [LangChain OpenWeatherMap link](https://python.langchain.com/docs/integrations/tools/openweathermap) as a coding and output guide. Now, envisioning a solution is all it takes. It's auto-magical! I may have been a terrible programmer, but I\'m an amazing prompt engineer, bless the Lord!
-""")
-
-st.sidebar.header('API Configuration')
-
-# Input for OpenAI API key and OpenWeather API key in the Streamlit sidebar
-os.environ["OPENAI_API_KEY"] = st.sidebar.text_input('OpenAI API Key:', value='', type='password')
-os.environ["OPENWEATHERMAP_API_KEY"] = st.sidebar.text_input('OpenWeather API Key:', value='', type='password')
-
-# Input for question about the weather
-question = st.text_input('Ask a question about the weather (e.g., "What\'s the weather like in London?"):')
-
-# Initialize Langchain's OpenAI and agent_chain only once API keys are provided
-if os.environ["OPENAI_API_KEY"] and os.environ["OPENWEATHERMAP_API_KEY"]:
- try:
- llm = OpenAI(temperature=0)
- tools = load_tools(["openweathermap-api"], llm)
- agent_chain = initialize_agent(
- tools=tools, llm=llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
- )
-
- # If a question is provided, proceed to get an answer
- if question:
- response = agent_chain.run(question)
- st.write(response)
- except Exception as e:
- st.warning("There was an error processing your request.")
- st.write(f"Details: {e}")
- st.write("Please provide more specific information. For example, you may need to provide the country sucn as Florence Kentucky US.")
-else:
- st.warning("Please provide your API keys in the left sidebar!")
diff --git a/spaces/edugp/embedding-lenses/README.md b/spaces/edugp/embedding-lenses/README.md
deleted file mode 100644
index 3d89bebefa130d73b4955f11ac55bca6bbea542a..0000000000000000000000000000000000000000
--- a/spaces/edugp/embedding-lenses/README.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: Embedding Lenses
-emoji: 😻
-colorFrom: red
-colorTo: yellow
-sdk: streamlit
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio` or `streamlit`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/emc348/faces-through-time/models/StyleCLIP/mapper/scripts/train.py b/spaces/emc348/faces-through-time/models/StyleCLIP/mapper/scripts/train.py
deleted file mode 100644
index 4141436fb3edee8ab5f7576fde0c0e53b529ef66..0000000000000000000000000000000000000000
--- a/spaces/emc348/faces-through-time/models/StyleCLIP/mapper/scripts/train.py
+++ /dev/null
@@ -1,32 +0,0 @@
-"""
-This file runs the main training/val loop
-"""
-import os
-import json
-import sys
-import pprint
-
-sys.path.append(".")
-sys.path.append("..")
-
-from mapper.options.train_options import TrainOptions
-from mapper.training.coach import Coach
-
-
-def main(opts):
- if os.path.exists(opts.exp_dir):
- raise Exception('Oops... {} already exists'.format(opts.exp_dir))
- os.makedirs(opts.exp_dir, exist_ok=True)
-
- opts_dict = vars(opts)
- pprint.pprint(opts_dict)
- with open(os.path.join(opts.exp_dir, 'opt.json'), 'w') as f:
- json.dump(opts_dict, f, indent=4, sort_keys=True)
-
- coach = Coach(opts)
- coach.train()
-
-
-if __name__ == '__main__':
- opts = TrainOptions().parse()
- main(opts)
diff --git a/spaces/erastorgueva-nv/NeMo-Forced-Aligner/utils/make_output_manifest.py b/spaces/erastorgueva-nv/NeMo-Forced-Aligner/utils/make_output_manifest.py
deleted file mode 100644
index 7ee3fc77f7ab54809df831b3bca8511be9aa467d..0000000000000000000000000000000000000000
--- a/spaces/erastorgueva-nv/NeMo-Forced-Aligner/utils/make_output_manifest.py
+++ /dev/null
@@ -1,35 +0,0 @@
-# Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import json
-
-
-def write_manifest_out_line(
- f_manifest_out, utt_obj,
-):
-
- data = {"audio_filepath": utt_obj.audio_filepath}
-    if utt_obj.text is not None:
-        data["text"] = utt_obj.text
-
-    if utt_obj.pred_text is not None:
-        data["pred_text"] = utt_obj.pred_text
-
- for key, val in utt_obj.saved_output_files.items():
- data[key] = val
-
- new_line = json.dumps(data)
- f_manifest_out.write(f"{new_line}\n")
-
- return None
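A minimal sketch of how `write_manifest_out_line` above is meant to be called; the `SimpleNamespace` stand-in for the aligner's utterance object and the file paths are assumptions, but the attribute names match exactly what the function reads.

```python
from types import SimpleNamespace

# Stand-in for the utterance object; only the attributes the function reads are set.
utt = SimpleNamespace(
    audio_filepath="audio/utt_0001.wav",
    text="hello world",
    pred_text=None,  # skipped because it is None
    saved_output_files={"ctm_filepath": "ctm/utt_0001.ctm"},
)

with open("manifest_with_output_paths.json", "w", encoding="utf-8") as f_out:
    write_manifest_out_line(f_out, utt)
# appends one JSON line: {"audio_filepath": ..., "text": ..., "ctm_filepath": ...}
```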
diff --git "a/spaces/f2api/gpt-academic/crazy_functions/\344\270\213\350\275\275arxiv\350\256\272\346\226\207\347\277\273\350\257\221\346\221\230\350\246\201.py" "b/spaces/f2api/gpt-academic/crazy_functions/\344\270\213\350\275\275arxiv\350\256\272\346\226\207\347\277\273\350\257\221\346\221\230\350\246\201.py"
deleted file mode 100644
index 3da831fd07e361a532777c83bb02cff265b94abd..0000000000000000000000000000000000000000
--- "a/spaces/f2api/gpt-academic/crazy_functions/\344\270\213\350\275\275arxiv\350\256\272\346\226\207\347\277\273\350\257\221\346\221\230\350\246\201.py"
+++ /dev/null
@@ -1,194 +0,0 @@
-from toolbox import update_ui
-from toolbox import CatchException, report_execption, write_results_to_file, get_conf
-import re, requests, unicodedata, os
-from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
-def download_arxiv_(url_pdf):
- if 'arxiv.org' not in url_pdf:
- if ('.' in url_pdf) and ('/' not in url_pdf):
- new_url = 'https://arxiv.org/abs/'+url_pdf
- print('下载编号:', url_pdf, '自动定位:', new_url)
- # download_arxiv_(new_url)
- return download_arxiv_(new_url)
- else:
- print('不能识别的URL!')
- return None
- if 'abs' in url_pdf:
- url_pdf = url_pdf.replace('abs', 'pdf')
- url_pdf = url_pdf + '.pdf'
-
- url_abs = url_pdf.replace('.pdf', '').replace('pdf', 'abs')
- title, other_info = get_name(_url_=url_abs)
-
- paper_id = title.split()[0] # '[1712.00559]'
- if '2' in other_info['year']:
- title = other_info['year'] + ' ' + title
-
- known_conf = ['NeurIPS', 'NIPS', 'Nature', 'Science', 'ICLR', 'AAAI']
- for k in known_conf:
- if k in other_info['comment']:
- title = k + ' ' + title
-
- download_dir = './gpt_log/arxiv/'
- os.makedirs(download_dir, exist_ok=True)
-
- title_str = title.replace('?', '?')\
- .replace(':', ':')\
- .replace('\"', '“')\
- .replace('\n', '')\
- .replace(' ', ' ')\
- .replace(' ', ' ')
-
- requests_pdf_url = url_pdf
- file_path = download_dir+title_str
- # if os.path.exists(file_path):
-    #     print('Returning cached file')
- # return './gpt_log/arxiv/'+title_str
-
- print('下载中')
- proxies, = get_conf('proxies')
- r = requests.get(requests_pdf_url, proxies=proxies)
- with open(file_path, 'wb+') as f:
- f.write(r.content)
- print('下载完成')
-
-    # print('Download command:','aria2c -o \"%s\" %s'%(title_str,url_pdf))
- # subprocess.call('aria2c --all-proxy=\"172.18.116.150:11084\" -o \"%s\" %s'%(download_dir+title_str,url_pdf), shell=True)
-
- x = "%s %s %s.bib" % (paper_id, other_info['year'], other_info['authors'])
- x = x.replace('?', '?')\
- .replace(':', ':')\
- .replace('\"', '“')\
- .replace('\n', '')\
- .replace(' ', ' ')\
- .replace(' ', ' ')
- return './gpt_log/arxiv/'+title_str, other_info
-
-
-def get_name(_url_):
- import os
- from bs4 import BeautifulSoup
- print('正在获取文献名!')
- print(_url_)
-
- # arxiv_recall = {}
- # if os.path.exists('./arxiv_recall.pkl'):
- # with open('./arxiv_recall.pkl', 'rb') as f:
- # arxiv_recall = pickle.load(f)
-
- # if _url_ in arxiv_recall:
-    #     print('found in cache')
- # return arxiv_recall[_url_]
-
- proxies, = get_conf('proxies')
- res = requests.get(_url_, proxies=proxies)
-
- bs = BeautifulSoup(res.text, 'html.parser')
- other_details = {}
-
- # get year
- try:
- year = bs.find_all(class_='dateline')[0].text
- year = re.search(r'(\d{4})', year, re.M | re.I).group(1)
- other_details['year'] = year
- abstract = bs.find_all(class_='abstract mathjax')[0].text
- other_details['abstract'] = abstract
- except:
- other_details['year'] = ''
- print('年份获取失败')
-
- # get author
- try:
- authors = bs.find_all(class_='authors')[0].text
- authors = authors.split('Authors:')[1]
- other_details['authors'] = authors
- except:
- other_details['authors'] = ''
- print('authors获取失败')
-
- # get comment
- try:
- comment = bs.find_all(class_='metatable')[0].text
- real_comment = None
- for item in comment.replace('\n', ' ').split(' '):
- if 'Comments' in item:
- real_comment = item
- if real_comment is not None:
- other_details['comment'] = real_comment
- else:
- other_details['comment'] = ''
- except:
- other_details['comment'] = ''
- print('年份获取失败')
-
- title_str = BeautifulSoup(
- res.text, 'html.parser').find('title').contents[0]
- print('获取成功:', title_str)
- # arxiv_recall[_url_] = (title_str+'.pdf', other_details)
- # with open('./arxiv_recall.pkl', 'wb') as f:
- # pickle.dump(arxiv_recall, f)
-
- return title_str+'.pdf', other_details
-
-
-
-@CatchException
-def 下载arxiv论文并翻译摘要(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
-
- CRAZY_FUNCTION_INFO = "下载arxiv论文并翻译摘要,函数插件作者[binary-husky]。正在提取摘要并下载PDF文档……"
- import glob
- import os
-
-    # Basic info: what the plugin does and who contributed it
-    chatbot.append(["函数插件功能?", CRAZY_FUNCTION_INFO])
-    yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
-
-    # Try to import dependencies; if any are missing, suggest how to install them
- try:
- import pdfminer, bs4
- except:
- report_execption(chatbot, history,
- a = f"解析项目: {txt}",
- b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pdfminer beautifulsoup4```。")
-        yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
-
-    # Clear the history to avoid overflowing the input
-    history = []
-
-    # Extract the abstract and download the PDF
- try:
- pdf_path, info = download_arxiv_(txt)
- except:
- report_execption(chatbot, history,
- a = f"解析项目: {txt}",
- b = f"下载pdf文件未成功")
-        yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
-
-    # Translate the abstract, etc.
- i_say = f"请你阅读以下学术论文相关的材料,提取摘要,翻译为中文。材料如下:{str(info)}"
- i_say_show_user = f'请你阅读以下学术论文相关的材料,提取摘要,翻译为中文。论文:{pdf_path}'
- chatbot.append((i_say_show_user, "[Local Message] waiting gpt response."))
-    yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
-    msg = '正常'
-    # ** gpt request **
-    # Single-threaded: fetch the paper's meta information
- gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
- inputs=i_say,
- inputs_show_user=i_say_show_user,
- llm_kwargs=llm_kwargs,
- chatbot=chatbot, history=[],
- sys_prompt="Your job is to collect information from materials and translate to Chinese。",
- )
-
- chatbot[-1] = (i_say_show_user, gpt_say)
- history.append(i_say_show_user); history.append(gpt_say)
-    yield from update_ui(chatbot=chatbot, history=history, msg=msg) # refresh the UI
-    # Write results to a file
-    import shutil
-    # Reset the file's creation time
- shutil.copyfile(pdf_path, f'./gpt_log/{os.path.basename(pdf_path)}'); os.remove(pdf_path)
- res = write_results_to_file(history)
- chatbot.append(("完成了吗?", res + "\n\nPDF文件也已经下载"))
-    yield from update_ui(chatbot=chatbot, history=history, msg=msg) # refresh the UI
-
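For context, a hedged sketch of driving `download_arxiv_` above on its own; it assumes gpt-academic's `toolbox.get_conf('proxies')` is configured (or returns None) and that arxiv.org is reachable, and the paper id is only an example.

```python
# A bare arXiv id is auto-expanded to an abs URL; a full abs/pdf URL also works.
pdf_path, info = download_arxiv_("1706.03762")
print(pdf_path)                      # e.g. './gpt_log/arxiv/<year> <title>.pdf'
print(info["year"], info["authors"])
```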
diff --git a/spaces/falterWliame/Face_Mask_Detection/Dispensary Management Software Free Download BEST.md b/spaces/falterWliame/Face_Mask_Detection/Dispensary Management Software Free Download BEST.md
deleted file mode 100644
index ef3819e5d04d0da56803a9cb5cc147a776133c8b..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/Dispensary Management Software Free Download BEST.md
+++ /dev/null
@@ -1,44 +0,0 @@
-
-How to Find the Best Dispensary Management Software for Free
-If you are running a dispensary or a pharmacy, you know how important it is to have a reliable and efficient software system to manage your inventory, sales, billing, and reporting. However, you may also be aware of how expensive some of the commercial software solutions can be. Fortunately, there are some free and open-source alternatives that can help you run your dispensary smoothly and save money.
-dispensary management software free download
Download Zip ===> https://urlca.com/2uDcq1
-In this article, we will introduce you to some of the best free and open-source dispensary management software solutions that you can download and use right away. We will also highlight their features, benefits, and drawbacks, so you can make an informed decision.
-What is Dispensary Management Software?
-Dispensary management software is a type of software that helps dispensary owners and managers to manage various aspects of their business, such as:
-
-- Inventory management: track the stock levels, expiry dates, batch numbers, and locations of your products.
-- Sales management: process orders, generate invoices, accept payments, and issue receipts.
-- Billing management: create and send bills to customers, insurance companies, or third-party payers.
-- Reporting management: generate and analyze reports on sales, expenses, profits, taxes, and more.
-- Customer management: store and access customer information, such as name, address, phone number, medical history, prescriptions, etc.
-- Employee management: manage employee schedules, salaries, commissions, attendance, and performance.
-
-Dispensary management software can help you improve your operational efficiency, reduce errors and wastage, increase customer satisfaction and loyalty, comply with legal and regulatory requirements, and grow your business.
-
-What are the Benefits of Free and Open-Source Dispensary Management Software?
-Free and open-source dispensary management software solutions have some advantages over their paid counterparts, such as:
-
-- Cost-effectiveness: you don't have to pay any license fees or subscription fees to use them. You can also save on maintenance and support costs.
-- Customizability: you can modify the source code of the software to suit your specific needs and preferences. You can also add new features or integrate with other applications.
-- Community support: you can benefit from the knowledge and experience of other users and developers who use the same software. You can also contribute to the improvement of the software by reporting bugs or suggesting enhancements.
-
-What are the Drawbacks of Free and Open-Source Dispensary Management Software?
-Free and open-source dispensary management software solutions also have some disadvantages compared to their paid counterparts, such as:
-
-- Limited functionality: some of the free and open-source software may not have all the features that you need or want. You may have to compromise on some aspects of your business operations or look for additional tools.
-- Lack of technical support: some of the free and open-source software may not have a dedicated customer service team or a professional technical support team. You may have to rely on online forums or documentation for help.
-- Security risks: some of the free and open-source software may not have adequate security measures or updates to protect your data from hackers or malware. You may have to take extra precautions to safeguard your data.
-
-What are Some of the Best Free and Open-Source Dispensary Management Software Solutions?
-There are many free and open-source dispensary management software solutions available online. However, not all of them are suitable for your business needs. Here are some of the best ones that we recommend:
-
-RxBLU
-RxBLU is a free pharmacy management software that helps pharmacies handle prescriptions, deliveries, point-of-sale processes, and more[^1^]. It has a simple and easy-to-use interface, powerful reporting tools,
-invoicing capabilities, monthly statistics tracking,
-and advanced item tracking. RxBLU is compatible with Windows
-and macOS operating systems[^1^].
-
-RxVantage
-RxVantage is a free rep management platform that helps practices stay at
-d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/falterWliame/Face_Mask_Detection/DriverSHARPAR5618SforWindowsXP32bitfree !!EXCLUSIVE!!.md b/spaces/falterWliame/Face_Mask_Detection/DriverSHARPAR5618SforWindowsXP32bitfree !!EXCLUSIVE!!.md
deleted file mode 100644
index 6587c1e6149ab3e6da19af83f4d3873cbadf5af1..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/DriverSHARPAR5618SforWindowsXP32bitfree !!EXCLUSIVE!!.md
+++ /dev/null
@@ -1,34 +0,0 @@
-
-How to Download and Install Driver SHARP AR-5618S for Windows XP 32 bit for Free
-If you are looking for a fast, fully featured A3 MFP that can handle your daily B/W printing, colour scanning and copying needs, you might want to consider the SHARP AR-5618S. This machine has a robust and compact design, advanced functionality and impressive performance. However, to make the most of its features, you need to install the correct driver for your operating system.
-In this article, we will show you how to download and install Driver SHARP AR-5618S for Windows XP 32 bit for free. This driver is compatible with Windows XP/Vista/7/8/8.1/10 32 bit SPLC & TWAIN drivers[^1^]. It also supports the Button Manager AA software, which is required for USB scanning[^2^]. Follow these simple steps to get your SHARP AR-5618S up and running in no time.
-DriverSHARPAR5618SforWindowsXP32bitfree
-Download File ✪✪✪ https://urlca.com/2uDc72
-Step 1: Download the Driver
-The first step is to download the driver from the official SHARP website. You can find the driver by searching for "AR-5618" in the Driver Downloads section[^1^]. Alternatively, you can use this direct link[^3^] to download the ZIP file that contains the driver. The file size is about 10 MB and it was last updated on March 17, 2016.
-Step 2: Extract the Driver
-Once you have downloaded the driver, you need to extract it to a folder on your computer. You can use any ZIP extraction software, such as WinZip or 7-Zip, to do this. Just right-click on the ZIP file and choose "Extract All" or "Extract Here". You will see a folder named "SMON42_2201a_ALL" that contains the driver files.
-Step 3: Install the Driver
-Now that you have extracted the driver, you can proceed to install it on your computer. To do this, follow these steps:
-
-- Open the "SMON42_2201a_ALL" folder and double-click on the "Setup.exe" file.
-- Follow the on-screen instructions to complete the installation process. You may need to restart your computer after the installation.
-- Connect your SHARP AR-5618S to your computer via USB cable or network cable.
-- Go to "Control Panel" > "Printers and Faxes" and select your SHARP AR-5618S printer.
-- Right-click on the printer icon and choose "Properties".
-- Go to the "Ports" tab and make sure that the correct port is selected for your printer.
-- Click "OK" to save the settings.
-
-Congratulations! You have successfully installed Driver SHARP AR-5618S for Windows XP 32 bit for free. You can now enjoy all the features and functions of your SHARP AR-5618S printer.
-Troubleshooting Tips
-If you encounter any problems or errors while downloading or installing the driver, here are some troubleshooting tips that might help:
-
-
-- Make sure that your computer meets the minimum system requirements for the driver. You need at least Windows XP SP3 32 bit, 512 MB of RAM and 100 MB of free disk space.
-- Make sure that your internet connection is stable and fast enough to download the driver without interruption.
-- Make sure that your antivirus software or firewall does not block or interfere with the driver download or installation.
-- Make sure that you download the driver from a trusted source, such as the official SHARP website[^1^]. Do not download or install any drivers from unknown or suspicious websites.
-- Make sure that you extract the driver files properly and do not delete or modify any files in the folder.
-- Make sure that you follow the installation instructions carefully and do not skip any steps.
-- If you have any questions or need any assistance, you can contact SHARP customer support d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/falterWliame/Face_Mask_Detection/HACK HP Windows 8.1 Pro 64bit Multilanguage OEM DVD [WORK].md b/spaces/falterWliame/Face_Mask_Detection/HACK HP Windows 8.1 Pro 64bit Multilanguage OEM DVD [WORK].md
deleted file mode 100644
index 798e8408086b7a11c572e9286c2d867e686f7ec0..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/HACK HP Windows 8.1 Pro 64bit Multilanguage OEM DVD [WORK].md
+++ /dev/null
@@ -1,6 +0,0 @@
-HACK HP Windows 8.1 Pro 64bit Multilanguage OEM DVD
-Download Zip — https://urlca.com/2uDcg3
-
-This software will have its flash tool removed soon, as the free flashhack tool ... ON WINDOWS ( 7 , 8 , 10 X32 & X64) >>> Download & Install & Activate with. ... Insert the Flash Files CD or DVD into the CD drive or DVD drive of your computer. ... Cat will not let you go from a 475 hp or even factory 550 hp to a 600 or 625 hp. 4d29de3e1b
-
-
-
diff --git a/spaces/fatiXbelha/sd/Basket Manager 2017 A Realistic and Addictive Basketball Management Game for Android.md b/spaces/fatiXbelha/sd/Basket Manager 2017 A Realistic and Addictive Basketball Management Game for Android.md
deleted file mode 100644
index d7f510a588fc301a06ec0a2960a477582765f260..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Basket Manager 2017 A Realistic and Addictive Basketball Management Game for Android.md
+++ /dev/null
@@ -1,148 +0,0 @@
-
-Basket Manager 2017 APK: A Free Basketball Game for Android
-If you are a fan of basketball and want to experience the thrill of managing a professional team, then you should try Basket Manager 2017 APK. This is a free basketball game for Android devices that lets you take control of a team in a realistic league. You can choose from over 30 teams, sign players, train them, compete in matches, and win trophies. In this article, we will tell you everything you need to know about Basket Manager 2017 APK, including its features, how to download and install it, why you should play it, and some tips and tricks to help you succeed.
- What is Basket Manager 2017 APK?
-Basket Manager 2017 APK is a modified version of the original game Basket Manager 2017 Free, which is available on Google Play Store. The APK version has some advantages over the official version, such as:
-basket manager 2017 apk
Download Zip ——— https://urllie.com/2uNAUe
-
-- It does not require an internet connection to play.
-- It has no ads or in-app purchases.
-- It has updated rosters and ratings for the 2020-2021 season.
-- It has a new app icon and name.
-
-The game was created by SubtleLies, a Reddit user who shared it on the r/basketmanager subreddit. You can download it from MediaFire or Aptoide. The game is compatible with Android 4.0.3 and up.
- Features of Basket Manager 2017 APK
-Basket Manager 2017 APK has many features that make it an enjoyable and challenging basketball game. Some of these features are:
-
-- You can choose from over 30 teams from different countries, such as USA, Spain, France, Italy, Germany, Turkey, Greece, and more.
-- You can sign players from a pool of over 1000 real players, with their names, photos, positions, skills, salaries, and contracts.
-- You can train your players and improve their attributes, such as shooting, passing, rebounding, defense, speed, and stamina.
-- You can manage your budget and decide how to spend your money on salaries, transfers, facilities, staff, and marketing.
-- You can compete in different leagues and tournaments, such as the NBA, Euroleague, FIBA World Cup, Olympics, and more.
-- You can view detailed statistics and rankings of your team and players.
-- You can customize your team's name, logo, colors, and uniforms.
-
- How to download and install Basket Manager 2017 APK
-To download and install Basket Manager 2017 APK on your Android device, follow these steps:
-
-- Go to MediaFire or Aptoide and download the APK file.
-- Go to your device's settings and enable the option to install apps from unknown sources.
-- Locate the downloaded APK file on your device and tap on it to start the installation process.
-- Follow the instructions on the screen and wait for the installation to finish.
-- Launch the game and enjoy!
-
- Why play Basket Manager 2017 APK?
-Basket Manager 2017 APK is a fun and addictive basketball game that will keep you entertained for hours. Here are some reasons why you should play it:
- Pros of Basket Manager 2017 APK
-
-- It is free and does not require an internet connection to play.
-- It has realistic and updated graphics and sounds.
-- It has a simple and intuitive user interface and controls.
-- It has a high replay value and offers many options and challenges.
-- It is suitable for all ages and skill levels.
-
- Cons of Basket Manager 2017 APK
-
-- It may not work on some devices or cause crashes or errors.
-- It may not be compatible with some Android versions or updates.
-- It may have some bugs or glitches that affect the gameplay or performance.
-- It may not have all the features or content of the official version or other similar games.
-- It may not be updated or supported by the developer in the future.
-
- Tips and tricks for playing Basket Manager 2017 APK
-If you want to become a successful basketball manager, you need to have some skills and strategies. Here are some tips and tricks that can help you play Basket Manager 2017 APK better:
-basket manager 2017 apk download
-basket manager 2017 apk mod
-basket manager 2017 apk free
-basket manager 2017 apk full version
-basket manager 2017 apk android
-basket manager 2017 apk latest
-basket manager 2017 apk offline
-basket manager 2017 apk unlimited money
-basket manager 2017 apk cracked
-basket manager 2017 apk hack
-basket manager 2017 apk update
-basket manager 2017 apk premium
-basket manager 2017 apk pro
-basket manager 2017 apk online
-basket manager 2017 apk data
-basket manager 2017 apk obb
-basket manager 2017 apk revdl
-basket manager 2017 apk rexdl
-basket manager 2017 apk aptoide
-basket manager 2017 apk pure
-basket manager 2017 apk mirror
-basket manager 2017 apk mob.org
-basket manager 2017 apk uptodown
-basket manager 2017 apk apkpure
-basket manager 2017 apk apkmirror
-basket manager 2017 apk for pc
-basket manager 2017 apk for ios
-basket manager 2017 apk for windows
-basket manager 2017 apk for mac
-basket manager 2017 apk for laptop
-basket manager 2017 apk for tablet
-basket manager 2017 apk for iphone
-basket manager 2017 apk for ipad
-basket manager 2017 apk for android tv
-basket manager 2017 apk for firestick
-basket manager 2017 apk for chromebook
-basket manager 2017 apk for bluestacks
-basket manager 2017 apk for nox player
-basket manager 2017 apk for memu play
-basket manager 2017 apk for ldplayer
-how to install basket manager 2017 apk
-how to play basket manager 2017 apk
-how to update basket manager 2017 apk
-how to hack basket manager 2017 apk
-how to mod basket manager 2017 apk
-how to get basket manager 2017 apk
-how to download basket manager 2017 apk
-how to uninstall basket manager 2017 apk
-how to use basket manager 2017 apk
- Choose your team wisely
-The first thing you need to do is to choose your team. You can either select one of the existing teams or create your own custom team. You should consider the following factors when choosing your team:
-
-- The country and league of your team. Different countries and leagues have different rules, regulations, budgets, and competitions. You should choose a team that suits your preferences and goals.
-- The roster and ratings of your team. You should check the players' names, positions, skills, salaries, and contracts. You should look for players that fit your style of play, have high potential, and are affordable.
-- The facilities and staff of your team. You should check the quality and level of your team's facilities, such as the stadium, training center, medical center, and academy. You should also check the staff's roles, skills, and salaries. You should look for facilities and staff that can improve your team's performance, development, and income.
-
- Manage your budget and players
-The next thing you need to do is to manage your budget and players. You have a limited amount of money to spend on salaries, transfers, facilities, staff, and marketing. You should balance your income and expenses wisely and avoid going bankrupt. You should also manage your players' contracts, morale, fitness, injuries, suspensions, and form. You should keep your players happy, healthy, motivated, and in shape. You should also make smart decisions on who to sign, sell, loan, or release.
- Train your players and improve their skills
-The third thing you need to do is to train your players and improve their skills. You can assign different training programs to your players based on their positions, attributes, weaknesses, and goals. You can also hire coaches to help you with the training. You should monitor your players' progress and feedback regularly and adjust the training accordingly. You should also reward your players with bonuses or promotions when they perform well or improve their skills.
- Compete in different leagues and tournaments
-The last thing you need to do is to compete in different leagues and tournaments. You can play in various competitions, such as the NBA, Euroleague, FIBA World Cup, Olympics, and more. You can also create your own custom tournaments with your own rules and teams. You should prepare well for each match by scouting your opponents, setting your lineup, choosing your tactics, and making substitutions. You should also analyze your results and statistics after each match and learn from your mistakes or successes.
- Conclusion
-Basket Manager 2017 APK is a free basketball game for Android devices that lets you take control of a team in a realistic league. You can choose from over 30 teams, sign players, train them, compete in matches, and win trophies. The game has many features that make it an enjoyable and challenging basketball game. However, it also has some drawbacks that may affect its gameplay or performance. If you want to play Basket Manager 2017 APK, you can download it from MediaFire or Aptoide. You can also follow these tips and tricks to help you play better:
- Summary of the article
-
-- Basket Manager 2017 APK is a modified version of the original game Basket Manager 2017 Free.
-- The game lets you manage a basketball team in a realistic league.
-- The game has many features that make it fun and addictive.
-- The game also has some drawbacks that may affect its gameplay or performance.
-- You can download the game from MediaFire or Aptoide.
-- You can follow these tips and tricks to help you play better:
-
-- Choose your team wisely.
-- Manage your budget and players.
-- Train your players and improve their skills.
-- Compete in different leagues and tournaments.
-
-
- FAQs
-Here are some frequently asked questions about Basket Manager 2017 APK:
-
-- Is Basket Manager 2017 APK safe to download and install?
-Yes, Basket Manager 2017 APK is safe to download and install, as long as you get it from a trusted source, such as MediaFire or Aptoide. However, you should always scan the APK file with an antivirus or malware detector before installing it, just to be sure.
-- Is Basket Manager 2017 APK legal to play?
-Yes, Basket Manager 2017 APK is legal to play, as it is a fan-made modification of the original game Basket Manager 2017 Free, which is free and available on the Google Play Store. However, you should not use the game for any commercial or illegal purposes, such as selling it, hacking it, or cheating in it.
-- How can I update Basket Manager 2017 APK?
-Basket Manager 2017 APK does not have an automatic update feature, so you will have to manually check for updates on the r/basketmanager subreddit or the developer's website. If there is a new version available, you will have to download and install it again, following the same steps as before.
-- How can I contact the developer of Basket Manager 2017 APK?
-You can contact the developer of Basket Manager 2017 APK by sending a message to SubtleLies on Reddit or by visiting his website. You can also join the r/basketmanager subreddit and interact with other players and fans of the game.
-- How can I support the development of Basket Manager 2017 APK?
-You can support the development of Basket Manager 2017 APK by giving the developer feedback, suggestions, bug reports, or donations. You can also share the game with your friends, family, or social media followers, and rate and review it on Aptoide or other platforms.
-
-
-
\ No newline at end of file
diff --git "a/spaces/fb700/chatglm-fitness-RLHF/crazy_functions/\350\257\273\346\226\207\347\253\240\345\206\231\346\221\230\350\246\201.py" "b/spaces/fb700/chatglm-fitness-RLHF/crazy_functions/\350\257\273\346\226\207\347\253\240\345\206\231\346\221\230\350\246\201.py"
deleted file mode 100644
index 72ffe6b1a8f2a59a3c5c364e30dfb4949bd6a929..0000000000000000000000000000000000000000
--- "a/spaces/fb700/chatglm-fitness-RLHF/crazy_functions/\350\257\273\346\226\207\347\253\240\345\206\231\346\221\230\350\246\201.py"
+++ /dev/null
@@ -1,67 +0,0 @@
-from toolbox import update_ui
-from toolbox import CatchException, report_execption, write_results_to_file
-from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
-fast_debug = False
-
-
-def 解析Paper(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt):
- import time, glob, os
- print('begin analysis on:', file_manifest)
- for index, fp in enumerate(file_manifest):
- with open(fp, 'r', encoding='utf-8', errors='replace') as f:
- file_content = f.read()
-
- prefix = "接下来请你逐文件分析下面的论文文件,概括其内容" if index==0 else ""
- i_say = prefix + f'请对下面的文章片段用中文做一个概述,文件名是{os.path.relpath(fp, project_folder)},文章内容是 ```{file_content}```'
- i_say_show_user = prefix + f'[{index}/{len(file_manifest)}] 请对下面的文章片段做一个概述: {os.path.abspath(fp)}'
- chatbot.append((i_say_show_user, "[Local Message] waiting gpt response."))
-        yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
-
- if not fast_debug:
- msg = '正常'
- # ** gpt request **
-            gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(i_say, i_say_show_user, llm_kwargs, chatbot, history=[], sys_prompt=system_prompt)   # with a timeout countdown
-
- chatbot[-1] = (i_say_show_user, gpt_say)
- history.append(i_say_show_user); history.append(gpt_say)
-            yield from update_ui(chatbot=chatbot, history=history, msg=msg) # refresh the UI
- if not fast_debug: time.sleep(2)
-
- all_file = ', '.join([os.path.relpath(fp, project_folder) for index, fp in enumerate(file_manifest)])
- i_say = f'根据以上你自己的分析,对全文进行概括,用学术性语言写一段中文摘要,然后再写一段英文摘要(包括{all_file})。'
- chatbot.append((i_say, "[Local Message] waiting gpt response."))
-    yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
-
- if not fast_debug:
- msg = '正常'
- # ** gpt request **
-        gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(i_say, i_say, llm_kwargs, chatbot, history=history, sys_prompt=system_prompt)   # with a timeout countdown
-
- chatbot[-1] = (i_say, gpt_say)
- history.append(i_say); history.append(gpt_say)
-        yield from update_ui(chatbot=chatbot, history=history, msg=msg) # refresh the UI
- res = write_results_to_file(history)
- chatbot.append(("完成了吗?", res))
-    yield from update_ui(chatbot=chatbot, history=history, msg=msg) # refresh the UI
-
-
-
-@CatchException
-def 读文章写摘要(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
-    history = []    # clear the history to avoid overflowing the input
- import glob, os
- if os.path.exists(txt):
- project_folder = txt
- else:
- if txt == "": txt = '空空如也的输入栏'
- report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
-        yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
- file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)] # + \
- # [f for f in glob.glob(f'{project_folder}/**/*.cpp', recursive=True)] + \
- # [f for f in glob.glob(f'{project_folder}/**/*.c', recursive=True)]
- if len(file_manifest) == 0:
- report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex文件: {txt}")
-        yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
- yield from 解析Paper(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
diff --git a/spaces/fb700/chatglm-fitness-RLHF/src/face3d/models/arcface_torch/configs/speed.py b/spaces/fb700/chatglm-fitness-RLHF/src/face3d/models/arcface_torch/configs/speed.py
deleted file mode 100644
index 45e95237da65e44f35a172c25ac6dc4e313e4eae..0000000000000000000000000000000000000000
--- a/spaces/fb700/chatglm-fitness-RLHF/src/face3d/models/arcface_torch/configs/speed.py
+++ /dev/null
@@ -1,23 +0,0 @@
-from easydict import EasyDict as edict
-
-# configs for test speed
-
-config = edict()
-config.loss = "arcface"
-config.network = "r50"
-config.resume = False
-config.output = None
-config.embedding_size = 512
-config.sample_rate = 1.0
-config.fp16 = True
-config.momentum = 0.9
-config.weight_decay = 5e-4
-config.batch_size = 128
-config.lr = 0.1 # batch size is 512
-
-config.rec = "synthetic"
-config.num_classes = 100 * 10000
-config.num_epoch = 30
-config.warmup_epoch = -1
-config.decay_epoch = [10, 16, 22]
-config.val_targets = []
diff --git a/spaces/fclong/summary/fengshen/examples/tcbert/example.py b/spaces/fclong/summary/fengshen/examples/tcbert/example.py
deleted file mode 100644
index 5eff218461c65f40ec88e9ea2c7e0cdbe1d05082..0000000000000000000000000000000000000000
--- a/spaces/fclong/summary/fengshen/examples/tcbert/example.py
+++ /dev/null
@@ -1,86 +0,0 @@
-import argparse
-from fengshen.pipelines.tcbert import TCBertPipelines
-from pytorch_lightning import seed_everything
-
-def main():
- seed_everything(123)
- total_parser = argparse.ArgumentParser("Topic Classification")
- total_parser = TCBertPipelines.piplines_args(total_parser)
- args = total_parser.parse_args()
-
- pretrained_model_path = 'IDEA-CCNL/Erlangshen-TCBert-110M-Classification-Chinese'
- args.learning_rate = 2e-5
- args.max_length = 512
- args.max_epochs = 5
- args.batchsize = 4
- args.train = 'train'
- args.default_root_dir = './'
-    # args.gpus = 1  # Note: training currently runs on the CPU; uncommenting this enables the GPU, which requires a properly configured GPU environment
-    args.fixed_lablen = 2  # Note: you can set a fixed label length; since label lengths may differ across samples, a moderate value is recommended
-
-    train_data = [  # training data
- {"content": "真正的放养教育,放的是孩子的思维,养的是孩子的习惯", "label": "故事"},
- {"content": "《唐人街探案》捧红了王宝强跟刘昊然,唯独戏份不少的他发展最差", "label": "娱乐"},
- {"content": "油价攀升 阿曼经济加速增长", "label": "财经"},
- {"content": "日本男篮近期动作频频,中国队的未来劲敌会是他们吗?", "label": "体育"},
- {"content": "教育部:坚决防止因撤并乡村小规模学校导致学生上学困难", "label": "教育"},
- {"content": "LOL设计最完美的三个英雄,玩家们都很认可!", "label": "电竞"},
- {"content": "上联:浅看红楼终是梦,怎么对下联?", "label": "文化"},
- {"content": "楼市再出新政!北京部分限房价项目或转为共有产权房", "label": "房产"},
- {"content": "企业怎样选云服务器?云服务器哪家比较好?", "label": "科技"},
- {"content": "贝纳利的三缸车TRE899K、TRE1130K华丽转身", "label": "汽车"},
- {"content": "如何评价:刘姝威的《严惩做空中国股市者》?", "label": "股票"},
- {"content": "宁夏邀深圳市民共赴“寻找穿越”之旅", "label": "旅游"},
- {"content": "日本自民党又一派系力挺安倍 称会竭尽全力", "label": "国际"},
- {"content": "农村养老保险每年交5000,交满15年退休后能每月领多少钱?", "label": "农业"},
- {"content": "国产舰载机首次现身,进度超过预期,将率先在滑跃航母测试", "label": "军事"}
- ]
-
-    dev_data = [  # validation data
- {"content": "西游记后传中,灵儿最爱的女人是谁?不是碧游!", "label": "故事"},
- {"content": "小李子莱奥纳多有特别的提袋子技能,这些年他还有过哪些神奇的造型?", "label": "娱乐"},
- {"content": "现在手上有钱是投资买房还是存钱,为什么?", "label": "财经"},
- {"content": "迪卡侬的衣服值得购买吗?", "label": "体育"},
- {"content": "黑龙江省旅游委在齐齐哈尔组织举办导游培训班", "label": "教育"},
- {"content": "《王者荣耀》中,哪些英雄的大招最“废柴”?", "label": "电竞"},
- {"content": "上交演绎马勒《复活》,用音乐带来抚慰和希望", "label": "文化"},
- {"content": "All in服务业,58集团在租房、住房市场的全力以赋", "label": "房产"},
- {"content": "为什么有的人宁愿选择骁龙660的X21,也不买骁龙845的小米MIX2S?", "label": "科技"},
- {"content": "众泰大型SUV来袭,售13.98万,2.0T榨出231马力,汉兰达要危险了", "label": "汽车"},
- {"content": "股票放量下趺,大资金出逃谁在接盘?", "label": "股票"},
- {"content": "广西博白最大的特色是什么?", "label": "旅游"},
- {"content": "特朗普退出《伊朗核协议》,对此你怎么看?", "label": "国际"},
- {"content": "卖水果利润怎么样?", "label": "农业"},
- {"content": "特种兵都是身材高大的猛男么?别再被电视骗了,超过1米8都不合格", "label": "军事"}
- ]
-
-    test_data = [  # test data
- {"content": "廖凡重出“江湖”再争影帝 亮相戛纳红毯霸气有型"},
- {"content": "《绝地求生: 刺激战场》越玩越卡?竟是手机厂商没交“保护费”!"},
- {"content": "买涡轮增压还是自然吸气车?今天终于有答案了!"},
- ]
-
-    # label mapping: real labels can be mapped to labels that better suit the prompt
- prompt_label = {
- "体育":"体育", "军事":"军事", "农业":"农业", "国际":"国际",
- "娱乐":"娱乐", "房产":"房产", "故事":"故事", "教育":"教育",
- "文化":"文化", "旅游":"旅游", "汽车":"汽车", "电竞":"电竞",
- "科技":"科技", "股票":"股票", "财经":"财经"
- }
-
-    # different prompts can affect model performance
- #prompt = "这一句描述{}的内容如下:"
- prompt = "下面是一则关于{}的新闻:"
-
- model = TCBertPipelines(args, model_path=pretrained_model_path, nlabels=len(prompt_label))
-
- if args.train:
- model.train(train_data, dev_data, prompt, prompt_label)
- result = model.predict(test_data, prompt, prompt_label)
-
- for i, line in enumerate(result):
- print({"content":test_data[i]["content"], "label":list(prompt_label.keys())[line]})
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/felipekitamura/face_deid_ct/face_deid_ct.py b/spaces/felipekitamura/face_deid_ct/face_deid_ct.py
deleted file mode 100644
index bc8c0f2a8a199f111381b6e14262be64f7f26f47..0000000000000000000000000000000000000000
--- a/spaces/felipekitamura/face_deid_ct/face_deid_ct.py
+++ /dev/null
@@ -1,285 +0,0 @@
-import os
-import pydicom
-import numpy as np
-import cv2
-from matplotlib import pyplot as plt
-import random
-import time
-import tqdm
-from IPython.core.display import display, HTML
-
-# Determine if we are in a Jupyter notebook
-try:
- shell = get_ipython().__class__.__name__
- if shell == 'ZMQInteractiveShell':
- # We are in Jupyter, use tqdm.notebook
- from tqdm.notebook import tqdm
- else:
- raise Exception()
-except:
- # We are in a terminal, use standard tqdm
- from tqdm import tqdm
-
-
-FACE_MAX_VALUE = 50
-FACE_MIN_VALUE = -125
-
-AIR_THRESHOLD = -800
-KERNEL_SIZE = 35
-
-
-
-def is_dicom(file_path):
- try:
- pydicom.dcmread(file_path)
- return True
- except Exception:
- return False
-
-def get_first_directory(path):
- # Normalize the path to always use Unix-style path separators
- normalized_path = path.replace("\\", "/")
- split_path = normalized_path.split("/")[-1]
-
-    return split_path  # Return the last component of the normalized path
-
-def list_dicom_directories(root_dir):
- dicom_dirs = set()
-
- for root, dirs, files in os.walk(root_dir):
- for file in files:
- file_path = os.path.join(root, file)
- if is_dicom(file_path):
- dicom_dirs.add(root)
- break
-
- return list(dicom_dirs)
-
-def load_scan(path):
- slices = [pydicom.read_file(path + '/' + s) for s in os.listdir(path)]
- slices.sort(key = lambda x: float(x.ImagePositionPatient[2]))
- try:
- slice_thickness = np.abs(slices[0].ImagePositionPatient[2] - slices[1].ImagePositionPatient[2])
- except:
- try:
- slice_thickness = np.abs(slices[0].SliceLocation - slices[1].SliceLocation)
- except:
- slice_thickness = 1.0
-
- for s in slices:
- s.SliceThickness = slice_thickness
-
- return slices
-
-def get_pixels_hu(slices):
- image = np.stack([s.pixel_array for s in slices])
-    # Convert to int16 (from sometimes uint16),
- # should be possible as values should always be low enough (<32k)
- image = image.astype(np.int16)
-
- # Set outside-of-scan pixels to 0
- # The intercept is usually -1024, so air is approximately 0
- image[image == -2000] = 0
-
- # Convert to Hounsfield units (HU)
- for slice_number in range(len(slices)):
-
- intercept = slices[slice_number].RescaleIntercept
- slope = slices[slice_number].RescaleSlope
-
- if slope != 1:
- image[slice_number] = slope * image[slice_number].astype(np.float64)
- image[slice_number] = image[slice_number].astype(np.int16)
-
- image[slice_number] += np.int16(intercept)
-
- return np.array(image, dtype=np.int16)
-
-def binarize_volume(volume, air_hu=AIR_THRESHOLD):
- binary_volume = np.zeros_like(volume, dtype=np.uint8)
- binary_volume[volume <= air_hu] = 1
- return binary_volume
-
-def largest_connected_component(binary_image):
- # Find all connected components and stats
- num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(binary_image, connectivity=8)
-
- # Get the index of the largest component, ignoring the background
- # The background is considered as a component by connectedComponentsWithStats and it is usually the first component
- largest_component_index = np.argmax(stats[1:, cv2.CC_STAT_AREA]) + 1
-
- # Create an image to keep largest component only
- largest_component_image = np.zeros(labels.shape, dtype=np.uint8)
- largest_component_image[labels == largest_component_index] = 1
-
- return largest_component_image
-
-def get_largest_component_volume(volume):
- # Initialize an empty array to hold the processed volume
- processed_volume = np.empty_like(volume, dtype=np.uint8)
-
- # Iterate over each slice in the volume
- for i in range(volume.shape[0]):
- # Process the slice and store it in the processed volume
- processed_volume[i] = largest_connected_component(volume[i])
-
- return processed_volume
-
-
-
-def dilate_volume(volume, kernel_size=KERNEL_SIZE):
- # Create the structuring element (kernel) for dilation
- kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
-
- # Initialize an empty array to hold the dilated volume
- dilated_volume = np.empty_like(volume)
-
- # Iterate over each slice in the volume
- for i in range(volume.shape[0]):
- # Dilate the slice and store it in the dilated volume
- dilated_volume[i] = cv2.dilate(volume[i].astype(np.uint8), kernel)
-
- return dilated_volume
-
-
-def apply_mask_and_get_values(image_volume, mask_volume):
- # Apply the mask by multiplying the image volume with the mask volume
- masked_volume = image_volume * mask_volume
-
- # Get all unique values in the masked volume, excluding zero
- unique_values = np.unique(masked_volume)
- unique_values = unique_values[unique_values > FACE_MIN_VALUE]
- unique_values = unique_values[unique_values < FACE_MAX_VALUE]
-
- # Convert numpy array to a list
- unique_values_list = unique_values.tolist()
-
- return unique_values_list
-
-
-def apply_random_values_optimized(pixels_hu, dilated_volume, unique_values_list):
- # Initialize new volume as a copy of the original volume
- new_volume = np.copy(pixels_hu)
-
- # Generate random indices
- random_indices = np.random.choice(len(unique_values_list), size=np.sum(dilated_volume))
-
- # Select random values from the unique_values_list
- random_values = np.array(unique_values_list)[random_indices]
-
- # Apply the random values to the locations where dilated_volume equals 1
- new_volume[dilated_volume == 1] = random_values
-
- return new_volume
-
-def save_new_dicom_files(new_volume, original_dir, out_path, app="_d"):
- # Create a new directory path by appending "_d" to the original directory
- if out_path is None:
- new_dir = original_dir + app
- else:
- new_dir = out_path
-
- # Create the new directory if it doesn't exist
- if not os.path.exists(new_dir):
- os.makedirs(new_dir)
-
- # List all DICOM files in the original directory
- dicom_files = [os.path.join(original_dir, f) for f in os.listdir(original_dir) if f.endswith('.dcm')]
-
- # Sort the dicom_files list by SliceLocation
- dicom_files.sort(key=lambda x: pydicom.dcmread(x).SliceLocation)
-
- # Loop over each slice of the new volume
- for i in range(new_volume.shape[0]):
- # Get the corresponding original DICOM file
- dicom_file = dicom_files[i]
-
- # Read the file
- ds = pydicom.dcmread(dicom_file)
- ds.decompress()
-
- # Revert the slope and intercept operation on the slice
- new_slice = (new_volume[i] - ds.RescaleIntercept) / ds.RescaleSlope
-
- # Update the pixel data with the data from the new slice
- ds.PixelData = new_slice.astype(np.int16).tobytes()
-
- # Generate new file name
- new_file_name = os.path.join(new_dir, f"new_image_{i}.dcm")
-
- # Save the new DICOM file
- ds.save_as(new_file_name)
-
-
-
-def drown_volume(in_path, out_path='deid_ct', replacer='face'):
- """
- Processes DICOM files from the provided directory by binarizing, getting the largest connected component,
- dilating and applying mask. Then applies random values to the dilated volume based on a unique values list
- obtained from the masked volume (or air value). The results are saved as new DICOM files in a specified directory.
-
- Parameters:
- in_path (str): The path to the directory containing the input DICOM files.
- out_path (str, optional): The path to the directory where the output DICOM files will be saved.
- If not provided, the output files will be saved in the input directory appended by "_d".
- replacer (str, optional): Indicates what kind of pixels are going to be replaced. Default is 'face'.
- 'face': replaces air and face with random values that are found in the skin and subcutaneous fat.
- 'air': replaces air and face with -1000 HU.
- int: replaces air and face with int HU.
-
- Returns:
- None. The function saves new DICOM files and prints the total elapsed time of the operation.
- """
- start_time = time.time()
-
- dirs = list_dicom_directories(in_path)
-
- for _d in tqdm(dirs, desc="List of studies"):
-
- with tqdm(total=8, desc="Processing DICOM Files", leave=False) as pbar:
- # Load the DICOM files
- slices = load_scan(_d)
- pbar.update()
-
- # Get the pixel values and convert them to Hounsfield Units (HU)
- pixels_hu = get_pixels_hu(slices)
- pbar.update()
-
- # Apply the binarization function on the HU volume
- binarized_volume = binarize_volume(pixels_hu)
- pbar.update()
-
- # Get the largest connected component from the binarized volume
- processed_volume = get_largest_component_volume(binarized_volume)
- pbar.update()
-
- # Dilate the processed volume
- dilated_volume = dilate_volume(processed_volume)
- pbar.update()
- if replacer == 'face':
- # Apply the mask to the original volume and get unique values list
- unique_values_list = apply_mask_and_get_values(pixels_hu, dilated_volume - processed_volume)
- elif replacer == 'air':
-                unique_values_list = [-1000]  # air is approximately -1000 HU, as documented above
- else:
- try:
- replacer = int(replacer)
- unique_values_list = [replacer]
- except:
- print('replacer must be either air, face, or an integer number in Hounsfield units, but ' + str(replacer) + ' was provided.')
- print('replacing with face')
- unique_values_list = apply_mask_and_get_values(pixels_hu, dilated_volume - processed_volume)
-
- pbar.update()
-
- # Apply random values to the dilated volume based on the unique values list
- new_volume = apply_random_values_optimized(pixels_hu, dilated_volume, unique_values_list)
- pbar.update()
-
- # Save the new DICOM files
- out_path_n = out_path + "/" + get_first_directory(_d)
- save_new_dicom_files(new_volume, _d, out_path_n)
- pbar.update()
-
- elapsed_time = time.time() - start_time
- print(f"Total elapsed time: {elapsed_time} seconds")
diff --git a/spaces/fengmuxi/ChatGpt-Web/scripts/setup.sh b/spaces/fengmuxi/ChatGpt-Web/scripts/setup.sh
deleted file mode 100644
index 751a9ac17c220deb476c5aef928f6b0d21d31c40..0000000000000000000000000000000000000000
--- a/spaces/fengmuxi/ChatGpt-Web/scripts/setup.sh
+++ /dev/null
@@ -1,65 +0,0 @@
-#!/bin/bash
-
-# Check if running on a supported system
-case "$(uname -s)" in
- Linux)
- if [[ -f "/etc/lsb-release" ]]; then
- . /etc/lsb-release
- if [[ "$DISTRIB_ID" != "Ubuntu" ]]; then
- echo "This script only works on Ubuntu, not $DISTRIB_ID."
- exit 1
- fi
- else
- if [[ ! "$(cat /etc/*-release | grep '^ID=')" =~ ^(ID=\"ubuntu\")|(ID=\"centos\")|(ID=\"arch\")$ ]]; then
- echo "Unsupported Linux distribution."
- exit 1
- fi
- fi
- ;;
- Darwin)
- echo "Running on MacOS."
- ;;
- *)
- echo "Unsupported operating system."
- exit 1
- ;;
-esac
-
-# Check if needed dependencies are installed and install if necessary
-if ! command -v node >/dev/null || ! command -v git >/dev/null || ! command -v yarn >/dev/null; then
- case "$(uname -s)" in
- Linux)
- if [[ "$(cat /etc/*-release | grep '^ID=')" = "ID=ubuntu" ]]; then
- sudo apt-get update
- sudo apt-get -y install nodejs git yarn
- elif [[ "$(cat /etc/*-release | grep '^ID=')" = "ID=centos" ]]; then
- sudo yum -y install epel-release
- sudo yum -y install nodejs git yarn
- elif [[ "$(cat /etc/*-release | grep '^ID=')" = "ID=arch" ]]; then
- sudo pacman -Syu -y
- sudo pacman -S -y nodejs git yarn
- else
- echo "Unsupported Linux distribution"
- exit 1
- fi
- ;;
- Darwin)
- /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
- brew install node git yarn
- ;;
- esac
-fi
-
-# Clone the repository and install dependencies
-git clone https://github.com/Yidadaa/ChatGPT-Next-Web
-cd ChatGPT-Next-Web
-yarn install
-
-# Prompt user for environment variables
-read -p "Enter OPENAI_API_KEY: " OPENAI_API_KEY
-read -p "Enter CODE: " CODE
-read -p "Enter PORT: " PORT
-
-# Build and run the project using the environment variables
-OPENAI_API_KEY=$OPENAI_API_KEY CODE=$CODE PORT=$PORT yarn build
-OPENAI_API_KEY=$OPENAI_API_KEY CODE=$CODE PORT=$PORT yarn start
diff --git a/spaces/fffiloni/ControlNet-Video/style.css b/spaces/fffiloni/ControlNet-Video/style.css
deleted file mode 100644
index 98c1607dba4c5e2055c5bc59197a9c995389a3fa..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/ControlNet-Video/style.css
+++ /dev/null
@@ -1,105 +0,0 @@
-#col-container {max-width: 820px; margin-left: auto; margin-right: auto;}
-#duplicate-container{
- display: flex;
- justify-content: space-between;
- align-items: center;
- line-height: 1em;
- flex-direction: row-reverse;
- font-size:1em;
-}
-a, a:hover, a:visited {
- text-decoration-line: underline;
- font-weight: 600;
- color: #1f2937 !important;
-}
-
-.dark a, .dark a:hover, .dark a:visited {
- color: #f3f4f6 !important;
-}
-
-.label-wrap {
- margin-bottom: 12px;
-}
-
-.footer {
- margin-bottom: 45px;
- margin-top: 10px;
- text-align: center;
- border-bottom: 1px solid #e5e5e5;
-}
-
-.footer>p {
- font-size: .8rem!important;
- display: inline-block;
- padding: 0 10px;
- transform: translateY(26px);
- background: white;
-}
-.dark .footer {
- border-color: #303030;
-}
-.dark .footer>p {
- background: #0b0f19;
-}
-
-div#may-like-container > p {
- font-size: .8em;
- margin-bottom: 4px;
-}
-
-.animate-spin {
- animation: spin 1s linear infinite;
-}
-
-@keyframes spin {
- from {
- transform: rotate(0deg);
- }
- to {
- transform: rotate(360deg);
- }
-}
-
-#share-btn-container {
- display: flex;
- padding-left: 0.5rem !important;
- padding-right: 0.5rem !important;
- background-color: #000000;
- justify-content: center;
- align-items: center;
- border-radius: 9999px !important;
- max-width: 13rem;
-}
-
-#share-btn-container:hover {
- background-color: #060606;
-}
-
-#share-btn {
- all: initial;
- color: #ffffff;
- font-weight: 600;
- cursor:pointer;
- font-family: 'IBM Plex Sans', sans-serif;
- margin-left: 0.5rem !important;
- padding-top: 0.5rem !important;
- padding-bottom: 0.5rem !important;
- right:0;
-}
-
-#share-btn * {
- all: unset;
-}
-
-#share-btn-container div:nth-child(-n+2){
- width: auto !important;
- min-height: 0px !important;
-}
-
-#share-btn-container .wrap {
- display: none !important;
-}
-
-#share-btn-container.hidden {
- display: none!important;
-}
\ No newline at end of file
diff --git a/spaces/fffiloni/Music_Source_Separation/scripts/1_pack_audios_to_hdf5s/vctk/sr=44100,chn=2.sh b/spaces/fffiloni/Music_Source_Separation/scripts/1_pack_audios_to_hdf5s/vctk/sr=44100,chn=2.sh
deleted file mode 100644
index 71eac148ffaf44878df6692e92bb442614c30ce4..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/Music_Source_Separation/scripts/1_pack_audios_to_hdf5s/vctk/sr=44100,chn=2.sh
+++ /dev/null
@@ -1,21 +0,0 @@
-#!/bin/bash
-DATASET_DIR=${1:-"./datasets/vctk"} # The first argument is dataset directory.
-WORKSPACE=${2:-"./workspaces/bytesep"} # The second argument is workspace directory.
-
-echo "DATASET_DIR=${DATASET_DIR}"
-echo "WORKSPACE=${WORKSPACE}"
-
-# Users can change the following settings.
-SAMPLE_RATE=44100
-CHANNELS=2
-
-# Paths
-HDF5S_DIR="${WORKSPACE}/hdf5s/vctk/sr=${SAMPLE_RATE}_chn=${CHANNELS}/train"
-
-python3 bytesep/dataset_creation/pack_audios_to_hdf5s/vctk.py \
- --dataset_dir=$DATASET_DIR \
- --split="train" \
- --hdf5s_dir=$HDF5S_DIR \
- --sample_rate=$SAMPLE_RATE \
- --channels=$CHANNELS
-
\ No newline at end of file
diff --git a/spaces/fgenie/scamtext_PAL_self_consistency/funcs/f_49.py b/spaces/fgenie/scamtext_PAL_self_consistency/funcs/f_49.py
deleted file mode 100644
index 4a7dd3f009dc443561700952c7eb6c41499585d1..0000000000000000000000000000000000000000
--- a/spaces/fgenie/scamtext_PAL_self_consistency/funcs/f_49.py
+++ /dev/null
@@ -1,20 +0,0 @@
-
-import re
-
-def is_spam(text: str) -> bool:
- # Check for patterns observed in spam messages
- spam_patterns = [
- r"\d{1,2}%", # Percentage discounts
- r"코드[:\:]?\w*",
- r"무료거부", # Unsubscribe keyword in Korean
- r"(http(s)?://)?(bit\.ly|me2\.kr|vo\.la|dokdo\.in|tdeal\.kr|"\
- "openkak(talk)?\.at|kakaos?\.co|buly\.kr|(vvd\.bz))\/\S*", # Spam URL shorteners
- r"=BBQ\+피자\+활쿱", # Spam message
- r"(광고)", # Advertising indicator
- ]
-
- # Combine all spam patterns into a single regex pattern
- spam_pattern_re = re.compile("|".join(spam_patterns), re.IGNORECASE)
-
- return bool(spam_pattern_re.search(text))
-
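As a quick sanity check of the heuristic above (a sketch; the sample strings are invented and assume `is_spam` is in scope):

    # flagged: contains a percentage discount, a shortened URL, and the Korean ad marker "광고"
    print(is_spam("[광고] 최대 30% 할인! http://bit.ly/abc123"))  # True

    # not flagged: an ordinary message that matches none of the listed patterns
    print(is_spam("내일 저녁에 만나서 이야기해요"))  # False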
diff --git a/spaces/fishaudio/fish-diffusion/configs/ALYS.py b/spaces/fishaudio/fish-diffusion/configs/ALYS.py
deleted file mode 100644
index 1a41f164ded86152011c16dc9935e159beebe6a8..0000000000000000000000000000000000000000
--- a/spaces/fishaudio/fish-diffusion/configs/ALYS.py
+++ /dev/null
@@ -1,48 +0,0 @@
-from fish_diffusion.datasets.hifisinger import HiFiSVCDataset
-from fish_diffusion.datasets.utils import get_datasets_from_subfolder
-
-_base_ = [
- "./_base_/archs/hifi_svc.py",
- "./_base_/trainers/base.py",
- "./_base_/schedulers/exponential.py",
- "./_base_/datasets/hifi_svc.py",
-]
-
-speaker_mapping = {
- "ALYS": 0,
-}
-
-model = dict(
- type="HiFiSVC",
- speaker_encoder=dict(
- input_size=len(speaker_mapping),
- ),
-)
-
-preprocessing = dict(
- text_features_extractor=dict(
- type="ContentVec",
- ),
- pitch_extractor=dict(
- type="CrepePitchExtractor",
- keep_zeros=False,
- f0_min=40.0,
- f0_max=1600.0,
- ),
- energy_extractor=dict(
- type="RMSEnergyExtractor",
- ),
- augmentations=[
- dict(
- type="FixedPitchShifting",
- key_shifts=[-5.0, 5.0],
- probability=0.75,
- ),
- ],
-)
-
-trainer = dict(
- # Disable gradient clipping, which is not supported by custom optimization
- gradient_clip_val=None,
- max_steps=1000000,
-)
diff --git a/spaces/flax-community/image-captioning/vit_gpt2/modeling_flax_gpt2.py b/spaces/flax-community/image-captioning/vit_gpt2/modeling_flax_gpt2.py
deleted file mode 100644
index 3bc9cedc219ac2d24d5d89f0ea29b095364eae5a..0000000000000000000000000000000000000000
--- a/spaces/flax-community/image-captioning/vit_gpt2/modeling_flax_gpt2.py
+++ /dev/null
@@ -1,752 +0,0 @@
-# coding=utf-8
-# Copyright 2021 The Google Flax Team Authors and The HuggingFace Inc. team.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from typing import Any, Optional, Tuple
-
-import flax.linen as nn
-import jax
-import jax.numpy as jnp
-from flax.core.frozen_dict import FrozenDict, unfreeze
-from flax.linen import combine_masks, make_causal_mask
-from flax.linen.attention import dot_product_attention_weights
-from jax import lax
-
-from transformers.file_utils import add_start_docstrings, add_start_docstrings_to_model_forward
-from transformers.modeling_flax_outputs import FlaxBaseModelOutput, FlaxBaseModelOutputWithPast, FlaxCausalLMOutput, FlaxBaseModelOutputWithPastAndCrossAttentions, FlaxSeq2SeqLMOutput
-from transformers.modeling_flax_utils import ACT2FN, FlaxPreTrainedModel, append_call_sample_docstring
-from transformers.utils import logging
-from transformers.models.gpt2.configuration_gpt2 import GPT2Config
-
-
-logger = logging.get_logger(__name__)
-
-_CHECKPOINT_FOR_DOC = "gpt2"
-_CONFIG_FOR_DOC = "GPT2Config"
-_TOKENIZER_FOR_DOC = "GPT2Tokenizer"
-
-
-GPT2_START_DOCSTRING = r"""
-
- This model inherits from :class:`~transformers.FlaxPreTrainedModel`. Check the superclass documentation for the
- generic methods the library implements for all its model (such as downloading or saving, resizing the input
- embeddings, pruning heads etc.)
-
- This model is also a Flax Linen `flax.nn.Module
- `__ subclass. Use it as a regular Flax
- Module and refer to the Flax documentation for all matter related to general usage and behavior.
-
- Finally, this model supports inherent JAX features such as:
-
- - `Just-In-Time (JIT) compilation `__
- - `Automatic Differentiation `__
- - `Vectorization `__
- - `Parallelization `__
-
- Parameters:
- config (:class:`~transformers.GPT2Config`): Model configuration class with all the parameters of the model.
- Initializing with a config file does not load the weights associated with the model, only the
- configuration. Check out the :meth:`~transformers.FlaxPreTrainedModel.from_pretrained` method to load the
- model weights.
-"""
-
-GPT2_INPUTS_DOCSTRING = r"""
- Args:
- input_ids (:obj:`numpy.ndarray` of shape :obj:`(batch_size, input_ids_length)`):
- :obj:`input_ids_length` = ``sequence_length``. Indices of input sequence tokens in the vocabulary.
-
- Indices can be obtained using :class:`~transformers.GPT2Tokenizer`. See
- :meth:`transformers.PreTrainedTokenizer.encode` and :meth:`transformers.PreTrainedTokenizer.__call__` for
- details.
-
- `What are input IDs? <../glossary.html#input-ids>`__
- attention_mask (:obj:`numpy.ndarray` of shape :obj:`(batch_size, sequence_length)`, `optional`):
- Mask to avoid performing attention on padding token indices. Mask values selected in ``[0, 1]``:
-
- - 1 for tokens that are **not masked**,
- - 0 for tokens that are **masked**.
-
- `What are attention masks? <../glossary.html#attention-mask>`__
- position_ids (:obj:`numpy.ndarray` of shape :obj:`(batch_size, sequence_length)`, `optional`):
- Indices of positions of each input sequence tokens in the position embeddings. Selected in the range ``[0,
- config.max_position_embeddings - 1]``.
- past_key_values (:obj:`Dict[str, np.ndarray]`, `optional`, returned by ``init_cache`` or when passing previous ``past_key_values``):
- Dictionary of pre-computed hidden-states (key and values in the attention blocks) that can be used for fast
- auto-regressive decoding. Pre-computed key and value hidden-states are of shape `[batch_size, max_length]`.
- output_attentions (:obj:`bool`, `optional`):
- Whether or not to return the attentions tensors of all attention layers. See ``attentions`` under returned
- tensors for more detail.
- output_hidden_states (:obj:`bool`, `optional`):
- Whether or not to return the hidden states of all layers. See ``hidden_states`` under returned tensors for
- more detail.
- return_dict (:obj:`bool`, `optional`):
- Whether or not to return a :class:`~transformers.file_utils.ModelOutput` instead of a plain tuple.
-"""
-
-
-class FlaxConv1D(nn.Module):
- features: int
- use_bias: bool = True
- dtype: Any = jnp.float32
- precision: Any = None
-
- @nn.compact
- def __call__(self, inputs):
- inputs = jnp.asarray(inputs, self.dtype)
- kernel = self.param("kernel", jax.nn.initializers.normal(stddev=0.02), (self.features, inputs.shape[-1]))
- kernel = jnp.asarray(kernel.transpose(), self.dtype)
- y = lax.dot_general(inputs, kernel, (((inputs.ndim - 1,), (0,)), ((), ())), precision=self.precision)
- if self.use_bias:
- bias = self.param("bias", jax.nn.initializers.zeros, (self.features,))
- bias = jnp.asarray(bias, self.dtype)
- y = y + bias
- return y
-
-
-class FlaxGPT2Attention(nn.Module):
- config: GPT2Config
- dtype: jnp.dtype = jnp.float32
- causal: bool = True
-
- def setup(self):
- config = self.config
- self.embed_dim = config.hidden_size
- self.num_heads = config.num_attention_heads
- self.head_dim = self.embed_dim // self.num_heads
-
- self.c_attn = FlaxConv1D(features=3 * self.embed_dim, dtype=self.dtype)
- self.c_proj = FlaxConv1D(self.embed_dim, dtype=self.dtype)
-
- self.c_attn_for_k_v = FlaxConv1D(features=2 * self.embed_dim, dtype=self.dtype)
-
- self.resid_dropout = nn.Dropout(rate=config.resid_pdrop)
-
- if self.causal:
- self.causal_mask = make_causal_mask(jnp.ones((1, config.max_position_embeddings), dtype="bool"), dtype="bool")
-
- def _split_heads(self, hidden_states):
- return hidden_states.reshape(hidden_states.shape[:2] + (self.num_heads, self.head_dim))
-
- def _merge_heads(self, hidden_states):
- return hidden_states.reshape(hidden_states.shape[:2] + (self.embed_dim,))
-
- @nn.compact
- def _concatenate_to_cache(self, key, value, query, attention_mask):
- """
- This function takes projected key, value states from a single input token and concatenates the states to cached
- states from previous steps. This function is slighly adapted from the official Flax repository:
- https://github.com/google/flax/blob/491ce18759622506588784b4fca0e4bf05f8c8cd/flax/linen/attention.py#L252
- """
- # detect if we're initializing by absence of existing cache data.
- is_initialized = self.has_variable("cache", "cached_key")
- cached_key = self.variable("cache", "cached_key", jnp.zeros, key.shape, key.dtype)
- cached_value = self.variable("cache", "cached_value", jnp.zeros, value.shape, value.dtype)
- cache_index = self.variable("cache", "cache_index", lambda: jnp.array(0, dtype=jnp.int32))
-
- if is_initialized:
- *batch_dims, max_length, num_heads, depth_per_head = cached_key.value.shape
- # update key, value caches with our new 1d spatial slices
- cur_index = cache_index.value
- indices = (0,) * len(batch_dims) + (cur_index, 0, 0)
- key = lax.dynamic_update_slice(cached_key.value, key, indices)
- value = lax.dynamic_update_slice(cached_value.value, value, indices)
- cached_key.value = key
- cached_value.value = value
- num_updated_cache_vectors = query.shape[1]
- cache_index.value = cache_index.value + num_updated_cache_vectors
- # causal mask for cached decoder self-attention: our single query position should only attend to those key positions that have already been generated and cached, not the remaining zero elements.
- pad_mask = jnp.broadcast_to(
- jnp.arange(max_length) < cur_index + num_updated_cache_vectors,
- tuple(batch_dims) + (1, num_updated_cache_vectors, max_length),
- )
- attention_mask = combine_masks(pad_mask, attention_mask)
- return key, value, attention_mask
-
- def __call__(
- self,
- hidden_states,
- key_value_states: Optional[jnp.ndarray] = None,
- attention_mask=None,
- deterministic: bool = True,
- init_cache: bool = False,
- output_attentions: bool = False,
- ):
-
- # if key_value_states are provided this layer is used as a cross-attention layer
- # for the decoder
- is_cross_attention = key_value_states is not None
-
- qkv_out = self.c_attn(hidden_states)
- query, key, value = jnp.split(qkv_out, 3, axis=2)
-
- if is_cross_attention:
- _qkv_out = self.c_attn_for_k_v(key_value_states)
- key, value = jnp.split(_qkv_out, 2, axis=2)
-
- query = self._split_heads(query)
- key = self._split_heads(key)
- value = self._split_heads(value)
-
- query_length, key_length = query.shape[1], key.shape[1]
-
- if self.causal:
- if self.has_variable("cache", "cached_key"):
- mask_shift = self.variables["cache"]["cache_index"]
- max_decoder_length = self.variables["cache"]["cached_key"].shape[1]
- causal_mask = lax.dynamic_slice(
- self.causal_mask, (0, 0, mask_shift, 0), (1, 1, query_length, max_decoder_length)
- )
- else:
- causal_mask = self.causal_mask[:, :, :query_length, :key_length]
-
- batch_size = hidden_states.shape[0]
- causal_mask = jnp.broadcast_to(causal_mask, (batch_size,) + causal_mask.shape[1:])
-
- # combine masks if needed
- if attention_mask is not None and self.causal:
- attention_mask = jnp.broadcast_to(jnp.expand_dims(attention_mask, axis=(-3, -2)), causal_mask.shape)
- attention_mask = combine_masks(attention_mask, causal_mask)
- elif self.causal:
- attention_mask = causal_mask
- elif attention_mask is not None:
- attention_mask = jnp.expand_dims(attention_mask, axis=(-3, -2))
-
- dropout_rng = None
- if not deterministic and self.config.attn_pdrop > 0.0:
- dropout_rng = self.make_rng("dropout")
-
- # During fast autoregressive decoding, we feed one position at a time,
- # and cache the keys and values step by step.
- if self.causal and (self.has_variable("cache", "cached_key") or init_cache):
- key, value, attention_mask = self._concatenate_to_cache(key, value, query, attention_mask)
-
- # transform boolean mask into float mask
- if attention_mask is not None:
- attention_bias = lax.select(
- attention_mask > 0,
- jnp.full(attention_mask.shape, 0.0).astype(self.dtype),
- jnp.full(attention_mask.shape, -1e4).astype(self.dtype),
- )
- else:
- attention_bias = None
-
- # usual dot product attention
- attn_weights = dot_product_attention_weights(
- query,
- key,
- bias=attention_bias,
- dropout_rng=dropout_rng,
- dropout_rate=self.config.attn_pdrop,
- deterministic=deterministic,
- dtype=self.dtype,
- precision=None,
- )
-
- attn_output = jnp.einsum("...hqk,...khd->...qhd", attn_weights, value)
- attn_output = self._merge_heads(attn_output)
- attn_output = self.c_proj(attn_output)
- attn_output = self.resid_dropout(attn_output, deterministic=deterministic)
-
- outputs = (attn_output, attn_weights) if output_attentions else (attn_output,)
- return outputs
-
-
-class FlaxGPT2MLP(nn.Module):
- config: GPT2Config
- intermediate_size: int
- dtype: jnp.dtype = jnp.float32
-
- def setup(self):
- embed_dim = self.config.hidden_size
- self.c_fc = FlaxConv1D(self.intermediate_size, dtype=self.dtype)
- self.c_proj = FlaxConv1D(embed_dim, dtype=self.dtype)
- self.act = ACT2FN[self.config.activation_function]
- self.dropout = nn.Dropout(rate=self.config.resid_pdrop)
-
- def __call__(self, hidden_states, deterministic: bool = True):
- hidden_states = self.c_fc(hidden_states)
- hidden_states = self.act(hidden_states)
- hidden_states = self.c_proj(hidden_states)
- hidden_states = self.dropout(hidden_states, deterministic=deterministic)
- return hidden_states
-
-
-class FlaxGPT2Block(nn.Module):
- config: GPT2Config
- dtype: jnp.dtype = jnp.float32
-
- def setup(self):
- hidden_size = self.config.hidden_size
- inner_dim = self.config.n_inner if self.config.n_inner is not None else 4 * hidden_size
-
- self.ln_1 = nn.LayerNorm(epsilon=self.config.layer_norm_epsilon, dtype=self.dtype)
- self.attn = FlaxGPT2Attention(self.config, dtype=self.dtype)
- self.ln_3 = nn.LayerNorm(epsilon=self.config.layer_norm_epsilon, dtype=self.dtype)
- self.encoder_attn = FlaxGPT2Attention(config=self.config, dtype=self.dtype)
- self.ln_2 = nn.LayerNorm(epsilon=self.config.layer_norm_epsilon, dtype=self.dtype)
- self.mlp = FlaxGPT2MLP(self.config, inner_dim, dtype=self.dtype)
-
- def __call__(
- self,
- hidden_states,
- attention_mask=None,
- encoder_hidden_states: Optional[jnp.ndarray] = None,
- encoder_attention_mask: Optional[jnp.ndarray] = None,
- deterministic: bool = True,
- init_cache: bool = False,
- output_attentions: bool = False,
- ):
- residual = hidden_states
- hidden_states = self.ln_1(hidden_states)
- outputs = self.attn(
- hidden_states,
- attention_mask=attention_mask,
- deterministic=deterministic,
- init_cache=init_cache,
- output_attentions=output_attentions,
- )
- # residual connection
- attn_output = outputs[0]
- hidden_states = attn_output + residual
-
- # Cross-Attention Block
- if encoder_hidden_states is not None:
-
- residual = hidden_states
- hidden_states = self.ln_3(hidden_states)
-
- cross_attn_outputs = self.encoder_attn(
- hidden_states=hidden_states,
- key_value_states=encoder_hidden_states,
- attention_mask=encoder_attention_mask,
- deterministic=deterministic,
- output_attentions=output_attentions,
- )
-
- # residual connection
- cross_attn_output = cross_attn_outputs[0]
- hidden_states = cross_attn_output + residual
-
- residual = hidden_states
- hidden_states = self.ln_2(hidden_states)
- feed_forward_hidden_states = self.mlp(hidden_states, deterministic=deterministic)
- # residual connection
- hidden_states = residual + feed_forward_hidden_states
-
- output = (hidden_states,) + outputs[1:]
- if encoder_hidden_states is not None:
- output = output + cross_attn_outputs[1:]
-
- return output
-
-
-class FlaxGPT2PreTrainedModel(FlaxPreTrainedModel):
- """
- An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained
- models.
- """
-
- config_class = GPT2Config
- base_model_prefix = "transformer"
- module_class: nn.Module = None
-
- def __init__(
- self,
- config: GPT2Config,
- input_shape: Tuple = (1, 1),
- seed: int = 0,
- dtype: jnp.dtype = jnp.float32,
- **kwargs,
- ):
- module = self.module_class(config=config, dtype=dtype, **kwargs)
- super().__init__(config, module, input_shape=input_shape, seed=seed, dtype=dtype)
-
- def init_weights(self, rng: jax.random.PRNGKey, input_shape: Tuple) -> FrozenDict:
- # init input tensors
- input_ids = jnp.zeros(input_shape, dtype="i4")
- attention_mask = jnp.ones_like(input_ids)
- position_ids = jnp.broadcast_to(jnp.arange(jnp.atleast_2d(input_ids).shape[-1]), input_shape)
- params_rng, dropout_rng = jax.random.split(rng)
- rngs = {"params": params_rng, "dropout": dropout_rng}
-
- if self.config.add_cross_attention:
- encoder_hidden_states = jnp.zeros(input_shape + (self.config.n_embd,))
- encoder_attention_mask = attention_mask
- module_init_outputs = self.module.init(rngs, input_ids, attention_mask, position_ids, encoder_hidden_states, encoder_attention_mask, return_dict=False)
- else:
- module_init_outputs = self.module.init(rngs, input_ids, attention_mask, position_ids, return_dict=False)
-
- return module_init_outputs["params"]
-
- @classmethod
- def _from_config(cls, config, **kwargs):
- return super()._from_config(config, **kwargs)
-
- def init_cache(self, batch_size, max_length):
- r"""
- Args:
- batch_size (:obj:`int`):
- batch_size used for fast auto-regressive decoding. Defines the batch size of the initialized cache.
- max_length (:obj:`int`):
- maximum possible length for auto-regressive decoding. Defines the sequence length of the initialized
- cache.
- """
- # init input variables to retrieve cache
- input_ids = jnp.ones((batch_size, max_length))
- attention_mask = jnp.ones_like(input_ids)
- position_ids = jnp.broadcast_to(jnp.arange(jnp.atleast_2d(input_ids).shape[-1]), input_ids.shape)
-
- init_variables = self.module.init(
- jax.random.PRNGKey(0), input_ids, attention_mask, position_ids, return_dict=False, init_cache=True
- )
- return init_variables["cache"]
-
- @add_start_docstrings_to_model_forward(GPT2_INPUTS_DOCSTRING)
- def __call__(
- self,
- input_ids,
- attention_mask=None,
- position_ids=None,
- encoder_hidden_states: Optional[jnp.ndarray] = None,
- encoder_attention_mask: Optional[jnp.ndarray] = None,
- params: dict = None,
- past_key_values: dict = None,
- dropout_rng: jax.random.PRNGKey = None,
- train: bool = False,
- output_attentions: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- return_dict: Optional[bool] = None,
- ):
- output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
- output_hidden_states = (
- output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
- )
- return_dict = return_dict if return_dict is not None else self.config.return_dict
-
- if encoder_hidden_states is not None and encoder_attention_mask is None:
- batch_size, sequence_length = encoder_hidden_states.shape[:2]
- encoder_attention_mask = jnp.ones((batch_size, sequence_length))
-
- batch_size, sequence_length = input_ids.shape
-
- if position_ids is None:
- if past_key_values is not None:
- raise ValueError("Make sure to provide `position_ids` when passing `past_key_values`.")
-
- position_ids = jnp.broadcast_to(jnp.arange(sequence_length)[None, :], (batch_size, sequence_length))
-
- if attention_mask is None:
- attention_mask = jnp.ones((batch_size, sequence_length))
-
- # Handle any PRNG if needed
- rngs = {}
- if dropout_rng is not None:
- rngs["dropout"] = dropout_rng
-
- inputs = {"params": params or self.params}
-
- # if past_key_values are passed then cache is already initialized a private flag init_cache has to be passed down to ensure cache is used. It has to be made sure that cache is marked as mutable so that it can be changed by FlaxGPT2Attention module
- if past_key_values:
- inputs["cache"] = past_key_values
- mutable = ["cache"]
- else:
- mutable = False
-
- outputs = self.module.apply(
- inputs,
- jnp.array(input_ids, dtype="i4"),
- jnp.array(attention_mask, dtype="i4"),
- jnp.array(position_ids, dtype="i4"),
- encoder_hidden_states,
- encoder_attention_mask,
- not train,
- False,
- output_attentions,
- output_hidden_states,
- return_dict,
- rngs=rngs,
- mutable=mutable,
- )
-
- # add updated cache to model output
- if past_key_values is not None and return_dict:
- outputs, past_key_values = outputs
- outputs["past_key_values"] = unfreeze(past_key_values["cache"])
- return outputs
- elif past_key_values is not None and not return_dict:
- outputs, past_key_values = outputs
- outputs = outputs[:1] + (unfreeze(past_key_values["cache"]),) + outputs[1:]
-
- return outputs
-
-
-class FlaxGPT2BlockCollection(nn.Module):
- config: GPT2Config
- dtype: jnp.dtype = jnp.float32
-
- def setup(self):
- self.blocks = [
- FlaxGPT2Block(self.config, name=str(i), dtype=self.dtype) for i in range(self.config.num_hidden_layers)
- ]
-
- def __call__(
- self,
- hidden_states,
- attention_mask=None,
- encoder_hidden_states: Optional[jnp.ndarray] = None,
- encoder_attention_mask: Optional[jnp.ndarray] = None,
- deterministic: bool = True,
- init_cache: bool = False,
- output_attentions: bool = False,
- output_hidden_states: bool = False,
- return_dict: bool = True,
- ):
- all_attentions = () if output_attentions else None
- all_hidden_states = () if output_hidden_states else None
- all_cross_attentions = () if (output_attentions and encoder_hidden_states is not None) else None
-
- for block in self.blocks:
- if output_hidden_states:
- all_hidden_states += (hidden_states,)
-
- layer_outputs = block(
- hidden_states,
- attention_mask,
- encoder_hidden_states=encoder_hidden_states,
- encoder_attention_mask=encoder_attention_mask,
- deterministic=deterministic,
- init_cache=init_cache,
- output_attentions=output_attentions,
- )
- hidden_states = layer_outputs[0]
-
- if output_attentions:
- all_attentions += (layer_outputs[1],)
- if encoder_hidden_states is not None:
- all_cross_attentions += (layer_outputs[2],)
-
- if output_hidden_states:
- all_hidden_states += (hidden_states,)
-
- outputs = [hidden_states, all_hidden_states, all_attentions, all_cross_attentions]
-
- if not return_dict:
- return tuple(v for v in outputs if v is not None)
-
- if encoder_hidden_states is None:
- return FlaxBaseModelOutputWithPast(
- last_hidden_state=hidden_states,
- past_key_values=None,
- hidden_states=all_hidden_states,
- attentions=all_attentions,
- )
- else:
- return FlaxBaseModelOutputWithPastAndCrossAttentions(
- last_hidden_state=hidden_states,
- past_key_values=None,
- hidden_states=all_hidden_states,
- attentions=all_attentions,
- cross_attentions=all_cross_attentions,
- )
-
-class FlaxGPT2Module(nn.Module):
- config: GPT2Config
- dtype: jnp.dtype = jnp.float32
-
- def setup(self):
- self.embed_dim = self.config.hidden_size
-
- self.wte = nn.Embed(
- self.config.vocab_size,
- self.embed_dim,
- embedding_init=jax.nn.initializers.normal(stddev=self.config.initializer_range),
- dtype=self.dtype,
- )
- self.wpe = nn.Embed(
- self.config.max_position_embeddings,
- self.embed_dim,
- embedding_init=jax.nn.initializers.normal(stddev=self.config.initializer_range),
- dtype=self.dtype,
- )
- self.dropout = nn.Dropout(rate=self.config.embd_pdrop)
- self.h = FlaxGPT2BlockCollection(self.config, dtype=self.dtype)
- self.ln_f = nn.LayerNorm(epsilon=self.config.layer_norm_epsilon, dtype=self.dtype)
-
- def __call__(
- self,
- input_ids,
- attention_mask,
- position_ids,
- encoder_hidden_states: Optional[jnp.ndarray] = None,
- encoder_attention_mask: Optional[jnp.ndarray] = None,
- deterministic=True,
- init_cache: bool = False,
- output_attentions: bool = False,
- output_hidden_states: bool = False,
- return_dict: bool = True,
- ):
- input_embeds = self.wte(input_ids.astype("i4"))
- position_embeds = self.wpe(position_ids.astype("i4"))
-
- hidden_states = input_embeds + position_embeds
- hidden_states = self.dropout(hidden_states, deterministic=deterministic)
-
- outputs = self.h(
- hidden_states,
- attention_mask,
- encoder_hidden_states,
- encoder_attention_mask,
- deterministic=deterministic,
- init_cache=init_cache,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
-
- hidden_states = outputs[0]
- hidden_states = self.ln_f(hidden_states)
-
- if not return_dict:
- return (hidden_states,) + outputs[1:]
-
- if encoder_hidden_states is None:
- return FlaxBaseModelOutput(
- last_hidden_state=hidden_states,
- hidden_states=outputs.hidden_states,
- attentions=outputs.attentions,
- )
- else:
- return FlaxBaseModelOutputWithPastAndCrossAttentions(
- last_hidden_state=hidden_states,
- hidden_states=outputs.hidden_states,
- attentions=outputs.attentions,
- cross_attentions=outputs.cross_attentions,
- )
-
-@add_start_docstrings(
- "The bare GPT2 Model transformer outputting raw hidden-states without any specific head on top.",
- GPT2_START_DOCSTRING,
-)
-class FlaxGPT2Model(FlaxGPT2PreTrainedModel):
- module_class = FlaxGPT2Module
-
-
-append_call_sample_docstring(
- FlaxGPT2Model, _TOKENIZER_FOR_DOC, _CHECKPOINT_FOR_DOC, FlaxBaseModelOutput, _CONFIG_FOR_DOC
-)
-
-
-class FlaxGPT2LMHeadModule(nn.Module):
- config: GPT2Config
- dtype: jnp.dtype = jnp.float32
-
- def setup(self):
- self.transformer = FlaxGPT2Module(self.config, dtype=self.dtype)
- self.lm_head = nn.Dense(
- self.config.vocab_size,
- use_bias=False,
- dtype=self.dtype,
- kernel_init=jax.nn.initializers.normal(stddev=self.config.initializer_range, dtype=self.dtype),
- )
-
- def __call__(
- self,
- input_ids,
- attention_mask,
- position_ids,
- encoder_hidden_states: Optional[jnp.ndarray] = None,
- encoder_attention_mask: Optional[jnp.ndarray] = None,
- deterministic: bool = True,
- init_cache: bool = False,
- output_attentions: bool = False,
- output_hidden_states: bool = False,
- return_dict: bool = True,
- ):
- outputs = self.transformer(
- input_ids,
- attention_mask,
- position_ids,
- encoder_hidden_states,
- encoder_attention_mask,
- deterministic=deterministic,
- init_cache=init_cache,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
-
- hidden_states = outputs[0]
-
- if self.config.tie_word_embeddings:
- shared_kernel = self.transformer.variables["params"]["wte"]["embedding"].T
- lm_logits = self.lm_head.apply({"params": {"kernel": shared_kernel}}, hidden_states)
- else:
- lm_logits = self.lm_head(hidden_states)
-
- if not return_dict:
- return (lm_logits,) + outputs[1:]
-
- if encoder_hidden_states is None:
- return FlaxCausalLMOutput(logits=lm_logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions)
- else:
- return FlaxSeq2SeqLMOutput(
- logits=lm_logits,
- decoder_hidden_states=outputs.hidden_states,
- decoder_attentions=outputs.attentions,
- cross_attentions=outputs.cross_attentions,
- encoder_last_hidden_state=encoder_hidden_states,
- encoder_hidden_states=None,
- encoder_attentions=None,
- )
-
-@add_start_docstrings(
- """
- The GPT2 Model transformer with a language modeling head on top (linear layer with weights tied to the input
- embeddings).
- """,
- GPT2_START_DOCSTRING,
-)
-class FlaxGPT2LMHeadModel(FlaxGPT2PreTrainedModel):
- module_class = FlaxGPT2LMHeadModule
-
- def prepare_inputs_for_generation(self, input_ids, max_length, attention_mask: Optional[jnp.DeviceArray] = None):
- # initializing the cache
- batch_size, seq_length = input_ids.shape
-
- past_key_values = self.init_cache(batch_size, max_length)
- # Note that usually one would have to put 0's in the attention_mask for x > input_ids.shape[-1] and x < cache_length.
- # But since GPT2 uses a causal mask, those positions are masked anyways.
- # Thus we can create a single static attention_mask here, which is more efficient for compilation
- extended_attention_mask = jnp.ones((batch_size, max_length), dtype="i4")
- if attention_mask is not None:
- position_ids = attention_mask.cumsum(axis=-1) - 1
- extended_attention_mask = lax.dynamic_update_slice(extended_attention_mask, attention_mask, (0, 0))
- else:
- position_ids = jnp.broadcast_to(jnp.arange(seq_length, dtype="i4")[None, :], (batch_size, seq_length))
-
- return {
- "past_key_values": past_key_values,
- "attention_mask": extended_attention_mask,
- "position_ids": position_ids,
- }
-
- def update_inputs_for_generation(self, model_outputs, model_kwargs):
- model_kwargs["past_key_values"] = model_outputs.past_key_values
- model_kwargs["position_ids"] = model_kwargs["position_ids"][:, -1:] + 1
- return model_kwargs
-
-
-append_call_sample_docstring(
- FlaxGPT2LMHeadModel, _TOKENIZER_FOR_DOC, _CHECKPOINT_FOR_DOC, FlaxCausalLMOutput, _CONFIG_FOR_DOC
-)
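To make the cross-attention additions above concrete, here is a minimal, hedged sketch of how this module might be exercised as the text decoder of an image-captioning model. The import path is hypothetical (it assumes this file is importable as `vit_gpt2.modeling_flax_gpt2`), and the tiny config values are chosen only to keep initialization cheap.

    import jax.numpy as jnp
    from transformers import GPT2Config
    from vit_gpt2.modeling_flax_gpt2 import FlaxGPT2LMHeadModel  # hypothetical import path

    # add_cross_attention=True activates the ln_3 / encoder_attn blocks defined above
    config = GPT2Config(n_layer=2, n_head=4, n_embd=64, add_cross_attention=True)
    model = FlaxGPT2LMHeadModel(config, input_shape=(1, 8))

    input_ids = jnp.ones((1, 8), dtype="i4")
    # stand-in for image features that would normally come from a ViT encoder
    encoder_hidden_states = jnp.zeros((1, 16, config.n_embd))

    outputs = model(input_ids, encoder_hidden_states=encoder_hidden_states)
    print(outputs.logits.shape)  # (1, 8, config.vocab_size)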
diff --git a/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/social_ai_envs/objectscollaborationenv.py b/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/social_ai_envs/objectscollaborationenv.py
deleted file mode 100644
index f354f516ec83790f1981d43318d68631c329405d..0000000000000000000000000000000000000000
--- a/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/social_ai_envs/objectscollaborationenv.py
+++ /dev/null
@@ -1,869 +0,0 @@
-import time
-
-import numpy as np
-from gym_minigrid.social_ai_envs.socialaigrammar import SocialAIGrammar, SocialAIActions, SocialAIActionSpace
-from gym_minigrid.minigrid import *
-from gym_minigrid.register import register
-import time
-from collections import deque
-
-
-class Partner(NPC):
- """
- A simple NPC that knows who is telling the truth
- """
- def __init__(self, color, name, env):
- super().__init__(color)
- self.name = name
- self.env = env
- self.npc_dir = 1 # NPC initially looks downward
- # todo: this should be id == name
- self.npc_type = 0 # this will be put into the encoding
-
- self.npc_side = "L" if self.env.agent_side == "R" else "R"
- assert {self.npc_side, self.env.agent_side} == {"L", "R"}
-
- self.target_obj = None
-
- self.was_introduced_to = False
-
- self.ate_an_apple = False
- self.demo_over = False
- self.demo_over_and_position_safe = False
- self.apple_unlocked_for_agent = False
-
- self.list_of_possible_utterances = [
- *self.list_of_possible_utterances,
- "Hot", # change to hot -> all with small letters
- "Warm",
- "Medium",
- "Cold",
- *COLOR_NAMES
- ]
-
- assert self.env.grammar.contains_utterance(self.introduction_statement)
-
- def step(self, utterance):
-
- reply, info = super().step()
-
- if self.env.hidden_npc:
- return reply, info
-
- if self.npc_side == "L":
- # the npc waits for the agent to open one of the right boxes, and then uses the object of the same color
- action = None
- if self.env.chosen_left_obj is not None:
- self.target_obj = self.env.chosen_left_obj
-
- if type(self.target_obj) == Switch and self.target_obj.is_on:
- next_target_position = self.env.box.cur_pos
-
- elif type(self.target_obj) == AppleGenerator and self.target_obj.is_pressed:
- next_target_position = self.env.left_generator_platform.cur_pos
-
- else:
- next_target_position = self.target_obj.cur_pos
-
- if type(self.target_obj) == AppleGenerator and not self.target_obj.is_pressed:
- # we have to activate the generator
- if not self.env.generator.marble_activation:
- # push generator
- action = self.path_to_pos(next_target_position)
- else:
- # find angle
- if self.env.marble.moving_dir is None:
- distance = (self.env.marble.cur_pos - self.target_obj.cur_pos)
-
- diff = np.sign(distance)
- if sum(abs(diff)) == 1:
- push_pos = self.env.marble.cur_pos + diff
- if all(self.cur_pos == push_pos):
- next_target_position = self.env.marble.cur_pos
- else:
- next_target_position = push_pos
-
- # go to loc in front of
- # push
- action = self.path_to_pos(next_target_position)
-
- else:
- action = None
-
- else:
- # toggle all other objects
- action = self.path_to_toggle_pos(next_target_position)
- else:
- action = self.turn_to_see_agent()
-
- else:
- if self.ate_an_apple:
- action = self.turn_to_see_agent()
- else:
- # toggle the chosen box then the apple
- if self.target_obj is None:
- self.target_obj = self.env._rand_elem([
- self.env.right_box1,
- self.env.right_box2
- ])
-
- action = self.path_to_toggle_pos(self.target_obj.cur_pos)
-
- if self.npc_side == "R":
- eaten_before = self.env.right_apple.eaten
- else:
- eaten_before = self.env.left_apple.eaten
-
- if action is not None:
- action()
-
- if not self.ate_an_apple:
- # check if the NPC ate the apple
- if self.npc_side == "R":
- self.ate_an_apple = not eaten_before and self.env.right_apple.eaten
- else:
- self.ate_an_apple = not eaten_before and self.env.left_apple.eaten
-
- info = {
- "prim_action": action.__name__ if action is not None else "no_op",
- "utterance": "no_op",
- "was_introduced_to": self.was_introduced_to
- }
-
- reply = None
-
- return reply, info
-
- def is_point_from_loc(self, pos):
- target_pos = self.target_obj.cur_pos
- if self.distractor_obj is not None:
- distractor_pos = self.distractor_obj.cur_pos
- else:
- distractor_pos = [None, None]
-
- if self.env.is_in_marble_way(pos):
- return False
-
- if any(pos == target_pos):
- same_ind = np.argmax(target_pos == pos)
-
- if pos[same_ind] != distractor_pos[same_ind]:
- return True
-
- if pos[same_ind] == distractor_pos[same_ind]:
- # if in between
- if distractor_pos[1-same_ind] < pos[1-same_ind] < target_pos[1-same_ind]:
- return True
-
- if distractor_pos[1-same_ind] > pos[1-same_ind] > target_pos[1-same_ind]:
- return True
-
- return False
-
- def find_point_from_loc(self):
- reject_fn = lambda env, p: not self.is_point_from_loc(p)
-
- point = self.env.find_loc(size=(self.env.wall_x, self.env.wall_y), reject_fn=reject_fn, reject_agent_pos=False)
-
- assert all(point < np.array([self.env.wall_x, self.env.wall_y]))
- assert all(point > np.array([0, 0]))
-
- return point
-
-
-class ObjectsCollaborationEnv(MultiModalMiniGridEnv):
- """
-    Collaboration environment: the agent and an NPC partner act on opposite sides of a fence;
-    opening one of the right-hand boxes selects which same-colored object on the left side gives access to an apple
- """
-
- def __init__(
- self,
- size=10,
- diminished_reward=True,
- step_penalty=False,
- knowledgeable=False,
- max_steps=80,
- hidden_npc=False,
- switch_no_light=True,
- reward_diminish_factor=0.1,
- see_through_walls=False,
- egocentric_observation=True,
- ):
- assert size >= 5
- self.empty_symbol = "NA \n"
- self.diminished_reward = diminished_reward
- self.step_penalty = step_penalty
- self.knowledgeable = knowledgeable
- self.hidden_npc = hidden_npc
- self.hear_yourself = False
- self.switch_no_light = switch_no_light
-
- self.grammar = SocialAIGrammar()
-
- self.init_done = False
- # parameters - to be set in reset
- self.parameters = None
-
- # encoding size should be 5
- self.add_npc_direction = True
- self.add_npc_point_direction = True
- self.add_npc_last_prim_action = True
-
- self.reward_diminish_factor = reward_diminish_factor
-
- self.egocentric_observation = egocentric_observation
- self.encoding_size = 3 + 2*bool(not self.egocentric_observation) + bool(self.add_npc_direction) + bool(self.add_npc_point_direction) + bool(self.add_npc_last_prim_action)
-
- super().__init__(
- grid_size=size,
- max_steps=max_steps,
- # Set this to True for maximum speed
- see_through_walls=see_through_walls,
- actions=SocialAIActions, # primitive actions
- action_space=SocialAIActionSpace,
- add_npc_direction=self.add_npc_direction,
- add_npc_point_direction=self.add_npc_point_direction,
- add_npc_last_prim_action=self.add_npc_last_prim_action,
- reward_diminish_factor=self.reward_diminish_factor,
- )
- self.all_npc_utterance_actions = Partner.get_list_of_possible_utterances()
- self.prim_actions_dict = SocialAINPCActionsDict
-
- def revert(self):
- self.put_objects_in_env(remove_objects=True)
-
- def is_in_marble_way(self, pos):
- target_pos = self.generator_current_pos
-
-        # the generator distractor is in the same row / column as the marble and the generator
- # if self.distractor_current_pos is not None:
- # distractor_pos = self.distractor_current_pos
- # else:
- # distractor_pos = [None, None]
-
- if self.problem in ["Marble"]:
- # point can't be in the same row or column as both the marble and the generator
- # all three: marble, generator, loc are in the same row or column
- if any((pos == target_pos) * (pos == self.marble_current_pos)):
- # all three: marble, generator, loc are in the same row or column -> is in its way
- return True
-
- # is it in the way for the distractor generator
- if any((pos == self.distractor_current_pos) * (pos == self.marble_current_pos)):
- # all three: marble, distractor generator, loc are in the same row or column -> is in its way
- return True
-
- # all good
- return False
-
- def _gen_grid(self, width_, height_):
- # Create the grid
- self.grid = Grid(width_, height_, nb_obj_dims=self.encoding_size)
-
- # new
- min_w = min(9, width_)
- min_h = min(9, height_)
- self.current_width = self._rand_int(min_w, width_+1)
- self.current_height = self._rand_int(min_h, height_+1)
-
- self.wall_x = self.current_width-1
- self.wall_y = self.current_height-1
-
- # Generate the surrounding walls
- self.grid.wall_rect(0, 0, self.current_width, self.current_height)
-
- # problem: Apples/Boxes/Switches/Generators/Marbles
- self.problem = self.parameters["Problem"] if self.parameters else "Apples"
- num_of_colors = self.parameters.get("Num_of_colors", None) if self.parameters else None
- self.version = self.parameters["Version"] if self.parameters else "Asocial"
- self.role = self.parameters["Role"] if self.parameters else "A"
- assert self.role in ["A", "B", "Meta"]
-
- if self.role in ["B", "Meta"]:
- self.agent_side = "R" # starts on the right side
- else:
-            self.agent_side = "L"  # starts on the left side
-
- self.add_obstacles()
-
- # apple
-
- # box
- locked = self.problem == "Switches"
-
- if num_of_colors is None:
- POSSIBLE_COLORS = COLOR_NAMES.copy()
-
- else:
- POSSIBLE_COLORS = COLOR_NAMES[:int(num_of_colors)].copy()
-
- self.left_half_size = (self.current_width//2, self.current_height)
- self.left_half_top = (0, 0)
-
- self.right_half_size = (self.current_width//2 - 1, self.current_height)
- self.right_half_top = (self.current_width - self.current_width // 2 + 1, 0)
-
- # add fence to grid
- self.grid.vert_wall(
-            x=self.current_width//2 + 1,  # one column to the right of the center
- y=1,
- length=self.current_height - 2,
- obj_type=Fence
- )
-
- self.right_box1_color = self._rand_elem(POSSIBLE_COLORS)
- POSSIBLE_COLORS.remove(self.right_box1_color)
-
- self.right_box2_color = self._rand_elem(POSSIBLE_COLORS)
-
- assert self.right_box1_color != self.right_box2_color
-
- POSSIBLE_COLORS_LEFT = [self.right_box1_color, self.right_box2_color]
-
- self.left_color_1 = self._rand_elem(POSSIBLE_COLORS_LEFT)
- POSSIBLE_COLORS_LEFT.remove(self.left_color_1)
- self.left_color_2 = self._rand_elem(POSSIBLE_COLORS_LEFT)
-
-
- self.box_color = self.left_color_1
- # find the position for the apple/box/generator_platform
- self.left_apple_current_pos = self.find_loc(
- size=self.left_half_size,
- top=self.left_half_top,
- reject_agent_pos=True
- )
-
- # right boxes
- self.right_box1_current_pos = self.find_loc(
- size=self.right_half_size,
- top=self.right_half_top,
- reject_agent_pos=True
- )
- self.right_box2_current_pos = self.find_loc(
- size=self.right_half_size,
- top=self.right_half_top,
- reject_agent_pos=True,
- reject_fn=lambda _, pos: tuple(pos) in map(tuple, [self.right_box1_current_pos]),
- )
- assert all(self.left_apple_current_pos < np.array([self.current_width - 1, self.current_height - 1]))
-
- # switch
- # self.switch_pos = (self.current_width, self.current_height)
- self.switch_color = self.left_color_1
- self.switch_current_pos = self.find_loc(
- top=self.left_half_top,
- size=self.left_half_size,
- reject_agent_pos=True,
- reject_fn=lambda _, pos: tuple(pos) in map(tuple, [self.left_apple_current_pos]),
- )
-
- # generator
- # self.generator_pos = (self.current_width, self.current_height)
- self.generator_color = self.left_color_1
- self.generator_current_pos = self.find_loc(
- top=self.left_half_top,
- size=self.left_half_size,
- reject_agent_pos=True,
- reject_fn=lambda _, pos: (
- tuple(pos) in map(tuple, [self.left_apple_current_pos])
- or
- (self.problem in ["Marbles", "Marble"] and tuple(pos) in [
- # not in corners
- (1, 1),
- (self.current_width-2, 1),
- (1, self.current_height-2),
- (self.current_width-2, self.current_height-2),
- ])
- or
-                # not in the same row or column as the platform
- (self.problem in ["Marbles", "Marble"] and any(pos == self.left_apple_current_pos))
- ),
- )
-
- # generator platform
- self.left_generator_platform_color = self._rand_elem(POSSIBLE_COLORS)
-
- # marbles
- # self.marble_pos = (self.current_width, self.current_height)
- self.marble_color = self._rand_elem(POSSIBLE_COLORS)
- self.marble_current_pos = self.find_loc(
- top=self.left_half_top,
- size=self.left_half_size,
- reject_agent_pos=True,
- reject_fn=lambda _, pos: self.problem in ["Marbles", "Marble"] and (
- tuple(pos) in map(tuple, [self.left_apple_current_pos, self.generator_current_pos])
- or
-                all(pos != self.generator_current_pos)  # reject if not in the same row or column as the generator
- or
- any(pos == 1) # next to a wall
- or
- pos[1] == self.current_height-2
- or
- pos[0] == self.current_width-2
- ),
- )
-
- self.distractor_color = self.left_color_2
- # self.distractor_pos = (self.current_width, self.current_height)
-
- if self.problem in ["Apples", "Boxes"]:
- distractor_reject_fn = lambda _, pos: tuple(pos) in map(tuple, [self.left_apple_current_pos])
-
- elif self.problem in ["Switches"]:
- distractor_reject_fn = lambda _, pos: tuple(pos) in map(tuple, [self.left_apple_current_pos, self.switch_current_pos])
-
- elif self.problem in ["Generators"]:
- distractor_reject_fn = lambda _, pos: tuple(pos) in map(tuple, [self.left_apple_current_pos, self.generator_current_pos])
-
- elif self.problem in ["Marbles", "Marble"]:
- # problem is marbles
- same_dim = (self.generator_current_pos == self.marble_current_pos).argmax()
-            distractor_same_dim = 1-same_dim
-            distractor_reject_fn = lambda _, pos: tuple(pos) in map(tuple, [
-                self.left_apple_current_pos,
-                self.generator_current_pos,
-                self.marble_current_pos
-            ]) or pos[distractor_same_dim] != self.marble_current_pos[distractor_same_dim]
- # todo: not in corners -> but it's not that important
- # or tuple(pos) in [
- # # not in corners
- # (1, 1),
- # (self.current_width-2, 1),
- # (1, self.current_height-2),
- # (self.current_width-2, self.current_height-2),
- # ])
-
- else:
-            raise ValueError("Problem {} undefined.".format(self.problem))
-
- self.distractor_current_pos = self.find_loc(
- top=self.left_half_top,
- size=self.left_half_size,
- reject_agent_pos=True,
- # todo: reject based on problem
- reject_fn=distractor_reject_fn
- )
-
- self.put_objects_in_env()
-
- # place agent
- if self.agent_side == "L":
- self.place_agent(size=self.left_half_size, top=self.left_half_top)
- else:
- self.place_agent(size=self.right_half_size, top=self.right_half_top)
-
- # NPC
- if self.version == "Social":
- self.npc_color = self._rand_elem(COLOR_NAMES)
- self.caretaker = Partner(self.npc_color, "Partner", self)
-
- if self.agent_side == "L":
- self.place_obj(self.caretaker, size=self.right_half_size, top=self.right_half_top, reject_fn=ObjectsCollaborationEnv.is_in_marble_way)
- else:
- self.place_obj(self.caretaker, size=self.left_half_size, top=self.left_half_top, reject_fn=ObjectsCollaborationEnv.is_in_marble_way)
-
- # Generate the mission string
- self.mission = 'lets collaborate'
-
- # Dummy beginning string
- # self.beginning_string = "This is what you hear. \n"
-        self.beginning_string = "Conversation: \n"  # todo: go back to "This is what you hear"?
- self.utterance = self.beginning_string
-
- # utterance appended at the end of each step
- self.utterance_history = ""
-
- # used for rendering
- self.full_conversation = self.utterance
- self.outcome_info = None
-
- def put_objects_in_env(self, remove_objects=False):
-
- assert self.left_apple_current_pos is not None
- assert self.right_box1_current_pos is not None
- assert self.right_box2_current_pos is not None
- assert self.switch_current_pos is not None
-
- self.switches_block_set = []
- self.boxes_block_set = []
- self.right_boxes_block_set = []
- self.generators_block_set = []
-
- self.other_box = None
- self.other_switch = None
- self.other_generator = None
-
- # problem: Apples/Boxes/Switches/Generators
-        assert self.problem == (self.parameters["Problem"] if self.parameters else "Apples")
-
- # move objects (used only in revert), not in gen_grid
- if remove_objects:
- # remove apple or box
- # assert type(self.grid.get(*self.apple_current_pos)) in [Apple, LockableBox]
- # self.grid.set(*self.apple_current_pos, None)
-
- # remove apple (after demo it must be an apple)
- assert type(self.grid.get(*self.left_apple_current_pos)) in [Apple]
- self.grid.set(*self.left_apple_current_pos, None)
-
- self.grid.set(*self.right_apple_current_pos, None)
-
- if self.problem in ["Switches"]:
- # remove switch
- assert type(self.grid.get(*self.switch_current_pos)) in [Switch]
- self.grid.set(*self.switch.cur_pos, None)
-
- elif self.problem in ["Generators", "Marbles", "Marble"]:
- # remove generator
- assert type(self.grid.get(*self.generator.cur_pos)) in [AppleGenerator]
- self.grid.set(*self.generator.cur_pos, None)
-
- if self.problem in ["Marbles", "Marble"]:
-                # remove marble
- assert type(self.grid.get(*self.marble.cur_pos)) in [Marble]
- self.grid.set(*self.marble.cur_pos, None)
-
- if self.marble.tee_uncovered:
- self.grid.set(*self.marble.tee.cur_pos, None)
-
- elif self.problem in ["Apples", "Boxes"]:
- pass
-
- else:
- raise ValueError("Undefined problem {}".format(self.problem))
-
- # remove distractor
- if self.problem in ["Boxes", "Switches", "Generators", "Marbles", "Marble"]:
- assert type(self.grid.get(*self.distractor_current_pos)) in [LockableBox, Switch, AppleGenerator]
- self.grid.set(*self.distractor_current_pos, None)
-
- # apple
- self.left_apple = Apple()
- self.right_apple = Apple()
-
- # right apple
- self.right_box1 = LockableBox(
- self.right_box1_color,
- contains=self.right_apple,
- is_locked=False,
- block_set=self.right_boxes_block_set
- )
- self.right_boxes_block_set.append(self.right_box1)
-
- # right apple
- self.right_box2 = LockableBox(
- self.right_box2_color,
- contains=self.right_apple,
- is_locked=False,
- block_set=self.right_boxes_block_set
- )
- self.right_boxes_block_set.append(self.right_box2)
-
- # Box
- locked = self.problem == "Switches"
-
- self.box = LockableBox(
- self.box_color,
- # contains=self.left_apple,
- is_locked=locked,
- block_set=self.boxes_block_set
- )
- self.boxes_block_set.append(self.box)
-
- # Switch
- self.switch = Switch(
- color=self.switch_color,
- # lockable_object=self.box,
- locker_switch=True,
- no_turn_off=True,
- no_light=self.switch_no_light,
- block_set=self.switches_block_set,
- )
-
- self.switches_block_set.append(self.switch)
-
- # Generator
- self.generator = AppleGenerator(
- self.generator_color,
- block_set=self.generators_block_set,
- # on_push=lambda: self.grid.set(*self.left_apple_current_pos, self.left_apple),
- marble_activation=self.problem in ["Marble"],
- )
- self.generators_block_set.append(self.generator)
-
- self.left_generator_platform = GeneratorPlatform(self.left_generator_platform_color)
-
- self.marble = Marble(self.marble_color, env=self)
-
- # right side
- self.put_obj_np(self.right_box1, self.right_box1_current_pos)
- self.put_obj_np(self.right_box2, self.right_box2_current_pos)
-
- self.candidate_objects=[]
- # left side
- if self.problem == "Apples":
- self.put_obj_np(self.left_apple, self.left_apple_current_pos)
- self.candidate_objects.append(self.left_apple)
-
- elif self.problem in ["Boxes"]:
- self.put_obj_np(self.box, self.left_apple_current_pos)
- self.candidate_objects.append(self.box)
-
- elif self.problem in ["Switches"]:
- self.put_obj_np(self.box, self.left_apple_current_pos)
- self.put_obj_np(self.switch, self.switch_current_pos)
- self.candidate_objects.append(self.switch)
-
- elif self.problem in ["Generators", "Marble"]:
- self.put_obj_np(self.generator, self.generator_current_pos)
- self.put_obj_np(self.left_generator_platform, self.left_apple_current_pos)
- self.candidate_objects.append(self.generator)
-
- if self.problem in ["Marble"]:
- self.put_obj_np(self.marble, self.marble_current_pos)
-
- else:
- raise ValueError("Problem {} not defined. ".format(self.problem))
-
- # Distractors
- if self.problem == "Boxes":
- assert not locked
-
- self.other_box = LockableBox(
- self.left_color_2,
- is_locked=locked,
- block_set=self.boxes_block_set,
- )
- self.boxes_block_set.append(self.other_box)
-
- self.put_obj_np(self.other_box, self.distractor_current_pos)
- self.candidate_objects.append(self.other_box)
-
- elif self.problem == "Switches":
- self.other_switch = Switch(
- color=self.left_color_2,
- locker_switch=True,
- no_turn_off=True,
- no_light=self.switch_no_light,
- block_set=self.switches_block_set,
- )
- self.switches_block_set.append(self.other_switch)
-
- self.put_obj_np(self.other_switch, self.distractor_current_pos)
- self.candidate_objects.append(self.other_switch)
-
- elif self.problem in ["Generators", "Marble"]:
- self.other_generator = AppleGenerator(
- color=self.left_color_2,
- block_set=self.generators_block_set,
- marble_activation=self.problem in ["Marble"],
- )
- self.generators_block_set.append(self.other_generator)
-
- self.put_obj_np(self.other_generator, self.distractor_current_pos)
- self.candidate_objects.append(self.other_generator)
-
- def reset(
- self, *args, **kwargs
- ):
- # This env must be used inside the parametric env
- if not kwargs:
- # The only place when kwargs can empty is during the class construction
-            # The only place where kwargs can be empty is during class construction
- assert self.parameters is None
- assert not self.init_done
- self.init_done = True
-
- obs = super().reset()
- return obs
-
- else:
- assert self.init_done
-
- self.parameters = dict(kwargs)
-
- assert self.parameters is not None
- assert len(self.parameters) > 0
-
- obs = super().reset()
-
- self.agent_ate_an_apple = False
- self.chosen_right_box = None
- self.chosen_left_obj = None
-
- return obs
-
- def step(self, action):
- success = False
- p_action = action[0]
- utterance_action = action[1:]
-
- left_apple_had_been_eaten = self.left_apple.eaten
- right_apple_had_been_eaten = self.right_apple.eaten
-
- # primitive actions
- _, reward, done, info = super().step(p_action)
-
- if self.problem in ["Marbles", "Marble"]:
-            # todo: create objects which can be stepped automatically?
- self.marble.step()
-
- if not self.agent_ate_an_apple:
- if self.agent_side == "L":
- self.agent_ate_an_apple = self.left_apple.eaten and not left_apple_had_been_eaten
- else:
- self.agent_ate_an_apple = self.right_apple.eaten and not right_apple_had_been_eaten
-
- if self.right_box1.is_open:
- self.chosen_right_box = self.right_box1
-
- if self.right_box2.is_open:
- self.chosen_right_box = self.right_box2
-
- if self.chosen_right_box is not None:
- chosen_color = self.chosen_right_box.color
- self.chosen_left_obj = [o for o in self.candidate_objects if o.color == chosen_color][0]
-
- if type(self.chosen_left_obj) == LockableBox:
- self.chosen_left_obj.contains = self.left_apple
-
- elif type(self.chosen_left_obj) == Switch:
- self.chosen_left_obj.lockable_object = self.box
- self.box.contains = self.left_apple
-
- elif type(self.chosen_left_obj) == AppleGenerator:
- self.chosen_left_obj.on_push=lambda: self.grid.set(*self.left_apple_current_pos, self.left_apple)
-
- else:
- raise ValueError("Unknown target object.")
-
- # utterances
- agent_spoke = not all(np.isnan(utterance_action))
- if agent_spoke:
- utterance = self.grammar.construct_utterance(utterance_action)
-
- if self.hear_yourself:
- self.utterance += "YOU: {} \n".format(utterance)
- self.full_conversation += "YOU: {} \n".format(utterance)
- else:
- utterance = None
-
- if self.version == "Social":
- reply, npc_info = self.caretaker.step(utterance)
-
- if reply:
- self.utterance += "{}: {} \n".format(self.caretaker.name, reply)
- self.full_conversation += "{}: {} \n".format(self.caretaker.name, reply)
- else:
- npc_info = {
- "prim_action": "no_op",
- "utterance": "no_op",
- "was_introduced_to": False,
- }
-
-
- # aftermath
- if p_action == self.actions.done:
- done = True
-
- if (self.role in ["A", "B"] or self.version == "Asocial") and self.agent_ate_an_apple:
- reward = self._reward()
- success = True
- done = True
-
- elif self.role == "Meta" and self.version == "Social" and self.agent_ate_an_apple and self.caretaker.ate_an_apple:
-
- if self.agent_side == "L":
- reward = self._reward() / 2
- success = True
- done = True
-
- else:
- # revert and rotate
- reward = self._reward() / 2
- self.agent_ate_an_apple = False
- self.caretaker.ate_an_apple = False
- self.agent_side = "L"
- self.put_objects_in_env(remove_objects=True)
-
- # teleport the agent and the NPC
- self.place_agent(size=self.left_half_size, top=self.left_half_top)
-
- self.grid.set(*self.caretaker.cur_pos, None)
-
- self.caretaker = Partner(self.npc_color, "Partner", self)
- self.place_obj(self.caretaker, size=self.right_half_size, top=self.right_half_top, reject_fn=ObjectsCollaborationEnv.is_in_marble_way)
-
- # discount
- if self.step_penalty:
- reward = reward - 0.01
-
- # update obs with NPC movement
- obs = self.gen_obs(full_obs=self.full_obs)
-
- # fill observation with text
- self.append_existing_utterance_to_history()
- obs = self.add_utterance_to_observation(obs)
- self.reset_utterance()
-
- if done:
- if reward > 0:
- self.outcome_info = "SUCCESS: agent got {} reward \n".format(np.round(reward, 1))
- else:
- self.outcome_info = "FAILURE: agent got {} reward \n".format(reward)
-
- if self.version == "Social":
- # is the npc seen by the agent
- ag_view_npc = self.relative_coords(*self.caretaker.cur_pos)
-
- if ag_view_npc is not None:
- # in the agent's field of view
- ag_view_npc_x, ag_view_npc_y = ag_view_npc
-
- n_dims = obs['image'].shape[-1]
- npc_encoding = self.caretaker.encode(n_dims)
-
- # is it occluded
- npc_observed = all(obs['image'][ag_view_npc_x, ag_view_npc_y] == npc_encoding)
- else:
- npc_observed = False
- else:
- npc_observed = False
-
- info = {**info, **{"NPC_"+k: v for k, v in npc_info.items()}}
-
- info["NPC_observed"] = npc_observed
- info["success"] = success
-
- return obs, reward, done, info
-
- def _reward(self):
- if self.diminished_reward:
- return super()._reward()
- else:
- return 1.0
-
- # def render(self, *args, **kwargs):
- # obs = super().render(*args, **kwargs)
- # self.window.clear_text() # erase previous text
- # self.window.set_caption(self.full_conversation)
- #
- # # self.window.ax.set_title("correct color: {}".format(self.box.target_color), loc="left", fontsize=10)
- #
- # if self.outcome_info:
- # color = None
- # if "SUCCESS" in self.outcome_info:
- # color = "lime"
- # elif "FAILURE" in self.outcome_info:
- # color = "red"
- # self.window.add_text(*(0.01, 0.85, self.outcome_info),
- # **{'fontsize': 15, 'color': color, 'weight': "bold"})
- #
- # self.window.show_img(obs) # re-draw image to add changes to window
- # return obs
-
-register(
- id='SocialAI-ObjectsCollaboration-v0',
- entry_point='gym_minigrid.social_ai_envs:ObjectsCollaborationEnv'
-)
\ No newline at end of file
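For context, an untested usage sketch of the id registered above. The env is normally created and reset through the SocialAI parametric wrapper; the keyword names below simply mirror the keys read in `_gen_grid` and are illustrative:

```python
import gym
import gym_minigrid.social_ai_envs  # importing the package runs register() calls like the one above

env = gym.make('SocialAI-ObjectsCollaboration-v0')

# reset() expects the sampled parameters as keyword arguments (see reset() above)
obs = env.reset(Problem="Boxes", Version="Social", Role="A")
print(type(obs))  # a dict-like observation carrying image and utterance fields
```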
diff --git a/spaces/flowers-team/SocialAISchool/models/dialogue_memory_multiheadedac.py b/spaces/flowers-team/SocialAISchool/models/dialogue_memory_multiheadedac.py
deleted file mode 100644
index 7c053f49218115f745b967b44cf769fef7a0ae6c..0000000000000000000000000000000000000000
--- a/spaces/flowers-team/SocialAISchool/models/dialogue_memory_multiheadedac.py
+++ /dev/null
@@ -1,170 +0,0 @@
-import numpy as np
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from torch.distributions.categorical import Categorical
-import torch_ac
-
-
-from utils.other import init_params
-
-
-class DialogueMemoryMultiHeadedACModel(nn.Module, torch_ac.RecurrentACModel):
- def __init__(self, obs_space, action_space, use_memory=False, use_text=False, use_dialogue=False):
- super().__init__()
-
- # Decide which components are enabled
- self.use_text = use_text
- self.use_dialogue = use_dialogue
- self.use_memory = use_memory
-
- if not self.use_memory:
- raise ValueError("You should not be using this model. Use MultiHeadedACModel instead")
-
- if not self.use_dialogue:
- raise ValueError("You should not be using this model. Use ACModel instead")
-
- if self.use_text:
- raise ValueError("You should not use text but dialogue.")
-
- # multi dim
- if action_space.shape == ():
- raise ValueError("The action space is not multi modal. Use ACModel instead.")
-
- self.n_primitive_actions = action_space.nvec[0] + 1 # for talk
- self.talk_action = int(self.n_primitive_actions) - 1
-
- self.n_utterance_actions = action_space.nvec[1:]
-
- # Define image embedding
- self.image_conv = nn.Sequential(
- nn.Conv2d(3, 16, (2, 2)),
- nn.ReLU(),
- nn.MaxPool2d((2, 2)),
- nn.Conv2d(16, 32, (2, 2)),
- nn.ReLU(),
- nn.Conv2d(32, 64, (2, 2)),
- nn.ReLU()
- )
- n = obs_space["image"][0]
- m = obs_space["image"][1]
- self.image_embedding_size = ((n-1)//2-2)*((m-1)//2-2)*64
-
- if self.use_text or self.use_dialogue:
- self.word_embedding_size = 32
- self.word_embedding = nn.Embedding(obs_space["text"], self.word_embedding_size)
-
- # Define text embedding
- if self.use_text:
- self.text_embedding_size = 128
- self.text_rnn = nn.GRU(self.word_embedding_size, self.text_embedding_size, batch_first=True)
-
- # Define dialogue embedding
- if self.use_dialogue:
- self.dialogue_embedding_size = 128
- self.dialogue_rnn = nn.GRU(self.word_embedding_size, self.dialogue_embedding_size, batch_first=True)
-
- # Resize image embedding
- self.embedding_size = self.image_embedding_size
-
- if self.use_text:
- self.embedding_size += self.text_embedding_size
-
- if self.use_dialogue:
- self.embedding_size += self.dialogue_embedding_size
-
- # Define actor's model
- self.actor = nn.Sequential(
- nn.Linear(self.embedding_size, 64),
- nn.Tanh(),
- nn.Linear(64, self.n_primitive_actions)
- )
- self.talker = nn.ModuleList([
- nn.Sequential(
- nn.Linear(self.embedding_size, 64),
- nn.Tanh(),
- nn.Linear(64, n)
- ) for n in self.n_utterance_actions])
-
- # Define critic's model
- self.critic = nn.Sequential(
- nn.Linear(self.embedding_size, 64),
- nn.Tanh(),
- nn.Linear(64, 1)
- )
-
- # Initialize parameters correctly
- self.apply(init_params)
-
- @property
- def memory_size(self):
- return self.dialogue_embedding_size
-
- def forward(self, obs, memory):
- x = obs.image.transpose(1, 3).transpose(2, 3)
- x = self.image_conv(x)
-
- batch_size = x.shape[0]
- x = x.reshape(batch_size, -1)
-
- embedding = x
-
- if self.use_text:
- embed_text = self._get_embed_text(obs.text)
- embedding = torch.cat((embedding, embed_text), dim=1)
-
- if self.use_dialogue:
- embed_dial, memory = self._get_embed_dialogue(obs.dialogue, memory)
- embedding = torch.cat((embedding, embed_dial), dim=1)
-
- x = self.actor(embedding)
- primitive_actions_dist = Categorical(logits=F.log_softmax(x, dim=1))
-
- x = self.critic(embedding)
- value = x.squeeze(1)
- utterance_actions_dists = [
- Categorical(logits=F.log_softmax(
- tal(embedding),
- dim=1,
- )) for tal in self.talker
- ]
-
- dist = [primitive_actions_dist] + utterance_actions_dists
-
- return dist, value, memory
-
- def sample_action(self, dist):
- return torch.stack([d.sample() for d in dist], dim=1)
-
- def calculate_log_probs(self, dist, action):
- return torch.stack([d.log_prob(action[:, i]) for i, d in enumerate(dist)], dim=1)
-
- def calculate_action_masks(self, action):
- talk_mask = action[:, 0] == self.talk_action
- mask = torch.stack(
- (torch.ones_like(talk_mask), talk_mask, talk_mask),
- dim=1).detach()
-
- assert action.shape == mask.shape
-
- return mask
-
- def construct_final_action(self, action):
- act_mask = action[:, 0] != self.n_primitive_actions - 1
-
- nan_mask = np.array([
- np.array([1, np.nan, np.nan]) if t else np.array([np.nan, 1, 1]) for t in act_mask
- ])
-
- action = nan_mask*action
-
- return action
-
- def _get_embed_text(self, text):
- _, hidden = self.text_rnn(self.word_embedding(text))
-
- return hidden[-1]
-
-    def _get_embed_dialogue(self, dial, memory):
-        # feed the running memory in as the GRU's initial hidden state (shape: 1 x batch x hidden)
-        _, hidden = self.dialogue_rnn(self.word_embedding(dial), memory.unsqueeze(0))
-        # the final hidden state is both the dialogue embedding and the new memory
-        return hidden[-1], hidden[-1]
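The NaN masking in `construct_final_action` is easiest to see on a concrete batch. A small, self-contained illustration (the numbers are made up; index 2 plays the role of the talk action when `n_primitive_actions == 3`):

```python
import numpy as np
import torch

n_primitive_actions = 3
action = torch.tensor([[0, 1, 2],    # row 0 moves -> the utterance heads get NaN-masked
                       [2, 1, 2]])   # row 1 talks -> the primitive slot gets NaN-masked

act_mask = action[:, 0] != n_primitive_actions - 1
nan_mask = np.array([[1, np.nan, np.nan] if t else [np.nan, 1, 1] for t in act_mask])

print(nan_mask * action.numpy())
# [[ 0. nan nan]
#  [nan  1.  2.]]
```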
diff --git a/spaces/fr1ll/sketch-to-1d-SRME/README.md b/spaces/fr1ll/sketch-to-1d-SRME/README.md
deleted file mode 100644
index b3bf30cffe539e906f3122a13da3e3fefa30f611..0000000000000000000000000000000000000000
--- a/spaces/fr1ll/sketch-to-1d-SRME/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Sketch To Srme
-emoji: 🏃
-colorFrom: pink
-colorTo: green
-sdk: gradio
-sdk_version: 3.29.0
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/freddyaboulton/3.1.4.9-all-demos/demos/matrix_transpose/run.py b/spaces/freddyaboulton/3.1.4.9-all-demos/demos/matrix_transpose/run.py
deleted file mode 100644
index 1fa9ed34184ec6c6063305cf71b2a662222d5207..0000000000000000000000000000000000000000
--- a/spaces/freddyaboulton/3.1.4.9-all-demos/demos/matrix_transpose/run.py
+++ /dev/null
@@ -1,24 +0,0 @@
-import numpy as np
-
-import gradio as gr
-
-
-def transpose(matrix):
- return matrix.T
-
-
-demo = gr.Interface(
- transpose,
- gr.Dataframe(type="numpy", datatype="number", row_count=5, col_count=3),
- "numpy",
- examples=[
- [np.zeros((3, 3)).tolist()],
- [np.ones((2, 2)).tolist()],
- [np.random.randint(0, 10, (3, 10)).tolist()],
- [np.random.randint(0, 10, (10, 3)).tolist()],
- [np.random.randint(0, 10, (10, 10)).tolist()],
- ],
-)
-
-if __name__ == "__main__":
- demo.launch()
diff --git a/spaces/ghoskno/ColorCanny-Controlnet/lpw.py b/spaces/ghoskno/ColorCanny-Controlnet/lpw.py
deleted file mode 100644
index 7c6bcaea93a83e3781728eaeb16224bfcf1f01ce..0000000000000000000000000000000000000000
--- a/spaces/ghoskno/ColorCanny-Controlnet/lpw.py
+++ /dev/null
@@ -1,389 +0,0 @@
-import re
-from typing import List, Optional, Union
-
-import torch
-
-from diffusers import StableDiffusionPipeline
-from diffusers.utils import logging
-
-# module-level logger; get_prompts_with_weights() warns through it when a prompt is truncated
-logger = logging.get_logger(__name__)
-
-
-re_attention = re.compile(
- r"""
-\\\(|
-\\\)|
-\\\[|
-\\]|
-\\\\|
-\\|
-\(|
-\[|
-:([+-]?[.\d]+)\)|
-\)|
-]|
-[^\\()\[\]:]+|
-:
-""",
- re.X,
-)
-
-def parse_prompt_attention(text):
- """
- Parses a string with attention tokens and returns a list of pairs: text and its associated weight.
- Accepted tokens are:
- (abc) - increases attention to abc by a multiplier of 1.1
- (abc:3.12) - increases attention to abc by a multiplier of 3.12
- [abc] - decreases attention to abc by a multiplier of 1.1
- \( - literal character '('
- \[ - literal character '['
- \) - literal character ')'
- \] - literal character ']'
- \\ - literal character '\'
- anything else - just text
- >>> parse_prompt_attention('normal text')
- [['normal text', 1.0]]
- >>> parse_prompt_attention('an (important) word')
- [['an ', 1.0], ['important', 1.1], [' word', 1.0]]
- >>> parse_prompt_attention('(unbalanced')
- [['unbalanced', 1.1]]
- >>> parse_prompt_attention('\(literal\]')
- [['(literal]', 1.0]]
- >>> parse_prompt_attention('(unnecessary)(parens)')
- [['unnecessaryparens', 1.1]]
- >>> parse_prompt_attention('a (((house:1.3)) [on] a (hill:0.5), sun, (((sky))).')
- [['a ', 1.0],
- ['house', 1.5730000000000004],
- [' ', 1.1],
- ['on', 1.0],
- [' a ', 1.1],
- ['hill', 0.55],
- [', sun, ', 1.1],
- ['sky', 1.4641000000000006],
- ['.', 1.1]]
- """
-
- res = []
- round_brackets = []
- square_brackets = []
-
- round_bracket_multiplier = 1.1
- square_bracket_multiplier = 1 / 1.1
-
- def multiply_range(start_position, multiplier):
- for p in range(start_position, len(res)):
- res[p][1] *= multiplier
-
- for m in re_attention.finditer(text):
- text = m.group(0)
- weight = m.group(1)
-
- if text.startswith("\\"):
- res.append([text[1:], 1.0])
- elif text == "(":
- round_brackets.append(len(res))
- elif text == "[":
- square_brackets.append(len(res))
- elif weight is not None and len(round_brackets) > 0:
- multiply_range(round_brackets.pop(), float(weight))
- elif text == ")" and len(round_brackets) > 0:
- multiply_range(round_brackets.pop(), round_bracket_multiplier)
- elif text == "]" and len(square_brackets) > 0:
- multiply_range(square_brackets.pop(), square_bracket_multiplier)
- else:
- res.append([text, 1.0])
-
- for pos in round_brackets:
- multiply_range(pos, round_bracket_multiplier)
-
- for pos in square_brackets:
- multiply_range(pos, square_bracket_multiplier)
-
- if len(res) == 0:
- res = [["", 1.0]]
-
- # merge runs of identical weights
- i = 0
- while i + 1 < len(res):
- if res[i][1] == res[i + 1][1]:
- res[i][0] += res[i + 1][0]
- res.pop(i + 1)
- else:
- i += 1
-
- return res
-
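A quick sanity check of the bracket arithmetic, run against the function above:

```python
# 'house' gets 1.3 * 1.1 * 1.1 ≈ 1.573 (explicit weight plus two enclosing round brackets),
# and the '(' left unclosed after '(((house:1.3))' adds one extra 1.1 factor to everything
# that follows it, which is why 'sky' ends up at 1.1**4 ≈ 1.464.
for text, weight in parse_prompt_attention('a (((house:1.3)) [on] a (hill:0.5), sun, (((sky))).'):
    print(f"{text!r}: {weight:.3f}")
```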
-
-def get_prompts_with_weights(pipe: StableDiffusionPipeline, prompt: List[str], max_length: int):
- r"""
-    Tokenize a list of prompts and return their tokens together with the weight of each token.
-
-    No padding, starting, or ending token is included.
- """
- tokens = []
- weights = []
- truncated = False
- for text in prompt:
- texts_and_weights = parse_prompt_attention(text)
- text_token = []
- text_weight = []
- for word, weight in texts_and_weights:
- # tokenize and discard the starting and the ending token
- token = pipe.tokenizer(word).input_ids[1:-1]
- text_token += token
- # copy the weight by length of token
- text_weight += [weight] * len(token)
- # stop if the text is too long (longer than truncation limit)
- if len(text_token) > max_length:
- truncated = True
- break
- # truncate
- if len(text_token) > max_length:
- truncated = True
- text_token = text_token[:max_length]
- text_weight = text_weight[:max_length]
- tokens.append(text_token)
- weights.append(text_weight)
- if truncated:
- logger.warning("Prompt was truncated. Try to shorten the prompt or increase max_embeddings_multiples")
- return tokens, weights
-
-
-def pad_tokens_and_weights(tokens, weights, max_length, bos, eos, no_boseos_middle=True, chunk_length=77):
- r"""
- Pad the tokens (with starting and ending tokens) and weights (with 1.0) to max_length.
- """
- max_embeddings_multiples = (max_length - 2) // (chunk_length - 2)
- weights_length = max_length if no_boseos_middle else max_embeddings_multiples * chunk_length
- for i in range(len(tokens)):
- tokens[i] = [bos] + tokens[i] + [eos] * (max_length - 1 - len(tokens[i]))
- if no_boseos_middle:
- weights[i] = [1.0] + weights[i] + [1.0] * (max_length - 1 - len(weights[i]))
- else:
- w = []
- if len(weights[i]) == 0:
- w = [1.0] * weights_length
- else:
- for j in range(max_embeddings_multiples):
- w.append(1.0) # weight for starting token in this chunk
- w += weights[i][j * (chunk_length - 2) : min(len(weights[i]), (j + 1) * (chunk_length - 2))]
- w.append(1.0) # weight for ending token in this chunk
- w += [1.0] * (weights_length - len(w))
- weights[i] = w[:]
-
- return tokens, weights
-
-def get_unweighted_text_embeddings(
- pipe: StableDiffusionPipeline,
- text_input: torch.Tensor,
- chunk_length: int,
- no_boseos_middle: Optional[bool] = True,
-):
- """
- When the length of tokens is a multiple of the capacity of the text encoder,
-    When the token sequence is longer than the capacity of the text encoder,
-    it is split into chunks and each chunk is sent to the text encoder individually.
-    """
- max_embeddings_multiples = (text_input.shape[1] - 2) // (chunk_length - 2)
- if max_embeddings_multiples > 1:
- text_embeddings = []
- for i in range(max_embeddings_multiples):
- # extract the i-th chunk
- text_input_chunk = text_input[:, i * (chunk_length - 2) : (i + 1) * (chunk_length - 2) + 2].clone()
-
- # cover the head and the tail by the starting and the ending tokens
- text_input_chunk[:, 0] = text_input[0, 0]
- text_input_chunk[:, -1] = text_input[0, -1]
- text_embedding = pipe.text_encoder(text_input_chunk)[0]
-
- if no_boseos_middle:
- if i == 0:
- # discard the ending token
- text_embedding = text_embedding[:, :-1]
- elif i == max_embeddings_multiples - 1:
- # discard the starting token
- text_embedding = text_embedding[:, 1:]
- else:
- # discard both starting and ending tokens
- text_embedding = text_embedding[:, 1:-1]
-
- text_embeddings.append(text_embedding)
- text_embeddings = torch.concat(text_embeddings, axis=1)
- else:
- text_embeddings = pipe.text_encoder(text_input)[0]
- return text_embeddings
-
-
-def get_weighted_text_embeddings(
- pipe: StableDiffusionPipeline,
- prompt: Union[str, List[str]],
- uncond_prompt: Optional[Union[str, List[str]]] = None,
- max_embeddings_multiples: Optional[int] = 3,
- no_boseos_middle: Optional[bool] = False,
- skip_parsing: Optional[bool] = False,
- skip_weighting: Optional[bool] = False,
- **kwargs,
-):
- r"""
- Prompts can be assigned with local weights using brackets. For example,
- prompt 'A (very beautiful) masterpiece' highlights the words 'very beautiful',
- and the embedding tokens corresponding to the words get multiplied by a constant, 1.1.
-
-    Also, to regularize the embedding, the weighted embedding is scaled back to preserve the original mean.
-
- Args:
- pipe (`StableDiffusionPipeline`):
- Pipe to provide access to the tokenizer and the text encoder.
- prompt (`str` or `List[str]`):
- The prompt or prompts to guide the image generation.
- uncond_prompt (`str` or `List[str]`):
-            The unconditional prompt or prompts to guide the image generation. If an unconditional prompt
- is provided, the embeddings of prompt and uncond_prompt are concatenated.
- max_embeddings_multiples (`int`, *optional*, defaults to `3`):
- The max multiple length of prompt embeddings compared to the max output length of text encoder.
- no_boseos_middle (`bool`, *optional*, defaults to `False`):
-            If the text tokens span multiple chunks of the text encoder, whether to keep the starting and
-            ending tokens of each middle chunk.
- skip_parsing (`bool`, *optional*, defaults to `False`):
- Skip the parsing of brackets.
- skip_weighting (`bool`, *optional*, defaults to `False`):
- Skip the weighting. When the parsing is skipped, it is forced True.
- """
- max_length = (pipe.tokenizer.model_max_length - 2) * max_embeddings_multiples + 2
- if isinstance(prompt, str):
- prompt = [prompt]
-
- if not skip_parsing:
- prompt_tokens, prompt_weights = get_prompts_with_weights(pipe, prompt, max_length - 2)
- if uncond_prompt is not None:
- if isinstance(uncond_prompt, str):
- uncond_prompt = [uncond_prompt]
- uncond_tokens, uncond_weights = get_prompts_with_weights(pipe, uncond_prompt, max_length - 2)
- else:
- prompt_tokens = [
- token[1:-1] for token in pipe.tokenizer(prompt, max_length=max_length, truncation=True).input_ids
- ]
- prompt_weights = [[1.0] * len(token) for token in prompt_tokens]
- if uncond_prompt is not None:
- if isinstance(uncond_prompt, str):
- uncond_prompt = [uncond_prompt]
- uncond_tokens = [
- token[1:-1]
- for token in pipe.tokenizer(uncond_prompt, max_length=max_length, truncation=True).input_ids
- ]
- uncond_weights = [[1.0] * len(token) for token in uncond_tokens]
-
- # round up the longest length of tokens to a multiple of (model_max_length - 2)
- max_length = max([len(token) for token in prompt_tokens])
- if uncond_prompt is not None:
- max_length = max(max_length, max([len(token) for token in uncond_tokens]))
-
- max_embeddings_multiples = min(
- max_embeddings_multiples,
- (max_length - 1) // (pipe.tokenizer.model_max_length - 2) + 1,
- )
- max_embeddings_multiples = max(1, max_embeddings_multiples)
- max_length = (pipe.tokenizer.model_max_length - 2) * max_embeddings_multiples + 2
-
- # pad the length of tokens and weights
- bos = pipe.tokenizer.bos_token_id
- eos = pipe.tokenizer.eos_token_id
- prompt_tokens, prompt_weights = pad_tokens_and_weights(
- prompt_tokens,
- prompt_weights,
- max_length,
- bos,
- eos,
- no_boseos_middle=no_boseos_middle,
- chunk_length=pipe.tokenizer.model_max_length,
- )
- prompt_tokens = torch.tensor(prompt_tokens, dtype=torch.long, device=pipe.text_encoder.device)
- if uncond_prompt is not None:
- uncond_tokens, uncond_weights = pad_tokens_and_weights(
- uncond_tokens,
- uncond_weights,
- max_length,
- bos,
- eos,
- no_boseos_middle=no_boseos_middle,
- chunk_length=pipe.tokenizer.model_max_length,
- )
- uncond_tokens = torch.tensor(uncond_tokens, dtype=torch.long, device=pipe.text_encoder.device)
-
- # get the embeddings
- text_embeddings = get_unweighted_text_embeddings(
- pipe,
- prompt_tokens,
- pipe.tokenizer.model_max_length,
- no_boseos_middle=no_boseos_middle,
- )
- prompt_weights = torch.tensor(prompt_weights, dtype=text_embeddings.dtype, device=pipe.text_encoder.device)
- if uncond_prompt is not None:
- uncond_embeddings = get_unweighted_text_embeddings(
- pipe,
- uncond_tokens,
- pipe.tokenizer.model_max_length,
- no_boseos_middle=no_boseos_middle,
- )
- uncond_weights = torch.tensor(uncond_weights, dtype=uncond_embeddings.dtype, device=pipe.text_encoder.device)
-
- # assign weights to the prompts and normalize in the sense of mean
- # TODO: should we normalize by chunk or in a whole (current implementation)?
- if (not skip_parsing) and (not skip_weighting):
- previous_mean = text_embeddings.float().mean(axis=[-2, -1]).to(text_embeddings.dtype)
- text_embeddings *= prompt_weights.unsqueeze(-1)
- current_mean = text_embeddings.float().mean(axis=[-2, -1]).to(text_embeddings.dtype)
- text_embeddings *= (previous_mean / current_mean).unsqueeze(-1).unsqueeze(-1)
- if uncond_prompt is not None:
- previous_mean = uncond_embeddings.float().mean(axis=[-2, -1]).to(uncond_embeddings.dtype)
- uncond_embeddings *= uncond_weights.unsqueeze(-1)
- current_mean = uncond_embeddings.float().mean(axis=[-2, -1]).to(uncond_embeddings.dtype)
- uncond_embeddings *= (previous_mean / current_mean).unsqueeze(-1).unsqueeze(-1)
-
- if uncond_prompt is not None:
- return text_embeddings, uncond_embeddings
- return text_embeddings, None
-
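A hedged usage sketch for the helpers above; the checkpoint id is only an example, and recent `diffusers` pipelines accept the resulting tensors via `prompt_embeds` / `negative_prompt_embeds`:

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

cond, uncond = get_weighted_text_embeddings(
    pipe,
    prompt="a (very beautiful:1.3) landscape, [blurry]",
    uncond_prompt="low quality",
)
# Both tensors have shape (batch, n_tokens, hidden) and can replace raw prompt strings:
# pipe(prompt_embeds=cond, negative_prompt_embeds=uncond)
```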
-def _encode_prompt(
- pipe,
- prompt,
- device,
- num_images_per_prompt,
- do_classifier_free_guidance,
- negative_prompt,
- max_embeddings_multiples,
-):
- r"""
- Encodes the prompt into text encoder hidden states.
-
- Args:
- prompt (`str` or `list(int)`):
- prompt to be encoded
- device: (`torch.device`):
- torch device
- num_images_per_prompt (`int`):
- number of images that should be generated per prompt
- do_classifier_free_guidance (`bool`):
- whether to use classifier free guidance or not
- negative_prompt (`str` or `List[str]`):
- The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
- if `guidance_scale` is less than `1`).
- max_embeddings_multiples (`int`, *optional*, defaults to `3`):
- The max multiple length of prompt embeddings compared to the max output length of text encoder.
- """
- batch_size = len(prompt) if isinstance(prompt, list) else 1
-
- if negative_prompt is None:
- negative_prompt = [""] * batch_size
- elif isinstance(negative_prompt, str):
- negative_prompt = [negative_prompt] * batch_size
- if batch_size != len(negative_prompt):
- raise ValueError(
- f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
- f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
- " the batch size of `prompt`."
- )
-
- text_embeddings, uncond_embeddings = get_weighted_text_embeddings(
- pipe=pipe,
- prompt=prompt,
- uncond_prompt=negative_prompt if do_classifier_free_guidance else None,
- max_embeddings_multiples=max_embeddings_multiples,
- )
- return text_embeddings, uncond_embeddings
\ No newline at end of file
diff --git a/spaces/giswqs/geospatial/app.py b/spaces/giswqs/geospatial/app.py
deleted file mode 100644
index 5c9c0f6e47353921c0ae2f57a0997d6290c2c261..0000000000000000000000000000000000000000
--- a/spaces/giswqs/geospatial/app.py
+++ /dev/null
@@ -1,19 +0,0 @@
-import gradio as gr
-import leafmap.foliumap as leafmap
-
-
-def split(left, right):
- m = leafmap.Map()
- m.split_map(left_layer=left, right_layer=right)
- return m.to_gradio()
-
-
-left_url = 'https://opendata.digitalglobe.com/events/california-fire-2020/pre-event/2018-02-16/pine-gulch-fire20/1030010076004E00.tif'
-right_url = 'https://opendata.digitalglobe.com/events/california-fire-2020/post-event/2020-08-14/pine-gulch-fire20/10300100AAC8DD00.tif'
-left_input = gr.Textbox(value=left_url, label="Left Layer URL")
-right_input = gr.Textbox(value=right_url, label="Right Layer URL")
-
-title = 'Gradio for Geospatial Applications'
-description = 'Visualizing geospatial datasets with Gradio and leafmap'
-demo = gr.Interface(split, [left_input, right_input], "html", title=title, description=description)
-demo.launch()
\ No newline at end of file
diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Jagjit Singh Evergreen Vol 2 Zip Stream or Download the Legendary Singers Hits.md b/spaces/gotiQspiryo/whisper-ui/examples/Jagjit Singh Evergreen Vol 2 Zip Stream or Download the Legendary Singers Hits.md
deleted file mode 100644
index e717ba8308ef3d77d3165b3a712650e938c6dd83..0000000000000000000000000000000000000000
--- a/spaces/gotiQspiryo/whisper-ui/examples/Jagjit Singh Evergreen Vol 2 Zip Stream or Download the Legendary Singers Hits.md
+++ /dev/null
@@ -1,6 +0,0 @@
-jagjit singh evergreen vol 2 zip
DOWNLOAD ->>> https://urlgoal.com/2uyLVT
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Jarvis Clutch Social Spy - Insight and Advice on Adolescent Social Interactions (PDF).md b/spaces/gotiQspiryo/whisper-ui/examples/Jarvis Clutch Social Spy - Insight and Advice on Adolescent Social Interactions (PDF).md
deleted file mode 100644
index 3124e1aaad7446be3b90af6e1f5627e253f8b6da..0000000000000000000000000000000000000000
--- a/spaces/gotiQspiryo/whisper-ui/examples/Jarvis Clutch Social Spy - Insight and Advice on Adolescent Social Interactions (PDF).md
+++ /dev/null
@@ -1,6 +0,0 @@
-Swarg Yahan Narak Yahan telugu movie mp4 free download
Download Zip ::: https://urlgoal.com/2uyN9B
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/gradio/HuBERT/examples/bart/README.glue.md b/spaces/gradio/HuBERT/examples/bart/README.glue.md
deleted file mode 100644
index a010934e1e6dec491eb1c704ec02ba7405760510..0000000000000000000000000000000000000000
--- a/spaces/gradio/HuBERT/examples/bart/README.glue.md
+++ /dev/null
@@ -1,99 +0,0 @@
-# Fine-tuning BART on GLUE tasks
-
-### 1) Download the data from GLUE website (https://gluebenchmark.com/tasks) using following commands:
-```bash
-wget https://gist.githubusercontent.com/W4ngatang/60c2bdb54d156a41194446737ce03e2e/raw/17b8dd0d724281ed7c3b2aeeda662b92809aadd5/download_glue_data.py
-python download_glue_data.py --data_dir glue_data --tasks all
-```
-
-### 2) Preprocess GLUE task data (same as RoBERTa):
-```bash
-./examples/roberta/preprocess_GLUE_tasks.sh glue_data
-```
-`glue_task_name` is one of the following:
-`{ALL, QQP, MNLI, QNLI, MRPC, RTE, STS-B, SST-2, CoLA}`
-Use `ALL` for preprocessing all the glue tasks.
-
-### 3) Fine-tuning on GLUE task:
-Example fine-tuning cmd for `RTE` task
-```bash
-TOTAL_NUM_UPDATES=2036 # 10 epochs through RTE for bsz 16
-WARMUP_UPDATES=61 # 6 percent of the number of updates
-LR=1e-05 # Peak LR for polynomial LR scheduler.
-NUM_CLASSES=2
-MAX_SENTENCES=16 # Batch size.
-BART_PATH=/path/to/bart/model.pt
-
-CUDA_VISIBLE_DEVICES=0,1 fairseq-train RTE-bin/ \
- --restore-file $BART_PATH \
- --batch-size $MAX_SENTENCES \
- --max-tokens 4400 \
- --task sentence_prediction \
- --add-prev-output-tokens \
- --layernorm-embedding \
- --share-all-embeddings \
- --share-decoder-input-output-embed \
- --reset-optimizer --reset-dataloader --reset-meters \
- --required-batch-size-multiple 1 \
- --init-token 0 \
- --arch bart_large \
- --criterion sentence_prediction \
- --num-classes $NUM_CLASSES \
- --dropout 0.1 --attention-dropout 0.1 \
- --weight-decay 0.01 --optimizer adam --adam-betas "(0.9, 0.98)" --adam-eps 1e-08 \
- --clip-norm 0.0 \
- --lr-scheduler polynomial_decay --lr $LR --total-num-update $TOTAL_NUM_UPDATES --warmup-updates $WARMUP_UPDATES \
- --fp16 --fp16-init-scale 4 --threshold-loss-scale 1 --fp16-scale-window 128 \
- --max-epoch 10 \
- --find-unused-parameters \
- --best-checkpoint-metric accuracy --maximize-best-checkpoint-metric;
-```
-
-For each of the GLUE task, you will need to use following cmd-line arguments:
-
-Model | MNLI | QNLI | QQP | RTE | SST-2 | MRPC | CoLA | STS-B
----|---|---|---|---|---|---|---|---
-`--num-classes` | 3 | 2 | 2 | 2 | 2 | 2 | 2 | 1
-`--lr` | 5e-6 | 1e-5 | 1e-5 | 1e-5 | 5e-6 | 2e-5 | 2e-5 | 2e-5
-`bsz` | 128 | 32 | 32 | 32 | 128 | 64 | 64 | 32
-`--total-num-update` | 30968 | 33112 | 113272 | 1018 | 5233 | 1148 | 1334 | 1799
-`--warmup-updates` | 1858 | 1986 | 6796 | 61 | 314 | 68 | 80 | 107
-
-For `STS-B` additionally add `--regression-target --best-checkpoint-metric loss` and remove `--maximize-best-checkpoint-metric`.
-
-**Note:**
-
-a) `--total-num-update` is used by the `polynomial_decay` LR scheduler and is calculated for `--max-epoch=10` and `--batch-size=32/64/128` depending on the task.
-
-b) Above cmd-args and hyperparams are tested on Nvidia `V100` GPU with `32gb` of memory for each task. Depending on the GPU memory resources available to you, you can increase `--update-freq` and reduce `--batch-size`.
-
-### Inference on GLUE task
-After training the model as mentioned in previous step, you can perform inference with checkpoints in `checkpoints/` directory using following python code snippet:
-
-```python
-from fairseq.models.bart import BARTModel
-
-bart = BARTModel.from_pretrained(
- 'checkpoints/',
- checkpoint_file='checkpoint_best.pt',
- data_name_or_path='RTE-bin'
-)
-
-label_fn = lambda label: bart.task.label_dictionary.string(
- [label + bart.task.label_dictionary.nspecial]
-)
-ncorrect, nsamples = 0, 0
-bart.cuda()
-bart.eval()
-with open('glue_data/RTE/dev.tsv') as fin:
- fin.readline()
- for index, line in enumerate(fin):
- tokens = line.strip().split('\t')
- sent1, sent2, target = tokens[1], tokens[2], tokens[3]
- tokens = bart.encode(sent1, sent2)
- prediction = bart.predict('sentence_classification_head', tokens).argmax().item()
- prediction_label = label_fn(prediction)
- ncorrect += int(prediction_label == target)
- nsamples += 1
-print('| Accuracy: ', float(ncorrect)/float(nsamples))
-```
diff --git a/spaces/gradio/HuBERT/fairseq/modules/conv_tbc.py b/spaces/gradio/HuBERT/fairseq/modules/conv_tbc.py
deleted file mode 100644
index 65e17ec94f7e595cb657b3d2daaa1052a95d0677..0000000000000000000000000000000000000000
--- a/spaces/gradio/HuBERT/fairseq/modules/conv_tbc.py
+++ /dev/null
@@ -1,53 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-from torch import nn
-from torch.nn.modules.utils import _single
-from torch import Tensor
-
-
-class ConvTBC(torch.nn.Module):
- """1D convolution over an input of shape (time x batch x channel)
-
- The implementation uses gemm to perform the convolution. This implementation
- is faster than cuDNN for small kernel sizes.
- """
-
- def __init__(self, in_channels, out_channels, kernel_size, padding=0):
- super(ConvTBC, self).__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.kernel_size = _single(kernel_size)
- self.padding = _single(padding)
-
- self.weight = torch.nn.Parameter(
- torch.Tensor(self.kernel_size[0], in_channels, out_channels)
- )
- self.bias = torch.nn.Parameter(torch.Tensor(out_channels))
-
- self.reset_parameters()
-
- def reset_parameters(self):
- nn.init.xavier_normal_(self.weight)
- nn.init.zeros_(self.bias)
-
- def conv_tbc(self, input: Tensor):
- return torch.conv_tbc(
- input.contiguous(), self.weight, self.bias, self.padding[0]
- )
-
- def forward(self, input: Tensor):
- return self.conv_tbc(input)
-
- def __repr__(self):
- s = (
- "{name}({in_channels}, {out_channels}, kernel_size={kernel_size}"
- ", padding={padding}"
- )
- if self.bias is None:
- s += ", bias=False"
- s += ")"
- return s.format(name=self.__class__.__name__, **self.__dict__)
diff --git a/spaces/groupeonepoint/WritingAssistant/writing_assistant_app.py b/spaces/groupeonepoint/WritingAssistant/writing_assistant_app.py
deleted file mode 100644
index a931be8396ea4397a1621a629f607f5ce39de7c4..0000000000000000000000000000000000000000
--- a/spaces/groupeonepoint/WritingAssistant/writing_assistant_app.py
+++ /dev/null
@@ -1,57 +0,0 @@
-import openai
-import os
-import gradio as gr
-
-# Configure your API key
-openai.api_key = os.environ['OpenaiKey']
-
-def writing_assistant(debut, suite, instructions):
-    # Build the request
-
-    with open('instructions.txt', 'r') as fichier:
-        # Read the file contents
- instructions = fichier.read() + "\n" + instructions
-
- prompt = f"DEBUT = '{debut}'\n SUITE = '{suite}' \n INSTRUCTIONS = {instructions}"
-
- messages = [
- {"role": "system", "content": f"Tu es un assistant d'écriture. Tu aides un auteur contemporain à écrire, en t'inspirant de son style littéraire."},
- {"role": "user", "content": prompt}
- ]
-
- # Call GPT-3.5-turbo API
- response = openai.ChatCompletion.create(
- model="gpt-3.5-turbo",
- messages=messages,
- temperature=0.9
- )
-
- # Get generated text
- texte_reecrit = response.choices[0].message['content'].strip()
-
- return texte_reecrit
-
-# Default inputs
-with open('debut_par_defaut.txt', 'r') as fichier:
-    # Read the file contents
-    debut_par_defaut = fichier.read()
-
-with open('suite_par_defaut.txt', 'r') as fichier:
-    # Read the file contents
-    suite_par_defaut = fichier.read()
-
-# Create the Gradio interface
-iface = gr.Interface(
- fn=writing_assistant,
- inputs=[
- gr.inputs.Textbox(lines=5, label="Début", default = debut_par_defaut),
- gr.inputs.Textbox(lines=5, label="Suite", default = suite_par_defaut),
- gr.inputs.Textbox(lines=2, label="Instructions additionnelles")
- ],
- outputs=gr.outputs.Textbox(label="Texte réécrit"),
- title="Assistant d'écriture",
- description="par Nicolas \nRéécrit un brouillon en respectant un début avec un style donné."
-)
-
-# Launch the interface
-iface.launch()
\ No newline at end of file
diff --git a/spaces/guetLzy/Real-ESRGAN-Demo/inference_realesrgan.py b/spaces/guetLzy/Real-ESRGAN-Demo/inference_realesrgan.py
deleted file mode 100644
index 0a8cc43addb2e8e94b9920cef109443c7f475241..0000000000000000000000000000000000000000
--- a/spaces/guetLzy/Real-ESRGAN-Demo/inference_realesrgan.py
+++ /dev/null
@@ -1,166 +0,0 @@
-import argparse
-import cv2
-import glob
-import os
-from basicsr.archs.rrdbnet_arch import RRDBNet
-from basicsr.utils.download_util import load_file_from_url
-
-from realesrgan import RealESRGANer
-from realesrgan.archs.srvgg_arch import SRVGGNetCompact
-
-
-def main():
- """Inference demo for Real-ESRGAN.
- """
- parser = argparse.ArgumentParser()
- parser.add_argument('-i', '--input', type=str, default='inputs', help='Input image or folder')
- parser.add_argument(
- '-n',
- '--model_name',
- type=str,
- default='RealESRGAN_x4plus',
- help=('Model names: RealESRGAN_x4plus | RealESRNet_x4plus | RealESRGAN_x4plus_anime_6B | RealESRGAN_x2plus | '
- 'realesr-animevideov3 | realesr-general-x4v3'))
- parser.add_argument('-o', '--output', type=str, default='results', help='Output folder')
- parser.add_argument(
- '-dn',
- '--denoise_strength',
- type=float,
- default=0.5,
-        help=('Denoise strength: 0 for weak denoising (keeps more noise), 1 for strong denoising. '
-              'Only used for the realesr-general-x4v3 model'))
- parser.add_argument('-s', '--outscale', type=float, default=4, help='The final upsampling scale of the image')
- parser.add_argument(
- '--model_path', type=str, default=None, help='[Option] Model path. Usually, you do not need to specify it')
- parser.add_argument('--suffix', type=str, default='out', help='Suffix of the restored image')
- parser.add_argument('-t', '--tile', type=int, default=0, help='Tile size, 0 for no tile during testing')
- parser.add_argument('--tile_pad', type=int, default=10, help='Tile padding')
- parser.add_argument('--pre_pad', type=int, default=0, help='Pre padding size at each border')
- parser.add_argument('--face_enhance', action='store_true', help='Use GFPGAN to enhance face')
- parser.add_argument(
- '--fp32', action='store_true', help='Use fp32 precision during inference. Default: fp16 (half precision).')
- parser.add_argument(
- '--alpha_upsampler',
- type=str,
- default='realesrgan',
- help='The upsampler for the alpha channels. Options: realesrgan | bicubic')
- parser.add_argument(
- '--ext',
- type=str,
- default='auto',
- help='Image extension. Options: auto | jpg | png, auto means using the same extension as inputs')
- parser.add_argument(
- '-g', '--gpu-id', type=int, default=None, help='gpu device to use (default=None) can be 0,1,2 for multi-gpu')
-
- args = parser.parse_args()
-
- # determine models according to model names
- args.model_name = args.model_name.split('.')[0]
- if args.model_name == 'RealESRGAN_x4plus': # x4 RRDBNet model
- model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4)
- netscale = 4
- file_url = ['https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth']
- elif args.model_name == 'RealESRNet_x4plus': # x4 RRDBNet model
- model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4)
- netscale = 4
- file_url = ['https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/RealESRNet_x4plus.pth']
- elif args.model_name == 'RealESRGAN_x4plus_anime_6B': # x4 RRDBNet model with 6 blocks
- model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=6, num_grow_ch=32, scale=4)
- netscale = 4
- file_url = ['https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth']
- elif args.model_name == 'RealESRGAN_x2plus': # x2 RRDBNet model
- model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=2)
- netscale = 2
- file_url = ['https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.1/RealESRGAN_x2plus.pth']
- elif args.model_name == 'realesr-animevideov3': # x4 VGG-style model (XS size)
- model = SRVGGNetCompact(num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=16, upscale=4, act_type='prelu')
- netscale = 4
- file_url = ['https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-animevideov3.pth']
- elif args.model_name == 'realesr-general-x4v3': # x4 VGG-style model (S size)
- model = SRVGGNetCompact(num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=32, upscale=4, act_type='prelu')
- netscale = 4
- file_url = [
- 'https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-general-wdn-x4v3.pth',
- 'https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-general-x4v3.pth'
- ]
-
- # determine model paths
- if args.model_path is not None:
- model_path = args.model_path
- else:
- model_path = os.path.join('weights', args.model_name + '.pth')
- if not os.path.isfile(model_path):
- ROOT_DIR = os.path.dirname(os.path.abspath(__file__))
- for url in file_url:
- # model_path will be updated
- model_path = load_file_from_url(
- url=url, model_dir=os.path.join(ROOT_DIR, 'weights'), progress=True, file_name=None)
-
- # use dni to control the denoise strength
- dni_weight = None
- if args.model_name == 'realesr-general-x4v3' and args.denoise_strength != 1:
- wdn_model_path = model_path.replace('realesr-general-x4v3', 'realesr-general-wdn-x4v3')
- model_path = [model_path, wdn_model_path]
- dni_weight = [args.denoise_strength, 1 - args.denoise_strength]
-
- # restorer
- upsampler = RealESRGANer(
- scale=netscale,
- model_path=model_path,
- dni_weight=dni_weight,
- model=model,
- tile=args.tile,
- tile_pad=args.tile_pad,
- pre_pad=args.pre_pad,
- half=not args.fp32,
- gpu_id=args.gpu_id)
-
- if args.face_enhance: # Use GFPGAN for face enhancement
- from gfpgan import GFPGANer
- face_enhancer = GFPGANer(
- model_path='https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth',
- upscale=args.outscale,
- arch='clean',
- channel_multiplier=2,
- bg_upsampler=upsampler)
- os.makedirs(args.output, exist_ok=True)
-
- if os.path.isfile(args.input):
- paths = [args.input]
- else:
- paths = sorted(glob.glob(os.path.join(args.input, '*')))
-
- for idx, path in enumerate(paths):
- imgname, extension = os.path.splitext(os.path.basename(path))
- print('Testing', idx, imgname)
-
- img = cv2.imread(path, cv2.IMREAD_UNCHANGED)
- if len(img.shape) == 3 and img.shape[2] == 4:
- img_mode = 'RGBA'
- else:
- img_mode = None
-
- try:
- if args.face_enhance:
- _, _, output = face_enhancer.enhance(img, has_aligned=False, only_center_face=False, paste_back=True)
- else:
- output, _ = upsampler.enhance(img, outscale=args.outscale)
- except RuntimeError as error:
- print('Error', error)
- print('If you encounter CUDA out of memory, try to set --tile with a smaller number.')
- else:
- if args.ext == 'auto':
- extension = extension[1:]
- else:
- extension = args.ext
- if img_mode == 'RGBA': # RGBA images should be saved in png format
- extension = 'png'
- if args.suffix == '':
- save_path = os.path.join(args.output, f'{imgname}.{extension}')
- else:
- save_path = os.path.join(args.output, f'{imgname}_{args.suffix}.{extension}')
- cv2.imwrite(save_path, output)
-
-
-if __name__ == '__main__':
- main()
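As a companion to the script above, a hedged sketch of the programmatic path it wraps for the default RealESRGAN_x4plus model. The local weights path and the input file name are placeholders, and the command lines only use flags defined by the argument parser above.

import cv2
from basicsr.archs.rrdbnet_arch import RRDBNet
from realesrgan import RealESRGANer

# x4 RRDBNet, the same configuration the script builds for RealESRGAN_x4plus
model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4)
upsampler = RealESRGANer(
    scale=4,
    model_path='weights/RealESRGAN_x4plus.pth',  # assumed to be downloaded already
    dni_weight=None,
    model=model,
    tile=0,        # try e.g. 400 if CUDA runs out of memory
    tile_pad=10,
    pre_pad=0,
    half=True,     # fp16 on GPU; use half=False (the --fp32 path) on CPU
)
img = cv2.imread('input.png', cv2.IMREAD_UNCHANGED)  # placeholder input
output, _ = upsampler.enhance(img, outscale=4)
cv2.imwrite('input_out.png', output)

# Equivalent command-line calls to the script above:
#   python inference_realesrgan.py -n RealESRGAN_x4plus -i inputs -o results
#   python inference_realesrgan.py -n realesr-general-x4v3 -i photo.jpg -dn 0.3 --face_enhance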
diff --git a/spaces/gurgenblbulyan/video-based-text-generation/utils.py b/spaces/gurgenblbulyan/video-based-text-generation/utils.py
deleted file mode 100644
index f58ec4088b8441a48a973481b8b9f9372547969e..0000000000000000000000000000000000000000
--- a/spaces/gurgenblbulyan/video-based-text-generation/utils.py
+++ /dev/null
@@ -1,42 +0,0 @@
-from transformers import ViTFeatureExtractor
-import torchvision
-import torchvision.transforms.functional as fn
-import torch as th
-
-
-def video2image_from_path(video_path, feature_extractor_name):
- video = torchvision.io.read_video(video_path)
-
- return video2image(video[0], feature_extractor_name)
-
-
-def video2image(video, feature_extractor_name):
- feature_extractor = ViTFeatureExtractor.from_pretrained(
- feature_extractor_name
- )
-
- vid = th.permute(video, (3, 0, 1, 2))
- samp = th.linspace(0, vid.shape[1]-1, 49, dtype=th.long)
- vid = vid[:, samp, :, :]
-
- im_l = list()
- for i in range(vid.shape[1]):
- im_l.append(vid[:, i, :, :])
-
- inputs = feature_extractor(im_l, return_tensors="pt")
-
- inputs = inputs['pixel_values']
-
- im_h = list()
- for i in range(7):
- im_v = th.cat((inputs[0+i*7, :, :, :],
- inputs[1+i*7, :, :, :],
- inputs[2+i*7, :, :, :],
- inputs[3+i*7, :, :, :],
- inputs[4+i*7, :, :, :],
- inputs[5+i*7, :, :, :],
- inputs[6+i*7, :, :, :]), 2)
- im_h.append(im_v)
- resize = fn.resize(th.cat(im_h, 1), size=[224])
-
- return resize
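In short, video2image() above samples 49 frames, preprocesses each with the ViT feature extractor, tiles them into a 7x7 mosaic and resizes the result to a 224-pixel edge, so a single image encoder pass covers a whole clip. A small usage sketch, assuming the module above is importable as utils; the random clip and the checkpoint name are only examples.

import torch as th

from utils import video2image  # the module shown above

# Fake clip shaped like torchvision.io.read_video output: (frames, H, W, C), uint8
fake_clip = th.randint(0, 255, (120, 360, 640, 3), dtype=th.uint8)
mosaic = video2image(fake_clip, "google/vit-base-patch16-224-in21k")
print(mosaic.shape)  # expected: torch.Size([3, 224, 224]) -- a 7x7 frame grid shrunk to one image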
diff --git a/spaces/h2oai/wave-tour/examples/stat_small_series_interval.py b/spaces/h2oai/wave-tour/examples/stat_small_series_interval.py
deleted file mode 100644
index bcd98b23636e2d9493323cbecb5705fb7b2a15f1..0000000000000000000000000000000000000000
--- a/spaces/h2oai/wave-tour/examples/stat_small_series_interval.py
+++ /dev/null
@@ -1,37 +0,0 @@
-# Stat / Series / Small / Interval
-# Create a small stat card displaying a primary value and a series plot.
-# #stat_card #interval #series
-# ---
-import time
-
-from faker import Faker
-
-from synth import FakeCategoricalSeries
-from h2o_wave import site, ui, data
-
-page = site['/demo']
-
-fake = Faker()
-f = FakeCategoricalSeries()
-cat, val, pc = f.next()
-c = page.add('example', ui.small_series_stat_card(
- box='1 1 1 1',
- title=fake.cryptocurrency_name(),
- value='=${{intl qux minimum_fraction_digits=2 maximum_fraction_digits=2}}',
- data=dict(qux=val, quux=pc),
- plot_category='foo',
- plot_type='interval',
- plot_value='qux',
- plot_color='$red',
- plot_data=data('foo qux', -20),
- plot_zero_value=0,
-))
-page.save()
-
-while True:
- time.sleep(1)
- cat, val, pc = f.next()
- c.data.qux = val
- c.data.quux = pc
- c.plot_data[-1] = [cat, val]
- page.save()
diff --git a/spaces/hamelcubsfan/AutoGPT/autogpt/commands/git_operations.py b/spaces/hamelcubsfan/AutoGPT/autogpt/commands/git_operations.py
deleted file mode 100644
index 028f3b8da44c85e01d20ccc5d4a5fa72c759008b..0000000000000000000000000000000000000000
--- a/spaces/hamelcubsfan/AutoGPT/autogpt/commands/git_operations.py
+++ /dev/null
@@ -1,26 +0,0 @@
-"""Git operations for autogpt"""
-import git
-
-from autogpt.config import Config
-from autogpt.workspace import path_in_workspace
-
-CFG = Config()
-
-
-def clone_repository(repo_url: str, clone_path: str) -> str:
- """Clone a GitHub repository locally
-
- Args:
- repo_url (str): The URL of the repository to clone
- clone_path (str): The path to clone the repository to
-
- Returns:
- str: The result of the clone operation"""
- split_url = repo_url.split("//")
- auth_repo_url = f"//{CFG.github_username}:{CFG.github_api_key}@".join(split_url)
- safe_clone_path = path_in_workspace(clone_path)
- try:
- git.Repo.clone_from(auth_repo_url, safe_clone_path)
- return f"""Cloned {repo_url} to {safe_clone_path}"""
- except Exception as e:
- return f"Error: {str(e)}"
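For context, clone_repository() above splices the configured credentials into the URL (https://github.com/org/repo becomes https://user:key@github.com/org/repo) and clones inside the workspace. A usage sketch, assuming Config() carries valid GitHub credentials; the repository URL and target folder are placeholders.

from autogpt.commands.git_operations import clone_repository

result = clone_repository(
    repo_url="https://github.com/Significant-Gravitas/Auto-GPT.git",  # placeholder
    clone_path="cloned_repo",  # resolved inside the workspace by path_in_workspace()
)
print(result)  # "Cloned <url> to <workspace path>" on success, "Error: ..." otherwise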
diff --git a/spaces/haoqi7/research/scripts/tests/model_test.py b/spaces/haoqi7/research/scripts/tests/model_test.py
deleted file mode 100644
index b2de4b642b75950d2ee6ed005293a54d9a960bec..0000000000000000000000000000000000000000
--- a/spaces/haoqi7/research/scripts/tests/model_test.py
+++ /dev/null
@@ -1,103 +0,0 @@
-if __name__ == '__main__':
- import sys
- from pathlib import Path
-
- project_root = Path(
- __file__).parent.parent.parent.absolute() # /home/adapting/git/leoxiang66/idp_LiteratureResearch_Tool
- sys.path.append(project_root.__str__())
-
- import torch
- from lrt.clustering.models.keyBartPlus import *
- from lrt.clustering.models.adapter import *
- from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
- import os
-
- ####################### Adapter Test #############################
- input_dim = 1024
- adapter_hid_dim = 256
- adapter = Adapter(input_dim,adapter_hid_dim)
-
- data = torch.randn(10, 20, input_dim)
-
- tmp = adapter(data)
-
- assert data.size() == tmp.size()
- ####################### Adapter Test #############################
-
- ####################### BartDecoderPlus Test #############################
- keyBart = AutoModelForSeq2SeqLM.from_pretrained("bloomberg/KeyBART")
- bartDecoderP = BartDecoderPlus(keyBart, 100)
- tmp = bartDecoderP(inputs_embeds=data,
- output_attentions = True,
- output_hidden_states = True,
- encoder_hidden_states = data
- )
- print(type(tmp))
- # print(tmp.__dict__)
- print(dir(tmp))
- last_hid_states = tmp.last_hidden_state
- hidden_states = tmp.hidden_states
- attentions = tmp.attentions
- cross_attention = tmp.cross_attentions
- print(last_hid_states.shape)
- print(hidden_states.__len__())
- print(attentions.__len__())
- print(len(cross_attention))
- # print(cross_attention[0])
- print(cross_attention[0].shape)
-
- ####################### BartDecoderPlus Test #############################
-
- ####################### BartPlus Test #############################
- bartP = BartPlus(keyBart,100)
- tmp = bartP(
- inputs_embeds = data,
- decoder_inputs_embeds = data,
- output_attentions=True,
- output_hidden_states=True,
- )
- print(type(tmp))
- # print(tmp.__dict__)
- print(dir(tmp))
- last_hid_states = tmp.last_hidden_state
- hidden_states = tmp.decoder_hidden_states
- attentions = tmp.decoder_attentions
- cross_attention = tmp.cross_attentions
- print(last_hid_states.shape)
- print(hidden_states.__len__())
- print(attentions.__len__())
- print(len(cross_attention))
- # print(cross_attention[0])
- print(cross_attention[0].shape)
- ####################### BartPlus Test #############################
-
- ####################### Summary #############################
- from torchinfo import summary
-
- summary(bartP)
- # summary(bartDecoderP)
- ####################### Summary #############################
-
- ####################### KeyBartAdapter Test #############################
- keybart_adapter = KeyBartAdapter(100)
- tmp = keybart_adapter(
- inputs_embeds=data,
- decoder_inputs_embeds=data,
- output_attentions=True,
- output_hidden_states=True,
- )
- print(type(tmp))
- # print(tmp.__dict__)
- print(dir(tmp))
- last_hid_states = tmp.encoder_last_hidden_state
- hidden_states = tmp.decoder_hidden_states
- attentions = tmp.decoder_attentions
- cross_attention = tmp.cross_attentions
- print(last_hid_states.shape)
- print(hidden_states.__len__())
- print(attentions.__len__())
- print(len(cross_attention))
- # print(cross_attention[0])
- print(cross_attention[0].shape)
- summary(keybart_adapter)
- ####################### KeyBartAdapter Test #############################
\ No newline at end of file
diff --git a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/data/datasets/evaluation/lvis/lvis_eval.py b/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/data/datasets/evaluation/lvis/lvis_eval.py
deleted file mode 100644
index 90f09072d99ff0ee7552d6dcdf6e75971b388fda..0000000000000000000000000000000000000000
--- a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/data/datasets/evaluation/lvis/lvis_eval.py
+++ /dev/null
@@ -1,998 +0,0 @@
-# Copyright (c) Aishwarya Kamath & Nicolas Carion. Licensed under the Apache License 2.0. All Rights Reserved
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-import copy
-import datetime
-import json
-import os
-from collections import OrderedDict, defaultdict
-
-import numpy as np
-import pycocotools.mask as mask_util
-import torch
-# import torch._six
-
-import maskrcnn_benchmark.utils.mdetr_dist as dist
-
-from maskrcnn_benchmark.utils.mdetr_dist import all_gather
-
-
-from .lvis import LVIS
-
-def merge(img_ids, eval_imgs):
- all_img_ids = all_gather(img_ids)
- all_eval_imgs = all_gather(eval_imgs)
-
- merged_img_ids = []
- for p in all_img_ids:
- merged_img_ids.extend(p)
-
- merged_eval_imgs = []
- for p in all_eval_imgs:
- merged_eval_imgs.append(p)
-
- merged_img_ids = np.array(merged_img_ids)
- merged_eval_imgs = np.concatenate(merged_eval_imgs, 2)
-
- # keep only unique (and in sorted order) images
- merged_img_ids, idx = np.unique(merged_img_ids, return_index=True)
- merged_eval_imgs = merged_eval_imgs[..., idx]
-
- return merged_img_ids, merged_eval_imgs
-
-
-#################################################################
-# From LVIS, with following changes:
-# * fixed LVISEval constructor to accept empty dt
-# * Removed logger
-# * LVIS results supports numpy inputs
-#################################################################
-
-
-class Params:
- def __init__(self, iou_type):
- """Params for LVIS evaluation API."""
- self.img_ids = []
- self.cat_ids = []
-        # np.arange causes trouble: the data points it produces are slightly
-        # larger than the true values
- self.iou_thrs = np.linspace(0.5, 0.95, int(np.round((0.95 - 0.5) / 0.05)) + 1, endpoint=True)
- self.rec_thrs = np.linspace(0.0, 1.00, int(np.round((1.00 - 0.0) / 0.01)) + 1, endpoint=True)
- self.max_dets = 300
- self.area_rng = [
- [0 ** 2, 1e5 ** 2],
- [0 ** 2, 32 ** 2],
- [32 ** 2, 96 ** 2],
- [96 ** 2, 1e5 ** 2],
- ]
- self.area_rng_lbl = ["all", "small", "medium", "large"]
- self.use_cats = 1
-        # We bin categories into three bins based on how many images of the
-        # training set the category is present in.
- # r: Rare : < 10
- # c: Common : >= 10 and < 100
- # f: Frequent: >= 100
- self.img_count_lbl = ["r", "c", "f"]
- self.iou_type = iou_type
-
-
-class LVISResults(LVIS):
- def __init__(self, lvis_gt, results, max_dets=300):
- """Constructor for LVIS results.
- Args:
- lvis_gt (LVIS class instance, or str containing path of
- annotation file)
- results (str containing path of result file or a list of dicts)
- max_dets (int): max number of detections per image. The official
- value of max_dets for LVIS is 300.
- """
- super(LVISResults, self).__init__()
- assert isinstance(lvis_gt, LVIS)
- self.dataset["images"] = [img for img in lvis_gt.dataset["images"]]
-
- if isinstance(results, str):
- result_anns = self._load_json(results)
- elif type(results) == np.ndarray:
- result_anns = self.loadNumpyAnnotations(results)
- else:
- result_anns = results
-
- if max_dets >= 0:
- result_anns = self.limit_dets_per_image(result_anns, max_dets)
-
- if len(result_anns) > 0 and "bbox" in result_anns[0]:
- self.dataset["categories"] = copy.deepcopy(lvis_gt.dataset["categories"])
- for id, ann in enumerate(result_anns):
- x1, y1, w, h = ann["bbox"]
- x2 = x1 + w
- y2 = y1 + h
-
- if "segmentation" not in ann:
- ann["segmentation"] = [[x1, y1, x1, y2, x2, y2, x2, y1]]
-
- ann["area"] = w * h
- ann["id"] = id + 1
-
- elif len(result_anns) > 0 and "segmentation" in result_anns[0]:
- self.dataset["categories"] = copy.deepcopy(lvis_gt.dataset["categories"])
- for id, ann in enumerate(result_anns):
- # Only support compressed RLE format as segmentation results
- ann["area"] = mask_util.area(ann["segmentation"])
-
- if "bbox" not in ann:
- ann["bbox"] = mask_util.toBbox(ann["segmentation"])
-
- ann["id"] = id + 1
-
- self.dataset["annotations"] = result_anns
- self._create_index()
-
- # #FIXME: disabling this check for now
- # img_ids_in_result = [ann["image_id"] for ann in result_anns]
-
- # assert set(img_ids_in_result) == (
- # set(img_ids_in_result) & set(self.get_img_ids())
- # ), "Results do not correspond to current LVIS set."
-
- def limit_dets_per_image(self, anns, max_dets):
- img_ann = defaultdict(list)
- for ann in anns:
- img_ann[ann["image_id"]].append(ann)
-
- for img_id, _anns in img_ann.items():
- if len(_anns) <= max_dets:
- continue
- _anns = sorted(_anns, key=lambda ann: ann["score"], reverse=True)
- img_ann[img_id] = _anns[:max_dets]
-
- return [ann for anns in img_ann.values() for ann in anns]
-
- def get_top_results(self, img_id, score_thrs):
- ann_ids = self.get_ann_ids(img_ids=[img_id])
- anns = self.load_anns(ann_ids)
- return list(filter(lambda ann: ann["score"] > score_thrs, anns))
-
-
-class LVISEval:
- def __init__(self, lvis_gt, lvis_dt=None, iou_type="segm"):
- """Constructor for LVISEval.
- Args:
- lvis_gt (LVIS class instance, or str containing path of annotation file)
- lvis_dt (LVISResult class instance, or str containing path of result file,
- or list of dict)
- iou_type (str): segm or bbox evaluation
- """
-
- if iou_type not in ["bbox", "segm"]:
- raise ValueError("iou_type: {} is not supported.".format(iou_type))
-
- if isinstance(lvis_gt, LVIS):
- self.lvis_gt = lvis_gt
- elif isinstance(lvis_gt, str):
- self.lvis_gt = LVIS(lvis_gt)
- else:
- raise TypeError("Unsupported type {} of lvis_gt.".format(lvis_gt))
-
- if isinstance(lvis_dt, LVISResults):
- self.lvis_dt = lvis_dt
- elif isinstance(lvis_dt, (str, list)):
- self.lvis_dt = LVISResults(self.lvis_gt, lvis_dt)
- elif lvis_dt is not None:
- raise TypeError("Unsupported type {} of lvis_dt.".format(lvis_dt))
-
- # per-image per-category evaluation results
- self.eval_imgs = defaultdict(list)
- self.eval = {} # accumulated evaluation results
- self._gts = defaultdict(list) # gt for evaluation
- self._dts = defaultdict(list) # dt for evaluation
- self.params = Params(iou_type=iou_type) # parameters
- self.results = OrderedDict()
- self.stats = []
- self.ious = {} # ious between all gts and dts
-
- self.params.img_ids = sorted(self.lvis_gt.get_img_ids())
- self.params.cat_ids = sorted(self.lvis_gt.get_cat_ids())
-
- def _to_mask(self, anns, lvis):
- for ann in anns:
- rle = lvis.ann_to_rle(ann)
- ann["segmentation"] = rle
-
- def _prepare(self):
- """Prepare self._gts and self._dts for evaluation based on params."""
-
- cat_ids = self.params.cat_ids if self.params.cat_ids else None
-
- gts = self.lvis_gt.load_anns(self.lvis_gt.get_ann_ids(img_ids=self.params.img_ids, cat_ids=cat_ids))
- dts = self.lvis_dt.load_anns(self.lvis_dt.get_ann_ids(img_ids=self.params.img_ids, cat_ids=cat_ids))
- # convert ground truth to mask if iou_type == 'segm'
- if self.params.iou_type == "segm":
- self._to_mask(gts, self.lvis_gt)
- self._to_mask(dts, self.lvis_dt)
-
- # set ignore flag
- for gt in gts:
- if "ignore" not in gt:
- gt["ignore"] = 0
-
- for gt in gts:
- self._gts[gt["image_id"], gt["category_id"]].append(gt)
-
-        # For federated dataset evaluation we will filter out all dt for an
-        # image which belong to categories not present in gt and not present in
-        # the negative list for an image. In other words, the detector is not
-        # penalized for categories for which we have no gt information about
-        # their presence or absence in an image.
- img_data = self.lvis_gt.load_imgs(ids=self.params.img_ids)
- # per image map of categories not present in image
- img_nl = {d["id"]: d["neg_category_ids"] for d in img_data}
- # per image list of categories present in image
- img_pl = defaultdict(set)
- for ann in gts:
- img_pl[ann["image_id"]].add(ann["category_id"])
-        # per image map of categories which have missing gt. For these
-        # categories we don't penalize the detector for false positives.
- self.img_nel = {d["id"]: d["not_exhaustive_category_ids"] for d in img_data}
-
- for dt in dts:
- img_id, cat_id = dt["image_id"], dt["category_id"]
- if cat_id not in img_nl[img_id] and cat_id not in img_pl[img_id]:
- continue
- self._dts[img_id, cat_id].append(dt)
-
- self.freq_groups = self._prepare_freq_group()
-
- def _prepare_freq_group(self):
- freq_groups = [[] for _ in self.params.img_count_lbl]
- cat_data = self.lvis_gt.load_cats(self.params.cat_ids)
- for idx, _cat_data in enumerate(cat_data):
- frequency = _cat_data["frequency"]
- freq_groups[self.params.img_count_lbl.index(frequency)].append(idx)
- return freq_groups
-
- def evaluate(self):
- """
- Run per image evaluation on given images and store results
- (a list of dict) in self.eval_imgs.
- """
-
- self.params.img_ids = list(np.unique(self.params.img_ids))
-
- if self.params.use_cats:
- cat_ids = self.params.cat_ids
- else:
- cat_ids = [-1]
-
- self._prepare()
-
- self.ious = {
- (img_id, cat_id): self.compute_iou(img_id, cat_id) for img_id in self.params.img_ids for cat_id in cat_ids
- }
-
- # loop through images, area range, max detection number
- self.eval_imgs = [
- self.evaluate_img(img_id, cat_id, area_rng)
- for cat_id in cat_ids
- for area_rng in self.params.area_rng
- for img_id in self.params.img_ids
- ]
-
- def _get_gt_dt(self, img_id, cat_id):
- """Create gt, dt which are list of anns/dets. If use_cats is true
- only anns/dets corresponding to tuple (img_id, cat_id) will be
- used. Else, all anns/dets in image are used and cat_id is not used.
- """
- if self.params.use_cats:
- gt = self._gts[img_id, cat_id]
- dt = self._dts[img_id, cat_id]
- else:
- gt = [_ann for _cat_id in self.params.cat_ids for _ann in self._gts[img_id, cat_id]]
- dt = [_ann for _cat_id in self.params.cat_ids for _ann in self._dts[img_id, cat_id]]
- return gt, dt
-
- def compute_iou(self, img_id, cat_id):
- gt, dt = self._get_gt_dt(img_id, cat_id)
-
- if len(gt) == 0 and len(dt) == 0:
- return []
-
- # Sort detections in decreasing order of score.
- idx = np.argsort([-d["score"] for d in dt], kind="mergesort")
- dt = [dt[i] for i in idx]
-
- iscrowd = [int(False)] * len(gt)
-
- if self.params.iou_type == "segm":
- ann_type = "segmentation"
- elif self.params.iou_type == "bbox":
- ann_type = "bbox"
- else:
- raise ValueError("Unknown iou_type for iou computation.")
- gt = [g[ann_type] for g in gt]
- dt = [d[ann_type] for d in dt]
-
- # compute iou between each dt and gt region
- # will return array of shape len(dt), len(gt)
- ious = mask_util.iou(dt, gt, iscrowd)
- return ious
-
- def evaluate_img(self, img_id, cat_id, area_rng):
- """Perform evaluation for single category and image."""
- gt, dt = self._get_gt_dt(img_id, cat_id)
-
- if len(gt) == 0 and len(dt) == 0:
- return None
-
-        # Add another field _ignore to only consider anns based on area range.
- for g in gt:
- if g["ignore"] or (g["area"] < area_rng[0] or g["area"] > area_rng[1]):
- g["_ignore"] = 1
- else:
- g["_ignore"] = 0
-
- # Sort gt ignore last
- gt_idx = np.argsort([g["_ignore"] for g in gt], kind="mergesort")
- gt = [gt[i] for i in gt_idx]
-
- # Sort dt highest score first
- dt_idx = np.argsort([-d["score"] for d in dt], kind="mergesort")
- dt = [dt[i] for i in dt_idx]
-
- # load computed ious
- ious = self.ious[img_id, cat_id][:, gt_idx] if len(self.ious[img_id, cat_id]) > 0 else self.ious[img_id, cat_id]
-
- num_thrs = len(self.params.iou_thrs)
- num_gt = len(gt)
- num_dt = len(dt)
-
- # Array to store the "id" of the matched dt/gt
- gt_m = np.zeros((num_thrs, num_gt))
- dt_m = np.zeros((num_thrs, num_dt))
-
- gt_ig = np.array([g["_ignore"] for g in gt])
- dt_ig = np.zeros((num_thrs, num_dt))
-
- for iou_thr_idx, iou_thr in enumerate(self.params.iou_thrs):
- if len(ious) == 0:
- break
-
- for dt_idx, _dt in enumerate(dt):
- iou = min([iou_thr, 1 - 1e-10])
- # information about best match so far (m=-1 -> unmatched)
- # store the gt_idx which matched for _dt
- m = -1
- for gt_idx, _ in enumerate(gt):
- # if this gt already matched continue
- if gt_m[iou_thr_idx, gt_idx] > 0:
- continue
- # if _dt matched to reg gt, and on ignore gt, stop
- if m > -1 and gt_ig[m] == 0 and gt_ig[gt_idx] == 1:
- break
- # continue to next gt unless better match made
- if ious[dt_idx, gt_idx] < iou:
- continue
- # if match successful and best so far, store appropriately
- iou = ious[dt_idx, gt_idx]
- m = gt_idx
-
- # No match found for _dt, go to next _dt
- if m == -1:
- continue
-
- # if gt to ignore for some reason update dt_ig.
- # Should not be used in evaluation.
- dt_ig[iou_thr_idx, dt_idx] = gt_ig[m]
- # _dt match found, update gt_m, and dt_m with "id"
- dt_m[iou_thr_idx, dt_idx] = gt[m]["id"]
- gt_m[iou_thr_idx, m] = _dt["id"]
-
- # For LVIS we will ignore any unmatched detection if that category was
- # not exhaustively annotated in gt.
- dt_ig_mask = [
- d["area"] < area_rng[0] or d["area"] > area_rng[1] or d["category_id"] in self.img_nel[d["image_id"]]
- for d in dt
- ]
- dt_ig_mask = np.array(dt_ig_mask).reshape((1, num_dt)) # 1 X num_dt
- dt_ig_mask = np.repeat(dt_ig_mask, num_thrs, 0) # num_thrs X num_dt
- # Based on dt_ig_mask ignore any unmatched detection by updating dt_ig
- dt_ig = np.logical_or(dt_ig, np.logical_and(dt_m == 0, dt_ig_mask))
- # store results for given image and category
- return {
- "image_id": img_id,
- "category_id": cat_id,
- "area_rng": area_rng,
- "dt_ids": [d["id"] for d in dt],
- "gt_ids": [g["id"] for g in gt],
- "dt_matches": dt_m,
- "gt_matches": gt_m,
- "dt_scores": [d["score"] for d in dt],
- "gt_ignore": gt_ig,
- "dt_ignore": dt_ig,
- }
-
- def accumulate(self):
- """Accumulate per image evaluation results and store the result in
- self.eval.
- """
-
- if not self.eval_imgs:
- print("Warning: Please run evaluate first.")
-
- if self.params.use_cats:
- cat_ids = self.params.cat_ids
- else:
- cat_ids = [-1]
-
- num_thrs = len(self.params.iou_thrs)
- num_recalls = len(self.params.rec_thrs)
- num_cats = len(cat_ids)
- num_area_rngs = len(self.params.area_rng)
- num_imgs = len(self.params.img_ids)
-
- # -1 for absent categories
- precision = -np.ones((num_thrs, num_recalls, num_cats, num_area_rngs))
- recall = -np.ones((num_thrs, num_cats, num_area_rngs))
-
- # Initialize dt_pointers
- dt_pointers = {}
- for cat_idx in range(num_cats):
- dt_pointers[cat_idx] = {}
- for area_idx in range(num_area_rngs):
- dt_pointers[cat_idx][area_idx] = {}
-
- # Per category evaluation
- for cat_idx in range(num_cats):
- Nk = cat_idx * num_area_rngs * num_imgs
- for area_idx in range(num_area_rngs):
- Na = area_idx * num_imgs
- E = [self.eval_imgs[Nk + Na + img_idx] for img_idx in range(num_imgs)]
- # Remove elements which are None
- E = [e for e in E if e is not None]
- if len(E) == 0:
- continue
-
- # Append all scores: shape (N,)
- dt_scores = np.concatenate([e["dt_scores"] for e in E], axis=0)
- dt_ids = np.concatenate([e["dt_ids"] for e in E], axis=0)
-
- dt_idx = np.argsort(-dt_scores, kind="mergesort")
- dt_scores = dt_scores[dt_idx]
- dt_ids = dt_ids[dt_idx]
-
- dt_m = np.concatenate([e["dt_matches"] for e in E], axis=1)[:, dt_idx]
- dt_ig = np.concatenate([e["dt_ignore"] for e in E], axis=1)[:, dt_idx]
-
- gt_ig = np.concatenate([e["gt_ignore"] for e in E])
- # num gt anns to consider
- num_gt = np.count_nonzero(gt_ig == 0)
-
- if num_gt == 0:
- continue
-
- tps = np.logical_and(dt_m, np.logical_not(dt_ig))
- fps = np.logical_and(np.logical_not(dt_m), np.logical_not(dt_ig))
-
-                tp_sum = np.cumsum(tps, axis=1).astype(dtype=float)
-                fp_sum = np.cumsum(fps, axis=1).astype(dtype=float)
-
- dt_pointers[cat_idx][area_idx] = {
- "dt_ids": dt_ids,
- "tps": tps,
- "fps": fps,
- }
-
- for iou_thr_idx, (tp, fp) in enumerate(zip(tp_sum, fp_sum)):
- tp = np.array(tp)
- fp = np.array(fp)
- num_tp = len(tp)
- rc = tp / num_gt
- if num_tp:
- recall[iou_thr_idx, cat_idx, area_idx] = rc[-1]
- else:
- recall[iou_thr_idx, cat_idx, area_idx] = 0
-
- # np.spacing(1) ~= eps
- pr = tp / (fp + tp + np.spacing(1))
- pr = pr.tolist()
-
-                    # Replace each precision value with the maximum precision
-                    # value to the right of that recall level. This ensures
-                    # that the calculated AP value will be less susceptible
-                    # to small variations in the ranking.
- for i in range(num_tp - 1, 0, -1):
- if pr[i] > pr[i - 1]:
- pr[i - 1] = pr[i]
-
- rec_thrs_insert_idx = np.searchsorted(rc, self.params.rec_thrs, side="left")
-
- pr_at_recall = [0.0] * num_recalls
-
- try:
- for _idx, pr_idx in enumerate(rec_thrs_insert_idx):
- pr_at_recall[_idx] = pr[pr_idx]
- except Exception:
- pass
- precision[iou_thr_idx, :, cat_idx, area_idx] = np.array(pr_at_recall)
-
- self.eval = {
- "params": self.params,
- "counts": [num_thrs, num_recalls, num_cats, num_area_rngs],
- "date": datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"),
- "precision": precision,
- "recall": recall,
- "dt_pointers": dt_pointers,
- }
-
- def _summarize(self, summary_type, iou_thr=None, area_rng="all", freq_group_idx=None):
- aidx = [idx for idx, _area_rng in enumerate(self.params.area_rng_lbl) if _area_rng == area_rng]
-
- if summary_type == "ap":
- s = self.eval["precision"]
- if iou_thr is not None:
- tidx = np.where(iou_thr == self.params.iou_thrs)[0]
- s = s[tidx]
- if freq_group_idx is not None:
- s = s[:, :, self.freq_groups[freq_group_idx], aidx]
- else:
- s = s[:, :, :, aidx]
- else:
- s = self.eval["recall"]
- if iou_thr is not None:
- tidx = np.where(iou_thr == self.params.iou_thrs)[0]
- s = s[tidx]
- s = s[:, :, aidx]
-
- if len(s[s > -1]) == 0:
- mean_s = -1
- else:
- mean_s = np.mean(s[s > -1])
- return mean_s
-
- def summarize(self):
- """Compute and display summary metrics for evaluation results."""
- if not self.eval:
- raise RuntimeError("Please run accumulate() first.")
-
- max_dets = self.params.max_dets
-
- self.results["AP"] = self._summarize("ap")
- self.results["AP50"] = self._summarize("ap", iou_thr=0.50)
- self.results["AP75"] = self._summarize("ap", iou_thr=0.75)
- self.results["APs"] = self._summarize("ap", area_rng="small")
- self.results["APm"] = self._summarize("ap", area_rng="medium")
- self.results["APl"] = self._summarize("ap", area_rng="large")
- self.results["APr"] = self._summarize("ap", freq_group_idx=0)
- self.results["APc"] = self._summarize("ap", freq_group_idx=1)
- self.results["APf"] = self._summarize("ap", freq_group_idx=2)
-
- self.stats = np.zeros((9,))
- self.stats[0] = self._summarize("ap")
- self.stats[1] = self._summarize("ap", iou_thr=0.50)
- self.stats[2] = self._summarize("ap", iou_thr=0.75)
- self.stats[3] = self._summarize("ap", area_rng="small")
- self.stats[4] = self._summarize("ap", area_rng="medium")
- self.stats[5] = self._summarize("ap", area_rng="large")
- self.stats[6] = self._summarize("ap", freq_group_idx=0)
- self.stats[7] = self._summarize("ap", freq_group_idx=1)
- self.stats[8] = self._summarize("ap", freq_group_idx=2)
-
- key = "AR@{}".format(max_dets)
- self.results[key] = self._summarize("ar")
-
- for area_rng in ["small", "medium", "large"]:
- key = "AR{}@{}".format(area_rng[0], max_dets)
- self.results[key] = self._summarize("ar", area_rng=area_rng)
- _returned = self.print_results()
- return _returned
-
- def run(self):
- """Wrapper function which calculates the results."""
- self.evaluate()
- self.accumulate()
- self.summarize()
-
- def print_results(self):
- template = " {:<18} {} @[ IoU={:<9} | area={:>6s} | maxDets={:>3d} catIds={:>3s}] = {:0.3f}"
- out_strings = []
- for key, value in self.results.items():
- max_dets = self.params.max_dets
- if "AP" in key:
- title = "Average Precision"
- _type = "(AP)"
- else:
- title = "Average Recall"
- _type = "(AR)"
-
- if len(key) > 2 and key[2].isdigit():
- iou_thr = float(key[2:]) / 100
- iou = "{:0.2f}".format(iou_thr)
- else:
- iou = "{:0.2f}:{:0.2f}".format(self.params.iou_thrs[0], self.params.iou_thrs[-1])
-
- if len(key) > 2 and key[2] in ["r", "c", "f"]:
- cat_group_name = key[2]
- else:
- cat_group_name = "all"
-
- if len(key) > 2 and key[2] in ["s", "m", "l"]:
- area_rng = key[2]
- else:
- area_rng = "all"
-
- print(template.format(title, _type, iou, area_rng, max_dets, cat_group_name, value))
- out_strings.append(template.format(title, _type, iou, area_rng, max_dets, cat_group_name, value))
- return out_strings
-
- def get_results(self):
- if not self.results:
- print("Warning: results is empty. Call run().")
- return self.results
-
-
-#################################################################
-# end of straight copy from lvis, just fixing constructor
-#################################################################
-
-
-class LvisEvaluator(object):
- def __init__(self, lvis_gt, iou_types):
- assert isinstance(iou_types, (list, tuple))
- # lvis_gt = copy.deepcopy(lvis_gt)
- self.lvis_gt = lvis_gt
-
- self.iou_types = iou_types
- self.coco_eval = {}
- for iou_type in iou_types:
- self.coco_eval[iou_type] = LVISEval(lvis_gt, iou_type=iou_type)
-
- self.img_ids = []
- self.eval_imgs = {k: [] for k in iou_types}
-
- def update(self, predictions):
- img_ids = list(np.unique(list(predictions.keys())))
- self.img_ids.extend(img_ids)
-
- for iou_type in self.iou_types:
- results = self.prepare(predictions, iou_type)
- lvis_dt = LVISResults(self.lvis_gt, results)
- lvis_eval = self.coco_eval[iou_type]
-
- lvis_eval.lvis_dt = lvis_dt
- lvis_eval.params.img_ids = list(img_ids)
- lvis_eval.evaluate()
- eval_imgs = lvis_eval.eval_imgs
- eval_imgs = np.asarray(eval_imgs).reshape(
- len(lvis_eval.params.cat_ids), len(lvis_eval.params.area_rng), len(lvis_eval.params.img_ids)
- )
-
- self.eval_imgs[iou_type].append(eval_imgs)
-
- def synchronize_between_processes(self):
- for iou_type in self.iou_types:
- self.eval_imgs[iou_type] = np.concatenate(self.eval_imgs[iou_type], 2)
- create_common_lvis_eval(self.coco_eval[iou_type], self.img_ids, self.eval_imgs[iou_type])
-
- def accumulate(self):
- for lvis_eval in self.coco_eval.values():
- lvis_eval.accumulate()
-
- def summarize(self):
- for iou_type, lvis_eval in self.coco_eval.items():
- print("IoU metric: {}".format(iou_type))
- lvis_eval.summarize()
-
- def prepare(self, predictions, iou_type):
- if iou_type == "bbox":
- return self.prepare_for_lvis_detection(predictions)
- elif iou_type == "segm":
- return self.prepare_for_lvis_segmentation(predictions)
- elif iou_type == "keypoints":
- return self.prepare_for_lvis_keypoint(predictions)
- else:
- raise ValueError("Unknown iou type {}".format(iou_type))
-
- def prepare_for_lvis_detection(self, predictions):
- lvis_results = []
- for original_id, prediction in predictions.items():
- if len(prediction) == 0:
- continue
-
- boxes = prediction["boxes"]
- boxes = convert_to_xywh(boxes).tolist()
- scores = prediction["scores"].tolist()
- labels = prediction["labels"].tolist()
-
- lvis_results.extend(
- [
- {
- "image_id": original_id,
- "category_id": labels[k],
- "bbox": box,
- "score": scores[k],
- }
- for k, box in enumerate(boxes)
- ]
- )
- return lvis_results
-
- def prepare_for_lvis_segmentation(self, predictions):
- lvis_results = []
- for original_id, prediction in predictions.items():
- if len(prediction) == 0:
- continue
-
- scores = prediction["scores"]
- labels = prediction["labels"]
- masks = prediction["masks"]
-
- masks = masks > 0.5
-
- scores = prediction["scores"].tolist()
- labels = prediction["labels"].tolist()
-
- rles = [
- mask_util.encode(np.array(mask[0, :, :, np.newaxis], dtype=np.uint8, order="F"))[0] for mask in masks
- ]
- for rle in rles:
- rle["counts"] = rle["counts"].decode("utf-8")
-
- lvis_results.extend(
- [
- {
- "image_id": original_id,
- "category_id": labels[k],
- "segmentation": rle,
- "score": scores[k],
- }
- for k, rle in enumerate(rles)
- ]
- )
- return lvis_results
-
-
-def _merge_lists(listA, listB, maxN, key):
- result = []
- indA, indB = 0, 0
- while (indA < len(listA) or indB < len(listB)) and len(result) < maxN:
- if (indB < len(listB)) and (indA >= len(listA) or key(listA[indA]) < key(listB[indB])):
- result.append(listB[indB])
- indB += 1
- else:
- result.append(listA[indA])
- indA += 1
- return result
-
-
-# Adapted from https://github.com/achalddave/large-vocab-devil/blob/9aaddc15b00e6e0d370b16743233e40d973cd53f/scripts/evaluate_ap_fixed.py
-class LvisEvaluatorFixedAP(object):
- def __init__(self, gt: LVIS, topk=10000, fixed_ap=True):
-
- self.results = []
- self.by_cat = {}
- self.gt = gt
- self.topk = topk
- self.fixed_ap = fixed_ap
-
- def update(self, predictions):
- cur_results = self.prepare(predictions)
- if self.fixed_ap:
- by_cat = defaultdict(list)
- for ann in cur_results:
- by_cat[ann["category_id"]].append(ann)
-
- for cat, cat_anns in by_cat.items():
- if cat not in self.by_cat:
- self.by_cat[cat] = []
-
- cur = sorted(cat_anns, key=lambda x: x["score"], reverse=True)[: self.topk]
- self.by_cat[cat] = _merge_lists(self.by_cat[cat], cur, self.topk, key=lambda x: x["score"])
- else:
- by_id = defaultdict(list)
- for ann in cur_results:
- by_id[ann["image_id"]].append(ann)
-
- for id_anns in by_id.values():
- self.results.extend(sorted(id_anns, key=lambda x: x["score"], reverse=True)[:300])
-
- def synchronize_between_processes(self):
- if self.fixed_ap:
- all_cats = dist.all_gather(self.by_cat)
- self.by_cat = defaultdict(list)
- for cats in all_cats:
- for cat, cat_anns in cats.items():
- self.by_cat[cat].extend(cat_anns)
- else:
- self.results = sum(dist.all_gather(self.results), [])
-
- def prepare(self, predictions):
- lvis_results = []
- for original_id, prediction in predictions:
- if len(prediction) == 0:
- continue
-
- boxes = prediction["boxes"]
- boxes = convert_to_xywh(boxes).tolist()
- scores = prediction["scores"].tolist()
- labels = prediction["labels"].tolist()
-
- lvis_results.extend(
- [
- {
- "image_id": original_id,
- "category_id": labels[k],
- "bbox": box,
- "score": scores[k],
- }
- for k, box in enumerate(boxes)
- ]
- )
- return lvis_results
-
- def summarize(self):
- if not dist.is_main_process():
- return
-
- if self.fixed_ap:
- return self._summarize_fixed()
- else:
- return self._summarize_standard()
-
- def _summarize_standard(self):
- results = LVISResults(self.gt, self.results)
- lvis_eval = LVISEval(self.gt, results, iou_type="bbox")
- lvis_eval.run()
- lvis_eval.print_results()
-
- def _summarize_fixed(self):
- results = []
-
- missing_dets_cats = set()
- for cat, cat_anns in self.by_cat.items():
- if len(cat_anns) < self.topk:
- missing_dets_cats.add(cat)
- results.extend(sorted(cat_anns, key=lambda x: x["score"], reverse=True)[: self.topk])
- if missing_dets_cats:
- print(
- f"\n===\n"
- f"{len(missing_dets_cats)} classes had less than {self.topk} detections!\n"
- f"Outputting {self.topk} detections for each class will improve AP further.\n"
- f"If using detectron2, please use the lvdevil/infer_topk.py script to "
- f"output a results file with {self.topk} detections for each class.\n"
- f"==="
- )
-
- results = LVISResults(self.gt, results, max_dets=-1)
- lvis_eval = LVISEval(self.gt, results, iou_type="bbox")
- params = lvis_eval.params
- params.max_dets = -1 # No limit on detections per image.
- lvis_eval.run()
- scores = lvis_eval.print_results()
- metrics = {k: v for k, v in lvis_eval.results.items() if k.startswith("AP")}
- print("copypaste: %s,%s", ",".join(map(str, metrics.keys())), "path")
- return scores
-
-
-class LvisDumper(object):
- def __init__(self, topk=10000, fixed_ap=True, out_path="lvis_eval"):
-
- self.results = []
- self.by_cat = {}
- self.topk = topk
- self.fixed_ap = fixed_ap
- self.out_path = out_path
- if dist.is_main_process():
- if not os.path.exists(self.out_path):
- os.mkdir(self.out_path)
-
- def update(self, predictions):
- cur_results = self.prepare(predictions)
- if self.fixed_ap:
- by_cat = defaultdict(list)
- for ann in cur_results:
- by_cat[ann["category_id"]].append(ann)
-
- for cat, cat_anns in by_cat.items():
- if cat not in self.by_cat:
- self.by_cat[cat] = []
-
- cur = sorted(cat_anns, key=lambda x: x["score"], reverse=True)[: self.topk]
- self.by_cat[cat] = _merge_lists(self.by_cat[cat], cur, self.topk, key=lambda x: x["score"])
- else:
- by_id = defaultdict(list)
- for ann in cur_results:
- by_id[ann["image_id"]].append(ann)
-
- for id_anns in by_id.values():
- self.results.extend(sorted(id_anns, key=lambda x: x["score"], reverse=True)[:300])
-
- def synchronize_between_processes(self):
- if self.fixed_ap:
- all_cats = dist.all_gather(self.by_cat)
- self.by_cat = defaultdict(list)
- for cats in all_cats:
- for cat, cat_anns in cats.items():
- self.by_cat[cat].extend(cat_anns)
- else:
- self.results = sum(dist.all_gather(self.results), [])
-
- def prepare(self, predictions):
- lvis_results = []
- for original_id, prediction in predictions:
- if len(prediction) == 0:
- continue
-
- boxes = prediction["boxes"]
- boxes = convert_to_xywh(boxes).tolist()
- scores = prediction["scores"].tolist()
- labels = prediction["labels"].tolist()
-
- lvis_results.extend(
- [
- {
- "image_id": original_id,
- "category_id": labels[k],
- "bbox": box,
- "score": scores[k],
- }
- for k, box in enumerate(boxes)
- ]
- )
- return lvis_results
-
- def summarize(self):
- if not dist.is_main_process():
- return
-
- if self.fixed_ap:
- self._summarize_fixed()
- else:
- self._summarize_standard()
-
- def _summarize_standard(self):
- json_path = os.path.join(self.out_path, "results.json")
- print("dumping to ", json_path)
- with open(json_path, "w") as f:
- json.dump(self.results, f)
-
- print("dumped")
-
- def _summarize_fixed(self):
- results = []
-
- missing_dets_cats = set()
- for cat, cat_anns in self.by_cat.items():
- if len(cat_anns) < self.topk:
- missing_dets_cats.add(cat)
- results.extend(sorted(cat_anns, key=lambda x: x["score"], reverse=True)[: self.topk])
- if missing_dets_cats:
- print(
- f"\n===\n"
- f"{len(missing_dets_cats)} classes had less than {self.topk} detections!\n"
- f"Outputting {self.topk} detections for each class will improve AP further.\n"
- f"If using detectron2, please use the lvdevil/infer_topk.py script to "
- f"output a results file with {self.topk} detections for each class.\n"
- f"==="
- )
-
- json_path = os.path.join(self.out_path, "results.json")
- print("dumping to ", json_path)
- with open(json_path, "w") as f:
- json.dump(results, f)
-
- print("dumped")
-
-
-def convert_to_xywh(boxes):
- xmin, ymin, xmax, ymax = boxes.unbind(1)
- return torch.stack((xmin, ymin, xmax - xmin, ymax - ymin), dim=1)
-
-
-def create_common_lvis_eval(lvis_eval, img_ids, eval_imgs):
- img_ids, eval_imgs = merge(img_ids, eval_imgs)
- img_ids = list(img_ids)
- eval_imgs = list(eval_imgs.flatten())
-
- lvis_eval.eval_imgs = eval_imgs
- lvis_eval.params.img_ids = img_ids
-
-def lvis_evaluation():
- pass
\ No newline at end of file
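The evaluators above follow the usual update / synchronize / accumulate / summarize cycle. A condensed sketch of driving LvisEvaluator on one batch of box predictions; the annotation path, image id, category id and module import path are placeholders derived from the file layout above rather than guaranteed values.

import torch

from maskrcnn_benchmark.data.datasets.evaluation.lvis.lvis import LVIS
from maskrcnn_benchmark.data.datasets.evaluation.lvis.lvis_eval import LvisEvaluator

lvis_gt = LVIS("annotations/lvis_v1_val.json")           # placeholder annotation file
evaluator = LvisEvaluator(lvis_gt, iou_types=("bbox",))

# predictions: {image_id: {"boxes": Tensor[N, 4] in xyxy, "scores": Tensor[N], "labels": Tensor[N]}}
predictions = {
    42: {
        "boxes": torch.tensor([[10.0, 20.0, 110.0, 220.0]]),
        "scores": torch.tensor([0.9]),
        "labels": torch.tensor([3]),
    }
}
evaluator.update(predictions)               # boxes converted to xywh and wrapped in LVISResults
evaluator.synchronize_between_processes()   # merges per-image results across ranks
evaluator.accumulate()
evaluator.summarize()                       # prints AP, AP50, AP75, APs/m/l and APr/c/f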
diff --git a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/modeling/rpn/vldyhead.py b/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/modeling/rpn/vldyhead.py
deleted file mode 100644
index 27c1c63eb5b5f14f7a143a97e82015360ff848c6..0000000000000000000000000000000000000000
--- a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/modeling/rpn/vldyhead.py
+++ /dev/null
@@ -1,1036 +0,0 @@
-import torch
-import torch.nn.functional as F
-from torch import nn
-from collections import defaultdict
-
-from .inference import make_atss_postprocessor
-from .loss import make_atss_loss_evaluator
-from .anchor_generator import make_anchor_generator_complex
-
-from maskrcnn_benchmark.structures.boxlist_ops import cat_boxlist
-from maskrcnn_benchmark.layers import Scale, DYReLU, SELayer, ModulatedDeformConv
-from maskrcnn_benchmark.layers import NaiveSyncBatchNorm2d, FrozenBatchNorm2d
-from maskrcnn_benchmark.modeling.backbone.fbnet import *
-from maskrcnn_benchmark.engine.inference import create_positive_map_label_to_token_from_positive_map
-from ..utils import cat, concat_box_prediction_layers, permute_and_flatten
-
-from maskrcnn_benchmark.utils.fuse_helper import FeatureResizer, func_attention, _make_mlp, _make_conv, _make_coord, \
- BiAttentionBlock, AttentionT2I, BiAttentionBlockForCheckpoint, BertLMPredictionHead
-from transformers.models.bert.modeling_bert import BertConfig, BertAttention, BertIntermediate, BertOutput, \
- BertPreTrainedModel
-from transformers.modeling_utils import apply_chunking_to_forward
-import torch.utils.checkpoint as checkpoint
-import pdb
-
-from maskrcnn_benchmark.modeling.language_backbone.clip_model import QuickGELU, LayerNorm, DropPath
-from timm.models.layers import DropPath, trunc_normal_
-
-class h_sigmoid(nn.Module):
- def __init__(self, inplace=True, h_max=1):
- super(h_sigmoid, self).__init__()
- self.relu = nn.ReLU6(inplace=inplace)
- self.h_max = h_max
-
- def forward(self, x):
- return self.relu(x + 3) * self.h_max / 6
-
-
-class BoxCoder(object):
-
- def __init__(self, cfg):
- self.cfg = cfg
-
- def encode(self, gt_boxes, anchors):
- TO_REMOVE = 1 # TODO remove
- ex_widths = anchors[:, 2] - anchors[:, 0] + TO_REMOVE
- ex_heights = anchors[:, 3] - anchors[:, 1] + TO_REMOVE
- ex_ctr_x = (anchors[:, 2] + anchors[:, 0]) / 2
- ex_ctr_y = (anchors[:, 3] + anchors[:, 1]) / 2
-
- gt_widths = gt_boxes[:, 2] - gt_boxes[:, 0] + TO_REMOVE
- gt_heights = gt_boxes[:, 3] - gt_boxes[:, 1] + TO_REMOVE
- gt_ctr_x = (gt_boxes[:, 2] + gt_boxes[:, 0]) / 2
- gt_ctr_y = (gt_boxes[:, 3] + gt_boxes[:, 1]) / 2
-
- wx, wy, ww, wh = (10., 10., 5., 5.)
- targets_dx = wx * (gt_ctr_x - ex_ctr_x) / ex_widths
- targets_dy = wy * (gt_ctr_y - ex_ctr_y) / ex_heights
- targets_dw = ww * torch.log(gt_widths / ex_widths)
- targets_dh = wh * torch.log(gt_heights / ex_heights)
- targets = torch.stack((targets_dx, targets_dy, targets_dw, targets_dh), dim=1)
-
- return targets
-
- def decode(self, preds, anchors):
- anchors = anchors.to(preds.dtype)
-
- TO_REMOVE = 1 # TODO remove
- widths = anchors[:, 2] - anchors[:, 0] + TO_REMOVE
- heights = anchors[:, 3] - anchors[:, 1] + TO_REMOVE
- ctr_x = (anchors[:, 2] + anchors[:, 0]) / 2
- ctr_y = (anchors[:, 3] + anchors[:, 1]) / 2
-
- wx, wy, ww, wh = (10., 10., 5., 5.)
- dx = preds[:, 0::4] / wx
- dy = preds[:, 1::4] / wy
- dw = preds[:, 2::4] / ww
- dh = preds[:, 3::4] / wh
-
- # Prevent sending too large values into torch.exp()
- dw = torch.clamp(dw, max=math.log(1000. / 16))
- dh = torch.clamp(dh, max=math.log(1000. / 16))
-
- pred_ctr_x = dx * widths[:, None] + ctr_x[:, None]
- pred_ctr_y = dy * heights[:, None] + ctr_y[:, None]
- pred_w = torch.exp(dw) * widths[:, None]
- pred_h = torch.exp(dh) * heights[:, None]
-
- pred_boxes = torch.zeros_like(preds)
- pred_boxes[:, 0::4] = pred_ctr_x - 0.5 * (pred_w - 1)
- pred_boxes[:, 1::4] = pred_ctr_y - 0.5 * (pred_h - 1)
- pred_boxes[:, 2::4] = pred_ctr_x + 0.5 * (pred_w - 1)
- pred_boxes[:, 3::4] = pred_ctr_y + 0.5 * (pred_h - 1)
-
- return pred_boxes
-
-
-class Conv3x3Norm(torch.nn.Module):
- def __init__(self,
- in_channels,
- out_channels,
- stride,
- groups=1,
- deformable=False,
- bn_type=None):
- super(Conv3x3Norm, self).__init__()
-
- if deformable:
- self.conv = ModulatedDeformConv(in_channels, out_channels, kernel_size=3, stride=stride, padding=1,
- groups=groups)
- else:
- self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=stride, padding=1, groups=groups)
-
- if isinstance(bn_type, (list, tuple)):
- assert len(bn_type) == 2
- assert bn_type[0] == "gn"
- gn_group = bn_type[1]
- bn_type = bn_type[0]
-
- if bn_type == "bn":
- bn_op = nn.BatchNorm2d(out_channels)
- elif bn_type == "sbn":
- bn_op = nn.SyncBatchNorm(out_channels)
- elif bn_type == "nsbn":
- bn_op = NaiveSyncBatchNorm2d(out_channels)
- elif bn_type == "gn":
- bn_op = nn.GroupNorm(num_groups=gn_group, num_channels=out_channels)
- elif bn_type == "af":
- bn_op = FrozenBatchNorm2d(out_channels)
- if bn_type is not None:
- self.bn = bn_op
- else:
- self.bn = None
-
- def forward(self, input, **kwargs):
- x = self.conv(input, **kwargs)
- if self.bn:
- x = self.bn(x)
- return x
-
-
-class DyConv(torch.nn.Module):
- def __init__(self,
- in_channels=256,
- out_channels=256,
- conv_func=nn.Conv2d,
- use_dyfuse=True,
- use_dyrelu=False,
- use_deform=False
- ):
- super(DyConv, self).__init__()
-
- self.DyConv = nn.ModuleList()
- self.DyConv.append(conv_func(in_channels, out_channels, 1))
- self.DyConv.append(conv_func(in_channels, out_channels, 1))
- self.DyConv.append(conv_func(in_channels, out_channels, 2))
-
- if use_dyfuse:
- self.AttnConv = nn.Sequential(
- nn.AdaptiveAvgPool2d(1),
- nn.Conv2d(in_channels, 1, kernel_size=1),
- nn.ReLU(inplace=True))
- self.h_sigmoid = h_sigmoid()
- else:
- self.AttnConv = None
-
- if use_dyrelu:
- self.relu = DYReLU(in_channels, out_channels)
- else:
- self.relu = nn.ReLU()
-
- if use_deform:
- self.offset = nn.Conv2d(in_channels, 27, kernel_size=3, stride=1, padding=1)
- else:
- self.offset = None
-
- self.init_weights()
-
- def init_weights(self):
- for m in self.DyConv.modules():
- if isinstance(m, nn.Conv2d):
- nn.init.normal_(m.weight.data, 0, 0.01)
- if m.bias is not None:
- m.bias.data.zero_()
- if self.AttnConv is not None:
- for m in self.AttnConv.modules():
- if isinstance(m, nn.Conv2d):
- nn.init.normal_(m.weight.data, 0, 0.01)
- if m.bias is not None:
- m.bias.data.zero_()
-
- def forward(self, inputs):
- visual_feats = inputs["visual"]
- language_dict_features = inputs["lang"]
-
- next_x = []
- for level, feature in enumerate(visual_feats):
-
- conv_args = dict()
- if self.offset is not None:
- offset_mask = self.offset(feature)
- offset = offset_mask[:, :18, :, :]
- mask = offset_mask[:, 18:, :, :].sigmoid()
- conv_args = dict(offset=offset, mask=mask)
-
- temp_fea = [self.DyConv[1](feature, **conv_args)]
-
- if level > 0:
- temp_fea.append(self.DyConv[2](visual_feats[level - 1], **conv_args))
- if level < len(visual_feats) - 1:
- temp_fea.append(F.upsample_bilinear(self.DyConv[0](visual_feats[level + 1], **conv_args),
- size=[feature.size(2), feature.size(3)]))
- mean_fea = torch.mean(torch.stack(temp_fea), dim=0, keepdim=False)
-
- if self.AttnConv is not None:
- attn_fea = []
- res_fea = []
- for fea in temp_fea:
- res_fea.append(fea)
- attn_fea.append(self.AttnConv(fea))
-
- res_fea = torch.stack(res_fea)
- spa_pyr_attn = self.h_sigmoid(torch.stack(attn_fea))
-
- mean_fea = torch.mean(res_fea * spa_pyr_attn, dim=0, keepdim=False)
-
- next_x.append(mean_fea)
-
- next_x = [self.relu(item) for item in next_x]
-
- features_dict = {"visual": next_x,
- "lang": language_dict_features}
-
- return features_dict
-
-
-class BertEncoderLayer(BertPreTrainedModel):
- def __init__(self, config, clamp_min_for_underflow = False, clamp_max_for_overflow = False):
- super().__init__(config)
- self.config = config
-
- self.chunk_size_feed_forward = config.chunk_size_feed_forward
- self.seq_len_dim = 1
-
- from maskrcnn_benchmark.modeling.rpn.modeling_bert import BertAttention, BertIntermediate, BertOutput
-
- self.attention = BertAttention(config, clamp_min_for_underflow, clamp_max_for_overflow)
- self.intermediate = BertIntermediate(config)
- self.output = BertOutput(config)
-
- def forward(self, inputs):
- language_dict_features = inputs["lang"]
- hidden_states = language_dict_features["hidden"]
- attention_mask = language_dict_features["masks"]
-
- device = hidden_states.device
- input_shape = hidden_states.size()[:-1]
- # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length]
- # ourselves in which case we just need to make it broadcastable to all heads.
- extended_attention_mask = self.get_extended_attention_mask(attention_mask, input_shape, device)
-
- self_attention_outputs = self.attention(
- hidden_states,
- extended_attention_mask,
- None,
- output_attentions=False,
- past_key_value=None,
- )
- attention_output = self_attention_outputs[0]
- outputs = self_attention_outputs[1:] # add self attentions if we output attention weights
- layer_output = apply_chunking_to_forward(
- self.feed_forward_chunk, self.chunk_size_feed_forward, self.seq_len_dim, attention_output
- )
- outputs = (layer_output,) + outputs
- hidden_states = outputs[0]
-
- language_dict_features["hidden"] = hidden_states
-
- features_dict = {"visual": inputs["visual"],
- "lang": language_dict_features
- }
-
- return features_dict
-
- def feed_forward_chunk(self, attention_output):
- intermediate_output = self.intermediate(attention_output)
- layer_output = self.output(intermediate_output, attention_output)
- return layer_output
-
-
-class CLIPTransformerLayer(nn.Module):
- def __init__(self, config):
- super().__init__()
- self.config = config
- d_model = self.config.MODEL.CLIP.WIDTH
- n_head = self.config.MODEL.CLIP.HEADS
- drop_path = self.config.MODEL.CLIP.DROP_PATH
- self.context_length = self.config.MODEL.CLIP.CONTEXT_LENGTH
- self.attn = nn.MultiheadAttention(d_model, n_head)
- self.ln_1 = LayerNorm(d_model)
- self.mlp = nn.Sequential(OrderedDict([
- ("c_fc", nn.Linear(d_model, d_model * 4)),
- ("gelu", QuickGELU()),
- ("c_proj", nn.Linear(d_model * 4, d_model))
- ]))
- self.ln_2 = LayerNorm(d_model)
- self.attn_mask = None
- self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
- self.apply(self._init_weights)
-
- def _init_weights(self, m):
- if isinstance(m, (nn.Linear, nn.Conv2d)):
- trunc_normal_(m.weight, std=0.02)
- if m.bias is not None:
- nn.init.constant_(m.bias, 0)
- elif isinstance(m, (nn.LayerNorm, nn.BatchNorm2d)):
- nn.init.constant_(m.bias, 0)
-
- def attention(self, x: torch.Tensor, key_padding_mask: torch.Tensor = None):
- self.attn_mask = self.attn_mask.to(dtype=x.dtype, device=x.device) \
- if self.attn_mask is not None else None
- return self.attn(x, x, x, need_weights=False, attn_mask=self.attn_mask, key_padding_mask=key_padding_mask)[0]
-
- def forward(self, inputs):
- language_dict_features = inputs["lang"]
- x = language_dict_features["hidden"]
- mask = language_dict_features["masks"]
- # get extended attention mask for nn.MultiHeadAttention
- key_padding_mask = (1.0 - mask).to(torch.bool)
-
- x = x.permute(1, 0, 2)
- x = x + self.drop_path(self.attention(self.ln_1(x), key_padding_mask=key_padding_mask))
- x = x + self.drop_path(self.mlp(self.ln_2(x)))
- x = x.permute(1, 0, 2)
-
- language_dict_features["hidden"] = x
- features_dict = {"visual": inputs["visual"],
- "lang": language_dict_features
- }
- return features_dict
-
-
-class DummyLayer(nn.Module):
- def __init__(self):
- super().__init__()
-
- def forward(self, inputs):
- return inputs
-
-
-class VLFuse(torch.nn.Module):
- """
- Early Fusion Module
- """
-
- def __init__(self, cfg):
- super(VLFuse, self).__init__()
- self.init_configs(cfg)
- self.cfg = cfg
-
- self.use_checkpoint = False
- if hasattr(cfg.MODEL.DYHEAD, 'USE_CHECKPOINT'):
- self.use_checkpoint = cfg.MODEL.DYHEAD.USE_CHECKPOINT
- self.dummy_tensor = torch.ones(1, dtype=torch.float32, requires_grad=True)
-
- # early fusion module
- print("EARLY FUSION ON, USING {}".format(cfg.MODEL.DYHEAD.FUSE_CONFIG.TYPE))
- if cfg.MODEL.DYHEAD.FUSE_CONFIG.TYPE == "MHA-S":
- # single-direction (text->image)
- # text -> image
- self.t2i_attn = AttentionT2I(q_dim=self.joint_embedding_size,
- k_dim=self.lang_dim,
- embed_dim=self.embed_dim,
- num_heads=self.n_head,
- hidden_dim=self.t2i_hidden_dim,
- dropout=0.1,
- drop_path=.0,
- init_values=1.0 / cfg.MODEL.DYHEAD.NUM_CONVS,
- mode="t2i",
- use_layer_scale=cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_LAYER_SCALE,
- clamp_min_for_underflow=cfg.MODEL.DYHEAD.FUSE_CONFIG.CLAMP_MIN_FOR_UNDERFLOW,
- clamp_max_for_overflow=cfg.MODEL.DYHEAD.FUSE_CONFIG.CLAMP_MAX_FOR_OVERFLOW
- )
-
- elif cfg.MODEL.DYHEAD.FUSE_CONFIG.TYPE == "MHA-B":
- # bi-direction (text->image, image->text)
- self.b_attn = BiAttentionBlockForCheckpoint(v_dim=self.joint_embedding_size,
- l_dim=self.lang_dim,
- embed_dim=self.embed_dim,
- num_heads=self.n_head,
- hidden_dim=self.i2t_hidden_dim,
- dropout=0.1,
- drop_path=.0,
- init_values=1.0 / cfg.MODEL.DYHEAD.NUM_CONVS,
- cfg=cfg
- )
- if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.SEPARATE_BIDIRECTIONAL and self.cfg.MODEL.DYHEAD.FUSE_CONFIG.DO_LANG_PROJ_OUTSIDE_CHECKPOINT:
- self.shrink_lang = FeatureResizer(self.lang_dim * 5,
- self.lang_dim, 0.1)
-
-
- elif cfg.MODEL.DYHEAD.FUSE_CONFIG.TYPE == "SCAN":
- # single-direction (text->image)
- self.mapping_lang = _make_mlp(self.lang_dim,
- self.joint_embedding_size,
- self.joint_embedding_dropout)
- self.joint_fusion = nn.ModuleList([_make_conv(self.joint_inp_dim, self.joint_out_dim, 1) \
- for _ in range(5)])
-
- elif cfg.MODEL.DYHEAD.FUSE_CONFIG.TYPE == "FILM":
- # single-direction (text->image)
- self.mapping_lang = _make_mlp(self.lang_dim,
- self.joint_embedding_size,
- self.joint_embedding_dropout)
- self.gamma = nn.ModuleList(nn.Linear(self.joint_embedding_size, self.joint_inp_dim) for _ in range(5))
- self.beta = nn.ModuleList(nn.Linear(self.joint_embedding_size, self.joint_inp_dim) for _ in range(5))
-
- self.joint_fusion = nn.ModuleList([_make_conv(self.joint_inp_dim, self.joint_out_dim, 1) \
- for _ in range(5)])
-
- else:
- print("NO FUSION INVOLVED.")
-
- def init_configs(self, cfg):
- # common params
- self.lang_model = cfg.MODEL.LANGUAGE_BACKBONE.MODEL_TYPE
- self.joint_embedding_size = cfg.MODEL.DYHEAD.FUSE_CONFIG.JOINT_EMB_SIZE
- self.joint_embedding_dropout = cfg.MODEL.DYHEAD.FUSE_CONFIG.JOINT_EMB_DROPOUT
- self.joint_mlp_layers = cfg.MODEL.DYHEAD.FUSE_CONFIG.JOINT_MLP_LAYERS
-
- self.max_query_len = cfg.MODEL.LANGUAGE_BACKBONE.MAX_QUERY_LEN
- self.n_layers = cfg.MODEL.LANGUAGE_BACKBONE.N_LAYERS
- self.coord_dim = 8
- self.joint_inp_dim = self.coord_dim + self.joint_embedding_size
- self.joint_out_dim = cfg.MODEL.DYHEAD.FUSE_CONFIG.JOINT_OUT_SIZE
-
- # mha params
- self.n_head = 8
- self.embed_dim = 2048
- self.t2i_hidden_dim = 1024 # 256 * 4
- self.i2t_hidden_dim = 3072 # 768 * 4
-
- if self.lang_model in ["bert-base-uncased", "roberta-base", "clip"]:
- self.lang_dim = cfg.MODEL.LANGUAGE_BACKBONE.LANG_DIM
- else:
- self.lang_dim = 1024
-
- def forward(self, x):
- visual_features = x["visual"]
- language_dict_features = x["lang"]
-
- batch_size = visual_features[0].shape[0]
- device = visual_features[0].device
-
- fused_visual_features = None
- fused_language_dict_features = None
-
- if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.TYPE == "MHA-S":
- language_feature = language_dict_features['hidden']
- mask = language_dict_features['masks']
- # text -> image
- if self.use_checkpoint:
- q0, q1, q2, q3, q4 = checkpoint.checkpoint(
- self.t2i_attn,
- visual_features[0], visual_features[1],
- visual_features[2], visual_features[3],
- visual_features[4],
- language_feature, language_feature,
- mask,
- self.dummy_tensor
- )
- else:
- q0, q1, q2, q3, q4 = self.t2i_attn(
- visual_features[0], visual_features[1],
- visual_features[2], visual_features[3],
- visual_features[4],
- language_feature, language_feature,
- attention_mask=mask
- )
-
- fused_visual_features = [q0, q1, q2, q3, q4]
- fused_language_dict_features = language_dict_features
-
- elif self.cfg.MODEL.DYHEAD.FUSE_CONFIG.TYPE == "MHA-B":
- if self.use_checkpoint:
- q0, q1, q2, q3, q4, l0, l1, l2, l3, l4 = checkpoint.checkpoint(self.b_attn,
- visual_features[0], visual_features[1],
- visual_features[2], visual_features[3],
- visual_features[4],
- language_dict_features['hidden'],
- language_dict_features['masks'],
- self.dummy_tensor
- )
- else:
- q0, q1, q2, q3, q4, l0, l1, l2, l3, l4 = self.b_attn(
- visual_features[0], visual_features[1],
- visual_features[2], visual_features[3],
- visual_features[4],
- language_dict_features['hidden'],
- language_dict_features['masks'],
- self.dummy_tensor
- )
-
- fused_visual_features = [q0, q1, q2, q3, q4]
- if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.SEPARATE_BIDIRECTIONAL and self.cfg.MODEL.DYHEAD.FUSE_CONFIG.DO_LANG_PROJ_OUTSIDE_CHECKPOINT:
- language_features = self.shrink_lang(torch.cat([l0, l1, l2, l3, l4], dim = -1))
- else:
- language_features = l0
-
- language_dict_features['hidden'] = language_features
- fused_language_dict_features = language_dict_features
-
- elif self.cfg.MODEL.DYHEAD.FUSE_CONFIG.TYPE == "SCAN":
- # text -> image
- language_feature = language_dict_features['aggregate']
- language_feature = self.mapping_lang(language_feature)
- visu_feat = []
- for ii, feat in enumerate(visual_features):
- attn_feat = func_attention(feat, language_feature, smooth=1, raw_feature_norm="softmax")
- visu_feat.append(attn_feat)
-
- fused_visual_features = [fusion(feat) for feat, fusion in zip(visu_feat, self.joint_fusion)]
- fused_language_dict_features = language_dict_features
-
- elif self.cfg.MODEL.DYHEAD.FUSE_CONFIG.TYPE == "FILM":
- # text -> image
- # relative position embedding
- coord_feats = [_make_coord(batch_size, x.shape[2], x.shape[3]) for x in visual_features]
- # only a global (aggregate) representation of the language is used here;
- # more complex modeling with word-level representations is also possible,
- # e.g. lang_feat = lang_feat['words'] with shape [seq_len, dim]
- language_feature = language_dict_features['aggregate']
- language_feature = self.mapping_lang(language_feature)
-
- # FiLM-style fusion: per-level gamma/beta modulation predicted from the language feature
- gamma = [F.tanh(gamma(language_feature)) for gamma in self.gamma]
- beta = [F.tanh(beta(language_feature)) for beta in self.beta]
-
- visu_feat = []
- for ii, feat in enumerate(visual_features):
- coord_feat = coord_feats[ii].to(device)
- feat = torch.cat([feat, coord_feat], dim=1)
- b = beta[ii].view(batch_size, -1, 1, 1).expand_as(feat)
- g = gamma[ii].view(batch_size, -1, 1, 1).expand_as(feat)
- feat = F.relu(g * feat + b)
- visu_feat.append(feat)
-
- fused_visual_features = [fusion(feat) for feat, fusion in zip(visu_feat, self.joint_fusion)]
- fused_language_dict_features = language_dict_features
-
- else:
- fused_visual_features = visual_features
- fused_language_dict_features = language_dict_features
-
- features_dict = {"visual": fused_visual_features,
- "lang": fused_language_dict_features}
-
- return features_dict
-
-
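-# Illustrative sketch (not part of the original file): the dict layout that VLFuse.forward
-# consumes and returns. The helper name and the shapes are assumptions inferred from how the
-# forward pass indexes its inputs (five FPN levels plus a language dict with "hidden",
-# "masks" and, for SCAN/FILM fusion, a pooled "aggregate" feature).
-def _example_vlfuse_inputs(fpn_feats, hidden, masks, aggregate=None):
- lang = {"hidden": hidden, "masks": masks, "aggregate": aggregate}
- return {"visual": list(fpn_feats), "lang": lang} # fed to VLFuse / the dyhead tower
-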
-class VLDyHead(torch.nn.Module):
- def __init__(self, cfg):
- super(VLDyHead, self).__init__()
- self.cfg = cfg
- # bert_cfg = BertConfig.from_pretrained(cfg.MODEL.LANGUAGE_BACKBONE.MODEL_TYPE)
- if cfg.MODEL.LANGUAGE_BACKBONE.MODEL_TYPE == "bert-base-uncased":
- lang_cfg = BertConfig.from_pretrained(cfg.MODEL.LANGUAGE_BACKBONE.MODEL_TYPE)
- elif cfg.MODEL.LANGUAGE_BACKBONE.MODEL_TYPE == "clip":
- lang_cfg = cfg
- else:
- lang_cfg = None
- raise NotImplementedError
-
- num_classes = cfg.MODEL.DYHEAD.NUM_CLASSES - 1
- num_tokens = cfg.MODEL.LANGUAGE_BACKBONE.MAX_QUERY_LEN
- num_anchors = len(cfg.MODEL.RPN.ASPECT_RATIOS) * cfg.MODEL.RPN.SCALES_PER_OCTAVE
- in_channels = cfg.MODEL.BACKBONE.OUT_CHANNELS
- channels = cfg.MODEL.DYHEAD.CHANNELS
-
- if cfg.MODEL.DYHEAD.USE_GN:
- bn_type = ['gn', cfg.MODEL.GROUP_NORM.NUM_GROUPS]
- elif cfg.MODEL.DYHEAD.USE_NSYNCBN:
- bn_type = 'nsbn'
- elif cfg.MODEL.DYHEAD.USE_SYNCBN:
- bn_type = 'sbn'
- else:
- bn_type = None
-
- use_dyrelu = cfg.MODEL.DYHEAD.USE_DYRELU
- use_dyfuse = cfg.MODEL.DYHEAD.USE_DYFUSE
- use_deform = cfg.MODEL.DYHEAD.USE_DFCONV
-
- if cfg.MODEL.DYHEAD.CONV_FUNC:
- conv_func = lambda i, o, s: eval(cfg.MODEL.DYHEAD.CONV_FUNC)(i, o, s, bn_type=bn_type)
- else:
- conv_func = lambda i, o, s: Conv3x3Norm(i, o, s, deformable=use_deform, bn_type=bn_type)
-
- dyhead_tower = []
- for i in range(cfg.MODEL.DYHEAD.NUM_CONVS):
- if cfg.MODEL.DYHEAD.FUSE_CONFIG.EARLY_FUSE_ON:
- # cross-modality fusion
- dyhead_tower.append(
- VLFuse(cfg)
- )
- # self language path
- if i < cfg.MODEL.DYHEAD.NUM_CONVS - 1 or cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_FUSED_FEATURES_DOT_PRODUCT:
- # dyhead_tower.append(
- # BertEncoderLayer(
- # bert_cfg,
- # clamp_min_for_underflow=cfg.MODEL.DYHEAD.FUSE_CONFIG.CLAMP_BERTATTN_MIN_FOR_UNDERFLOW,
- # clamp_max_for_overflow=cfg.MODEL.DYHEAD.FUSE_CONFIG.CLAMP_BERTATTN_MAX_FOR_OVERFLOW)
- # )
- if cfg.MODEL.LANGUAGE_BACKBONE.MODEL_TYPE == "bert-base-uncased":
- dyhead_tower.append(
- BertEncoderLayer(
- lang_cfg,
- clamp_min_for_underflow=cfg.MODEL.DYHEAD.FUSE_CONFIG.CLAMP_BERTATTN_MIN_FOR_UNDERFLOW,
- clamp_max_for_overflow=cfg.MODEL.DYHEAD.FUSE_CONFIG.CLAMP_BERTATTN_MAX_FOR_OVERFLOW)
- )
- elif cfg.MODEL.LANGUAGE_BACKBONE.MODEL_TYPE == "clip":
- dyhead_tower.append(
- CLIPTransformerLayer(lang_cfg)
- )
- else:
- raise NotImplementedError
-
- else:
- dyhead_tower.append(
- DummyLayer()
- )
-
- # self vision path
- dyhead_tower.append(
- DyConv(
- in_channels if i == 0 else channels,
- channels,
- conv_func=conv_func,
- use_dyrelu=(use_dyrelu and in_channels == channels) if i == 0 else use_dyrelu,
- use_dyfuse=(use_dyfuse and in_channels == channels) if i == 0 else use_dyfuse,
- use_deform=(use_deform and in_channels == channels) if i == 0 else use_deform,
- )
- )
-
- self.add_module('dyhead_tower', nn.Sequential(*dyhead_tower))
-
- self.cls_logits = nn.Conv2d(channels, num_anchors * num_classes, kernel_size=1)
- self.bbox_pred = nn.Conv2d(channels, num_anchors * 4, kernel_size=1)
- self.centerness = nn.Conv2d(channels, num_anchors * 1, kernel_size=1)
-
- # initialize the bias for focal loss
- prior_prob = cfg.MODEL.DYHEAD.PRIOR_PROB
- bias_value = -math.log((1 - prior_prob) / prior_prob)
-
- log_scale = self.cfg.MODEL.DYHEAD.LOG_SCALE
-
- # soft token head
- if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_TOKEN_LOSS:
- self.token_logits = nn.Conv2d(channels, num_anchors * num_tokens, kernel_size=1)
- # ABLATION
- # self.token_logits = nn.Conv2d(channels, num_anchors * num_tokens, kernel_size=1, bias=False)
- # self.bias = nn.Parameter(torch.zeros(channels), requires_grad=True)
- # self.bias0 = nn.Parameter(torch.Tensor([bias_value]), requires_grad=True)
-
- # contrastive alignment head
- if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_CONTRASTIVE_ALIGN_LOSS:
- assert self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_DOT_PRODUCT_TOKEN_LOSS == False
- contrastive_hdim = cfg.MODEL.DYHEAD.FUSE_CONFIG.CONTRASTIVE_HIDDEN_DIM
- self.contrastive_align_projection_image = nn.Conv2d(channels, num_anchors * contrastive_hdim, kernel_size=1)
- self.contrastive_align_projection_text = nn.Linear(channels, contrastive_hdim, bias=True)
- self.log_scale = nn.Parameter(torch.Tensor([log_scale]), requires_grad=True)
-
- # dot product soft token head
- if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_DOT_PRODUCT_TOKEN_LOSS:
- assert self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_CONTRASTIVE_ALIGN_LOSS == False
- self.dot_product_projection_image = nn.Identity()
- self.dot_product_projection_text = nn.Linear(self.cfg.MODEL.LANGUAGE_BACKBONE.LANG_DIM,
- num_anchors * channels, bias=True)
- self.log_scale = nn.Parameter(torch.Tensor([log_scale]), requires_grad=True)
- # DEBUG
- # self.bias = nn.Parameter(torch.zeros(channels), requires_grad=True)
- self.bias_lang = nn.Parameter(torch.zeros(self.cfg.MODEL.LANGUAGE_BACKBONE.LANG_DIM), requires_grad=True)
- self.bias0 = nn.Parameter(torch.Tensor([bias_value]), requires_grad=True)
-
- # initialization
- for modules in [self.cls_logits, self.bbox_pred,
- self.centerness]:
- for l in modules.modules():
- if isinstance(l, nn.Conv2d):
- torch.nn.init.normal_(l.weight, std=0.01)
- torch.nn.init.constant_(l.bias, 0)
-
- self.scales = nn.ModuleList([Scale(init_value=1.0) for _ in range(5)])
-
- torch.nn.init.constant_(self.cls_logits.bias, bias_value)
-
- # if use soft token loss
- if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_TOKEN_LOSS:
- for modules in [self.token_logits]:
- for l in modules.modules():
- if isinstance(l, nn.Conv2d):
- torch.nn.init.normal_(l.weight, std=0.01)
- torch.nn.init.constant_(l.bias, 0)
-
- torch.nn.init.constant_(self.token_logits.bias, bias_value)
- # print(torch.norm(self.token_logits.weight))
-
- # if use contrastive loss
- if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_CONTRASTIVE_ALIGN_LOSS:
- for modules in [self.contrastive_align_projection_image]:
- for l in modules.modules():
- if isinstance(l, nn.Conv2d):
- torch.nn.init.normal_(l.weight, std=0.01)
- torch.nn.init.constant_(l.bias, 0)
-
- # if use dot product token loss
- if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_DOT_PRODUCT_TOKEN_LOSS:
- for modules in [self.dot_product_projection_image]:
- for l in modules.modules():
- if isinstance(l, nn.Conv2d):
- torch.nn.init.normal_(l.weight, std=0.01)
- torch.nn.init.constant_(l.bias, bias_value)
-
- if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.MLM_LOSS:
- if cfg.MODEL.LANGUAGE_BACKBONE.MODEL_TYPE == "clip":
- lang_cfg = BertConfig.from_pretrained("bert-base-uncased")
- lang_cfg.hidden_size = cfg.MODEL.CLIP.WIDTH
- lang_cfg.vocab_size = cfg.MODEL.CLIP.VOCAB_SIZE
- self.mlm_head = BertLMPredictionHead(
- lang_cfg
- ) #nn.Linear(hidden_size, config.vocab_size, bias=False)
-
- def forward(self, x, language_dict_features=None, embedding=None, swint_feature_c4=None):
- logits = []
- bbox_reg = []
- centerness = []
-
- feat_inputs = {"visual": x,
- "lang": language_dict_features}
-
- dyhead_tower = self.dyhead_tower(feat_inputs)
-
- # soft token
- t_logits = None
- if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_TOKEN_LOSS:
- t_logits = []
-
- if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_FUSED_FEATURES_DOT_PRODUCT:
- embedding = dyhead_tower["lang"]["hidden"]
-
- # MLM loss
- if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.MLM_LOSS:
- mlm_logits = self.mlm_head(embedding)
- else:
- mlm_logits = None
-
- # contrastive
- contrastive_logits = None
- proj_tokens = None
- if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_CONTRASTIVE_ALIGN_LOSS:
- contrastive_logits = []
- # follow MDETR's way
- proj_tokens = F.normalize(
- self.contrastive_align_projection_text(embedding), p=2, dim=-1
- )
-
- # dot product soft token
- dot_product_logits = None
- dot_product_proj_tokens = None
- dot_product_proj_tokens_bias = None
- if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_DOT_PRODUCT_TOKEN_LOSS:
- dot_product_logits = []
- # norm
- embedding = F.normalize(embedding, p=2, dim=-1)
- dot_product_proj_tokens = self.dot_product_projection_text(embedding / 2.0)
- # w/o norm
- # dot_product_proj_tokens = self.dot_product_projection_text(embedding / 28.0)
-
- dot_product_proj_tokens_bias = torch.matmul(embedding, self.bias_lang) + self.bias0
-
- # shallow contrastive (original feature from image & text encoder)
- shallow_img_emb_feats = None
- shallow_text_emb = None
- if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_SHALLOW_CONTRASTIVE_LOSS \
- or self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_BACKBONE_SHALLOW_CONTRASTIVE_LOSS:
- shallow_img_emb_feats = []
- shallow_text_emb = embedding
-
- # print([v.shape for v in x])
- # shallow contrastive: use the feature from swint backbone
- if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_BACKBONE_SHALLOW_CONTRASTIVE_LOSS:
- for b, feature in enumerate(swint_feature_c4):
- # BF, CF, HF, WF = feat.shape
- # shallow_img_emb = permute_and_flatten(feat, BF, -1, CF, HF, WF)
- shallow_img_emb_feats.append(feature)
-
- fused_visual_features = None
- if self.cfg.MODEL.RPN.RETURN_FUSED_FEATURES:
- fused_visual_features = []
-
- # use the feature from FPN
- for l, feature in enumerate(x):
- logits.append(self.cls_logits(dyhead_tower["visual"][l]))
-
- bbox_pred = self.scales[l](self.bbox_pred(dyhead_tower["visual"][l]))
- bbox_reg.append(bbox_pred)
-
- centerness.append(self.centerness(dyhead_tower["visual"][l]))
-
- if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_TOKEN_LOSS:
- t_logits.append(self.token_logits(dyhead_tower["visual"][l]))
-
- # ABLATION
- # b = self.bias.unsqueeze(0).unsqueeze(-1).unsqueeze(-1)
- # x = dyhead_tower["visual"][l]
- # B, C, H, W = x.shape
- # bias = b.repeat(B, 1, H, W)
- # t_logits.append(self.token_logits(dyhead_tower["visual"][l] + bias) + self.bias0)
-
- if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_CONTRASTIVE_ALIGN_LOSS:
- x = dyhead_tower["visual"][l]
- B, _, H, W = x.shape
- C = proj_tokens.shape[2]
- proj_queries = self.contrastive_align_projection_image(dyhead_tower["visual"][l])
- proj_queries = permute_and_flatten(proj_queries, B, -1, C, H, W)
- normalized_img_emb = F.normalize(proj_queries, p=2, dim=-1)
- normalized_text_emb = proj_tokens
- contrastive_logit = (
- torch.matmul(normalized_img_emb, normalized_text_emb.transpose(-1, -2)) / self.log_scale.exp())
- contrastive_logits.append(contrastive_logit)
-
- if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_DOT_PRODUCT_TOKEN_LOSS:
- x = dyhead_tower["visual"][l]
- if self.cfg.MODEL.RPN.RETURN_FUSED_FEATURES:
- fused_visual_features.append(x)
- B, C, H, W = x.shape
-
- # add bias (language)
- dot_product_proj_queries = self.dot_product_projection_image(x)
- dot_product_proj_queries = permute_and_flatten(dot_product_proj_queries, B, -1, C, H, W)
-
- A = dot_product_proj_queries.shape[1]
- bias = dot_product_proj_tokens_bias.unsqueeze(1).repeat(1, A, 1)
-
- dot_product_logit = (torch.matmul(dot_product_proj_queries, dot_product_proj_tokens.transpose(-1, -2)) / self.log_scale.exp()) + bias
- if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.CLAMP_DOT_PRODUCT:
- dot_product_logit = torch.clamp(dot_product_logit, max=50000)
- dot_product_logit = torch.clamp(dot_product_logit, min=-50000)
- dot_product_logits.append(dot_product_logit)
-
- if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_SHALLOW_CONTRASTIVE_LOSS:
- feat = feature
- BF, CF, HF, WF = feat.shape
- shallow_img_emb = permute_and_flatten(feat, BF, -1, CF, HF, WF)
- shallow_img_emb_feats.append(shallow_img_emb)
-
- # whether the features come from the backbone or from the FPN, we always use the shallow image embeddings
- if shallow_img_emb_feats is not None and shallow_text_emb is not None:
- # shallow_img_embs = torch.cat(shallow_img_embs, dim=1)
- proj_tokens = shallow_text_emb
- return logits, bbox_reg, centerness, t_logits, proj_tokens, contrastive_logits, dot_product_logits, mlm_logits, shallow_img_emb_feats, fused_visual_features
-
-
-class VLDyHeadModule(torch.nn.Module):
-
- def __init__(self, cfg):
- super(VLDyHeadModule, self).__init__()
- self.cfg = cfg
- self.head = VLDyHead(cfg)
- box_coder = BoxCoder(cfg)
- self.loss_evaluator = make_atss_loss_evaluator(cfg, box_coder)
- self.box_selector_train = make_atss_postprocessor(cfg, box_coder, is_train=True)
- self.box_selector_test = make_atss_postprocessor(cfg, box_coder, is_train=False)
- self.anchor_generator = make_anchor_generator_complex(cfg)
-
- self.lang_model = cfg.MODEL.LANGUAGE_BACKBONE.MODEL_TYPE
- self.joint_embedding_size = cfg.MODEL.DYHEAD.FUSE_CONFIG.JOINT_EMB_SIZE
- self.joint_embedding_dropout = cfg.MODEL.DYHEAD.FUSE_CONFIG.JOINT_EMB_DROPOUT
- if self.lang_model in ["bert-base-uncased", "roberta-base", "clip"]:
- self.lang_dim = cfg.MODEL.LANGUAGE_BACKBONE.LANG_DIM
- else:
- self.lang_dim = 1024
-
- if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_CONTRASTIVE_ALIGN_LOSS:
- self.resizer = FeatureResizer(
- input_feat_size=self.lang_dim,
- output_feat_size=self.joint_embedding_size,
- dropout=self.joint_embedding_dropout
- )
- if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.ADD_LINEAR_LAYER:
- self.tunable_linear = torch.nn.Linear(self.lang_dim, 1000, bias=False)
- self.tunable_linear.weight.data.fill_(0.0)
-
- def forward(self, images, features, targets=None,
- language_dict_features=None,
- positive_map=None,
- captions=None,
- swint_feature_c4=None
- ):
-
- if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_CONTRASTIVE_ALIGN_LOSS:
- # resizer needed
- embedding = language_dict_features['embedded']
- embedding = self.resizer(embedding)
- elif self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_DOT_PRODUCT_TOKEN_LOSS:
- # no resizer needed
- embedding = language_dict_features['embedded']
- else:
- embedding = None
-
- if "masks" in language_dict_features:
- text_masks = language_dict_features["masks"]
- else:
- text_masks = None
-
- if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.ADD_LINEAR_LAYER:
- embedding = self.tunable_linear.weight[:embedding.size(1), :].unsqueeze(0) + embedding
- language_dict_features['embedded'] = embedding
- language_dict_features['hidden'] = self.tunable_linear.weight[:embedding.size(1), :].unsqueeze(0) + language_dict_features['hidden']
-
- box_cls, box_regression, centerness, token_logits, \
- proj_tokens, contrastive_logits, dot_product_logits, mlm_logits, shallow_img_emb_feats, fused_visual_features = self.head(features,
- language_dict_features,
- embedding,
- swint_feature_c4
- )
- anchors = self.anchor_generator(images, features)
-
- if self.training:
- return self._forward_train(box_cls, box_regression, centerness, targets, anchors,
- captions,
- positive_map,
- token_logits,
- proj_tokens,
- contrastive_logits,
- dot_product_logits,
- text_masks,
- mlm_logits = mlm_logits,
- mlm_labels = language_dict_features["mlm_labels"],
- shallow_img_emb_feats=shallow_img_emb_feats,
- fused_visual_features=fused_visual_features
- )
- else:
- return self._forward_test(box_regression, centerness, anchors,
- box_cls,
- token_logits,
- dot_product_logits,
- positive_map,
- fused_visual_features=fused_visual_features
- )
-
- def _forward_train(self, box_cls, box_regression, centerness, targets, anchors,
- captions=None,
- positive_map=None,
- token_logits=None,
- proj_tokens=None,
- contrastive_logits=None,
- dot_product_logits=None,
- text_masks=None,
- mlm_logits=None,
- mlm_labels=None,
- shallow_img_emb_feats=None,
- fused_visual_features=None
- ):
-
- loss_box_cls, loss_box_reg, loss_centerness, loss_token, loss_contrastive_align, loss_dot_product_token, loss_shallow_contrastive = self.loss_evaluator(
- box_cls, box_regression, centerness, targets, anchors,
- captions,
- positive_map,
- token_logits,
- proj_tokens,
- contrastive_logits,
- dot_product_logits,
- text_masks,
- shallow_img_emb_feats
- )
-
- losses = {
- # "loss_cls": loss_box_cls,
- "loss_reg": loss_box_reg,
- "loss_centerness": loss_centerness
- }
-
- if mlm_labels is not None and mlm_logits is not None:
- losses["mlm_loss"] = nn.CrossEntropyLoss(ignore_index = -100)(mlm_logits.view(-1, mlm_logits.size(-1)), mlm_labels.view(-1)) * self.cfg.MODEL.DYHEAD.FUSE_CONFIG.MLM_LOSS_COEF
-
- if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_CLASSIFICATION_LOSS:
- losses["loss_cls"] = loss_box_cls
- else:
- losses["loss_cls"] = 0.0 * loss_box_cls
-
- if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_TOKEN_LOSS:
- losses["loss_token"] = loss_token * self.cfg.MODEL.DYHEAD.FUSE_CONFIG.TOKEN_LOSS_WEIGHT
- if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_CONTRASTIVE_ALIGN_LOSS:
- losses["loss_contrastive_align"] = loss_contrastive_align * \
- self.cfg.MODEL.DYHEAD.FUSE_CONFIG.CONTRASTIVE_ALIGN_LOSS_WEIGHT
- if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_DOT_PRODUCT_TOKEN_LOSS:
- losses["loss_dot_product_token"] = loss_dot_product_token * \
- self.cfg.MODEL.DYHEAD.FUSE_CONFIG.DOT_PRODUCT_TOKEN_LOSS_WEIGHT
- if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_SHALLOW_CONTRASTIVE_LOSS or \
- self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_BACKBONE_SHALLOW_CONTRASTIVE_LOSS:
- losses["loss_shallow_contrastive"] = loss_shallow_contrastive * \
- self.cfg.MODEL.DYHEAD.FUSE_CONFIG.SHALLOW_CONTRASTIVE_LOSS_WEIGHT
-
- if self.cfg.MODEL.RPN_ONLY:
- return None, losses, None
- else:
- # Let's just use one image per batch
- assert (box_regression[0].shape[0]) == 1
- positive_map_label_to_token = create_positive_map_label_to_token_from_positive_map(positive_map, plus=1)
- boxes = self.box_selector_train(box_regression, centerness, anchors,
- box_cls,
- token_logits,
- dot_product_logits,
- positive_map=positive_map_label_to_token
- )
- train_boxes = []
- for b, t in zip(boxes, targets):
- tb = t.copy_with_fields(["labels"])
- tb.add_field("scores", torch.ones(tb.bbox.shape[0], dtype=torch.bool, device=tb.bbox.device))
- train_boxes.append(cat_boxlist([b, tb]))
- return train_boxes, losses, fused_visual_features
-
- def _forward_test(self, box_regression, centerness, anchors,
- box_cls=None,
- token_logits=None,
- dot_product_logits=None,
- positive_map=None,
- fused_visual_features=None
- ):
-
- boxes = self.box_selector_test(box_regression, centerness, anchors,
- box_cls,
- token_logits,
- dot_product_logits,
- positive_map,
- )
- return boxes, {}, fused_visual_features
diff --git a/spaces/harkov000/peft-lora-sd-dreambooth/app.py b/spaces/harkov000/peft-lora-sd-dreambooth/app.py
deleted file mode 100644
index 7b3562343491ca0348561afe7e0fa21466b5e55a..0000000000000000000000000000000000000000
--- a/spaces/harkov000/peft-lora-sd-dreambooth/app.py
+++ /dev/null
@@ -1,375 +0,0 @@
-#!/usr/bin/env python
-"""
-Demo showcasing parameter-efficient fine-tuning of Stable Diffusion via Dreambooth leveraging 🤗 PEFT (https://github.com/huggingface/peft)
-
-The code in this repo is partly adapted from the following repositories:
-https://huggingface.co/spaces/hysts/LoRA-SD-training
-https://huggingface.co/spaces/multimodalart/dreambooth-training
-"""
-from __future__ import annotations
-
-import os
-import pathlib
-
-import gradio as gr
-import torch
-from typing import List
-
-from inference import InferencePipeline
-from trainer import Trainer
-from uploader import upload
-
-
-TITLE = "# LoRA + Dreambooth Training and Inference Demo 🎨"
-DESCRIPTION = "Demo showcasing parameter-efficient fine-tuning of Stable Dissfusion via Dreambooth leveraging 🤗 PEFT (https://github.com/huggingface/peft)."
-
-
-ORIGINAL_SPACE_ID = "smangrul/peft-lora-sd-dreambooth"
-
-SPACE_ID = os.getenv("SPACE_ID", ORIGINAL_SPACE_ID)
-SHARED_UI_WARNING = f"""# Attention - This Space doesn't work in this shared UI. You can duplicate and use it with a paid private T4 GPU.
-
-"""
-if os.getenv("SYSTEM") == "spaces" and SPACE_ID != ORIGINAL_SPACE_ID:
- SETTINGS = f'Settings'
-
-else:
- SETTINGS = "Settings"
-CUDA_NOT_AVAILABLE_WARNING = f"""# Attention - Running on CPU.
-
-You can assign a GPU in the {SETTINGS} tab if you are running this on HF Spaces.
-"T4 small" is sufficient to run this demo.
-
-"""
-
-
-def show_warning(warning_text: str) -> gr.Blocks:
- with gr.Blocks() as demo:
- with gr.Box():
- gr.Markdown(warning_text)
- return demo
-
-
-def update_output_files() -> dict:
- paths = sorted(pathlib.Path("results").glob("*.pt"))
- config_paths = sorted(pathlib.Path("results").glob("*.json"))
- paths = paths + config_paths
- paths = [path.as_posix() for path in paths] # type: ignore
- return gr.update(value=paths or None)
-
-
-def create_training_demo(trainer: Trainer, pipe: InferencePipeline) -> gr.Blocks:
- with gr.Blocks() as demo:
- base_model = gr.Dropdown(
- choices=[
- "CompVis/stable-diffusion-v1-4",
- "runwayml/stable-diffusion-v1-5",
- "stabilityai/stable-diffusion-2-1-base",
- "dreamlike-art/dreamlike-photoreal-2.0"
- ],
- value="runwayml/stable-diffusion-v1-5",
- label="Base Model",
- visible=True,
- )
- resolution = gr.Dropdown(choices=["512"], value="512", label="Resolution", visible=False)
-
- with gr.Row():
- with gr.Box():
- gr.Markdown("Training Data")
- concept_images = gr.Files(label="Images for your concept")
- class_images = gr.Files(label="Class images")
- concept_prompt = gr.Textbox(label="Concept Prompt", max_lines=1)
- gr.Markdown(
- """
- - Upload images of the style you are planning on training on.
- - For a concept prompt, use a unique, made up word to avoid collisions.
- - Guidelines for getting good results:
- - Dreambooth for an `object` or `style`:
- - 5-10 images of the object from different angles
- - 500-800 iterations should be good enough.
- - Prior preservation is recommended.
- - `class_prompt`:
- - `a photo of object`
- - `style`
- - `concept_prompt`:
- - ` object`
- - ` style`
- - `a photo of object`
- - `a photo of style`
- - Dreambooth for a `Person/Face`:
- - 15-50 images of the person from different angles, lighting, and expressions.
- Include a good number of close-up face photos.
- - 800-1200 iterations should be good enough.
- Good defaults for hyperparameters:
- - Model - `runwayml/stable-diffusion-v1-5` or `stabilityai/stable-diffusion-2-1-base`
- - Use/check Prior preservation.
- - Number of class images to use - 200
- - Prior Loss Weight - 1
- - LoRA Rank for unet - 16
- - LoRA Alpha for unet - 20
- - lora dropout - 0
- - LoRA Bias for unet - `all`
- - LoRA Rank for CLIP - 16
- - LoRA Alpha for CLIP - 17
- - LoRA Bias for CLIP - `all`
- - lora dropout for CLIP - 0
- - Uncheck `FP16` and `8bit-Adam` (don't use them for faces)
- - `class_prompt`: Use the gender related word of the person
- - `man`
- - `woman`
- - `boy`
- - `girl`
- - `concept_prompt`: just the unique, made up word, e.g., `srm`
- - Choose `all` for `lora_bias` and `text_encode_lora_bias`
- - Dreambooth for a `Scene`:
- 15-50 images of the scene from different angles and lighting conditions.
- - 800-1200 iterations should be good enough.
- - Prior preservation is recommended.
- - `class_prompt`:
- - `scene`
- - `landscape`
- - `city`
- - `beach`
- - `mountain`
- - `concept_prompt`:
- - ` scene`
- - ` landscape`
- - Experiment with various values for lora dropouts, enabling/disabling fp16 and 8bit-Adam
- """
- )
- with gr.Box():
- gr.Markdown("Training Parameters")
- num_training_steps = gr.Number(label="Number of Training Steps", value=1000, precision=0)
- learning_rate = gr.Number(label="Learning Rate", value=0.0001)
- gradient_checkpointing = gr.Checkbox(label="Whether to use gradient checkpointing", value=True)
- train_text_encoder = gr.Checkbox(label="Train Text Encoder", value=True)
- with_prior_preservation = gr.Checkbox(label="Prior Preservation", value=True)
- class_prompt = gr.Textbox(
- label="Class Prompt", max_lines=1, placeholder='Example: "a photo of object"'
- )
- num_class_images = gr.Number(label="Number of class images to use", value=50, precision=0)
- prior_loss_weight = gr.Number(label="Prior Loss Weight", value=1.0, precision=1)
- # use_lora = gr.Checkbox(label="Whether to use LoRA", value=True)
- lora_r = gr.Number(label="LoRA Rank for unet", value=4, precision=0)
- lora_alpha = gr.Number(
- label="LoRA Alpha for unet. scaling factor = lora_alpha/lora_r", value=4, precision=0
- )
- lora_dropout = gr.Number(label="lora dropout", value=0.00)
- lora_bias = gr.Dropdown(
- choices=["none", "all", "lora_only"],
- value="none",
- label="LoRA Bias for unet. This enables bias params to be trainable based on the bias type",
- visible=True,
- )
- lora_text_encoder_r = gr.Number(label="LoRA Rank for CLIP", value=4, precision=0)
- lora_text_encoder_alpha = gr.Number(
- label="LoRA Alpha for CLIP. scaling factor = lora_alpha/lora_r", value=4, precision=0
- )
- lora_text_encoder_dropout = gr.Number(label="lora dropout for CLIP", value=0.00)
- lora_text_encoder_bias = gr.Dropdown(
- choices=["none", "all", "lora_only"],
- value="none",
- label="LoRA Bias for CLIP. This enables bias params to be trainable based on the bias type",
- visible=True,
- )
- gradient_accumulation = gr.Number(label="Number of Gradient Accumulation", value=1, precision=0)
- fp16 = gr.Checkbox(label="FP16", value=True)
- use_8bit_adam = gr.Checkbox(label="Use 8bit Adam", value=True)
- gr.Markdown(
- """
- - It will take about 20-30 minutes to train for 1000 steps with a T4 GPU.
- - You may want to try a small number of steps first, like 1, to see if everything works fine in your environment.
- Note that your trained models will be deleted when a new training run is started. You can upload your trained model in the "Upload" tab.
- """
- )
-
- run_button = gr.Button("Start Training")
- with gr.Box():
- with gr.Row():
- check_status_button = gr.Button("Check Training Status")
- with gr.Column():
- with gr.Box():
- gr.Markdown("Message")
- training_status = gr.Markdown()
- output_files = gr.Files(label="Trained Weight Files and Configs")
-
- run_button.click(fn=pipe.clear)
-
- run_button.click(
- fn=trainer.run,
- inputs=[
- base_model,
- resolution,
- num_training_steps,
- concept_images,
- concept_prompt,
- class_images,
- learning_rate,
- gradient_accumulation,
- fp16,
- use_8bit_adam,
- gradient_checkpointing,
- train_text_encoder,
- with_prior_preservation,
- prior_loss_weight,
- class_prompt,
- num_class_images,
- lora_r,
- lora_alpha,
- lora_bias,
- lora_dropout,
- lora_text_encoder_r,
- lora_text_encoder_alpha,
- lora_text_encoder_bias,
- lora_text_encoder_dropout,
- ],
- outputs=[
- training_status,
- output_files,
- ],
- queue=False,
- )
- check_status_button.click(fn=trainer.check_if_running, inputs=None, outputs=training_status, queue=False)
- check_status_button.click(fn=update_output_files, inputs=None, outputs=output_files, queue=False)
- return demo
-
-
-def find_weight_files() -> List[str]:
- curr_dir = pathlib.Path(__file__).parent
- paths = sorted(curr_dir.rglob("*.pt"))
- return [path.relative_to(curr_dir).as_posix() for path in paths]
-
-
-def reload_lora_weight_list() -> dict:
- return gr.update(choices=find_weight_files())
-
-
-def create_inference_demo(pipe: InferencePipeline) -> gr.Blocks:
- with gr.Blocks() as demo:
- with gr.Row():
- with gr.Column():
- base_model = gr.Dropdown(
- choices=[
- "CompVis/stable-diffusion-v1-4",
- "runwayml/stable-diffusion-v1-5",
- "stabilityai/stable-diffusion-2-1-base",
- "dreamlike-art/dreamlike-photoreal-2.0"
- ],
- value="runwayml/stable-diffusion-v1-5",
- label="Base Model",
- visible=True,
- )
- reload_button = gr.Button("Reload Weight List")
- lora_weight_name = gr.Dropdown(
- choices=find_weight_files(), value="lora/lora_disney.pt", label="LoRA Weight File"
- )
- prompt = gr.Textbox(label="Prompt", max_lines=1, placeholder='Example: "style of sks, baby lion"')
- negative_prompt = gr.Textbox(
- label="Negative Prompt", max_lines=1, placeholder='Example: "blurry, botched, low quality"'
- )
- seed = gr.Slider(label="Seed", minimum=0, maximum=100000, step=1, value=1)
- with gr.Accordion("Other Parameters", open=False):
- num_steps = gr.Slider(label="Number of Steps", minimum=0, maximum=1000, step=1, value=50)
- guidance_scale = gr.Slider(label="CFG Scale", minimum=0, maximum=50, step=0.1, value=7)
-
- run_button = gr.Button("Generate")
-
- gr.Markdown(
- """
- After training, you can press the "Reload Weight List" button to load your trained model names.
- - Few repos to refer for ideas:
- - https://huggingface.co/smangrul/smangrul
- - https://huggingface.co/smangrul/painting-in-the-style-of-smangrul
- - https://huggingface.co/smangrul/erenyeager
- """
- )
- with gr.Column():
- result = gr.Image(label="Result")
-
- reload_button.click(fn=reload_lora_weight_list, inputs=None, outputs=lora_weight_name)
- prompt.submit(
- fn=pipe.run,
- inputs=[
- base_model,
- lora_weight_name,
- prompt,
- negative_prompt,
- seed,
- num_steps,
- guidance_scale,
- ],
- outputs=result,
- queue=False,
- )
- run_button.click(
- fn=pipe.run,
- inputs=[
- base_model,
- lora_weight_name,
- prompt,
- negative_prompt,
- seed,
- num_steps,
- guidance_scale,
- ],
- outputs=result,
- queue=False,
- )
- seed.change(
- fn=pipe.run,
- inputs=[
- base_model,
- lora_weight_name,
- prompt,
- negative_prompt,
- seed,
- num_steps,
- guidance_scale,
- ],
- outputs=result,
- queue=False,
- )
- return demo
-
-
-def create_upload_demo() -> gr.Blocks:
- with gr.Blocks() as demo:
- model_name = gr.Textbox(label="Model Name")
- hf_token = gr.Textbox(label="Hugging Face Token (with write permission)")
- upload_button = gr.Button("Upload")
- with gr.Box():
- gr.Markdown("Message")
- result = gr.Markdown()
- gr.Markdown(
- """
- You can upload your trained model to your private Model repo (e.g. https://huggingface.co/{your_username}/{model_name}).
- - You can find your Hugging Face token [here](https://huggingface.co/settings/tokens).
- """
- )
-
- upload_button.click(fn=upload, inputs=[model_name, hf_token], outputs=result)
-
- return demo
-
-
-pipe = InferencePipeline()
-trainer = Trainer()
-
-with gr.Blocks(css="style.css") as demo:
- if os.getenv("IS_SHARED_UI"):
- show_warning(SHARED_UI_WARNING)
- if not torch.cuda.is_available():
- show_warning(CUDA_NOT_AVAILABLE_WARNING)
-
- gr.Markdown(TITLE)
- gr.Markdown(DESCRIPTION)
-
- with gr.Tabs():
- with gr.TabItem("Train"):
- create_training_demo(trainer, pipe)
- with gr.TabItem("Test"):
- create_inference_demo(pipe)
- with gr.TabItem("Upload"):
- create_upload_demo()
-
-demo.queue(default_enabled=False).launch(share=False)
diff --git a/spaces/harmonai/dance-diffusion/README.md b/spaces/harmonai/dance-diffusion/README.md
deleted file mode 100644
index eaf42eceae231dce324da195e36fda63eb8d4ecb..0000000000000000000000000000000000000000
--- a/spaces/harmonai/dance-diffusion/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Dance Diffusion
-emoji: 🦀
-colorFrom: red
-colorTo: red
-sdk: gradio
-sdk_version: 3.8.2
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/.github/ISSUE_TEMPLATE/feature-request.md b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/.github/ISSUE_TEMPLATE/feature-request.md
deleted file mode 100644
index dd69a33478c85068cdd7b8b90161f97cc55c1621..0000000000000000000000000000000000000000
--- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/.github/ISSUE_TEMPLATE/feature-request.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-name: "\U0001F680Feature Request"
-about: Submit a proposal/request for a new detectron2 feature
-
----
-
-## 🚀 Feature
-A clear and concise description of the feature proposal.
-
-
-## Motivation & Examples
-
-Tell us why the feature is useful.
-
-Describe what the feature would look like if it were implemented.
-This is best demonstrated with **code examples** in addition to words.
-
-## Note
-
-We only consider adding new features if they are relevant to many users.
-
-If you request implementation of research papers --
-we only consider papers that have enough significance and prevalence in the object detection field.
-
-We do not take requests for most projects in the `projects/` directory,
-because they are research code releases intended mainly for other researchers to reproduce results.
-
-Instead of adding features inside detectron2,
-you can implement many features by [extending detectron2](https://detectron2.readthedocs.io/tutorials/extend.html).
-The [projects/](https://github.com/facebookresearch/detectron2/tree/master/projects/) directory contains many of such examples.
-
diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/export/api.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/export/api.py
deleted file mode 100644
index a7600714e1edb019def04f9d0d1a063668943101..0000000000000000000000000000000000000000
--- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/export/api.py
+++ /dev/null
@@ -1,277 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
-import copy
-import logging
-import os
-import torch
-from caffe2.proto import caffe2_pb2
-from torch import nn
-
-from detectron2.config import CfgNode as CN
-
-from .caffe2_export import export_caffe2_detection_model
-from .caffe2_export import export_onnx_model as export_onnx_model_impl
-from .caffe2_export import run_and_save_graph
-from .caffe2_inference import ProtobufDetectionModel
-from .caffe2_modeling import META_ARCH_CAFFE2_EXPORT_TYPE_MAP, convert_batched_inputs_to_c2_format
-from .shared import get_pb_arg_vali, get_pb_arg_vals, save_graph
-
-__all__ = [
- "add_export_config",
- "export_caffe2_model",
- "Caffe2Model",
- "export_onnx_model",
- "Caffe2Tracer",
-]
-
-
-def add_export_config(cfg):
- """
- Args:
- cfg (CfgNode): a detectron2 config
-
- Returns:
- CfgNode: an updated config with new options that will be used
- by :class:`Caffe2Tracer`.
- """
- is_frozen = cfg.is_frozen()
- cfg.defrost()
- cfg.EXPORT_CAFFE2 = CN()
- cfg.EXPORT_CAFFE2.USE_HEATMAP_MAX_KEYPOINT = False
- if is_frozen:
- cfg.freeze()
- return cfg
-
-
-class Caffe2Tracer:
- """
- Make a detectron2 model traceable with caffe2 style.
-
- An original detectron2 model may not be traceable, or
- cannot be deployed directly after being traced, due to some reasons:
- 1. control flow in some ops
- 2. custom ops
- 3. complicated pre/post processing
-
- This class provides a traceable version of a detectron2 model by:
- 1. Rewrite parts of the model using ops in caffe2. Note that some ops do
- not have a GPU implementation.
- 2. Define the inputs "after pre-processing" as inputs to the model
- 3. Remove post-processing and produce raw layer outputs
-
- More specifically about inputs: all builtin models take two input tensors.
- (1) NCHW float "data" which is an image (usually in [0, 255])
- (2) Nx3 float "im_info", each row of which is (height, width, 1.0)
-
- After making a traceable model, the class provides methods to export such a
- model to different deployment formats.
-
- The class currently only supports models using builtin meta architectures.
- """
-
- def __init__(self, cfg, model, inputs):
- """
- Args:
- cfg (CfgNode): a detectron2 config, with extra export-related options
- added by :func:`add_export_config`.
- model (nn.Module): a model built by
- :func:`detectron2.modeling.build_model`.
- inputs: sample inputs that the given model takes for inference.
- Will be used to trace the model.
- """
- assert isinstance(cfg, CN), cfg
- assert isinstance(model, torch.nn.Module), type(model)
- if "EXPORT_CAFFE2" not in cfg:
- cfg = add_export_config(cfg) # will just use the defaults
-
- self.cfg = cfg
- self.model = model
- self.inputs = inputs
-
- def _get_traceable(self):
- # TODO how to make it extensible to support custom models
- C2MetaArch = META_ARCH_CAFFE2_EXPORT_TYPE_MAP[self.cfg.MODEL.META_ARCHITECTURE]
- traceable_model = C2MetaArch(self.cfg, copy.deepcopy(self.model))
- traceable_inputs = traceable_model.get_caffe2_inputs(self.inputs)
- return traceable_model, traceable_inputs
-
- def export_caffe2(self):
- """
- Export the model to Caffe2's protobuf format.
- The returned object can be saved with `.save_protobuf()` method.
- The result can be loaded and executed using Caffe2 runtime.
-
- Returns:
- Caffe2Model
- """
- model, inputs = self._get_traceable()
- predict_net, init_net = export_caffe2_detection_model(model, inputs)
- return Caffe2Model(predict_net, init_net)
-
- def export_onnx(self):
- """
- Export the model to ONNX format.
- Note that the exported model contains custom ops only available in caffe2, therefore it
- cannot be directly executed by other runtimes. Post-processing or transformation passes
- may be applied on the model to accommodate different runtimes.
-
- Returns:
- onnx.ModelProto: an onnx model.
- """
- model, inputs = self._get_traceable()
- return export_onnx_model_impl(model, (inputs,))
-
- def export_torchscript(self):
- """
- Export the model to a `torch.jit.TracedModule` by tracing.
- The returned object can be saved to a file by ".save()".
-
- Returns:
- torch.jit.TracedModule: a torch TracedModule
- """
- model, inputs = self._get_traceable()
- logger = logging.getLogger(__name__)
- logger.info("Tracing the model with torch.jit.trace ...")
- with torch.no_grad():
- return torch.jit.trace(model, (inputs,), optimize=True)
-
-
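-# Illustrative sketch (not part of the original file): driving the tracer described in the
-# Caffe2Tracer docstring end to end. The helper name and its arguments (cfg, model,
-# sample_inputs, output_dir) are assumptions; they stand in for the usual detectron2
-# config / build_model / dataloader setup.
-def _example_trace_and_export(cfg, model, sample_inputs, output_dir):
- cfg = add_export_config(cfg) # make sure the export-related options exist
- tracer = Caffe2Tracer(cfg, model, sample_inputs)
- caffe2_model = tracer.export_caffe2() # Caffe2Model wrapper around the protobuf nets
- caffe2_model.save_protobuf(output_dir) # writes model.pb / model_init.pb / model.pbtxt
- ts_model = tracer.export_torchscript() # torch.jit.TracedModule
- ts_model.save(os.path.join(output_dir, "model.ts"))
- return caffe2_model, ts_model
-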
-def export_caffe2_model(cfg, model, inputs):
- """
- Export a detectron2 model to caffe2 format.
-
- Args:
- cfg (CfgNode): a detectron2 config, with extra export-related options
- added by :func:`add_export_config`.
- model (nn.Module): a model built by
- :func:`detectron2.modeling.build_model`.
- It will be modified by this function.
- inputs: sample inputs that the given model takes for inference.
- Will be used to trace the model.
-
- Returns:
- Caffe2Model
- """
- return Caffe2Tracer(cfg, model, inputs).export_caffe2()
-
-
-def export_onnx_model(cfg, model, inputs):
- """
- Export a detectron2 model to ONNX format.
- Note that the exported model contains custom ops only available in caffe2, therefore it
- cannot be directly executed by other runtimes. Post-processing or transformation passes
- may be applied on the model to accommodate different runtimes.
- Args:
- cfg (CfgNode): a detectron2 config, with extra export-related options
- added by :func:`add_export_config`.
- model (nn.Module): a model built by
- :func:`detectron2.modeling.build_model`.
- It will be modified by this function.
- inputs: sample inputs that the given model takes for inference.
- Will be used to trace the model.
- Returns:
- onnx.ModelProto: an onnx model.
- """
- return Caffe2Tracer(cfg, model, inputs).export_onnx()
-
-
-class Caffe2Model(nn.Module):
- """
- A wrapper around the traced model in caffe2's pb format.
- """
-
- def __init__(self, predict_net, init_net):
- super().__init__()
- self.eval() # always in eval mode
- self._predict_net = predict_net
- self._init_net = init_net
- self._predictor = None
-
- @property
- def predict_net(self):
- """
- Returns:
- core.Net: the underlying caffe2 predict net
- """
- return self._predict_net
-
- @property
- def init_net(self):
- """
- Returns:
- core.Net: the underlying caffe2 init net
- """
- return self._init_net
-
- __init__.__HIDE_SPHINX_DOC__ = True
-
- def save_protobuf(self, output_dir):
- """
- Save the model as caffe2's protobuf format.
-
- Args:
- output_dir (str): the output directory to save protobuf files.
- """
- logger = logging.getLogger(__name__)
- logger.info("Saving model to {} ...".format(output_dir))
- os.makedirs(output_dir, exist_ok=True)
-
- with open(os.path.join(output_dir, "model.pb"), "wb") as f:
- f.write(self._predict_net.SerializeToString())
- with open(os.path.join(output_dir, "model.pbtxt"), "w") as f:
- f.write(str(self._predict_net))
- with open(os.path.join(output_dir, "model_init.pb"), "wb") as f:
- f.write(self._init_net.SerializeToString())
-
- def save_graph(self, output_file, inputs=None):
- """
- Save the graph as SVG format.
-
- Args:
- output_file (str): a SVG file
- inputs: optional inputs given to the model.
- If given, the inputs will be used to run the graph to record
- shape of every tensor. The shape information will be
- saved together with the graph.
- """
- if inputs is None:
- save_graph(self._predict_net, output_file, op_only=False)
- else:
- size_divisibility = get_pb_arg_vali(self._predict_net, "size_divisibility", 0)
- device = get_pb_arg_vals(self._predict_net, "device", b"cpu").decode("ascii")
- inputs = convert_batched_inputs_to_c2_format(inputs, size_divisibility, device)
- inputs = [x.cpu().numpy() for x in inputs]
- run_and_save_graph(self._predict_net, self._init_net, inputs, output_file)
-
- @staticmethod
- def load_protobuf(dir):
- """
- Args:
- dir (str): a directory used to save Caffe2Model with
- :meth:`save_protobuf`.
- The files "model.pb" and "model_init.pb" are needed.
-
- Returns:
- Caffe2Model: the caffe2 model loaded from this directory.
- """
- predict_net = caffe2_pb2.NetDef()
- with open(os.path.join(dir, "model.pb"), "rb") as f:
- predict_net.ParseFromString(f.read())
-
- init_net = caffe2_pb2.NetDef()
- with open(os.path.join(dir, "model_init.pb"), "rb") as f:
- init_net.ParseFromString(f.read())
-
- return Caffe2Model(predict_net, init_net)
-
- def __call__(self, inputs):
- """
- An interface that wraps around a caffe2 model and mimics detectron2's models'
- input & output format. This is used to compare the outputs of caffe2 model
- with its original torch model.
-
- Due to the extra conversion between torch/caffe2,
- this method is not meant for benchmark.
- """
- if self._predictor is None:
- self._predictor = ProtobufDetectionModel(self._predict_net, self._init_net)
- return self._predictor(inputs)
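-
-# Illustrative sketch (not part of the original file): saving a traced Caffe2Model to disk,
-# loading it back, and running it with detectron2-style batched inputs. The helper name and
-# the output directory are hypothetical.
-def _example_save_load_and_run(caffe2_model, batched_inputs, output_dir="./caffe2_export"):
- caffe2_model.save_protobuf(output_dir) # model.pb + model_init.pb + readable model.pbtxt
- reloaded = Caffe2Model.load_protobuf(output_dir) # needs model.pb and model_init.pb
- return reloaded(batched_inputs) # mimics the torch model's input/output format (not for benchmarking)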
diff --git a/spaces/hasibzunair/fifa-tryon-demo/util/util.py b/spaces/hasibzunair/fifa-tryon-demo/util/util.py
deleted file mode 100644
index 550560aac8dc82fe4f896fd0c37e36fab3e15dd2..0000000000000000000000000000000000000000
--- a/spaces/hasibzunair/fifa-tryon-demo/util/util.py
+++ /dev/null
@@ -1,145 +0,0 @@
-from __future__ import print_function
-import os
-from PIL import Image
-import numpy as np
-import torch
-
-
-# Converts a Tensor into a Numpy array
-# |imtype|: the desired type of the converted numpy array
-
-
-def tensor2im(image_tensor, imtype=np.uint8, normalize=True):
- if isinstance(image_tensor, list):
- image_numpy = []
- for i in range(len(image_tensor)):
- image_numpy.append(tensor2im(image_tensor[i], imtype, normalize))
- return image_numpy
- image_numpy = image_tensor.cpu().float().numpy()
- # if normalize:
- # image_numpy = (np.transpose(image_numpy, (1, 2, 0)) + 1) / 2.0 * 255.0
- # else:
- # image_numpy = np.transpose(image_numpy, (1, 2, 0)) * 255.0
- image_numpy = (image_numpy + 1) / 2.0
- image_numpy = np.clip(image_numpy, 0, 1)
- if image_numpy.shape[2] == 1 or image_numpy.shape[2] > 3:
- image_numpy = image_numpy[:, :, 0]
-
- return image_numpy
-
-# Converts a one-hot tensor into a colorful label map
-
-
-def tensor2label(label_tensor, n_label, imtype=np.uint8):
- if n_label == 0:
- return tensor2im(label_tensor, imtype)
- label_tensor = label_tensor.cpu().float()
- if label_tensor.size()[0] > 1:
- label_tensor = label_tensor.max(0, keepdim=True)[1]
- label_tensor = Colorize(n_label)(label_tensor)
- #label_numpy = np.transpose(label_tensor.numpy(), (1, 2, 0))
- label_numpy = label_tensor.numpy()
- label_numpy = label_numpy / 255.0
-
- return label_numpy
-
-
-def save_image(image_numpy, image_path, grayscale=False):
- image_pil = Image.fromarray(image_numpy)
- image_pil.save(image_path)
-
-
-def save_tensor_as_image(image_tensor, image_path, grayscale=False):
- image_numpy = tensor_to_image(image_tensor, grayscale)
- save_image(image_numpy, image_path, grayscale)
-
-
-def tensor_to_image(img_tensor, grayscale=False):
- if grayscale:
- tensor = img_tensor.cpu().clamp(0, 255)
- else:
- tensor = (img_tensor.clone() + 1) * 0.5 * 255
- tensor = tensor.cpu().clamp(0, 255)
-
- try:
- array = tensor.numpy().astype('uint8')
- except:
- array = tensor.detach().numpy().astype('uint8')
-
- if array.shape[0] == 1:
- array = array.squeeze(0)
- elif array.shape[0] == 3:
- array = array.swapaxes(0, 1).swapaxes(1, 2)
-
- return array
-
-
-def mkdirs(paths):
- if isinstance(paths, list) and not isinstance(paths, str):
- for path in paths:
- mkdir(path)
- else:
- mkdir(paths)
-
-
-def mkdir(path):
- if not os.path.exists(path):
- os.makedirs(path)
-
-###############################################################################
-# Code from
-# https://github.com/ycszen/pytorch-seg/blob/master/transform.py
-# Modified so it complies with the Cityscapes label map colors
-###############################################################################
-
-
-def uint82bin(n, count=8):
- """returns the binary of integer n, count refers to amount of bits"""
- return ''.join([str((n >> y) & 1) for y in range(count-1, -1, -1)])
-
-
-def labelcolormap(N):
- if N == 35: # cityscape
- cmap = np.array([(0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (111, 74, 0), (81, 0, 81),
- (128, 64, 128), (244, 35, 232), (250, 170, 160), (230,
- 150, 140), (70, 70, 70), (102, 102, 156), (190, 153, 153),
- (180, 165, 180), (150, 100, 100), (150, 120, 90), (153,
- 153, 153), (153, 153, 153), (250, 170, 30), (220, 220, 0),
- (107, 142, 35), (152, 251, 152), (70, 130, 180), (220,
- 20, 60), (255, 0, 0), (0, 0, 142), (0, 0, 70),
- (0, 60, 100), (0, 0, 90), (0, 0, 110), (0, 80, 100), (0, 0, 230), (119, 11, 32), (0, 0, 142)],
- dtype=np.uint8)
- else:
- cmap = np.zeros((N, 3), dtype=np.uint8)
- for i in range(N):
- r, g, b = 0, 0, 0
- id = i
- for j in range(7):
- str_id = uint82bin(id)
- r = r ^ (np.uint8(str_id[-1]) << (7-j))
- g = g ^ (np.uint8(str_id[-2]) << (7-j))
- b = b ^ (np.uint8(str_id[-3]) << (7-j))
- id = id >> 3
- cmap[i, 0] = r
- cmap[i, 1] = g
- cmap[i, 2] = b
- return cmap
-
-
-class Colorize(object):
- def __init__(self, n=35):
- self.cmap = labelcolormap(n)
- self.cmap = torch.from_numpy(self.cmap[:n])
-
- def __call__(self, gray_image):
- size = gray_image.size()
- color_image = torch.ByteTensor(3, size[1], size[2]).fill_(0)
-
- for label in range(0, len(self.cmap)):
- mask = (label == gray_image[0]).cpu()
- color_image[0][mask] = self.cmap[label][0]
- color_image[1][mask] = self.cmap[label][1]
- color_image[2][mask] = self.cmap[label][2]
-
- return color_image
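-
-# Illustrative usage sketch (not part of the original file): colorize a (1, H, W) integer
-# label map into a (3, H, W) ByteTensor using the palette above. The helper name and the
-# default label count are assumptions.
-def _example_colorize(label_map, num_labels=20):
- colorizer = Colorize(n=num_labels)
- return colorizer(label_map) # e.g. label_map = torch.zeros(1, 64, 64, dtype=torch.long)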
diff --git a/spaces/hbestm/gpt-academic-play/request_llm/bridge_all.py b/spaces/hbestm/gpt-academic-play/request_llm/bridge_all.py
deleted file mode 100644
index 0c468125fd0182b078f49202588a2d739918bfb3..0000000000000000000000000000000000000000
--- a/spaces/hbestm/gpt-academic-play/request_llm/bridge_all.py
+++ /dev/null
@@ -1,310 +0,0 @@
-
-"""
- This file mainly contains two functions that form the universal interface for all LLMs; they dispatch to the lower-level model bridges and handle details such as querying several models in parallel.
-
- Function without multi-threading capability: used for normal conversation; full interactive features, must not be multi-threaded.
- 1. predict(...)
-
- Function with multi-threading capability: called inside function plugins; flexible and concise.
- 2. predict_no_ui_long_connection(...)
-"""
-import tiktoken
-from functools import lru_cache
-from concurrent.futures import ThreadPoolExecutor
-from toolbox import get_conf, trimmed_format_exc
-
-from .bridge_chatgpt import predict_no_ui_long_connection as chatgpt_noui
-from .bridge_chatgpt import predict as chatgpt_ui
-
-from .bridge_chatglm import predict_no_ui_long_connection as chatglm_noui
-from .bridge_chatglm import predict as chatglm_ui
-
-from .bridge_newbing import predict_no_ui_long_connection as newbing_noui
-from .bridge_newbing import predict as newbing_ui
-
-# from .bridge_tgui import predict_no_ui_long_connection as tgui_noui
-# from .bridge_tgui import predict as tgui_ui
-
-colors = ['#FF00FF', '#00FFFF', '#FF0000', '#990099', '#009999', '#990044']
-
-class LazyloadTiktoken(object):
- def __init__(self, model):
- self.model = model
-
- @staticmethod
- @lru_cache(maxsize=128)
- def get_encoder(model):
- print('Loading tokenizer; if this is the first run, downloading the parameters may take a moment')
- tmp = tiktoken.encoding_for_model(model)
- print('Tokenizer loaded')
- return tmp
-
- def encode(self, *args, **kwargs):
- encoder = self.get_encoder(self.model)
- return encoder.encode(*args, **kwargs)
-
- def decode(self, *args, **kwargs):
- encoder = self.get_encoder(self.model)
- return encoder.decode(*args, **kwargs)
-
-# Endpoint redirection
-API_URL_REDIRECT, = get_conf("API_URL_REDIRECT")
-openai_endpoint = "https://api.openai.com/v1/chat/completions"
-api2d_endpoint = "https://openai.api2d.net/v1/chat/completions"
-newbing_endpoint = "wss://sydney.bing.com/sydney/ChatHub"
-# Compatibility with the legacy configuration
-try:
- API_URL, = get_conf("API_URL")
- if API_URL != "https://api.openai.com/v1/chat/completions":
- openai_endpoint = API_URL
- print("警告!API_URL配置选项将被弃用,请更换为API_URL_REDIRECT配置")
-except:
- pass
-# New-style configuration
-if openai_endpoint in API_URL_REDIRECT: openai_endpoint = API_URL_REDIRECT[openai_endpoint]
-if api2d_endpoint in API_URL_REDIRECT: api2d_endpoint = API_URL_REDIRECT[api2d_endpoint]
-if newbing_endpoint in API_URL_REDIRECT: newbing_endpoint = API_URL_REDIRECT[newbing_endpoint]
-
-
-# Get the tokenizers
-tokenizer_gpt35 = LazyloadTiktoken("gpt-3.5-turbo")
-tokenizer_gpt4 = LazyloadTiktoken("gpt-4")
-get_token_num_gpt35 = lambda txt: len(tokenizer_gpt35.encode(txt, disallowed_special=()))
-get_token_num_gpt4 = lambda txt: len(tokenizer_gpt4.encode(txt, disallowed_special=()))
-
-
-model_info = {
- # openai
- "gpt-3.5-turbo": {
- "fn_with_ui": chatgpt_ui,
- "fn_without_ui": chatgpt_noui,
- "endpoint": openai_endpoint,
- "max_token": 4096,
- "tokenizer": tokenizer_gpt35,
- "token_cnt": get_token_num_gpt35,
- },
-
- "gpt-4": {
- "fn_with_ui": chatgpt_ui,
- "fn_without_ui": chatgpt_noui,
- "endpoint": openai_endpoint,
- "max_token": 8192,
- "tokenizer": tokenizer_gpt4,
- "token_cnt": get_token_num_gpt4,
- },
-
- # api_2d
- "api2d-gpt-3.5-turbo": {
- "fn_with_ui": chatgpt_ui,
- "fn_without_ui": chatgpt_noui,
- "endpoint": api2d_endpoint,
- "max_token": 4096,
- "tokenizer": tokenizer_gpt35,
- "token_cnt": get_token_num_gpt35,
- },
-
- "api2d-gpt-4": {
- "fn_with_ui": chatgpt_ui,
- "fn_without_ui": chatgpt_noui,
- "endpoint": api2d_endpoint,
- "max_token": 8192,
- "tokenizer": tokenizer_gpt4,
- "token_cnt": get_token_num_gpt4,
- },
-
- # chatglm
- "chatglm": {
- "fn_with_ui": chatglm_ui,
- "fn_without_ui": chatglm_noui,
- "endpoint": None,
- "max_token": 1024,
- "tokenizer": tokenizer_gpt35,
- "token_cnt": get_token_num_gpt35,
- },
- # newbing
- "newbing": {
- "fn_with_ui": newbing_ui,
- "fn_without_ui": newbing_noui,
- "endpoint": newbing_endpoint,
- "max_token": 4096,
- "tokenizer": tokenizer_gpt35,
- "token_cnt": get_token_num_gpt35,
- },
-
-}
-
-
-AVAIL_LLM_MODELS, = get_conf("AVAIL_LLM_MODELS")
-if "jittorllms_rwkv" in AVAIL_LLM_MODELS:
- from .bridge_jittorllms_rwkv import predict_no_ui_long_connection as rwkv_noui
- from .bridge_jittorllms_rwkv import predict as rwkv_ui
- model_info.update({
- "jittorllms_rwkv": {
- "fn_with_ui": rwkv_ui,
- "fn_without_ui": rwkv_noui,
- "endpoint": None,
- "max_token": 1024,
- "tokenizer": tokenizer_gpt35,
- "token_cnt": get_token_num_gpt35,
- },
- })
-if "jittorllms_llama" in AVAIL_LLM_MODELS:
- from .bridge_jittorllms_llama import predict_no_ui_long_connection as llama_noui
- from .bridge_jittorllms_llama import predict as llama_ui
- model_info.update({
- "jittorllms_llama": {
- "fn_with_ui": llama_ui,
- "fn_without_ui": llama_noui,
- "endpoint": None,
- "max_token": 1024,
- "tokenizer": tokenizer_gpt35,
- "token_cnt": get_token_num_gpt35,
- },
- })
-if "jittorllms_pangualpha" in AVAIL_LLM_MODELS:
- from .bridge_jittorllms_pangualpha import predict_no_ui_long_connection as pangualpha_noui
- from .bridge_jittorllms_pangualpha import predict as pangualpha_ui
- model_info.update({
- "jittorllms_pangualpha": {
- "fn_with_ui": pangualpha_ui,
- "fn_without_ui": pangualpha_noui,
- "endpoint": None,
- "max_token": 1024,
- "tokenizer": tokenizer_gpt35,
- "token_cnt": get_token_num_gpt35,
- },
- })
-if "moss" in AVAIL_LLM_MODELS:
- from .bridge_moss import predict_no_ui_long_connection as moss_noui
- from .bridge_moss import predict as moss_ui
- model_info.update({
- "moss": {
- "fn_with_ui": moss_ui,
- "fn_without_ui": moss_noui,
- "endpoint": None,
- "max_token": 1024,
- "tokenizer": tokenizer_gpt35,
- "token_cnt": get_token_num_gpt35,
- },
- })
-if "stack-claude" in AVAIL_LLM_MODELS:
- from .bridge_stackclaude import predict_no_ui_long_connection as claude_noui
- from .bridge_stackclaude import predict as claude_ui
- # claude
- model_info.update({
- "stack-claude": {
- "fn_with_ui": claude_ui,
- "fn_without_ui": claude_noui,
- "endpoint": None,
- "max_token": 8192,
- "tokenizer": tokenizer_gpt35,
- "token_cnt": get_token_num_gpt35,
- }
- })
-
-
-def LLM_CATCH_EXCEPTION(f):
- """
-    Decorator function that displays errors
- """
- def decorated(inputs, llm_kwargs, history, sys_prompt, observe_window, console_slience):
- try:
- return f(inputs, llm_kwargs, history, sys_prompt, observe_window, console_slience)
- except Exception as e:
- tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n'
- observe_window[0] = tb_str
- return tb_str
- return decorated
-
-
-def predict_no_ui_long_connection(inputs, llm_kwargs, history, sys_prompt, observe_window, console_slience=False):
- """
-    Send the request to the LLM and wait for the complete reply in one go, without showing intermediate output.
-    Internally it still streams the response to avoid the connection being cut off midway.
-    inputs:
-        the input of this query
-    sys_prompt:
-        the silent system prompt
-    llm_kwargs:
-        internal tuning parameters of the LLM
-    history:
-        the list of previous dialogue turns
-    observe_window = None:
-        used to pass the partial output across threads, mostly just for a fancy visual effect and can be left empty.
-        observe_window[0]: observation window. observe_window[1]: watchdog
- """
- import threading, time, copy
-
- model = llm_kwargs['llm_model']
- n_model = 1
- if '&' not in model:
-        assert not model.startswith("tgui"), "TGUI does not support the function-plugin implementation"
-
-        # Query only one LLM:
- method = model_info[model]["fn_without_ui"]
- return method(inputs, llm_kwargs, history, sys_prompt, observe_window, console_slience)
- else:
-        # Query multiple LLMs at the same time:
- executor = ThreadPoolExecutor(max_workers=4)
- models = model.split('&')
- n_model = len(models)
-
- window_len = len(observe_window)
- assert window_len==3
- window_mutex = [["", time.time(), ""] for _ in range(n_model)] + [True]
-
- futures = []
- for i in range(n_model):
- model = models[i]
- method = model_info[model]["fn_without_ui"]
- llm_kwargs_feedin = copy.deepcopy(llm_kwargs)
- llm_kwargs_feedin['llm_model'] = model
- future = executor.submit(LLM_CATCH_EXCEPTION(method), inputs, llm_kwargs_feedin, history, sys_prompt, window_mutex[i], console_slience)
- futures.append(future)
-
- def mutex_manager(window_mutex, observe_window):
- while True:
- time.sleep(0.25)
- if not window_mutex[-1]: break
-                # watchdog
- for i in range(n_model):
- window_mutex[i][1] = observe_window[1]
-                # observation window
- chat_string = []
- for i in range(n_model):
-                    chat_string.append( f"[{str(models[i])} says]: {window_mutex[i][0]} " )
-                res = '\n\n---\n\n'.join(chat_string)
- # # # # # # # # # # #
- observe_window[0] = res
-
- t_model = threading.Thread(target=mutex_manager, args=(window_mutex, observe_window), daemon=True)
- t_model.start()
-
- return_string_collect = []
- while True:
- worker_done = [h.done() for h in futures]
- if all(worker_done):
- executor.shutdown()
- break
- time.sleep(1)
-
- for i, future in enumerate(futures): # wait and get
-            return_string_collect.append( f"[{str(models[i])} says]: {future.result()} " )
-
- window_mutex[-1] = False # stop mutex thread
-        res = '\n\n---\n\n'.join(return_string_collect)
- return res
-
-
-def predict(inputs, llm_kwargs, *args, **kwargs):
- """
-    Send the request to the LLM and stream the output.
-    Used for the basic chat functionality.
-    inputs: the input of this query
-    top_p, temperature: internal tuning parameters of the LLM
-    history: the list of previous dialogue turns (note that overly long inputs or history will trigger a token-overflow error)
-    chatbot: the dialogue list shown in the WebUI; modify it and yield it to update the chat interface directly
-    additional_fn: indicates which button was clicked; see functional.py for the available buttons
- """
-
- method = model_info[llm_kwargs['llm_model']]["fn_with_ui"]
- yield from method(inputs, llm_kwargs, *args, **kwargs)
-
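For readers skimming the diff, the deleted bridge module above boils down to a dispatch table (model name to callable) plus a '&' fan-out over a thread pool. The following self-contained sketch reproduces that pattern in miniature; echo_model, model_registry, and ask are illustrative placeholders, not names from the repository.

from concurrent.futures import ThreadPoolExecutor

def echo_model(prompt, tag):
    # Stand-in for a real model bridge such as the fn_without_ui callables above.
    return f"[{tag}] reply to: {prompt}"

model_registry = {
    "model-a": lambda p: echo_model(p, "model-a"),
    "model-b": lambda p: echo_model(p, "model-b"),
}

def ask(model_name, prompt):
    # Single model: direct lookup; 'a&b': fan the same prompt out in parallel.
    if '&' not in model_name:
        return model_registry[model_name](prompt)
    names = model_name.split('&')
    with ThreadPoolExecutor(max_workers=len(names)) as pool:
        futures = [pool.submit(model_registry[n], prompt) for n in names]
        return '\n\n---\n\n'.join(f.result() for f in futures)

print(ask("model-a&model-b", "hello"))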
diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/dataset_conversion/Task114_heart_MNMs.py b/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/dataset_conversion/Task114_heart_MNMs.py
deleted file mode 100644
index 5c91deed460fac732d1a4f9b94829617950f1464..0000000000000000000000000000000000000000
--- a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/dataset_conversion/Task114_heart_MNMs.py
+++ /dev/null
@@ -1,259 +0,0 @@
-# Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from collections import OrderedDict
-from batchgenerators.utilities.file_and_folder_operations import *
-import shutil
-import numpy as np
-from numpy.random.mtrand import RandomState
-import subprocess
-from multiprocessing import pool
-import pandas as pd
-
-
-
-def get_mnms_data(data_root):
- files_raw = []
- files_gt = []
- for r, dirs, files in os.walk(data_root):
- for f in files:
- if f.endswith('nii.gz'):
- file_path = os.path.join(r, f)
- if '_gt' in f:
- files_gt.append(file_path)
- else:
- files_raw.append(file_path)
- return files_raw, files_gt
-
-
-def generate_filename_for_nnunet(pat_id, ts, pat_folder=None, add_zeros=False, vendor=None, centre=None, mode='mnms',
- data_format='nii.gz'):
- if not vendor or not centre:
- if add_zeros:
- filename = "{}_{}_0000.{}".format(pat_id, str(ts).zfill(4), data_format)
- else:
- filename = "{}_{}.{}".format(pat_id, str(ts).zfill(4), data_format)
- else:
- if mode == 'mnms':
- if add_zeros:
- filename = "{}_{}_{}_{}_0000.{}".format(pat_id, str(ts).zfill(4), vendor, centre, data_format)
- else:
- filename = "{}_{}_{}_{}.{}".format(pat_id, str(ts).zfill(4), vendor, centre, data_format)
- else:
- if add_zeros:
- filename = "{}_{}_{}_{}_0000.{}".format(vendor, centre, pat_id, str(ts).zfill(4), data_format)
- else:
- filename = "{}_{}_{}_{}.{}".format(vendor, centre, pat_id, str(ts).zfill(4), data_format)
-
- if pat_folder:
- filename = os.path.join(pat_folder, filename)
- return filename
-
-
-def select_annotated_frames_mms(data_folder, out_folder, add_zeros=False, mode='mnms', df_path="/media/full/tera2/data/challenges/mms/Training-corrected_original/M&Ms Dataset Information.xlsx"):
- table = pd.read_excel(df_path, index_col='External code')
-
- for idx in table.index:
- ed = table.loc[idx, 'ED']
- es = table.loc[idx, 'ES']
- vendor = table.loc[idx, 'Vendor']
- centre = table.loc[idx, 'Centre']
-
- if vendor != "C":
-
- # generate old filename (w/o vendor and centre)
- filename_ed_original = generate_filename_for_nnunet(pat_id=idx, ts=ed, pat_folder=data_folder,
- vendor=None, centre=None, add_zeros=False)
- filename_es_original = generate_filename_for_nnunet(pat_id=idx, ts=es, pat_folder=data_folder,
- vendor=None, centre=None, add_zeros=False)
-
- # generate new filename with vendor and centre
- filename_ed = generate_filename_for_nnunet(pat_id=idx, ts=ed, pat_folder=out_folder,
- vendor=vendor, centre=centre, add_zeros=add_zeros, mode=mode)
- filename_es = generate_filename_for_nnunet(pat_id=idx, ts=es, pat_folder=out_folder,
- vendor=vendor, centre=centre, add_zeros=add_zeros, mode=mode)
-
- shutil.copy(filename_ed_original, filename_ed)
- shutil.copy(filename_es_original, filename_es)
-
-
-def create_custom_splits_for_experiments(task_path):
- data_keys = [i[:-4] for i in
- subfiles(os.path.join(task_path, "nnUNetData_plans_v2.1_2D_stage0"),
- join=False, suffix='npz')]
- existing_splits = os.path.join(task_path, "splits_final.pkl")
-
- splits = load_pickle(existing_splits)
- splits = splits[:5] # discard old changes
-
- unique_a_only = np.unique([i.split('_')[0] for i in data_keys if i.find('_A_') != -1])
- unique_b_only = np.unique([i.split('_')[0] for i in data_keys if i.find('_B_') != -1])
-
- num_train_a = int(np.round(0.8 * len(unique_a_only)))
- num_train_b = int(np.round(0.8 * len(unique_b_only)))
-
- p = RandomState(1234)
- idx_a_train = p.choice(len(unique_a_only), num_train_a, replace=False)
- idx_b_train = p.choice(len(unique_b_only), num_train_b, replace=False)
-
- identifiers_a_train = [unique_a_only[i] for i in idx_a_train]
- identifiers_b_train = [unique_b_only[i] for i in idx_b_train]
-
- identifiers_a_val = [i for i in unique_a_only if i not in identifiers_a_train]
- identifiers_b_val = [i for i in unique_b_only if i not in identifiers_b_train]
-
-    # fold 5 will be trained on a and evaluated on the val sets of a and b
- splits.append({'train': [i for i in data_keys if i.split("_")[0] in identifiers_a_train],
- 'val': [i for i in data_keys if i.split("_")[0] in identifiers_a_val] + [i for i in data_keys if
- i.split("_")[
- 0] in identifiers_b_val]})
-
-    # fold 6 will be trained on b and evaluated on the val sets of a and b
- splits.append({'train': [i for i in data_keys if i.split("_")[0] in identifiers_b_train],
- 'val': [i for i in data_keys if i.split("_")[0] in identifiers_a_val] + [i for i in data_keys if
- i.split("_")[
- 0] in identifiers_b_val]})
-
- # fold 7 train on both, eval on both
- splits.append({'train': [i for i in data_keys if i.split("_")[0] in identifiers_b_train] + [i for i in data_keys if i.split("_")[0] in identifiers_a_train],
- 'val': [i for i in data_keys if i.split("_")[0] in identifiers_a_val] + [i for i in data_keys if
- i.split("_")[
- 0] in identifiers_b_val]})
- save_pickle(splits, existing_splits)
-
-def split_4d_nii(nii_path, split_folder, pat_name=None, add_zeros=False):
-
- # create temporary folder in which the 3d+t file will be split into many 3d files
- temp_base = os.path.dirname(nii_path)
- temp_location = os.path.join(temp_base, 'tmp')
- if not os.path.isdir(temp_location):
- os.mkdir(temp_location)
- os.chdir(temp_location)
-
- if not os.path.isdir(split_folder):
- os.mkdir(split_folder)
- _ = subprocess.call(['fslsplit', nii_path])
-
- # rename files so that the patient's ID is in the filename
- file_list = [f for f in os.listdir(temp_location) if os.path.isfile(f)]
- file_list = sorted(file_list)
-
- if not pat_name:
- pat_name = os.path.basename(os.path.dirname(nii_path))
-
- for ts, temp_file in enumerate(file_list):
- # get time
- time_step = temp_file.split('.')[0][3:]
-        # make sure the time step is a number; otherwise trust Python's sort algorithm
- try:
- int(time_step)
- except:
- time_step = ts
-
- # change filename AND location -> move files
- if add_zeros:
- new_file_name = '{}_{}_0000.nii.gz'.format(pat_name, time_step)
- else:
- new_file_name = '{}_{}.nii.gz'.format(pat_name, time_step)
- os.rename(os.path.join(temp_location, temp_file),
- os.path.join(split_folder, new_file_name))
-
- os.rmdir(temp_location)
-
-def split_4d_parallel(args):
- nii_path, split_folder, pat_name = args
- split_4d_nii(nii_path, split_folder, pat_name)
-
-
-def split_4d_for_all_pat(files_paths, split_folder):
- p = pool.Pool(8)
- p.map(split_4d_parallel,
- zip(files_paths, [split_folder] * len(files_paths), [None] * len(files_paths)))
-
-if __name__ == "__main__":
- task_name = "Task114_heart_MNMs"
- train_dir = "/media/full/97d8d6e1-1aa1-4761-9dd1-fc6a62cf6264/nnUnet_raw/nnUNet_raw_data/{}/imagesTr".format(task_name)
- test_dir = "/media/full/97d8d6e1-1aa1-4761-9dd1-fc6a62cf6264/nnUnet_raw/nnUNet_raw_data/{}/imagesTs".format(task_name)
- #out_dir='/media/full/tera2/output_nnUNet/preprocessed_data/Task114_heart_mnms'
- out_dir='/media/full/97d8d6e1-1aa1-4761-9dd1-fc6a62cf6264/tmp'
-
- # train
- all_train_files = [os.path.join(train_dir, x) for x in os.listdir(train_dir)]
- # test
- all_test_files = [os.path.join(test_dir, x) for x in os.listdir(test_dir)]
-
- data_root = '/media/full/97d8d6e1-1aa1-4761-9dd1-fc6a62cf6264/data/challenges/mms/Training-corrected_original/Labeled'
- files_raw, files_gt = get_mnms_data(data_root=data_root)
- split_path_raw ='/media/full/97d8d6e1-1aa1-4761-9dd1-fc6a62cf6264/data/challenges/mms/temp_split_raw'
- split_path_gt ='/media/full/97d8d6e1-1aa1-4761-9dd1-fc6a62cf6264/data/challenges/mms/temp_split_gt'
- maybe_mkdir_p(split_path_raw)
- maybe_mkdir_p(split_path_gt)
-
- split_4d_for_all_pat(files_raw, split_path_raw)
- split_4d_for_all_pat(files_gt, split_path_gt)
-
- out_dir = '/media/full/97d8d6e1-1aa1-4761-9dd1-fc6a62cf6264/nnUnet_raw/nnUNet_raw_data/{}/'.format(task_name)
-
- maybe_mkdir_p(join(out_dir, "imagesTr"))
- maybe_mkdir_p(join(out_dir, "imagesTs"))
- maybe_mkdir_p(join(out_dir, "labelsTr"))
-
- imagesTr_path = os.path.join(out_dir, "imagesTr")
- labelsTr_path = os.path.join(out_dir, "labelsTr")
- select_annotated_frames_mms(split_path_raw, imagesTr_path, add_zeros=True)
- select_annotated_frames_mms(split_path_gt, labelsTr_path, add_zeros=False)
-
- labelsTr = subfiles(labelsTr_path)
-
-
- json_dict = OrderedDict()
- json_dict['name'] = "M&Ms"
- json_dict['description'] = "short axis cardiac cine MRI segmentation"
- json_dict['tensorImageSize'] = "4D"
- json_dict['reference'] = "Campello, Víctor M. et al.: Multi-Centre, Multi-Vendor & Multi-Disease Cardiac Image Segmentation. In preparation."
- json_dict['licence'] = "see M&Ms challenge"
- json_dict['release'] = "0.0"
- json_dict['modality'] = {
- "0": "MRI",
- }
- # labels differ for ACDC challenge
- json_dict['labels'] = {
- "0": "background",
- "1": "LVBP",
- "2": "LVM",
- "3": "RV"
- }
- json_dict['numTraining'] = len(labelsTr)
- json_dict['numTest'] = 0
- json_dict['training'] = [{'image': "./imagesTr/%s" % i.split("/")[-1], "label": "./labelsTr/%s" % i.split("/")[-1]} for i in
- labelsTr]
- json_dict['test'] = []
-
- save_json(json_dict, os.path.join(out_dir, "dataset.json"))
-
- # then preprocess data and plan training.
- # run in terminal
- # > nnUNet_plan_and_preprocess -t --verify_dataset_integrity
-
- # start training and stop it immediately to get a split.pkl file
- # > nnUNet_train 2d nnUNetTrainerV2_MMS 0
-
- #
- # then create custom splits as used for the final M&Ms submission
- #
-
- split_file_path = '/media/full/97d8d6e1-1aa1-4761-9dd1-fc6a62cf6264/output_nnUNet/preprocessed_data/{}/'.format(task_name)
-
- create_custom_splits_for_experiments(split_file_path)
-
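As a quick reference for the filename convention built by generate_filename_for_nnunet above (mode='mnms'), here is a standalone sketch; the patient ID, timestep, vendor, and centre values are invented for the example.

def mnms_filename(pat_id, ts, vendor=None, centre=None, add_zeros=False, fmt="nii.gz"):
    # Mirrors the format strings above: zero-padded timestep, optional vendor/centre,
    # optional "_0000" modality suffix used by nnU-Net for image files.
    suffix = "_0000" if add_zeros else ""
    if vendor and centre:
        return f"{pat_id}_{str(ts).zfill(4)}_{vendor}_{centre}{suffix}.{fmt}"
    return f"{pat_id}_{str(ts).zfill(4)}{suffix}.{fmt}"

print(mnms_filename("A0S9V9", 10))                          # A0S9V9_0010.nii.gz
print(mnms_filename("A0S9V9", 10, "A", 1, add_zeros=True))  # A0S9V9_0010_A_1_0000.nii.gz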
diff --git a/spaces/hrdtbs/rvc-mochinoa/infer_pack/models_onnx_moess.py b/spaces/hrdtbs/rvc-mochinoa/infer_pack/models_onnx_moess.py
deleted file mode 100644
index 12efb0629a2e3d0d746a34f467254536c2bdbe5f..0000000000000000000000000000000000000000
--- a/spaces/hrdtbs/rvc-mochinoa/infer_pack/models_onnx_moess.py
+++ /dev/null
@@ -1,849 +0,0 @@
-import math, pdb, os
-from time import time as ttime
-import torch
-from torch import nn
-from torch.nn import functional as F
-from infer_pack import modules
-from infer_pack import attentions
-from infer_pack import commons
-from infer_pack.commons import init_weights, get_padding
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-from infer_pack.commons import init_weights
-import numpy as np
-from infer_pack import commons
-
-
-class TextEncoder256(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(256, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
- if f0 == True:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, phone, pitch, lengths):
- if pitch == None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return m, logs, x_mask
-
-
-class TextEncoder256Sim(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(256, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
- if f0 == True:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
-
- def forward(self, phone, pitch, lengths):
- if pitch == None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- x = self.proj(x) * x_mask
- return x, x_mask
-
-
-class ResidualCouplingBlock(nn.Module):
- def __init__(
- self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- n_flows=4,
- gin_channels=0,
- ):
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
- for i in range(n_flows):
- self.flows.append(
- modules.ResidualCouplingLayer(
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- mean_only=True,
- )
- )
- self.flows.append(modules.Flip())
-
- def forward(self, x, x_mask, g=None, reverse=False):
- if not reverse:
- for flow in self.flows:
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
- else:
- for flow in reversed(self.flows):
- x = flow(x, x_mask, g=g, reverse=reverse)
- return x
-
- def remove_weight_norm(self):
- for i in range(self.n_flows):
- self.flows[i * 2].remove_weight_norm()
-
-
-class PosteriorEncoder(nn.Module):
- def __init__(
- self,
- in_channels,
- out_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
-
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
- self.enc = modules.WN(
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, g=None):
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.pre(x) * x_mask
- x = self.enc(x, x_mask, g=g)
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
- return z, m, logs, x_mask
-
- def remove_weight_norm(self):
- self.enc.remove_weight_norm()
-
-
-class Generator(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=0,
- ):
- super(Generator, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- def forward(self, x, g=None):
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-class SineGen(torch.nn.Module):
- """Definition of sine generator
- SineGen(samp_rate, harmonic_num = 0,
- sine_amp = 0.1, noise_std = 0.003,
- voiced_threshold = 0,
- flag_for_pulse=False)
- samp_rate: sampling rate in Hz
- harmonic_num: number of harmonic overtones (default 0)
-    sine_amp: amplitude of the sine waveform (default 0.1)
-    noise_std: std of Gaussian noise (default 0.003)
-    voiced_threshold: F0 threshold for U/V classification (default 0)
-    flag_for_pulse: this SineGen is used inside PulseGen (default False)
- Note: when flag_for_pulse is True, the first time step of a voiced
- segment is always sin(np.pi) or cos(0)
- """
-
- def __init__(
- self,
- samp_rate,
- harmonic_num=0,
- sine_amp=0.1,
- noise_std=0.003,
- voiced_threshold=0,
- flag_for_pulse=False,
- ):
- super(SineGen, self).__init__()
- self.sine_amp = sine_amp
- self.noise_std = noise_std
- self.harmonic_num = harmonic_num
- self.dim = self.harmonic_num + 1
- self.sampling_rate = samp_rate
- self.voiced_threshold = voiced_threshold
-
- def _f02uv(self, f0):
- # generate uv signal
- uv = torch.ones_like(f0)
- uv = uv * (f0 > self.voiced_threshold)
- return uv
-
- def forward(self, f0, upp):
- """sine_tensor, uv = forward(f0)
- input F0: tensor(batchsize=1, length, dim=1)
- f0 for unvoiced steps should be 0
- output sine_tensor: tensor(batchsize=1, length, dim)
- output uv: tensor(batchsize=1, length, 1)
- """
- with torch.no_grad():
- f0 = f0[:, None].transpose(1, 2)
- f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device)
- # fundamental component
- f0_buf[:, :, 0] = f0[:, :, 0]
- for idx in np.arange(self.harmonic_num):
- f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * (
- idx + 2
- ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic
-            rad_values = (f0_buf / self.sampling_rate) % 1  # the % 1 means the n_har products cannot be optimized away in post-processing
- rand_ini = torch.rand(
- f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device
- )
- rand_ini[:, 0] = 0
- rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini
-            tmp_over_one = torch.cumsum(rad_values, 1)  # % 1  # a % 1 here would prevent the later cumsum from being optimized
- tmp_over_one *= upp
- tmp_over_one = F.interpolate(
- tmp_over_one.transpose(2, 1),
- scale_factor=upp,
- mode="linear",
- align_corners=True,
- ).transpose(2, 1)
- rad_values = F.interpolate(
- rad_values.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(
- 2, 1
- ) #######
- tmp_over_one %= 1
- tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0
- cumsum_shift = torch.zeros_like(rad_values)
- cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0
- sine_waves = torch.sin(
- torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi
- )
- sine_waves = sine_waves * self.sine_amp
- uv = self._f02uv(f0)
- uv = F.interpolate(
- uv.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(2, 1)
- noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3
- noise = noise_amp * torch.randn_like(sine_waves)
- sine_waves = sine_waves * uv + noise
- return sine_waves, uv, noise
-
-
-class SourceModuleHnNSF(torch.nn.Module):
- """SourceModule for hn-nsf
- SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1,
- add_noise_std=0.003, voiced_threshod=0)
- sampling_rate: sampling_rate in Hz
- harmonic_num: number of harmonic above F0 (default: 0)
- sine_amp: amplitude of sine source signal (default: 0.1)
- add_noise_std: std of additive Gaussian noise (default: 0.003)
- note that amplitude of noise in unvoiced is decided
- by sine_amp
-    voiced_threshold: threshold to set U/V given F0 (default: 0)
- Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
- F0_sampled (batchsize, length, 1)
- Sine_source (batchsize, length, 1)
- noise_source (batchsize, length 1)
- uv (batchsize, length, 1)
- """
-
- def __init__(
- self,
- sampling_rate,
- harmonic_num=0,
- sine_amp=0.1,
- add_noise_std=0.003,
- voiced_threshod=0,
- is_half=True,
- ):
- super(SourceModuleHnNSF, self).__init__()
-
- self.sine_amp = sine_amp
- self.noise_std = add_noise_std
- self.is_half = is_half
- # to produce sine waveforms
- self.l_sin_gen = SineGen(
- sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod
- )
-
- # to merge source harmonics into a single excitation
- self.l_linear = torch.nn.Linear(harmonic_num + 1, 1)
- self.l_tanh = torch.nn.Tanh()
-
- def forward(self, x, upp=None):
- sine_wavs, uv, _ = self.l_sin_gen(x, upp)
- if self.is_half:
- sine_wavs = sine_wavs.half()
- sine_merge = self.l_tanh(self.l_linear(sine_wavs))
- return sine_merge, None, None # noise, uv
-
-
-class GeneratorNSF(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels,
- sr,
- is_half=False,
- ):
- super(GeneratorNSF, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
-
- self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates))
- self.m_source = SourceModuleHnNSF(
- sampling_rate=sr, harmonic_num=0, is_half=is_half
- )
- self.noise_convs = nn.ModuleList()
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- c_cur = upsample_initial_channel // (2 ** (i + 1))
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
- if i + 1 < len(upsample_rates):
- stride_f0 = np.prod(upsample_rates[i + 1 :])
- self.noise_convs.append(
- Conv1d(
- 1,
- c_cur,
- kernel_size=stride_f0 * 2,
- stride=stride_f0,
- padding=stride_f0 // 2,
- )
- )
- else:
- self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1))
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- self.upp = np.prod(upsample_rates)
-
- def forward(self, x, f0, g=None):
- har_source, noi_source, uv = self.m_source(f0, self.upp)
- har_source = har_source.transpose(1, 2)
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- x_source = self.noise_convs[i](har_source)
- x = x + x_source
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-sr2sr = {
- "32k": 32000,
- "40k": 40000,
- "48k": 48000,
-}
-
-
-class SynthesizerTrnMs256NSFsidM(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr,
- **kwargs
- ):
- super().__init__()
- if type(sr) == type("strr"):
- sr = sr2sr[sr]
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder256(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- self.dec = GeneratorNSF(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- sr=sr,
- is_half=kwargs["is_half"],
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(self, phone, phone_lengths, pitch, nsff0, sid, rnd, max_len=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * rnd) * x_mask
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g)
- return o
-
-
-class SynthesizerTrnMs256NSFsid_sim(nn.Module):
- """
- Synthesizer for Training
- """
-
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- # hop_length,
- gin_channels=0,
- use_sdp=True,
- **kwargs
- ):
- super().__init__()
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder256Sim(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- self.dec = GeneratorNSF(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- is_half=kwargs["is_half"],
- )
-
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(
- self, phone, phone_lengths, pitch, pitchf, ds, max_len=None
-    ):  # y (the spec) is no longer needed here
-        g = self.emb_g(ds.unsqueeze(0)).unsqueeze(-1)  # [b, 256, 1]  # the trailing 1 is the time axis, broadcast later
- x, x_mask = self.enc_p(phone, pitch, phone_lengths)
- x = self.flow(x, x_mask, g=g, reverse=True)
- o = self.dec((x * x_mask)[:, :, :max_len], pitchf, g=g)
- return o
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminator, self).__init__()
- periods = [2, 3, 5, 7, 11, 17]
- # periods = [3, 5, 7, 11, 17, 23, 37]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [
- DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
- ]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = [] #
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- # for j in range(len(fmap_r)):
- # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
- self.convs = nn.ModuleList(
- [
- norm_f(Conv1d(1, 16, 15, 1, padding=7)),
- norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
- norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ]
- )
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- self.use_spectral_norm = use_spectral_norm
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
- self.convs = nn.ModuleList(
- [
- norm_f(
- Conv2d(
- 1,
- 32,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 32,
- 128,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 128,
- 512,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 512,
- 1024,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 1024,
- 1024,
- (kernel_size, 1),
- 1,
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- ]
- )
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
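One detail worth isolating from the deleted file above is the 1D-to-2D fold that DiscriminatorP applies before its 2D convolutions: the waveform is padded to a multiple of the period and reshaped so each row holds one period. A self-contained sketch of just that step (tensor sizes invented, requires only torch):

import torch
import torch.nn.functional as F

def fold_by_period(x, period):
    # x: (batch, channels, time) -> (batch, channels, time // period, period)
    b, c, t = x.shape
    if t % period != 0:  # pad first, as in DiscriminatorP.forward above
        n_pad = period - (t % period)
        x = F.pad(x, (0, n_pad), "reflect")
        t = t + n_pad
    return x.view(b, c, t // period, period)

wave = torch.randn(1, 1, 16000)              # one second of fake mono audio at 16 kHz
print(fold_by_period(wave, period=5).shape)  # torch.Size([1, 1, 3200, 5])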
diff --git a/spaces/huggingface-projects/diffusers-gallery/Dockerfile b/spaces/huggingface-projects/diffusers-gallery/Dockerfile
deleted file mode 100644
index 0ba18d346de09532882673442ee72107556a887d..0000000000000000000000000000000000000000
--- a/spaces/huggingface-projects/diffusers-gallery/Dockerfile
+++ /dev/null
@@ -1,2 +0,0 @@
-FROM nginxinc/nginx-unprivileged:alpine
-COPY . /usr/share/nginx/html
\ No newline at end of file
diff --git a/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/configs/wf42m_pfc02_16gpus_mbf_bs8k.py b/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/configs/wf42m_pfc02_16gpus_mbf_bs8k.py
deleted file mode 100644
index 14a6bb79da7eaa3f111e9efedf507e46a953c9aa..0000000000000000000000000000000000000000
--- a/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/configs/wf42m_pfc02_16gpus_mbf_bs8k.py
+++ /dev/null
@@ -1,27 +0,0 @@
-from easydict import EasyDict as edict
-
-# make training faster
-# our RAM is 256G
-# mount -t tmpfs -o size=140G tmpfs /train_tmp
-
-config = edict()
-config.margin_list = (1.0, 0.0, 0.4)
-config.network = "mbf"
-config.resume = False
-config.output = None
-config.embedding_size = 512
-config.sample_rate = 0.2
-config.fp16 = True
-config.momentum = 0.9
-config.weight_decay = 1e-4
-config.batch_size = 512
-config.lr = 0.4
-config.verbose = 10000
-config.dali = False
-
-config.rec = "/train_tmp/WebFace42M"
-config.num_classes = 2059906
-config.num_image = 42474557
-config.num_epoch = 20
-config.warmup_epoch = 2
-config.val_targets = []
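Reading the filename of the deleted config (wf42m_pfc02_16gpus_mbf_bs8k.py), "bs8k" presumably refers to the global batch size; that interpretation is an assumption drawn from the filename rather than stated in the file, but the arithmetic behind it is simple:

per_gpu_batch_size = 512   # config.batch_size above
num_gpus = 16              # assumed from "16gpus" in the filename
print(per_gpu_batch_size * num_gpus)  # 8192, i.e. the ~8k global batch size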
diff --git a/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/utils/utils_distributed_sampler.py b/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/utils/utils_distributed_sampler.py
deleted file mode 100644
index a7e57275fa17a0a9dbf27fd0eb941dd0fec1823f..0000000000000000000000000000000000000000
--- a/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/utils/utils_distributed_sampler.py
+++ /dev/null
@@ -1,124 +0,0 @@
-import math
-import os
-import random
-
-import numpy as np
-import torch
-import torch.distributed as dist
-from torch.utils.data import DistributedSampler as _DistributedSampler
-
-
-def setup_seed(seed, cuda_deterministic=True):
- torch.manual_seed(seed)
- torch.cuda.manual_seed_all(seed)
- np.random.seed(seed)
- random.seed(seed)
- os.environ["PYTHONHASHSEED"] = str(seed)
- if cuda_deterministic: # slower, more reproducible
- torch.backends.cudnn.deterministic = True
- torch.backends.cudnn.benchmark = False
- else: # faster, less reproducible
- torch.backends.cudnn.deterministic = False
- torch.backends.cudnn.benchmark = True
-
-
-def worker_init_fn(worker_id, num_workers, rank, seed):
-    # The seed of each worker equals
-    # num_workers * rank + worker_id + user_seed
- worker_seed = num_workers * rank + worker_id + seed
- np.random.seed(worker_seed)
- random.seed(worker_seed)
- torch.manual_seed(worker_seed)
-
-
-def get_dist_info():
- if dist.is_available() and dist.is_initialized():
- rank = dist.get_rank()
- world_size = dist.get_world_size()
- else:
- rank = 0
- world_size = 1
-
- return rank, world_size
-
-
-def sync_random_seed(seed=None, device="cuda"):
- """Make sure different ranks share the same seed.
- All workers must call this function, otherwise it will deadlock.
- This method is generally used in `DistributedSampler`,
- because the seed should be identical across all processes
- in the distributed group.
- In distributed sampling, different ranks should sample non-overlapped
- data in the dataset. Therefore, this function is used to make sure that
- each rank shuffles the data indices in the same order based
- on the same seed. Then different ranks could use different indices
- to select non-overlapped data from the same data list.
- Args:
- seed (int, Optional): The seed. Default to None.
- device (str): The device where the seed will be put on.
- Default to 'cuda'.
- Returns:
- int: Seed to be used.
- """
- if seed is None:
- seed = np.random.randint(2**31)
- assert isinstance(seed, int)
-
- rank, world_size = get_dist_info()
-
- if world_size == 1:
- return seed
-
- if rank == 0:
- random_num = torch.tensor(seed, dtype=torch.int32, device=device)
- else:
- random_num = torch.tensor(0, dtype=torch.int32, device=device)
-
- dist.broadcast(random_num, src=0)
-
- return random_num.item()
-
-
-class DistributedSampler(_DistributedSampler):
- def __init__(
- self,
- dataset,
- num_replicas=None, # world_size
- rank=None, # local_rank
- shuffle=True,
- seed=0,
- ):
-
- super().__init__(dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle)
-
- # In distributed sampling, different ranks should sample
- # non-overlapped data in the dataset. Therefore, this function
- # is used to make sure that each rank shuffles the data indices
- # in the same order based on the same seed. Then different ranks
- # could use different indices to select non-overlapped data from the
- # same data list.
- self.seed = sync_random_seed(seed)
-
- def __iter__(self):
- # deterministically shuffle based on epoch
- if self.shuffle:
- g = torch.Generator()
- # When :attr:`shuffle=True`, this ensures all replicas
- # use a different random ordering for each epoch.
- # Otherwise, the next iteration of this sampler will
- # yield the same ordering.
- g.manual_seed(self.epoch + self.seed)
- indices = torch.randperm(len(self.dataset), generator=g).tolist()
- else:
- indices = torch.arange(len(self.dataset)).tolist()
-
- # add extra samples to make it evenly divisible
- # in case that indices is shorter than half of total_size
- indices = (indices * math.ceil(self.total_size / len(indices)))[: self.total_size]
- assert len(indices) == self.total_size
-
- # subsample
- indices = indices[self.rank : self.total_size : self.num_replicas]
- assert len(indices) == self.num_samples
-
- return iter(indices)
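To show how the pieces of the deleted sampler module fit together, here is a hedged single-process sketch. The import path utils_distributed_sampler is an assumption made for illustration (in the repo the module lives under arcface_torch/utils/), and the dataset is a toy tensor.

import functools
import torch
from torch.utils.data import DataLoader, TensorDataset

# Assumed import path; adjust to the actual package layout.
from utils_distributed_sampler import DistributedSampler, setup_seed, worker_init_fn

if __name__ == "__main__":
    setup_seed(0)
    dataset = TensorDataset(torch.arange(32).float())

    rank, world_size = 0, 1  # single process; normally taken from torch.distributed
    sampler = DistributedSampler(dataset, num_replicas=world_size, rank=rank, seed=0)
    loader = DataLoader(
        dataset,
        batch_size=8,
        sampler=sampler,
        num_workers=2,
        worker_init_fn=functools.partial(worker_init_fn, num_workers=2, rank=rank, seed=0),
    )

    for epoch in range(2):
        sampler.set_epoch(epoch)  # deterministic reshuffle each epoch
        for (batch,) in loader:
            pass  # training step would go here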
diff --git a/spaces/iamironman4279/SadTalker/src/facerender/sync_batchnorm/batchnorm.py b/spaces/iamironman4279/SadTalker/src/facerender/sync_batchnorm/batchnorm.py
deleted file mode 100644
index 5f4e763f0366dffa10320116413f8c7181a8aeb1..0000000000000000000000000000000000000000
--- a/spaces/iamironman4279/SadTalker/src/facerender/sync_batchnorm/batchnorm.py
+++ /dev/null
@@ -1,315 +0,0 @@
-# -*- coding: utf-8 -*-
-# File : batchnorm.py
-# Author : Jiayuan Mao
-# Email : maojiayuan@gmail.com
-# Date : 27/01/2018
-#
-# This file is part of Synchronized-BatchNorm-PyTorch.
-# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch
-# Distributed under MIT License.
-
-import collections
-
-import torch
-import torch.nn.functional as F
-
-from torch.nn.modules.batchnorm import _BatchNorm
-from torch.nn.parallel._functions import ReduceAddCoalesced, Broadcast
-
-from .comm import SyncMaster
-
-__all__ = ['SynchronizedBatchNorm1d', 'SynchronizedBatchNorm2d', 'SynchronizedBatchNorm3d']
-
-
-def _sum_ft(tensor):
- """sum over the first and last dimention"""
- return tensor.sum(dim=0).sum(dim=-1)
-
-
-def _unsqueeze_ft(tensor):
- """add new dementions at the front and the tail"""
- return tensor.unsqueeze(0).unsqueeze(-1)
-
-
-_ChildMessage = collections.namedtuple('_ChildMessage', ['sum', 'ssum', 'sum_size'])
-_MasterMessage = collections.namedtuple('_MasterMessage', ['sum', 'inv_std'])
-
-
-class _SynchronizedBatchNorm(_BatchNorm):
- def __init__(self, num_features, eps=1e-5, momentum=0.1, affine=True):
- super(_SynchronizedBatchNorm, self).__init__(num_features, eps=eps, momentum=momentum, affine=affine)
-
- self._sync_master = SyncMaster(self._data_parallel_master)
-
- self._is_parallel = False
- self._parallel_id = None
- self._slave_pipe = None
-
- def forward(self, input):
- # If it is not parallel computation or is in evaluation mode, use PyTorch's implementation.
- if not (self._is_parallel and self.training):
- return F.batch_norm(
- input, self.running_mean, self.running_var, self.weight, self.bias,
- self.training, self.momentum, self.eps)
-
- # Resize the input to (B, C, -1).
- input_shape = input.size()
- input = input.view(input.size(0), self.num_features, -1)
-
- # Compute the sum and square-sum.
- sum_size = input.size(0) * input.size(2)
- input_sum = _sum_ft(input)
- input_ssum = _sum_ft(input ** 2)
-
- # Reduce-and-broadcast the statistics.
- if self._parallel_id == 0:
- mean, inv_std = self._sync_master.run_master(_ChildMessage(input_sum, input_ssum, sum_size))
- else:
- mean, inv_std = self._slave_pipe.run_slave(_ChildMessage(input_sum, input_ssum, sum_size))
-
- # Compute the output.
- if self.affine:
- # MJY:: Fuse the multiplication for speed.
- output = (input - _unsqueeze_ft(mean)) * _unsqueeze_ft(inv_std * self.weight) + _unsqueeze_ft(self.bias)
- else:
- output = (input - _unsqueeze_ft(mean)) * _unsqueeze_ft(inv_std)
-
- # Reshape it.
- return output.view(input_shape)
-
- def __data_parallel_replicate__(self, ctx, copy_id):
- self._is_parallel = True
- self._parallel_id = copy_id
-
- # parallel_id == 0 means master device.
- if self._parallel_id == 0:
- ctx.sync_master = self._sync_master
- else:
- self._slave_pipe = ctx.sync_master.register_slave(copy_id)
-
- def _data_parallel_master(self, intermediates):
- """Reduce the sum and square-sum, compute the statistics, and broadcast it."""
-
- # Always using same "device order" makes the ReduceAdd operation faster.
- # Thanks to:: Tete Xiao (http://tetexiao.com/)
- intermediates = sorted(intermediates, key=lambda i: i[1].sum.get_device())
-
- to_reduce = [i[1][:2] for i in intermediates]
- to_reduce = [j for i in to_reduce for j in i] # flatten
- target_gpus = [i[1].sum.get_device() for i in intermediates]
-
- sum_size = sum([i[1].sum_size for i in intermediates])
- sum_, ssum = ReduceAddCoalesced.apply(target_gpus[0], 2, *to_reduce)
- mean, inv_std = self._compute_mean_std(sum_, ssum, sum_size)
-
- broadcasted = Broadcast.apply(target_gpus, mean, inv_std)
-
- outputs = []
- for i, rec in enumerate(intermediates):
- outputs.append((rec[0], _MasterMessage(*broadcasted[i*2:i*2+2])))
-
- return outputs
-
- def _compute_mean_std(self, sum_, ssum, size):
- """Compute the mean and standard-deviation with sum and square-sum. This method
- also maintains the moving average on the master device."""
- assert size > 1, 'BatchNorm computes unbiased standard-deviation, which requires size > 1.'
- mean = sum_ / size
- sumvar = ssum - sum_ * mean
- unbias_var = sumvar / (size - 1)
- bias_var = sumvar / size
-
- self.running_mean = (1 - self.momentum) * self.running_mean + self.momentum * mean.data
- self.running_var = (1 - self.momentum) * self.running_var + self.momentum * unbias_var.data
-
- return mean, bias_var.clamp(self.eps) ** -0.5
-
-
-class SynchronizedBatchNorm1d(_SynchronizedBatchNorm):
- r"""Applies Synchronized Batch Normalization over a 2d or 3d input that is seen as a
- mini-batch.
-
- .. math::
-
- y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta
-
- This module differs from the built-in PyTorch BatchNorm1d as the mean and
- standard-deviation are reduced across all devices during training.
-
- For example, when one uses `nn.DataParallel` to wrap the network during
-    training, PyTorch's implementation normalizes the tensor on each device using
-    the statistics only on that device, which accelerates the computation and
-    is also easy to implement, but the statistics might be inaccurate.
-    Instead, in this synchronized version, the statistics will be computed
-    over all training samples distributed on multiple devices.
-
-    Note that, for the one-GPU or CPU-only case, this module behaves exactly the same
- as the built-in PyTorch implementation.
-
- The mean and standard-deviation are calculated per-dimension over
- the mini-batches and gamma and beta are learnable parameter vectors
- of size C (where C is the input size).
-
- During training, this layer keeps a running estimate of its computed mean
- and variance. The running sum is kept with a default momentum of 0.1.
-
- During evaluation, this running mean/variance is used for normalization.
-
- Because the BatchNorm is done over the `C` dimension, computing statistics
- on `(N, L)` slices, it's common terminology to call this Temporal BatchNorm
-
- Args:
- num_features: num_features from an expected input of size
- `batch_size x num_features [x width]`
- eps: a value added to the denominator for numerical stability.
- Default: 1e-5
- momentum: the value used for the running_mean and running_var
- computation. Default: 0.1
- affine: a boolean value that when set to ``True``, gives the layer learnable
- affine parameters. Default: ``True``
-
- Shape:
- - Input: :math:`(N, C)` or :math:`(N, C, L)`
- - Output: :math:`(N, C)` or :math:`(N, C, L)` (same shape as input)
-
- Examples:
- >>> # With Learnable Parameters
- >>> m = SynchronizedBatchNorm1d(100)
- >>> # Without Learnable Parameters
- >>> m = SynchronizedBatchNorm1d(100, affine=False)
- >>> input = torch.autograd.Variable(torch.randn(20, 100))
- >>> output = m(input)
- """
-
- def _check_input_dim(self, input):
- if input.dim() != 2 and input.dim() != 3:
- raise ValueError('expected 2D or 3D input (got {}D input)'
- .format(input.dim()))
- super(SynchronizedBatchNorm1d, self)._check_input_dim(input)
-
-
-class SynchronizedBatchNorm2d(_SynchronizedBatchNorm):
- r"""Applies Batch Normalization over a 4d input that is seen as a mini-batch
- of 3d inputs
-
- .. math::
-
- y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta
-
- This module differs from the built-in PyTorch BatchNorm2d as the mean and
- standard-deviation are reduced across all devices during training.
-
- For example, when one uses `nn.DataParallel` to wrap the network during
-    training, PyTorch's implementation normalizes the tensor on each device using
-    the statistics only on that device, which accelerates the computation and
-    is also easy to implement, but the statistics might be inaccurate.
-    Instead, in this synchronized version, the statistics will be computed
-    over all training samples distributed on multiple devices.
-
-    Note that, for the one-GPU or CPU-only case, this module behaves exactly the same
- as the built-in PyTorch implementation.
-
- The mean and standard-deviation are calculated per-dimension over
- the mini-batches and gamma and beta are learnable parameter vectors
- of size C (where C is the input size).
-
- During training, this layer keeps a running estimate of its computed mean
- and variance. The running sum is kept with a default momentum of 0.1.
-
- During evaluation, this running mean/variance is used for normalization.
-
- Because the BatchNorm is done over the `C` dimension, computing statistics
- on `(N, H, W)` slices, it's common terminology to call this Spatial BatchNorm
-
- Args:
- num_features: num_features from an expected input of
- size batch_size x num_features x height x width
- eps: a value added to the denominator for numerical stability.
- Default: 1e-5
- momentum: the value used for the running_mean and running_var
- computation. Default: 0.1
- affine: a boolean value that when set to ``True``, gives the layer learnable
- affine parameters. Default: ``True``
-
- Shape:
- - Input: :math:`(N, C, H, W)`
- - Output: :math:`(N, C, H, W)` (same shape as input)
-
- Examples:
- >>> # With Learnable Parameters
- >>> m = SynchronizedBatchNorm2d(100)
- >>> # Without Learnable Parameters
- >>> m = SynchronizedBatchNorm2d(100, affine=False)
- >>> input = torch.autograd.Variable(torch.randn(20, 100, 35, 45))
- >>> output = m(input)
- """
-
- def _check_input_dim(self, input):
- if input.dim() != 4:
- raise ValueError('expected 4D input (got {}D input)'
- .format(input.dim()))
- super(SynchronizedBatchNorm2d, self)._check_input_dim(input)
-
-
-class SynchronizedBatchNorm3d(_SynchronizedBatchNorm):
- r"""Applies Batch Normalization over a 5d input that is seen as a mini-batch
- of 4d inputs
-
- .. math::
-
- y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta
-
- This module differs from the built-in PyTorch BatchNorm3d as the mean and
- standard-deviation are reduced across all devices during training.
-
- For example, when one uses `nn.DataParallel` to wrap the network during
-    training, PyTorch's implementation normalizes the tensor on each device using
-    the statistics only on that device, which accelerates the computation and
-    is also easy to implement, but the statistics might be inaccurate.
-    Instead, in this synchronized version, the statistics will be computed
-    over all training samples distributed on multiple devices.
-
-    Note that, for the one-GPU or CPU-only case, this module behaves exactly the same
- as the built-in PyTorch implementation.
-
- The mean and standard-deviation are calculated per-dimension over
- the mini-batches and gamma and beta are learnable parameter vectors
- of size C (where C is the input size).
-
- During training, this layer keeps a running estimate of its computed mean
- and variance. The running sum is kept with a default momentum of 0.1.
-
- During evaluation, this running mean/variance is used for normalization.
-
- Because the BatchNorm is done over the `C` dimension, computing statistics
- on `(N, D, H, W)` slices, it's common terminology to call this Volumetric BatchNorm
- or Spatio-temporal BatchNorm
-
- Args:
- num_features: num_features from an expected input of
- size batch_size x num_features x depth x height x width
- eps: a value added to the denominator for numerical stability.
- Default: 1e-5
- momentum: the value used for the running_mean and running_var
- computation. Default: 0.1
- affine: a boolean value that when set to ``True``, gives the layer learnable
- affine parameters. Default: ``True``
-
- Shape:
- - Input: :math:`(N, C, D, H, W)`
- - Output: :math:`(N, C, D, H, W)` (same shape as input)
-
- Examples:
- >>> # With Learnable Parameters
- >>> m = SynchronizedBatchNorm3d(100)
- >>> # Without Learnable Parameters
- >>> m = SynchronizedBatchNorm3d(100, affine=False)
- >>> input = torch.autograd.Variable(torch.randn(20, 100, 35, 45, 10))
- >>> output = m(input)
- """
-
- def _check_input_dim(self, input):
- if input.dim() != 5:
- raise ValueError('expected 5D input (got {}D input)'
- .format(input.dim()))
- super(SynchronizedBatchNorm3d, self)._check_input_dim(input)
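A quick single-process sanity check for the module above: without parallel replication the synchronized layers fall back to F.batch_norm, so they should match torch.nn.BatchNorm2d given identical parameters. The package import path sync_batchnorm is an assumption made for illustration.

import torch
import torch.nn as nn

from sync_batchnorm import SynchronizedBatchNorm2d  # assumed package path

x = torch.randn(4, 3, 8, 8)
sync_bn = SynchronizedBatchNorm2d(3)
ref_bn = nn.BatchNorm2d(3)
ref_bn.load_state_dict(sync_bn.state_dict())  # copy affine params and running stats

# Both layers are in training mode and use per-batch statistics here.
print(torch.allclose(sync_bn(x), ref_bn(x), atol=1e-6))  # expected: True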
diff --git a/spaces/igashov/DiffLinker/src/molecule_builder.py b/spaces/igashov/DiffLinker/src/molecule_builder.py
deleted file mode 100644
index ef7417597c87c4d01b6bb624a814caf6e224707d..0000000000000000000000000000000000000000
--- a/spaces/igashov/DiffLinker/src/molecule_builder.py
+++ /dev/null
@@ -1,102 +0,0 @@
-import torch
-import numpy as np
-
-from rdkit import Chem, Geometry
-
-from src import const
-
-
-def create_conformer(coords):
- conformer = Chem.Conformer()
- for i, (x, y, z) in enumerate(coords):
- conformer.SetAtomPosition(i, Geometry.Point3D(x, y, z))
- return conformer
-
-
-def build_molecules(one_hot, x, node_mask, is_geom, margins=const.MARGINS_EDM):
- molecules = []
- for i in range(len(one_hot)):
- mask = node_mask[i].squeeze() == 1
- atom_types = one_hot[i][mask].argmax(dim=1).detach().cpu()
- positions = x[i][mask].detach().cpu()
- mol = build_molecule(positions, atom_types, is_geom, margins=margins)
- molecules.append(mol)
-
- return molecules
-
-
-def build_molecule(positions, atom_types, is_geom, margins=const.MARGINS_EDM):
- idx2atom = const.GEOM_IDX2ATOM if is_geom else const.IDX2ATOM
- X, A, E = build_xae_molecule(positions, atom_types, is_geom=is_geom, margins=margins)
- mol = Chem.RWMol()
- for atom in X:
- a = Chem.Atom(idx2atom[atom.item()])
- mol.AddAtom(a)
-
- all_bonds = torch.nonzero(A)
- for bond in all_bonds:
- mol.AddBond(bond[0].item(), bond[1].item(), const.BOND_DICT[E[bond[0], bond[1]].item()])
-
- mol.AddConformer(create_conformer(positions.detach().cpu().numpy().astype(np.float64)))
- return mol
-
-
-def build_xae_molecule(positions, atom_types, is_geom, margins=const.MARGINS_EDM):
- """ Returns a triplet (X, A, E): atom_types, adjacency matrix, edge_types
- args:
- positions: N x 3 (already masked to keep the final number of nodes)
- atom_types: N
- returns:
- X: N (int)
- A: N x N (bool) (binary adjacency matrix)
- E: N x N (int) (bond type, 0 if no bond) such that A = E.bool()
- """
- n = positions.shape[0]
- X = atom_types
- A = torch.zeros((n, n), dtype=torch.bool)
- E = torch.zeros((n, n), dtype=torch.int)
-
- idx2atom = const.GEOM_IDX2ATOM if is_geom else const.IDX2ATOM
-
- pos = positions.unsqueeze(0)
- dists = torch.cdist(pos, pos, p=2).squeeze(0)
- for i in range(n):
- for j in range(i):
-
- pair = sorted([atom_types[i], atom_types[j]])
- order = get_bond_order(idx2atom[pair[0].item()], idx2atom[pair[1].item()], dists[i, j], margins=margins)
-
- # TODO: a batched version of get_bond_order to avoid the for loop
- if order > 0:
- # Warning: the graph should be DIRECTED
- A[i, j] = 1
- E[i, j] = order
-
- return X, A, E
-
-
-def get_bond_order(atom1, atom2, distance, check_exists=True, margins=const.MARGINS_EDM):
- distance = 100 * distance # We change the metric: coordinates are in angstroms, the bond-length tables (const.BONDS_*) are in picometers
-
- # Check exists for large molecules where some atom pairs do not have a
- # typical bond length.
- if check_exists:
- if atom1 not in const.BONDS_1:
- return 0
- if atom2 not in const.BONDS_1[atom1]:
- return 0
-
- # margin1, margin2 and margin3 have been tuned to maximize the stability of the QM9 true samples
- if distance < const.BONDS_1[atom1][atom2] + margins[0]:
-
- # Check if atoms in bonds2 dictionary.
- if atom1 in const.BONDS_2 and atom2 in const.BONDS_2[atom1]:
- thr_bond2 = const.BONDS_2[atom1][atom2] + margins[1]
- if distance < thr_bond2:
- if atom1 in const.BONDS_3 and atom2 in const.BONDS_3[atom1]:
- thr_bond3 = const.BONDS_3[atom1][atom2] + margins[2]
- if distance < thr_bond3:
- return 3 # Triple
- return 2 # Double
- return 1 # Single
- return 0 # No bond
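
The cascade of thresholds in `get_bond_order` is easier to follow with concrete numbers. The toy sketch below reproduces the same fall-through logic with made-up bond-length tables and margins; the real tables live in `src.const` (BONDS_1/2/3, MARGINS_EDM) and are not reproduced here, so treat every number below as illustrative only:

```python
# Toy bond-length tables in picometers; values are invented for illustration.
TOY_BONDS_1 = {'C': {'C': 154, 'O': 143}, 'O': {'C': 143}}   # single bonds
TOY_BONDS_2 = {'C': {'C': 134, 'O': 120}}                    # double bonds
TOY_BONDS_3 = {'C': {'C': 120}}                              # triple bonds
MARGINS = (10, 5, 3)                                         # invented margins

def toy_bond_order(atom1, atom2, distance_pm):
    # Same cascade as get_bond_order above: single -> double -> triple.
    if atom1 not in TOY_BONDS_1 or atom2 not in TOY_BONDS_1[atom1]:
        return 0
    if distance_pm >= TOY_BONDS_1[atom1][atom2] + MARGINS[0]:
        return 0  # too long even for a single bond
    if atom1 in TOY_BONDS_2 and atom2 in TOY_BONDS_2[atom1] and \
            distance_pm < TOY_BONDS_2[atom1][atom2] + MARGINS[1]:
        if atom1 in TOY_BONDS_3 and atom2 in TOY_BONDS_3[atom1] and \
                distance_pm < TOY_BONDS_3[atom1][atom2] + MARGINS[2]:
            return 3  # triple
        return 2      # double
    return 1          # single

print(toy_bond_order('C', 'C', 150.0))  # 1: single-bond range only
print(toy_bond_order('C', 'C', 130.0))  # 2: also inside the double-bond range
print(toy_bond_order('C', 'C', 118.0))  # 3: inside the triple-bond range
print(toy_bond_order('C', 'O', 170.0))  # 0: too far apart for any bond
```

Note that `build_xae_molecule` sorts each atom pair before the lookup, so the tables only need one ordering of every element pair.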
diff --git a/spaces/inamXcontru/PoeticTTS/Coolutils Total PDF Converter 6.1.0.195 PDF jamacorni Convert Any PDF File in Batch Mode.md b/spaces/inamXcontru/PoeticTTS/Coolutils Total PDF Converter 6.1.0.195 PDF jamacorni Convert Any PDF File in Batch Mode.md
deleted file mode 100644
index d94ee5436420e0e055aa37bf7057b7473d780221..0000000000000000000000000000000000000000
--- a/spaces/inamXcontru/PoeticTTS/Coolutils Total PDF Converter 6.1.0.195 PDF jamacorni Convert Any PDF File in Batch Mode.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Coolutils Total PDF Converter 6.1.0.195 – PDF jamacorni Download ->->->-> https://gohhs.com/2uz5Q7
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Disconnect Hack Download Mu Online Fixed.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Disconnect Hack Download Mu Online Fixed.md
deleted file mode 100644
index 251517f45f7ae02dcba9f709f12c394f6062a0b1..0000000000000000000000000000000000000000
--- a/spaces/inplisQlawa/anything-midjourney-v4-1/Disconnect Hack Download Mu Online Fixed.md
+++ /dev/null
@@ -1,6 +0,0 @@
-disconnect hack download mu online Download File → https://urlin.us/2uEwJg
-
-Play for free MU Online on our Server. 24/7 up time! High Exp Server, PVP & NON-PVP Servers, Friendly Game Masters, Player Rankings. Since 2006! 1fdad05405
-
-
-
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Initial D Arcade Stage 7 Pc BEST Download.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Initial D Arcade Stage 7 Pc BEST Download.md
deleted file mode 100644
index e5bc1570f3b1cb24afd3fe3ac15651051dfa8aea..0000000000000000000000000000000000000000
--- a/spaces/inplisQlawa/anything-midjourney-v4-1/Initial D Arcade Stage 7 Pc BEST Download.md
+++ /dev/null
@@ -1,40 +0,0 @@
-
-How to Play Initial D Arcade Stage 7 on PC Using TeknoParrot
-If you are a fan of the Initial D manga and anime series, you might have heard of the arcade racing game Initial D Arcade Stage 7 AAX. This game is the seventh installment in the series and features improved graphics, new cars, new courses, and online multiplayer mode. But what if you don't have access to an arcade machine or a RingEdge system? Can you play Initial D Arcade Stage 7 on PC?
-The answer is yes, thanks to a software called TeknoParrot. TeknoParrot is a compatibility layer that allows you to run arcade games on your PC. It supports many games from Sega, Namco, Taito, and other arcade developers. In this article, we will show you how to download and install Initial D Arcade Stage 7 on PC using TeknoParrot.
-Initial D Arcade Stage 7 Pc Download Download File ✅ https://urlin.us/2uEyAI
-What You Need to Play Initial D Arcade Stage 7 on PC
-Before you start, make sure you have the following requirements:
-
-- A PC with Windows 7 or higher, a decent CPU and GPU, and at least 4 GB of RAM.
-- TeknoParrot software. You can download it from here.[^1^]
-- Initial D Arcade Stage 7 AAX game files. You can download them from here.[^3^]
-- Microsoft .NET Framework 4.7.2 or higher. You can download it from here.[^3^]
-- Microsoft DirectX End-User Runtime Web Installer. You can download it from here.[^2^]
-- Microsoft Visual C++ 2010 Redistributable Package (x64) and (x86). You can download them from here and here.[^2^]
-- A controller or a racing wheel. TeknoParrot supports various input devices, such as Xbox 360 controllers, Logitech MOMO Racing Wheels, etc.
-
-How to Install Initial D Arcade Stage 7 on PC
-Once you have all the requirements ready, follow these steps to install Initial D Arcade Stage 7 on PC:
-
-- Extract the TeknoParrot software to a folder of your choice.
-- Extract the Initial D Arcade Stage 7 AAX game files to a folder of your choice.
-- Run TeknoParrotUI.exe as administrator.
-- Click on Add Game and select InitialD7_GLW_RE_SBYD from the list.
-- Click on Game Settings and browse for the game executable (InitialD7_GLW_RE_SBYD.exe) in the folder where you extracted the game files.
-- Adjust the game resolution, window mode, and other settings according to your preference.
-- Click on Save Settings.
-- Click on Controller Setup and configure your controller or racing wheel according to your preference.
-- Click on Save Settings.
-- Click on Test Game to launch Initial D Arcade Stage 7 on PC.
-
-How to Play Initial D Arcade Stage 7 on PC
-To play Initial D Arcade Stage 7 on PC, you need to create a card file that stores your progress and settings. To do this, follow these steps:
-
-- Launch Initial D Arcade Stage 7 on PC using TeknoParrot.
-- Press F2 to enter the test menu.
-- Select Card Management and press Enter.
-- Select Create New Card File and press Enter.
-- Select a card number (1-4) and press Enter d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/inreVtussa/clothingai/Examples/David Hindi Movie 720p.md b/spaces/inreVtussa/clothingai/Examples/David Hindi Movie 720p.md
deleted file mode 100644
index beef94c80935b58ac7cd3cce1ac3541ef2446bb3..0000000000000000000000000000000000000000
--- a/spaces/inreVtussa/clothingai/Examples/David Hindi Movie 720p.md
+++ /dev/null
@@ -1,32 +0,0 @@
-
-```
-David Hindi Movie 720p: A Thrilling Action Drama
-David is a 2013 Hindi movie directed by Bejoy Nambiar and starring Vikram, Neil Nitin Mukesh, Vinay Virmani, Tabu, Lara Dutta and Isha Sharvani. The movie follows the lives of three men named David, who are connected by a common thread of fate.
-The first David (Vikram) is a fisherman in Goa, who falls in love with a deaf and mute girl named Roma (Isha Sharvani). The second David (Neil Nitin Mukesh) is a gangster in London, who works for a notorious crime lord named Ghani (Akhilendra Mishra). The third David (Vinay Virmani) is a musician in Mumbai, who struggles with his religious identity and his relationship with his father (Nasser), a Christian priest.
-David Hindi Movie 720p Download –––––>>> https://tiurll.com/2uCj6p
-The movie explores how each David faces a crisis in his life and how he deals with it. The movie also showcases the different aspects of love, faith, betrayal and redemption. The movie has been praised for its stylish cinematography, music and performances.
-If you are looking for a thrilling action drama with a twist, you should watch David Hindi Movie 720p online. You can download or stream the movie from various platforms such as Netflix, Amazon Prime Video, Hotstar and more. You can also watch the trailer of the movie here:
-David Hindi Movie 720p Trailer
-```
-
-```
-David Hindi Movie 720p has a unique narrative structure, as it switches between the three stories of the three Davids. The movie also has a nonlinear timeline, as it jumps back and forth between different years and locations. The movie uses different color schemes and visual styles to differentiate the three stories and create a distinct mood for each one.
-The movie also has a stellar soundtrack, composed by various artists such as Anirudh Ravichander, Prashant Pillai, Mikey McCleary and Remo Fernandes. The movie features some catchy songs such as "Mast Kalandar", "Dama Dam Mast Kalandar", "Tere Mere Pyaar Ki" and "Yun Hi Re". The movie also has some soulful background scores that enhance the emotional impact of the scenes.
-David Hindi Movie 720p is a movie that will keep you hooked till the end with its gripping plot and engaging characters. The movie has received positive reviews from critics and audiences alike, and has been nominated for several awards. The movie is a must-watch for fans of action, drama and suspense.
-
-```
-
-```
-If you want to know more about David Hindi Movie 720p, you can visit the official website of the movie here:
-David Hindi Movie 720p Official Website
-You can also follow the social media pages of the movie and the cast and crew here:
-
-David Hindi Movie 720p is a movie that you should not miss. It is a movie that will make you think, feel and enjoy. It is a movie that will leave you with a lasting impression. Watch David Hindi Movie 720p online today and experience the thrill of this action drama.
-``` d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/jacob-petterle/cloudtop-deployer/Dockerfile b/spaces/jacob-petterle/cloudtop-deployer/Dockerfile
deleted file mode 100644
index 769ad2b452a235614aecf348a9cf32d67f8c5659..0000000000000000000000000000000000000000
--- a/spaces/jacob-petterle/cloudtop-deployer/Dockerfile
+++ /dev/null
@@ -1,28 +0,0 @@
-FROM python:3.9 as base
-
-WORKDIR app
-
-COPY requirements.txt .
-
-RUN --mount=type=secret,id=SSH_PRIVATE_KEY,mode=0400,required=true \
- mkdir -p /root/.ssh \
- && echo "$(cat /run/secrets/SSH_PRIVATE_KEY)" > /root/.ssh/id_rsa \
- && chmod 600 /root/.ssh/id_rsa \
- && echo "StrictHostKeyChecking no" >> /etc/ssh/ssh_config \
- && pip install --no-cache-dir --upgrade -r requirements.txt
-
-RUN apt-get update -yq \
- && apt-get install -y curl \
- && curl -sL https://deb.nodesource.com/setup_16.x | bash \
- && apt-get install -y nodejs npm
-
-RUN npm install -g aws-cdk \
- && cdk --version
-
-FROM base
-
-COPY . .
-
-RUN chmod 0777 /app
-
-CMD ["streamlit", "run", "streamlit_app.py", "--server.port", "7860"]
\ No newline at end of file
diff --git a/spaces/james-oldfield/PandA/networks/stylegan3/avg_spectra.py b/spaces/james-oldfield/PandA/networks/stylegan3/avg_spectra.py
deleted file mode 100644
index a53a7b3b7be5345477e82b154eb535f75da59b78..0000000000000000000000000000000000000000
--- a/spaces/james-oldfield/PandA/networks/stylegan3/avg_spectra.py
+++ /dev/null
@@ -1,276 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-"""Compare average power spectra between real and generated images,
-or between multiple generators."""
-
-import os
-import numpy as np
-import torch
-import torch.fft
-import scipy.ndimage
-import matplotlib.pyplot as plt
-import click
-import tqdm
-import dnnlib
-
-import legacy
-from training import dataset
-
-#----------------------------------------------------------------------------
-# Setup an iterator for streaming images, in uint8 NCHW format, based on the
-# respective command line options.
-
-def stream_source_images(source, num, seed, device, data_loader_kwargs=None): # => num_images, image_size, image_iter
- ext = source.split('.')[-1].lower()
- if data_loader_kwargs is None:
- data_loader_kwargs = dict(pin_memory=True, num_workers=3, prefetch_factor=2)
-
- if ext == 'pkl':
- if num is None:
- raise click.ClickException('--num is required when --source points to network pickle')
- with dnnlib.util.open_url(source) as f:
- G = legacy.load_network_pkl(f)['G_ema'].to(device)
- def generate_image(seed):
- rnd = np.random.RandomState(seed)
- z = torch.from_numpy(rnd.randn(1, G.z_dim)).to(device)
- c = torch.zeros([1, G.c_dim], device=device)
- if G.c_dim > 0:
- c[:, rnd.randint(G.c_dim)] = 1
- return (G(z=z, c=c) * 127.5 + 128).clamp(0, 255).to(torch.uint8)
- _ = generate_image(seed) # warm up
- image_iter = (generate_image(seed + idx) for idx in range(num))
- return num, G.img_resolution, image_iter
-
- elif ext == 'zip' or os.path.isdir(source):
- dataset_obj = dataset.ImageFolderDataset(path=source, max_size=num, random_seed=seed)
- if num is not None and num != len(dataset_obj):
- raise click.ClickException(f'--source contains fewer than {num} images')
- data_loader = torch.utils.data.DataLoader(dataset_obj, batch_size=1, **data_loader_kwargs)
- image_iter = (image.to(device) for image, _label in data_loader)
- return len(dataset_obj), dataset_obj.resolution, image_iter
-
- else:
- raise click.ClickException('--source must point to network pickle, dataset zip, or directory')
-
-#----------------------------------------------------------------------------
-# Load average power spectrum from the specified .npz file and construct
-# the corresponding heatmap for visualization.
-
-def construct_heatmap(npz_file, smooth):
- npz_data = np.load(npz_file)
- spectrum = npz_data['spectrum']
- image_size = npz_data['image_size']
- hmap = np.log10(spectrum) * 10 # dB
- hmap = np.fft.fftshift(hmap)
- hmap = np.concatenate([hmap, hmap[:1, :]], axis=0)
- hmap = np.concatenate([hmap, hmap[:, :1]], axis=1)
- if smooth > 0:
- sigma = spectrum.shape[0] / image_size * smooth
- hmap = scipy.ndimage.gaussian_filter(hmap, sigma=sigma, mode='nearest')
- return hmap, image_size
-
-#----------------------------------------------------------------------------
-
-@click.group()
-def main():
- """Compare average power spectra between real and generated images,
- or between multiple generators.
-
- Example:
-
- \b
- # Calculate dataset mean and std, needed in subsequent steps.
- python avg_spectra.py stats --source=~/datasets/ffhq-1024x1024.zip
-
- \b
- # Calculate average spectrum for the training data.
- python avg_spectra.py calc --source=~/datasets/ffhq-1024x1024.zip \\
- --dest=tmp/training-data.npz --mean=112.684 --std=69.509
-
- \b
- # Calculate average spectrum for a pre-trained generator.
- python avg_spectra.py calc \\
- --source=https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan3/versions/1/files/stylegan3-r-ffhq-1024x1024.pkl \\
- --dest=tmp/stylegan3-r.npz --mean=112.684 --std=69.509 --num=70000
-
- \b
- # Display results.
- python avg_spectra.py heatmap tmp/training-data.npz
- python avg_spectra.py heatmap tmp/stylegan3-r.npz
- python avg_spectra.py slices tmp/training-data.npz tmp/stylegan3-r.npz
-
- \b
- # Save as PNG.
- python avg_spectra.py heatmap tmp/training-data.npz --save=tmp/training-data.png --dpi=300
- python avg_spectra.py heatmap tmp/stylegan3-r.npz --save=tmp/stylegan3-r.png --dpi=300
- python avg_spectra.py slices tmp/training-data.npz tmp/stylegan3-r.npz --save=tmp/slices.png --dpi=300
- """
-
-#----------------------------------------------------------------------------
-
-@main.command()
-@click.option('--source', help='Network pkl, dataset zip, or directory', metavar='[PKL|ZIP|DIR]', required=True)
-@click.option('--num', help='Number of images to process [default: all]', metavar='INT', type=click.IntRange(min=1))
-@click.option('--seed', help='Random seed for selecting the images', metavar='INT', type=click.IntRange(min=0), default=0, show_default=True)
-def stats(source, num, seed, device=torch.device('cuda')):
- """Calculate dataset mean and standard deviation needed by 'calc'."""
- torch.multiprocessing.set_start_method('spawn')
- num_images, _image_size, image_iter = stream_source_images(source=source, num=num, seed=seed, device=device)
-
- # Accumulate moments.
- moments = torch.zeros([3], dtype=torch.float64, device=device)
- for image in tqdm.tqdm(image_iter, total=num_images):
- image = image.to(torch.float64)
- moments += torch.stack([torch.ones_like(image).sum(), image.sum(), image.square().sum()])
- moments = moments / moments[0]
-
- # Compute mean and standard deviation.
- mean = moments[1]
- std = (moments[2] - moments[1].square()).sqrt()
- print(f'--mean={mean:g} --std={std:g}')
-
-#----------------------------------------------------------------------------
-
-@main.command()
-@click.option('--source', help='Network pkl, dataset zip, or directory', metavar='[PKL|ZIP|DIR]', required=True)
-@click.option('--dest', help='Where to store the result', metavar='NPZ', required=True)
-@click.option('--mean', help='Dataset mean for whitening', metavar='FLOAT', type=float, required=True)
-@click.option('--std', help='Dataset standard deviation for whitening', metavar='FLOAT', type=click.FloatRange(min=0), required=True)
-@click.option('--num', help='Number of images to process [default: all]', metavar='INT', type=click.IntRange(min=1))
-@click.option('--seed', help='Random seed for selecting the images', metavar='INT', type=click.IntRange(min=0), default=0, show_default=True)
-@click.option('--beta', help='Shape parameter for the Kaiser window', metavar='FLOAT', type=click.FloatRange(min=0), default=8, show_default=True)
-@click.option('--interp', help='Frequency-domain interpolation factor', metavar='INT', type=click.IntRange(min=1), default=4, show_default=True)
-def calc(source, dest, mean, std, num, seed, beta, interp, device=torch.device('cuda')):
- """Calculate average power spectrum and store it in .npz file."""
- torch.multiprocessing.set_start_method('spawn')
- num_images, image_size, image_iter = stream_source_images(source=source, num=num, seed=seed, device=device)
- spectrum_size = image_size * interp
- padding = spectrum_size - image_size
-
- # Setup window function.
- window = torch.kaiser_window(image_size, periodic=False, beta=beta, device=device)
- window *= window.square().sum().rsqrt()
- window = window.ger(window).unsqueeze(0).unsqueeze(1)
-
- # Accumulate power spectrum.
- spectrum = torch.zeros([spectrum_size, spectrum_size], dtype=torch.float64, device=device)
- for image in tqdm.tqdm(image_iter, total=num_images):
- image = (image.to(torch.float64) - mean) / std
- image = torch.nn.functional.pad(image * window, [0, padding, 0, padding])
- spectrum += torch.fft.fftn(image, dim=[2,3]).abs().square().mean(dim=[0,1])
- spectrum /= num_images
-
- # Save result.
- if os.path.dirname(dest):
- os.makedirs(os.path.dirname(dest), exist_ok=True)
- np.savez(dest, spectrum=spectrum.cpu().numpy(), image_size=image_size)
-
-#----------------------------------------------------------------------------
-
-@main.command()
-@click.argument('npz-file', nargs=1)
-@click.option('--save', help='Save the plot and exit', metavar='[PNG|PDF|...]')
-@click.option('--dpi', help='Figure resolution', metavar='FLOAT', type=click.FloatRange(min=1), default=100, show_default=True)
-@click.option('--smooth', help='Amount of smoothing', metavar='FLOAT', type=click.FloatRange(min=0), default=1.25, show_default=True)
-def heatmap(npz_file, save, smooth, dpi):
- """Visualize 2D heatmap based on the given .npz file."""
- hmap, image_size = construct_heatmap(npz_file=npz_file, smooth=smooth)
-
- # Setup plot.
- plt.figure(figsize=[6, 4.8], dpi=dpi, tight_layout=True)
- freqs = np.linspace(-0.5, 0.5, num=hmap.shape[0], endpoint=True) * image_size
- ticks = np.linspace(freqs[0], freqs[-1], num=5, endpoint=True)
- levels = np.linspace(-40, 20, num=13, endpoint=True)
-
- # Draw heatmap.
- plt.xlim(ticks[0], ticks[-1])
- plt.ylim(ticks[0], ticks[-1])
- plt.xticks(ticks)
- plt.yticks(ticks)
- plt.contourf(freqs, freqs, hmap, levels=levels, extend='both', cmap='Blues')
- plt.gca().set_aspect('equal')
- plt.colorbar(ticks=levels)
- plt.contour(freqs, freqs, hmap, levels=levels, extend='both', linestyles='solid', linewidths=1, colors='midnightblue', alpha=0.2)
-
- # Display or save.
- if save is None:
- plt.show()
- else:
- if os.path.dirname(save):
- os.makedirs(os.path.dirname(save), exist_ok=True)
- plt.savefig(save)
-
-#----------------------------------------------------------------------------
-
-@main.command()
-@click.argument('npz-files', nargs=-1, required=True)
-@click.option('--save', help='Save the plot and exit', metavar='[PNG|PDF|...]')
-@click.option('--dpi', help='Figure resolution', metavar='FLOAT', type=click.FloatRange(min=1), default=100, show_default=True)
-@click.option('--smooth', help='Amount of smoothing', metavar='FLOAT', type=click.FloatRange(min=0), default=0, show_default=True)
-def slices(npz_files, save, dpi, smooth):
- """Visualize 1D slices based on the given .npz files."""
- cases = [dnnlib.EasyDict(npz_file=npz_file) for npz_file in npz_files]
- for c in cases:
- c.hmap, c.image_size = construct_heatmap(npz_file=c.npz_file, smooth=smooth)
- c.label = os.path.splitext(os.path.basename(c.npz_file))[0]
-
- # Check consistency.
- image_size = cases[0].image_size
- hmap_size = cases[0].hmap.shape[0]
- if any(c.image_size != image_size or c.hmap.shape[0] != hmap_size for c in cases):
- raise click.ClickException('All .npz must have the same resolution')
-
- # Setup plot.
- plt.figure(figsize=[12, 4.6], dpi=dpi, tight_layout=True)
- hmap_center = hmap_size // 2
- hmap_range = np.arange(hmap_center, hmap_size)
- freqs0 = np.linspace(0, image_size / 2, num=(hmap_size // 2 + 1), endpoint=True)
- freqs45 = np.linspace(0, image_size / np.sqrt(2), num=(hmap_size // 2 + 1), endpoint=True)
- xticks0 = np.linspace(freqs0[0], freqs0[-1], num=9, endpoint=True)
- xticks45 = np.round(np.linspace(freqs45[0], freqs45[-1], num=9, endpoint=True))
- yticks = np.linspace(-50, 30, num=9, endpoint=True)
-
- # Draw 0 degree slice.
- plt.subplot(1, 2, 1)
- plt.title('0\u00b0 slice')
- plt.xlim(xticks0[0], xticks0[-1])
- plt.ylim(yticks[0], yticks[-1])
- plt.xticks(xticks0)
- plt.yticks(yticks)
- for c in cases:
- plt.plot(freqs0, c.hmap[hmap_center, hmap_range], label=c.label)
- plt.grid()
- plt.legend(loc='upper right')
-
- # Draw 45 degree slice.
- plt.subplot(1, 2, 2)
- plt.title('45\u00b0 slice')
- plt.xlim(xticks45[0], xticks45[-1])
- plt.ylim(yticks[0], yticks[-1])
- plt.xticks(xticks45)
- plt.yticks(yticks)
- for c in cases:
- plt.plot(freqs45, c.hmap[hmap_range, hmap_range], label=c.label)
- plt.grid()
- plt.legend(loc='upper right')
-
- # Display or save.
- if save is None:
- plt.show()
- else:
- if os.path.dirname(save):
- os.makedirs(os.path.dirname(save), exist_ok=True)
- plt.savefig(save)
-
-#----------------------------------------------------------------------------
-
-if __name__ == "__main__":
- main() # pylint: disable=no-value-for-parameter
-
-#----------------------------------------------------------------------------
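
The `calc` command above boils down to a short recipe: whiten each image with the dataset mean and std, multiply by a unit-energy 2D Kaiser window, zero-pad by the interpolation factor, and average the squared FFT magnitudes. A compact restatement of that recipe (a sketch for intuition, not a drop-in replacement for `calc`; the function name and the toy inputs are made up here) looks like this:

```python
import torch

def average_power_spectrum(images, mean, std, beta=8.0, interp=4):
    """images: (N, C, H, W) float tensor with H == W."""
    size = images.shape[-1]
    pad = size * interp - size                                 # zero-padding for frequency interpolation
    window = torch.kaiser_window(size, periodic=False, beta=beta, dtype=torch.float64)
    window = window * window.square().sum().rsqrt()            # unit-energy 1D window
    window = window.ger(window)[None, None]                    # separable 2D window, (1, 1, H, W)
    x = (images.to(torch.float64) - mean) / std                # whiten with dataset statistics
    x = torch.nn.functional.pad(x * window, [0, pad, 0, pad])
    return torch.fft.fftn(x, dim=[2, 3]).abs().square().mean(dim=[0, 1])

imgs = torch.rand(8, 3, 32, 32) * 255
spec = average_power_spectrum(imgs, mean=imgs.mean(), std=imgs.std())
print(spec.shape)  # torch.Size([128, 128])
```

The `heatmap` and `slices` commands then only reshape and plot this 2D array, converted to dB via `10 * log10`.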
diff --git a/spaces/jamescalam/dream-cacher/app.py b/spaces/jamescalam/dream-cacher/app.py
deleted file mode 100644
index 89b317529c205deecf584ee2c1c27dd2ba9416a5..0000000000000000000000000000000000000000
--- a/spaces/jamescalam/dream-cacher/app.py
+++ /dev/null
@@ -1,252 +0,0 @@
-import gradio as gr
-from diffusers import StableDiffusionPipeline
-import torch
-import io
-from PIL import Image
-import os
-from cryptography.fernet import Fernet
-from google.cloud import storage
-import pinecone
-import json
-import uuid
-import pandas as pd
-
-# decrypt Storage Cloud credentials
-fernet = Fernet(os.environ['DECRYPTION_KEY'])
-
-with open('cloud-storage.encrypted', 'rb') as fp:
- encrypted = fp.read()
- creds = json.loads(fernet.decrypt(encrypted).decode())
-# then save creds to file
-with open('cloud-storage.json', 'w', encoding='utf-8') as fp:
- fp.write(json.dumps(creds, indent=4))
-# connect to Cloud Storage
-os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = 'cloud-storage.json'
-storage_client = storage.Client()
-bucket = storage_client.get_bucket('hf-diffusion-images')
-
-# get api key for pinecone auth
-PINECONE_KEY = os.environ['PINECONE_KEY']
-
-index_id = "hf-diffusion"
-
-# init connection to pinecone
-pinecone.init(
- api_key=PINECONE_KEY,
- environment="us-west1-gcp"
-)
-if index_id not in pinecone.list_indexes():
- raise ValueError(f"Index '{index_id}' not found")
-
-index = pinecone.Index(index_id)
-
-device = 'cuda' if torch.cuda.is_available() else 'cpu'
-print(f"Using '{device}' device...")
-
-# init all of the models and move them to a given GPU
-pipe = StableDiffusionPipeline.from_pretrained(
- "CompVis/stable-diffusion-v1-4", use_auth_token=os.environ['HF_AUTH']
-)
-pipe.to(device)
-
-missing_im = Image.open('missing.png')
-threshold = 0.85
-
-def encode_text(text: str):
- text_inputs = pipe.tokenizer(
- text, return_tensors='pt'
- ).to(device)
- text_embeds = pipe.text_encoder(**text_inputs)
- text_embeds = text_embeds.pooler_output.cpu().tolist()[0]
- return text_embeds
-
-def prompt_query(text: str):
- print(f"Running prompt_query('{text}')")
- embeds = encode_text(text)
- try:
- print("Try query pinecone")
- xc = index.query(embeds, top_k=30, include_metadata=True)
- print("query successful")
- except Exception as e:
- print(f"Error during query: {e}")
- # reinitialize connection
- print("Try reinitialize Pinecone connection")
- pinecone.init(api_key=PINECONE_KEY, environment='us-west1-gcp')
- index2 = pinecone.Index(index_id)
- try:
- print("Now try querying pinecone again")
- xc = index2.query(embeds, top_k=30, include_metadata=True)
- print("query successful")
- except Exception as e:
- raise ValueError(e)
- prompts = [
- match['metadata']['prompt'] for match in xc['matches']
- ]
- scores = [round(match['score'], 2) for match in xc['matches']]
- # deduplicate while preserving order
- df = pd.DataFrame({'Similarity': scores, 'Prompt': prompts})
- df = df.drop_duplicates(subset='Prompt', keep='first')
- df = df[df['Prompt'].str.len() > 7].head()
- return df
-
-def diffuse(text: str):
- # diffuse
- out = pipe(text)
- if any(out.nsfw_content_detected):
- return {}
- else:
- _id = str(uuid.uuid4())
- # add image to Cloud Storage
- im = out.images[0]
- im.save(f'{_id}.png', format='png')
- added_gcp = False
- # push to storage
- try:
- print("try push to Cloud Storage")
- blob = bucket.blob(f'images/{_id}.png')
- print("try upload_from_filename")
- blob.upload_from_filename(f'{_id}.png')
- added_gcp = True
- # add embedding and metadata to Pinecone
- embeds = encode_text(text)
- meta = {
- 'prompt': text,
- 'image_url': f'images/{_id}.png'
- }
- try:
- print("now try upsert to pinecone")
- index.upsert([(_id, embeds, meta)])
- print("upsert successful")
- except Exception as e:
- try:
- print("hit exception, now trying to reinit Pinecone connection")
- pinecone.init(api_key=PINECONE_KEY, environment='us-west1-gcp')
- index2 = pinecone.Index(index_id)
- print(f"reconnected to pinecone '{index_id}' index")
- index2.upsert([(_id, embeds, meta)])
- print("upsert successful")
- except Exception as e:
- print(f"PINECONE_ERROR: {e}")
- except Exception as e:
- print(f"ERROR: New image not uploaded due to error with {'Pinecone' if added_gcp else 'Cloud Storage'}")
- # delete local file
- os.remove(f'{_id}.png')
- return out.images[0]
-
-def get_image(url: str):
- blob = bucket.blob(url).download_as_string()
- blob_bytes = io.BytesIO(blob)
- im = Image.open(blob_bytes)
- return im
-
-def test_image(_id, image):
- try:
- image.save('tmp.png')
- return True
- except OSError:
- # delete corrupted file from pinecone and cloud
- index.delete(ids=[_id])
- bucket.blob(f"images/{_id}.png").delete()
- print(f"DELETED '{_id}'")
- return False
-
-def prompt_image(text: str):
- print(f"prompt_image('{text}')")
- embeds = encode_text(text)
- try:
- print("try query pinecone")
- xc = index.query(embeds, top_k=9, include_metadata=True)
- except Exception as e:
- print(f"Error during query: {e}")
- # reinitialize connection
- pinecone.init(api_key=PINECONE_KEY, environment='us-west1-gcp')
- index2 = pinecone.Index(index_id)
- try:
- print("try query pinecone after reinit")
- xc = index2.query(embeds, top_k=9, include_metadata=True)
- except Exception as e:
- raise ValueError(e)
- image_urls = [
- match['metadata']['image_url'] for match in xc['matches']
- ]
- scores = [match['score'] for match in xc['matches']]
- ids = [match['id'] for match in xc['matches']]
- images = []
- print("Begin looping through (ids, image_urls)")
- for _id, image_url in zip(ids, image_urls):
- try:
- print("download_as_string from GCP")
- blob = bucket.blob(image_url).download_as_string()
- print("downloaded successfully")
- blob_bytes = io.BytesIO(blob)
- im = Image.open(blob_bytes)
- print("image opened successfully")
- if test_image(_id, im):
- images.append(im)
- print("image accessible")
- else:
- images.append(missing_im)
- print("image NOT accessible")
- except ValueError:
- print(f"ValueError: '{image_url}'")
- return images, scores
-
-# __APP FUNCTIONS__
-
-def set_suggestion(text: str):
- return gr.TextArea.update(value=text[0])
-
-def set_images(text: str):
- images, scores = prompt_image(text)
- match_found = False
- for score in scores:
- if score > threshold:
- match_found = True
- if match_found:
- print("MATCH FOUND")
- return gr.Gallery.update(value=images)
- else:
- print("NO MATCH FOUND")
- diffuse(text)
- print(f"diffusion for '{text}' complete")
- images, scores = prompt_image(text)
- return gr.Gallery.update(value=images)
-
-# __CREATE APP__
-demo = gr.Blocks()
-
-with demo:
- gr.Markdown(
- """
- # Dream Cacher
- """
- )
- with gr.Row():
- with gr.Column():
- prompt = gr.TextArea(
- value="A person surfing",
- placeholder="Enter a prompt to dream about",
- interactive=True
- )
- search = gr.Button(value="Search!")
- suggestions = gr.Dataframe(
- values=[],
- headers=['Similarity', 'Prompt']
- )
- # event listener for change in prompt
- prompt.change(
- prompt_query, prompt, suggestions,
- show_progress=False
- )
-
- # results column
- with gr.Column():
- pics = gr.Gallery()
- pics.style(grid=3)
- # search event listening
- try:
- search.click(set_images, prompt, pics)
- except OSError:
- print("OSError")
-
-demo.launch()
\ No newline at end of file
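
`set_images` above implements a retrieval-then-generate pattern: embed the prompt, look for a sufficiently similar cached result, and only run the diffusion pipeline (and store the new image) on a miss. Stripped of Gradio, Pinecone and Cloud Storage, the pattern reduces to the sketch below; `DreamCache`, `fake_embed` and `fake_generate` are invented stand-ins for the CLIP text encoder and the Stable Diffusion pipeline, not part of the app:

```python
import torch

class DreamCache:
    def __init__(self, embed, generate, threshold=0.85):
        self.embed, self.generate, self.threshold = embed, generate, threshold
        self.keys, self.values = [], []   # parallel lists: prompt embeddings and cached results

    def query(self, prompt):
        q = torch.nn.functional.normalize(self.embed(prompt), dim=-1)
        if self.keys:
            sims = torch.stack(self.keys) @ q                # cosine similarity to every cached prompt
            best = int(sims.argmax())
            if sims[best] > self.threshold:
                return self.values[best], float(sims[best])  # hit: reuse the stored result
        result = self.generate(prompt)                       # miss: run the expensive generator
        self.keys.append(q)
        self.values.append(result)
        return result, 1.0

# Tiny demo with deterministic stand-ins (the toy encoder only looks at the first 8 characters).
fake_embed = lambda text: torch.tensor([float(ord(c)) for c in text[:8].ljust(8)])
fake_generate = lambda text: f"<image for '{text}'>"
cache = DreamCache(fake_embed, fake_generate)
print(cache.query("a person surfing"))   # generated on the first call
print(cache.query("a person surfing"))   # served from the cache on the second call
```

The app's `threshold = 0.85` plays the same role here: scores above it are treated as a match and skip diffusion entirely.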
diff --git a/spaces/jbetker/tortoise/tortoise/models/autoregressive.py b/spaces/jbetker/tortoise/tortoise/models/autoregressive.py
deleted file mode 100644
index 757a7a8555b3bbc1ca0cff9c38cf0d8699c0c4b7..0000000000000000000000000000000000000000
--- a/spaces/jbetker/tortoise/tortoise/models/autoregressive.py
+++ /dev/null
@@ -1,511 +0,0 @@
-import functools
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from transformers import GPT2Config, GPT2PreTrainedModel, LogitsProcessorList
-from transformers.modeling_outputs import CausalLMOutputWithCrossAttentions
-from transformers.utils.model_parallel_utils import get_device_map, assert_device_map
-from tortoise.models.arch_util import AttentionBlock
-from tortoise.utils.typical_sampling import TypicalLogitsWarper
-
-
-def null_position_embeddings(range, dim):
- return torch.zeros((range.shape[0], range.shape[1], dim), device=range.device)
-
-
-class ResBlock(nn.Module):
- """
- Basic residual convolutional block that uses GroupNorm.
- """
- def __init__(self, chan):
- super().__init__()
- self.net = nn.Sequential(
- nn.Conv1d(chan, chan, kernel_size=3, padding=1),
- nn.GroupNorm(chan//8, chan),
- nn.ReLU(),
- nn.Conv1d(chan, chan, kernel_size=3, padding=1),
- nn.GroupNorm(chan//8, chan)
- )
-
- def forward(self, x):
- return F.relu(self.net(x) + x)
-
-
-class GPT2InferenceModel(GPT2PreTrainedModel):
- def __init__(self, config, gpt, text_pos_emb, embeddings, norm, linear):
- super().__init__(config)
- self.transformer = gpt
- self.text_pos_embedding = text_pos_emb
- self.embeddings = embeddings
- self.lm_head = nn.Sequential(norm, linear)
-
- # Model parallel
- self.model_parallel = False
- self.device_map = None
- self.cached_mel_emb = None
-
- def parallelize(self, device_map=None):
- self.device_map = (
- get_device_map(len(self.transformer.h), range(torch.cuda.device_count()))
- if device_map is None
- else device_map
- )
- assert_device_map(self.device_map, len(self.transformer.h))
- self.transformer.parallelize(self.device_map)
- self.lm_head = self.lm_head.to(self.transformer.first_device)
- self.model_parallel = True
-
- def deparallelize(self):
- self.transformer.deparallelize()
- self.transformer = self.transformer.to("cpu")
- self.lm_head = self.lm_head.to("cpu")
- self.model_parallel = False
- torch.cuda.empty_cache()
-
- def get_output_embeddings(self):
- return self.lm_head
-
- def set_output_embeddings(self, new_embeddings):
- self.lm_head = new_embeddings
-
- def store_mel_emb(self, mel_emb):
- self.cached_mel_emb = mel_emb
-
- def prepare_inputs_for_generation(self, input_ids, past=None, **kwargs):
-
- token_type_ids = kwargs.get("token_type_ids", None)
- # only last token for inputs_ids if past is defined in kwargs
- if past:
- input_ids = input_ids[:, -1].unsqueeze(-1)
- if token_type_ids is not None:
- token_type_ids = token_type_ids[:, -1].unsqueeze(-1)
-
- attention_mask = kwargs.get("attention_mask", None)
- position_ids = kwargs.get("position_ids", None)
-
- if attention_mask is not None and position_ids is None:
- # create position_ids on the fly for batch generation
- position_ids = attention_mask.long().cumsum(-1) - 1
- position_ids.masked_fill_(attention_mask == 0, 1)
- if past:
- position_ids = position_ids[:, -1].unsqueeze(-1)
- else:
- position_ids = None
- return {
- "input_ids": input_ids,
- "past_key_values": past,
- "use_cache": kwargs.get("use_cache"),
- "position_ids": position_ids,
- "attention_mask": attention_mask,
- "token_type_ids": token_type_ids,
- }
-
- def forward(
- self,
- input_ids=None,
- past_key_values=None,
- attention_mask=None,
- token_type_ids=None,
- position_ids=None,
- head_mask=None,
- inputs_embeds=None,
- encoder_hidden_states=None,
- encoder_attention_mask=None,
- labels=None,
- use_cache=None,
- output_attentions=None,
- output_hidden_states=None,
- return_dict=None,
- ):
- assert self.cached_mel_emb is not None
- assert inputs_embeds is None # Not supported by this inference model.
- assert labels is None # Training not supported by this inference model.
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- # Create embedding
- mel_len = self.cached_mel_emb.shape[1]
- if input_ids.shape[1] != 1:
- text_inputs = input_ids[:, mel_len:]
- text_emb = self.embeddings(text_inputs)
- text_emb = text_emb + self.text_pos_embedding(text_emb)
- if self.cached_mel_emb.shape[0] != text_emb.shape[0]:
- mel_emb = self.cached_mel_emb.repeat_interleave(text_emb.shape[0]//self.cached_mel_emb.shape[0], 0)
- else:
- mel_emb = self.cached_mel_emb
- emb = torch.cat([mel_emb, text_emb], dim=1)
- else:
- emb = self.embeddings(input_ids)
- emb = emb + self.text_pos_embedding.get_fixed_embedding(attention_mask.shape[1]-mel_len, attention_mask.device)
-
- transformer_outputs = self.transformer(
- inputs_embeds=emb,
- past_key_values=past_key_values,
- attention_mask=attention_mask,
- token_type_ids=token_type_ids,
- position_ids=position_ids,
- head_mask=head_mask,
- encoder_hidden_states=encoder_hidden_states,
- encoder_attention_mask=encoder_attention_mask,
- use_cache=use_cache,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
- hidden_states = transformer_outputs[0]
-
- # Set device for model parallelism
- if self.model_parallel:
- torch.cuda.set_device(self.transformer.first_device)
- hidden_states = hidden_states.to(self.lm_head.weight.device)
-
- lm_logits = self.lm_head(hidden_states)
-
- if not return_dict:
- return (lm_logits,) + transformer_outputs[1:]
-
- return CausalLMOutputWithCrossAttentions(
- loss=None,
- logits=lm_logits,
- past_key_values=transformer_outputs.past_key_values,
- hidden_states=transformer_outputs.hidden_states,
- attentions=transformer_outputs.attentions,
- cross_attentions=transformer_outputs.cross_attentions,
- )
-
- @staticmethod
- def _reorder_cache(past, beam_idx):
- """
- This function is used to re-order the :obj:`past_key_values` cache if
- :meth:`~transformers.PreTrainedModel.beam_search` or :meth:`~transformers.PreTrainedModel.beam_sample` is
- called. This is required to match :obj:`past_key_values` with the correct beam_idx at every generation step.
- """
- return tuple(
- tuple(past_state.index_select(0, beam_idx.to(past_state.device)) for past_state in layer_past)
- for layer_past in past
- )
-
-
-class ConditioningEncoder(nn.Module):
- def __init__(self,
- spec_dim,
- embedding_dim,
- attn_blocks=6,
- num_attn_heads=4,
- do_checkpointing=False,
- mean=False):
- super().__init__()
- attn = []
- self.init = nn.Conv1d(spec_dim, embedding_dim, kernel_size=1)
- for a in range(attn_blocks):
- attn.append(AttentionBlock(embedding_dim, num_attn_heads))
- self.attn = nn.Sequential(*attn)
- self.dim = embedding_dim
- self.do_checkpointing = do_checkpointing
- self.mean = mean
-
- def forward(self, x):
- h = self.init(x)
- h = self.attn(h)
- if self.mean:
- return h.mean(dim=2)
- else:
- return h[:, :, 0]
-
-
-class LearnedPositionEmbeddings(nn.Module):
- def __init__(self, seq_len, model_dim, init=.02):
- super().__init__()
- self.emb = nn.Embedding(seq_len, model_dim)
- # Initializing this way is standard for GPT-2
- self.emb.weight.data.normal_(mean=0.0, std=init)
-
- def forward(self, x):
- sl = x.shape[1]
- return self.emb(torch.arange(0, sl, device=x.device))
-
- def get_fixed_embedding(self, ind, dev):
- return self.emb(torch.tensor([ind], device=dev)).unsqueeze(0)
-
-
-def build_hf_gpt_transformer(layers, model_dim, heads, max_mel_seq_len, max_text_seq_len, checkpointing):
- """
- GPT-2 implemented by the HuggingFace library.
- """
- from transformers import GPT2Config, GPT2Model
- gpt_config = GPT2Config(vocab_size=256, # Unused.
- n_positions=max_mel_seq_len+max_text_seq_len,
- n_ctx=max_mel_seq_len+max_text_seq_len,
- n_embd=model_dim,
- n_layer=layers,
- n_head=heads,
- gradient_checkpointing=checkpointing,
- use_cache=not checkpointing)
- gpt = GPT2Model(gpt_config)
- # Override the built in positional embeddings
- del gpt.wpe
- gpt.wpe = functools.partial(null_position_embeddings, dim=model_dim)
- # Built-in token embeddings are unused.
- del gpt.wte
- return gpt, LearnedPositionEmbeddings(max_mel_seq_len, model_dim), LearnedPositionEmbeddings(max_text_seq_len, model_dim),\
- None, None
-
-
-class MelEncoder(nn.Module):
- def __init__(self, channels, mel_channels=80, resblocks_per_reduction=2):
- super().__init__()
- self.channels = channels
- self.encoder = nn.Sequential(nn.Conv1d(mel_channels, channels//4, kernel_size=3, padding=1),
- nn.Sequential(*[ResBlock(channels//4) for _ in range(resblocks_per_reduction)]),
- nn.Conv1d(channels//4, channels//2, kernel_size=3, stride=2, padding=1),
- nn.GroupNorm(channels//16, channels//2),
- nn.ReLU(),
- nn.Sequential(*[ResBlock(channels//2) for _ in range(resblocks_per_reduction)]),
- nn.Conv1d(channels//2, channels, kernel_size=3, stride=2, padding=1),
- nn.GroupNorm(channels//8, channels),
- nn.ReLU(),
- nn.Sequential(*[ResBlock(channels) for _ in range(resblocks_per_reduction)]),
- )
- self.reduction = 4
-
-
- def forward(self, x):
- for e in self.encoder:
- x = e(x)
- return x.permute(0,2,1)
-
-
-class UnifiedVoice(nn.Module):
- def __init__(self, layers=8, model_dim=512, heads=8, max_text_tokens=120, max_mel_tokens=250, max_conditioning_inputs=1,
- mel_length_compression=1024, number_text_tokens=256,
- start_text_token=None, number_mel_codes=8194, start_mel_token=8192,
- stop_mel_token=8193, train_solo_embeddings=False, use_mel_codes_as_input=True,
- checkpointing=True, types=1):
- """
- Args:
- layers: Number of layers in transformer stack.
- model_dim: Operating dimensions of the transformer
- heads: Number of transformer heads. model_dim must be divisible by heads. Recommend model_dim//64
- max_text_tokens: Maximum number of text tokens that will be encountered by model.
- max_mel_tokens: Maximum number of MEL tokens that will be encountered by model.
- max_conditioning_inputs: Maximum number of conditioning inputs provided to the model. If (1), conditioning input can be of format (b,80,s), otherwise (b,n,80,s).
- mel_length_compression: The factor between the number of input audio samples and the number of MEL tokens. Used to compute MEL code padding given wav input length.
- number_text_tokens:
- start_text_token:
- stop_text_token:
- number_mel_codes:
- start_mel_token:
- stop_mel_token:
- train_solo_embeddings:
- use_mel_codes_as_input:
- checkpointing:
- """
- super().__init__()
-
- self.number_text_tokens = number_text_tokens
- self.start_text_token = number_text_tokens * types if start_text_token is None else start_text_token
- self.stop_text_token = 0
- self.number_mel_codes = number_mel_codes
- self.start_mel_token = start_mel_token
- self.stop_mel_token = stop_mel_token
- self.layers = layers
- self.heads = heads
- self.max_mel_tokens = max_mel_tokens
- self.max_text_tokens = max_text_tokens
- self.model_dim = model_dim
- self.max_conditioning_inputs = max_conditioning_inputs
- self.mel_length_compression = mel_length_compression
- self.conditioning_encoder = ConditioningEncoder(80, model_dim, num_attn_heads=heads)
- self.text_embedding = nn.Embedding(self.number_text_tokens*types+1, model_dim)
- if use_mel_codes_as_input:
- self.mel_embedding = nn.Embedding(self.number_mel_codes, model_dim)
- else:
- self.mel_embedding = MelEncoder(model_dim, resblocks_per_reduction=1)
- self.gpt, self.mel_pos_embedding, self.text_pos_embedding, self.mel_layer_pos_embedding, self.text_layer_pos_embedding = \
- build_hf_gpt_transformer(layers, model_dim, heads, self.max_mel_tokens+2+self.max_conditioning_inputs, self.max_text_tokens+2, checkpointing)
- if train_solo_embeddings:
- self.mel_solo_embedding = nn.Parameter(torch.randn(1, 1, model_dim) * .02, requires_grad=True)
- self.text_solo_embedding = nn.Parameter(torch.randn(1, 1, model_dim) * .02, requires_grad=True)
- else:
- self.mel_solo_embedding = 0
- self.text_solo_embedding = 0
-
- self.final_norm = nn.LayerNorm(model_dim)
- self.text_head = nn.Linear(model_dim, self.number_text_tokens*types+1)
- self.mel_head = nn.Linear(model_dim, self.number_mel_codes)
-
- # Initialize the embeddings per the GPT-2 scheme
- embeddings = [self.text_embedding]
- if use_mel_codes_as_input:
- embeddings.append(self.mel_embedding)
- for module in embeddings:
- module.weight.data.normal_(mean=0.0, std=.02)
-
- def build_aligned_inputs_and_targets(self, input, start_token, stop_token):
- inp = F.pad(input, (1,0), value=start_token)
- tar = F.pad(input, (0,1), value=stop_token)
- return inp, tar
-
- def set_mel_padding(self, mel_input_tokens, wav_lengths):
- """
- Given mel tokens that are derived from a padded audio clip and the actual lengths of each batch element in
- that audio clip, reformats the tokens with STOP_MEL_TOKEN in place of the zero padding. This is required
- preformatting to create a working TTS model.
- """
- # Set padding areas within MEL (currently the padding is coded with the MEL code for zero).
- mel_lengths = torch.div(wav_lengths, self.mel_length_compression, rounding_mode='trunc')
- for b in range(len(mel_lengths)):
- actual_end = mel_lengths[b] + 1 # Due to the convolutional nature of how these tokens are generated, it would be best if the model predicts a token past the actual last token.
- if actual_end < mel_input_tokens.shape[-1]:
- mel_input_tokens[b, actual_end:] = self.stop_mel_token
- return mel_input_tokens
-
- def get_logits(self, speech_conditioning_inputs, first_inputs, first_head, second_inputs=None, second_head=None, get_attns=False, return_latent=False):
- if second_inputs is not None:
- emb = torch.cat([speech_conditioning_inputs, first_inputs, second_inputs], dim=1)
- else:
- emb = torch.cat([speech_conditioning_inputs, first_inputs], dim=1)
-
- gpt_out = self.gpt(inputs_embeds=emb, return_dict=True, output_attentions=get_attns)
- if get_attns:
- return gpt_out.attentions
-
- enc = gpt_out.last_hidden_state[:, 1:] # The first logit is tied to the speech_conditioning_input
- enc = self.final_norm(enc)
-
- if return_latent:
- return enc[:, speech_conditioning_inputs.shape[1]:speech_conditioning_inputs.shape[1]+first_inputs.shape[1]], enc[:, -second_inputs.shape[1]:]
-
- first_logits = enc[:, :first_inputs.shape[1]]
- first_logits = first_head(first_logits)
- first_logits = first_logits.permute(0,2,1)
- if second_inputs is not None:
- second_logits = enc[:, -second_inputs.shape[1]:]
- second_logits = second_head(second_logits)
- second_logits = second_logits.permute(0,2,1)
- return first_logits, second_logits
- else:
- return first_logits
-
- def get_conditioning(self, speech_conditioning_input):
- speech_conditioning_input = speech_conditioning_input.unsqueeze(1) if len(
- speech_conditioning_input.shape) == 3 else speech_conditioning_input
- conds = []
- for j in range(speech_conditioning_input.shape[1]):
- conds.append(self.conditioning_encoder(speech_conditioning_input[:, j]))
- conds = torch.stack(conds, dim=1)
- conds = conds.mean(dim=1)
- return conds
-
- def forward(self, speech_conditioning_latent, text_inputs, text_lengths, mel_codes, wav_lengths, types=None, text_first=True, raw_mels=None, return_attentions=False,
- return_latent=False, clip_inputs=True):
- """
- Forward pass that uses both text and voice in either text conditioning mode or voice conditioning mode
- (actuated by `text_first`).
-
- speech_conditioning_input: MEL float tensor, (b,1024)
- text_inputs: long tensor, (b,t)
- text_lengths: long tensor, (b,)
- mel_inputs: long tensor, (b,m)
- wav_lengths: long tensor, (b,)
- raw_mels: MEL float tensor (b,80,s)
-
- If return_attentions is specified, only logits are returned.
- If return_latent is specified, loss & logits are not computed or returned. Only the predicted latents are returned.
- If clip_inputs is True, the inputs will be clipped to the smallest input size across each input modality.
- """
- # Types are expressed by expanding the text embedding space.
- if types is not None:
- text_inputs = text_inputs * (1+types).unsqueeze(-1)
-
- if clip_inputs:
- # This model will receive micro-batches with a ton of padding for both the text and MELs. Ameliorate this by
- # chopping the inputs by the maximum actual length.
- max_text_len = text_lengths.max()
- text_inputs = text_inputs[:, :max_text_len]
- max_mel_len = wav_lengths.max() // self.mel_length_compression
- mel_codes = mel_codes[:, :max_mel_len]
- if raw_mels is not None:
- raw_mels = raw_mels[:, :, :max_mel_len*4]
- mel_codes = self.set_mel_padding(mel_codes, wav_lengths)
- text_inputs = F.pad(text_inputs, (0,1), value=self.stop_text_token)
- mel_codes = F.pad(mel_codes, (0,1), value=self.stop_mel_token)
-
- conds = speech_conditioning_latent.unsqueeze(1)
- text_inputs, text_targets = self.build_aligned_inputs_and_targets(text_inputs, self.start_text_token, self.stop_text_token)
- text_emb = self.text_embedding(text_inputs) + self.text_pos_embedding(text_inputs)
- mel_codes, mel_targets = self.build_aligned_inputs_and_targets(mel_codes, self.start_mel_token, self.stop_mel_token)
- if raw_mels is not None:
- mel_inp = F.pad(raw_mels, (0, 8))
- else:
- mel_inp = mel_codes
- mel_emb = self.mel_embedding(mel_inp)
- mel_emb = mel_emb + self.mel_pos_embedding(mel_codes)
-
- if text_first:
- text_logits, mel_logits = self.get_logits(conds, text_emb, self.text_head, mel_emb, self.mel_head, get_attns=return_attentions, return_latent=return_latent)
- if return_latent:
- return mel_logits[:, :-2] # Despite the name, these are not logits. Strip off the two tokens added by this forward pass.
- else:
- mel_logits, text_logits = self.get_logits(conds, mel_emb, self.mel_head, text_emb, self.text_head, get_attns=return_attentions, return_latent=return_latent)
- if return_latent:
- return text_logits[:, :-2] # Despite the name, these are not logits. Strip off the two tokens added by this forward pass.
-
- if return_attentions:
- return mel_logits
- loss_text = F.cross_entropy(text_logits, text_targets.long())
- loss_mel = F.cross_entropy(mel_logits, mel_targets.long())
- return loss_text.mean(), loss_mel.mean(), mel_logits
-
- def inference_speech(self, speech_conditioning_latent, text_inputs, input_tokens=None, num_return_sequences=1,
- max_generate_length=None, typical_sampling=False, typical_mass=.9, **hf_generate_kwargs):
- seq_length = self.max_mel_tokens + self.max_text_tokens + 2
- if not hasattr(self, 'inference_model'):
- # TODO: Decouple gpt_config from this inference model.
- gpt_config = GPT2Config(vocab_size=self.max_mel_tokens,
- n_positions=seq_length,
- n_ctx=seq_length,
- n_embd=self.model_dim,
- n_layer=self.layers,
- n_head=self.heads,
- gradient_checkpointing=False,
- use_cache=True)
- self.inference_model = GPT2InferenceModel(gpt_config, self.gpt, self.mel_pos_embedding, self.mel_embedding, self.final_norm, self.mel_head)
- self.gpt.wte = self.mel_embedding
-
- text_inputs = F.pad(text_inputs, (0, 1), value=self.stop_text_token)
- text_inputs, text_targets = self.build_aligned_inputs_and_targets(text_inputs, self.start_text_token, self.stop_text_token)
- text_emb = self.text_embedding(text_inputs) + self.text_pos_embedding(text_inputs)
-
- conds = speech_conditioning_latent.unsqueeze(1)
- emb = torch.cat([conds, text_emb], dim=1)
- self.inference_model.store_mel_emb(emb)
-
- fake_inputs = torch.full((emb.shape[0], conds.shape[1] + emb.shape[1],), fill_value=1, dtype=torch.long,
- device=text_inputs.device)
- fake_inputs[:, -1] = self.start_mel_token
- trunc_index = fake_inputs.shape[1]
- if input_tokens is None:
- inputs = fake_inputs
- else:
- assert num_return_sequences % input_tokens.shape[0] == 0, "The number of return sequences must be divisible by the number of input sequences"
- fake_inputs = fake_inputs.repeat(num_return_sequences, 1)
- input_tokens = input_tokens.repeat(num_return_sequences // input_tokens.shape[0], 1)
- inputs = torch.cat([fake_inputs, input_tokens], dim=1)
-
- logits_processor = LogitsProcessorList([TypicalLogitsWarper(mass=typical_mass)]) if typical_sampling else LogitsProcessorList()
- max_length = trunc_index + self.max_mel_tokens - 1 if max_generate_length is None else trunc_index + max_generate_length
- gen = self.inference_model.generate(inputs, bos_token_id=self.start_mel_token, pad_token_id=self.stop_mel_token, eos_token_id=self.stop_mel_token,
- max_length=max_length, logits_processor=logits_processor,
- num_return_sequences=num_return_sequences, **hf_generate_kwargs)
- return gen[:, trunc_index:]
-
-
-if __name__ == '__main__':
- gpt = UnifiedVoice(model_dim=256, heads=4, train_solo_embeddings=True, use_mel_codes_as_input=True, max_conditioning_inputs=4)
- l = gpt(torch.randn(2, 3, 80, 800),
- torch.randint(high=120, size=(2,120)),
- torch.tensor([32, 120]),
- torch.randint(high=8192, size=(2,250)),
- torch.tensor([250*256,195*256]))
- gpt.text_forward(torch.randn(2,80,800), torch.randint(high=50, size=(2,80)), torch.tensor([32, 80]))
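
A detail worth pausing on in `build_hf_gpt_transformer` above: the GPT-2 trunk's own positional table (`wpe`) is replaced by a function that returns zeros, so the text and MEL streams can each carry their own `LearnedPositionEmbeddings`, added before the two streams are concatenated and fed in as `inputs_embeds`. A minimal standalone sketch of that override (with small, illustrative dimensions rather than the model's real configuration):

```python
import functools
import torch
from torch import nn
from transformers import GPT2Config, GPT2Model

def null_position_embeddings(range_, dim):
    # Stand-in for wpe: contributes nothing, whatever positions are asked for.
    return torch.zeros((range_.shape[0], range_.shape[1], dim), device=range_.device)

dim, text_len, mel_len = 64, 12, 20
gpt = GPT2Model(GPT2Config(vocab_size=4, n_positions=text_len + mel_len,
                           n_embd=dim, n_layer=2, n_head=4))
del gpt.wpe
gpt.wpe = functools.partial(null_position_embeddings, dim=dim)  # disable built-in positions

text_pos = nn.Embedding(text_len, dim)   # per-stream learned positions
mel_pos = nn.Embedding(mel_len, dim)
# Fake token embeddings for a batch of two, plus the per-stream positions.
text_emb = torch.randn(2, text_len, dim) + text_pos(torch.arange(text_len))
mel_emb = torch.randn(2, mel_len, dim) + mel_pos(torch.arange(mel_len))

out = gpt(inputs_embeds=torch.cat([mel_emb, text_emb], dim=1), return_dict=True)
print(out.last_hidden_state.shape)  # torch.Size([2, 32, 64])
```

Because each stream restarts its position count at zero, the MEL positions are independent of the text prompt's length, which is what the separate `mel_pos_embedding` and `text_pos_embedding` in `UnifiedVoice` rely on.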
diff --git a/spaces/jbondy007/Video_Search_CLIP/README.md b/spaces/jbondy007/Video_Search_CLIP/README.md
deleted file mode 100644
index 8cba47dcbb4971bbfc4861be0e5a2adb8ae9e388..0000000000000000000000000000000000000000
--- a/spaces/jbondy007/Video_Search_CLIP/README.md
+++ /dev/null
@@ -1,38 +0,0 @@
----
-title: Video_Search_CLIP
-emoji: 📚
-colorFrom: green
-colorTo: yellow
-sdk: gradio
-app_file: app.py
-pinned: false
-duplicated_from: akhaliq/Video_Search_CLIP
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio` or `streamlit`
-
-`sdk_version`: _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/jcenaa/Segment-Any-RGBD/open_vocab_seg/modeling/clip_adapter/text_template.py b/spaces/jcenaa/Segment-Any-RGBD/open_vocab_seg/modeling/clip_adapter/text_template.py
deleted file mode 100644
index 1dd085f9435650bbd982c81a1cf0d9899ce7feb2..0000000000000000000000000000000000000000
--- a/spaces/jcenaa/Segment-Any-RGBD/open_vocab_seg/modeling/clip_adapter/text_template.py
+++ /dev/null
@@ -1,155 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# Copyright (c) Meta Platforms, Inc. All Rights Reserved
-# Modified by Feng Liang from
-# https://github.com/MendelXu/zsseg.baseline/blob/master/mask_former/modeling/clip_adapter/text_prompt.py
-# https://github.com/MendelXu/zsseg.baseline/blob/master/mask_former/modeling/clip_adapter/utils.py
-
-from typing import List
-
-import clip
-import torch
-from torch import nn
-
-IMAGENET_PROMPT = [
- "a bad photo of a {}.",
- "a photo of many {}.",
- "a sculpture of a {}.",
- "a photo of the hard to see {}.",
- "a low resolution photo of the {}.",
- "a rendering of a {}.",
- "graffiti of a {}.",
- "a bad photo of the {}.",
- "a cropped photo of the {}.",
- "a tattoo of a {}.",
- "the embroidered {}.",
- "a photo of a hard to see {}.",
- "a bright photo of a {}.",
- "a photo of a clean {}.",
- "a photo of a dirty {}.",
- "a dark photo of the {}.",
- "a drawing of a {}.",
- "a photo of my {}.",
- "the plastic {}.",
- "a photo of the cool {}.",
- "a close-up photo of a {}.",
- "a black and white photo of the {}.",
- "a painting of the {}.",
- "a painting of a {}.",
- "a pixelated photo of the {}.",
- "a sculpture of the {}.",
- "a bright photo of the {}.",
- "a cropped photo of a {}.",
- "a plastic {}.",
- "a photo of the dirty {}.",
- "a jpeg corrupted photo of a {}.",
- "a blurry photo of the {}.",
- "a photo of the {}.",
- "a good photo of the {}.",
- "a rendering of the {}.",
- "a {} in a video game.",
- "a photo of one {}.",
- "a doodle of a {}.",
- "a close-up photo of the {}.",
- "a photo of a {}.",
- "the origami {}.",
- "the {} in a video game.",
- "a sketch of a {}.",
- "a doodle of the {}.",
- "a origami {}.",
- "a low resolution photo of a {}.",
- "the toy {}.",
- "a rendition of the {}.",
- "a photo of the clean {}.",
- "a photo of a large {}.",
- "a rendition of a {}.",
- "a photo of a nice {}.",
- "a photo of a weird {}.",
- "a blurry photo of a {}.",
- "a cartoon {}.",
- "art of a {}.",
- "a sketch of the {}.",
- "a embroidered {}.",
- "a pixelated photo of a {}.",
- "itap of the {}.",
- "a jpeg corrupted photo of the {}.",
- "a good photo of a {}.",
- "a plushie {}.",
- "a photo of the nice {}.",
- "a photo of the small {}.",
- "a photo of the weird {}.",
- "the cartoon {}.",
- "art of the {}.",
- "a drawing of the {}.",
- "a photo of the large {}.",
- "a black and white photo of a {}.",
- "the plushie {}.",
- "a dark photo of a {}.",
- "itap of a {}.",
- "graffiti of the {}.",
- "a toy {}.",
- "itap of my {}.",
- "a photo of a cool {}.",
- "a photo of a small {}.",
- "a tattoo of the {}.",
-]
-
-VILD_PROMPT = [
- "a photo of a {}.",
- "This is a photo of a {}",
- "There is a {} in the scene",
- "There is the {} in the scene",
- "a photo of a {} in the scene",
- "a photo of a small {}.",
- "a photo of a medium {}.",
- "a photo of a large {}.",
- "This is a photo of a small {}.",
- "This is a photo of a medium {}.",
- "This is a photo of a large {}.",
- "There is a small {} in the scene.",
- "There is a medium {} in the scene.",
- "There is a large {} in the scene.",
-]
-
-class PromptExtractor(nn.Module):
- def __init__(self):
- super().__init__()
- self._buffer_init = False
-
- def init_buffer(self, clip_model):
- self._buffer_init = True
-
- def forward(self, noun_list: List[str], clip_model: nn.Module):
- raise NotImplementedError()
-
-
-class PredefinedPromptExtractor(PromptExtractor):
- def __init__(self, templates: List[str]):
- super().__init__()
- self.templates = templates
-
- def forward(self, noun_list: List[str], clip_model: nn.Module):
- text_features_bucket = []
- for template in self.templates:
- noun_tokens = [clip.tokenize(template.format(noun)) for noun in noun_list]
- text_inputs = torch.cat(noun_tokens).to(
- clip_model.text_projection.data.device
- )
- text_features = clip_model.encode_text(text_inputs)
- text_features /= text_features.norm(dim=-1, keepdim=True)
- text_features_bucket.append(text_features)
- del text_inputs
- # ensemble by averaging
- text_features = torch.stack(text_features_bucket).mean(dim=0)
- text_features = text_features / text_features.norm(dim=-1, keepdim=True)
-
- return text_features
-
-
-class ImageNetPromptExtractor(PredefinedPromptExtractor):
- def __init__(self):
- super().__init__(IMAGENET_PROMPT)
-
-
-class VILDPromptExtractor(PredefinedPromptExtractor):
- def __init__(self):
- super().__init__(VILD_PROMPT)
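
For context, here is a minimal sketch of the prompt-ensembling pattern that the deleted `PredefinedPromptExtractor` implements, written against the public OpenAI `clip` package rather than the removed module. The template list, class names, and the `ViT-B/32` checkpoint are illustrative assumptions, not values taken from the Space.

```python
import clip  # OpenAI CLIP package
import torch

# Illustrative templates and class names (placeholders, not from the deleted file's config)
templates = ["a photo of a {}.", "There is a {} in the scene."]
class_names = ["chair", "table", "person"]

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

with torch.no_grad():
    per_template = []
    for template in templates:
        tokens = clip.tokenize([template.format(c) for c in class_names]).to(device)
        feats = model.encode_text(tokens)
        feats = feats / feats.norm(dim=-1, keepdim=True)  # unit-normalize per template
        per_template.append(feats)
    # Ensemble by averaging over templates, then renormalize (as in the forward() above)
    text_features = torch.stack(per_template).mean(dim=0)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)

print(text_features.shape)  # (num_classes, embedding_dim)
```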
diff --git a/spaces/jgurzoni/image_background_swapper/saicinpainting/training/data/masks.py b/spaces/jgurzoni/image_background_swapper/saicinpainting/training/data/masks.py
deleted file mode 100644
index e91fc74913356481065c5f5906acd50fb05f521c..0000000000000000000000000000000000000000
--- a/spaces/jgurzoni/image_background_swapper/saicinpainting/training/data/masks.py
+++ /dev/null
@@ -1,332 +0,0 @@
-import math
-import random
-import hashlib
-import logging
-from enum import Enum
-
-import cv2
-import numpy as np
-
-from saicinpainting.evaluation.masks.mask import SegmentationMask
-from saicinpainting.utils import LinearRamp
-
-LOGGER = logging.getLogger(__name__)
-
-
-class DrawMethod(Enum):
- LINE = 'line'
- CIRCLE = 'circle'
- SQUARE = 'square'
-
-
-def make_random_irregular_mask(shape, max_angle=4, max_len=60, max_width=20, min_times=0, max_times=10,
- draw_method=DrawMethod.LINE):
- draw_method = DrawMethod(draw_method)
-
- height, width = shape
- mask = np.zeros((height, width), np.float32)
- times = np.random.randint(min_times, max_times + 1)
- for i in range(times):
- start_x = np.random.randint(width)
- start_y = np.random.randint(height)
- for j in range(1 + np.random.randint(5)):
- angle = 0.01 + np.random.randint(max_angle)
- if i % 2 == 0:
- angle = 2 * 3.1415926 - angle
- length = 10 + np.random.randint(max_len)
- brush_w = 5 + np.random.randint(max_width)
- end_x = np.clip((start_x + length * np.sin(angle)).astype(np.int32), 0, width)
- end_y = np.clip((start_y + length * np.cos(angle)).astype(np.int32), 0, height)
- if draw_method == DrawMethod.LINE:
- cv2.line(mask, (start_x, start_y), (end_x, end_y), 1.0, brush_w)
- elif draw_method == DrawMethod.CIRCLE:
- cv2.circle(mask, (start_x, start_y), radius=brush_w, color=1., thickness=-1)
- elif draw_method == DrawMethod.SQUARE:
- radius = brush_w // 2
- mask[start_y - radius:start_y + radius, start_x - radius:start_x + radius] = 1
- start_x, start_y = end_x, end_y
- return mask[None, ...]
-
-
-class RandomIrregularMaskGenerator:
- def __init__(self, max_angle=4, max_len=60, max_width=20, min_times=0, max_times=10, ramp_kwargs=None,
- draw_method=DrawMethod.LINE):
- self.max_angle = max_angle
- self.max_len = max_len
- self.max_width = max_width
- self.min_times = min_times
- self.max_times = max_times
- self.draw_method = draw_method
- self.ramp = LinearRamp(**ramp_kwargs) if ramp_kwargs is not None else None
-
- def __call__(self, img, iter_i=None, raw_image=None):
- coef = self.ramp(iter_i) if (self.ramp is not None) and (iter_i is not None) else 1
- cur_max_len = int(max(1, self.max_len * coef))
- cur_max_width = int(max(1, self.max_width * coef))
- cur_max_times = int(self.min_times + 1 + (self.max_times - self.min_times) * coef)
- return make_random_irregular_mask(img.shape[1:], max_angle=self.max_angle, max_len=cur_max_len,
- max_width=cur_max_width, min_times=self.min_times, max_times=cur_max_times,
- draw_method=self.draw_method)
-
-
-def make_random_rectangle_mask(shape, margin=10, bbox_min_size=30, bbox_max_size=100, min_times=0, max_times=3):
- height, width = shape
- mask = np.zeros((height, width), np.float32)
- bbox_max_size = min(bbox_max_size, height - margin * 2, width - margin * 2)
- times = np.random.randint(min_times, max_times + 1)
- for i in range(times):
- box_width = np.random.randint(bbox_min_size, bbox_max_size)
- box_height = np.random.randint(bbox_min_size, bbox_max_size)
- start_x = np.random.randint(margin, width - margin - box_width + 1)
- start_y = np.random.randint(margin, height - margin - box_height + 1)
- mask[start_y:start_y + box_height, start_x:start_x + box_width] = 1
- return mask[None, ...]
-
-
-class RandomRectangleMaskGenerator:
- def __init__(self, margin=10, bbox_min_size=30, bbox_max_size=100, min_times=0, max_times=3, ramp_kwargs=None):
- self.margin = margin
- self.bbox_min_size = bbox_min_size
- self.bbox_max_size = bbox_max_size
- self.min_times = min_times
- self.max_times = max_times
- self.ramp = LinearRamp(**ramp_kwargs) if ramp_kwargs is not None else None
-
- def __call__(self, img, iter_i=None, raw_image=None):
- coef = self.ramp(iter_i) if (self.ramp is not None) and (iter_i is not None) else 1
- cur_bbox_max_size = int(self.bbox_min_size + 1 + (self.bbox_max_size - self.bbox_min_size) * coef)
- cur_max_times = int(self.min_times + (self.max_times - self.min_times) * coef)
- return make_random_rectangle_mask(img.shape[1:], margin=self.margin, bbox_min_size=self.bbox_min_size,
- bbox_max_size=cur_bbox_max_size, min_times=self.min_times,
- max_times=cur_max_times)
-
-
-class RandomSegmentationMaskGenerator:
- def __init__(self, **kwargs):
- self.impl = None # will be instantiated in first call (effectively in subprocess)
- self.kwargs = kwargs
-
- def __call__(self, img, iter_i=None, raw_image=None):
- if self.impl is None:
- self.impl = SegmentationMask(**self.kwargs)
-
- masks = self.impl.get_masks(np.transpose(img, (1, 2, 0)))
- masks = [m for m in masks if len(np.unique(m)) > 1]
- return np.random.choice(masks)
-
-
-def make_random_superres_mask(shape, min_step=2, max_step=4, min_width=1, max_width=3):
- height, width = shape
- mask = np.zeros((height, width), np.float32)
- step_x = np.random.randint(min_step, max_step + 1)
- width_x = np.random.randint(min_width, min(step_x, max_width + 1))
- offset_x = np.random.randint(0, step_x)
-
- step_y = np.random.randint(min_step, max_step + 1)
- width_y = np.random.randint(min_width, min(step_y, max_width + 1))
- offset_y = np.random.randint(0, step_y)
-
- for dy in range(width_y):
- mask[offset_y + dy::step_y] = 1
- for dx in range(width_x):
- mask[:, offset_x + dx::step_x] = 1
- return mask[None, ...]
-
-
-class RandomSuperresMaskGenerator:
- def __init__(self, **kwargs):
- self.kwargs = kwargs
-
- def __call__(self, img, iter_i=None):
- return make_random_superres_mask(img.shape[1:], **self.kwargs)
-
-
-class DumbAreaMaskGenerator:
- min_ratio = 0.1
- max_ratio = 0.35
- default_ratio = 0.225
-
- def __init__(self, is_training):
- #Parameters:
- # is_training(bool): If true - random rectangular mask, if false - central square mask
- self.is_training = is_training
-
- def _random_vector(self, dimension):
- if self.is_training:
- lower_limit = math.sqrt(self.min_ratio)
- upper_limit = math.sqrt(self.max_ratio)
- mask_side = round((random.random() * (upper_limit - lower_limit) + lower_limit) * dimension)
- u = random.randint(0, dimension-mask_side-1)
- v = u+mask_side
- else:
- margin = (math.sqrt(self.default_ratio) / 2) * dimension
- u = round(dimension/2 - margin)
- v = round(dimension/2 + margin)
- return u, v
-
- def __call__(self, img, iter_i=None, raw_image=None):
- c, height, width = img.shape
- mask = np.zeros((height, width), np.float32)
- x1, x2 = self._random_vector(width)
- y1, y2 = self._random_vector(height)
- mask[x1:x2, y1:y2] = 1
- return mask[None, ...]
-
-
-class OutpaintingMaskGenerator:
-    def __init__(self, min_padding_percent:float=0.04, max_padding_percent:float=0.25, left_padding_prob:float=0.5, top_padding_prob:float=0.5,
- right_padding_prob:float=0.5, bottom_padding_prob:float=0.5, is_fixed_randomness:bool=False):
- """
- is_fixed_randomness - get identical paddings for the same image if args are the same
- """
- self.min_padding_percent = min_padding_percent
- self.max_padding_percent = max_padding_percent
- self.probs = [left_padding_prob, top_padding_prob, right_padding_prob, bottom_padding_prob]
- self.is_fixed_randomness = is_fixed_randomness
-
- assert self.min_padding_percent <= self.max_padding_percent
- assert self.max_padding_percent > 0
- assert len([x for x in [self.min_padding_percent, self.max_padding_percent] if (x>=0 and x<=1)]) == 2, f"Padding percentage should be in [0,1]"
- assert sum(self.probs) > 0, f"At least one of the padding probs should be greater than 0 - {self.probs}"
- assert len([x for x in self.probs if (x >= 0) and (x <= 1)]) == 4, f"At least one of padding probs is not in [0,1] - {self.probs}"
- if len([x for x in self.probs if x > 0]) == 1:
- LOGGER.warning(f"Only one padding prob is greater than zero - {self.probs}. That means that the outpainting masks will be always on the same side")
-
- def apply_padding(self, mask, coord):
- mask[int(coord[0][0]*self.img_h):int(coord[1][0]*self.img_h),
- int(coord[0][1]*self.img_w):int(coord[1][1]*self.img_w)] = 1
- return mask
-
- def get_padding(self, size):
- n1 = int(self.min_padding_percent*size)
- n2 = int(self.max_padding_percent*size)
- return self.rnd.randint(n1, n2) / size
-
- @staticmethod
- def _img2rs(img):
- arr = np.ascontiguousarray(img.astype(np.uint8))
- str_hash = hashlib.sha1(arr).hexdigest()
- res = hash(str_hash)%(2**32)
- return res
-
- def __call__(self, img, iter_i=None, raw_image=None):
- c, self.img_h, self.img_w = img.shape
- mask = np.zeros((self.img_h, self.img_w), np.float32)
- at_least_one_mask_applied = False
-
- if self.is_fixed_randomness:
- assert raw_image is not None, f"Cant calculate hash on raw_image=None"
- rs = self._img2rs(raw_image)
- self.rnd = np.random.RandomState(rs)
- else:
- self.rnd = np.random
-
- coords = [[
- (0,0),
- (1,self.get_padding(size=self.img_h))
- ],
- [
- (0,0),
- (self.get_padding(size=self.img_w),1)
- ],
- [
- (0,1-self.get_padding(size=self.img_h)),
- (1,1)
- ],
- [
- (1-self.get_padding(size=self.img_w),0),
- (1,1)
- ]]
-
- for pp, coord in zip(self.probs, coords):
- if self.rnd.random() < pp:
- at_least_one_mask_applied = True
- mask = self.apply_padding(mask=mask, coord=coord)
-
- if not at_least_one_mask_applied:
- idx = self.rnd.choice(range(len(coords)), p=np.array(self.probs)/sum(self.probs))
- mask = self.apply_padding(mask=mask, coord=coords[idx])
- return mask[None, ...]
-
-
-class MixedMaskGenerator:
- def __init__(self, irregular_proba=1/3, irregular_kwargs=None,
- box_proba=1/3, box_kwargs=None,
- segm_proba=1/3, segm_kwargs=None,
- squares_proba=0, squares_kwargs=None,
- superres_proba=0, superres_kwargs=None,
- outpainting_proba=0, outpainting_kwargs=None,
- invert_proba=0):
- self.probas = []
- self.gens = []
-
- if irregular_proba > 0:
- self.probas.append(irregular_proba)
- if irregular_kwargs is None:
- irregular_kwargs = {}
- else:
- irregular_kwargs = dict(irregular_kwargs)
- irregular_kwargs['draw_method'] = DrawMethod.LINE
- self.gens.append(RandomIrregularMaskGenerator(**irregular_kwargs))
-
- if box_proba > 0:
- self.probas.append(box_proba)
- if box_kwargs is None:
- box_kwargs = {}
- self.gens.append(RandomRectangleMaskGenerator(**box_kwargs))
-
- if segm_proba > 0:
- self.probas.append(segm_proba)
- if segm_kwargs is None:
- segm_kwargs = {}
- self.gens.append(RandomSegmentationMaskGenerator(**segm_kwargs))
-
- if squares_proba > 0:
- self.probas.append(squares_proba)
- if squares_kwargs is None:
- squares_kwargs = {}
- else:
- squares_kwargs = dict(squares_kwargs)
- squares_kwargs['draw_method'] = DrawMethod.SQUARE
- self.gens.append(RandomIrregularMaskGenerator(**squares_kwargs))
-
- if superres_proba > 0:
- self.probas.append(superres_proba)
- if superres_kwargs is None:
- superres_kwargs = {}
- self.gens.append(RandomSuperresMaskGenerator(**superres_kwargs))
-
- if outpainting_proba > 0:
- self.probas.append(outpainting_proba)
- if outpainting_kwargs is None:
- outpainting_kwargs = {}
- self.gens.append(OutpaintingMaskGenerator(**outpainting_kwargs))
-
- self.probas = np.array(self.probas, dtype='float32')
- self.probas /= self.probas.sum()
- self.invert_proba = invert_proba
-
- def __call__(self, img, iter_i=None, raw_image=None):
- kind = np.random.choice(len(self.probas), p=self.probas)
- gen = self.gens[kind]
- result = gen(img, iter_i=iter_i, raw_image=raw_image)
- if self.invert_proba > 0 and random.random() < self.invert_proba:
- result = 1 - result
- return result
-
-
-def get_mask_generator(kind, kwargs):
- if kind is None:
- kind = "mixed"
- if kwargs is None:
- kwargs = {}
-
- if kind == "mixed":
- cl = MixedMaskGenerator
- elif kind == "outpainting":
- cl = OutpaintingMaskGenerator
- elif kind == "dumb":
- cl = DumbAreaMaskGenerator
- else:
- raise NotImplementedError(f"No such generator kind = {kind}")
- return cl(**kwargs)
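
To illustrate how these generators are consumed, the sketch below restates the single-rectangle case in plain NumPy and applies the resulting mask to a fake CHW image, following the `mask[None, ...]` shape convention used throughout the file. It is a standalone restatement under assumed parameters, not an import of the deleted module.

```python
import numpy as np

def random_rectangle_mask(height, width, margin=10, bbox_min=30, bbox_max=100):
    """Single random box, mirroring make_random_rectangle_mask above (assumed parameters)."""
    mask = np.zeros((height, width), np.float32)
    bbox_max = min(bbox_max, height - 2 * margin, width - 2 * margin)
    box_w = np.random.randint(bbox_min, bbox_max)
    box_h = np.random.randint(bbox_min, bbox_max)
    x = np.random.randint(margin, width - margin - box_w + 1)
    y = np.random.randint(margin, height - margin - box_h + 1)
    mask[y:y + box_h, x:x + box_w] = 1
    return mask[None, ...]  # (1, H, W), same shape contract as the generators above

img = np.random.rand(3, 256, 256).astype(np.float32)  # stand-in for a CHW training image
mask = random_rectangle_mask(*img.shape[1:])
masked = img * (1 - mask)  # zero out the region the inpainting model must reconstruct
print(mask.shape, masked.shape)
```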
diff --git a/spaces/jgurzoni/image_background_swapper/saicinpainting/training/visualizers/__init__.py b/spaces/jgurzoni/image_background_swapper/saicinpainting/training/visualizers/__init__.py
deleted file mode 100644
index 4770d1f15a6790ab9606c7b9881f798c8e2d9545..0000000000000000000000000000000000000000
--- a/spaces/jgurzoni/image_background_swapper/saicinpainting/training/visualizers/__init__.py
+++ /dev/null
@@ -1,15 +0,0 @@
-import logging
-
-from saicinpainting.training.visualizers.directory import DirectoryVisualizer
-from saicinpainting.training.visualizers.noop import NoopVisualizer
-
-
-def make_visualizer(kind, **kwargs):
- logging.info(f'Make visualizer {kind}')
-
- if kind == 'directory':
- return DirectoryVisualizer(**kwargs)
- if kind == 'noop':
- return NoopVisualizer()
-
- raise ValueError(f'Unknown visualizer kind {kind}')
diff --git a/spaces/jitubutwal1441/image-to-story/README.md b/spaces/jitubutwal1441/image-to-story/README.md
deleted file mode 100644
index c3e119f4250e43a2728b6e3168ad87d484efbed9..0000000000000000000000000000000000000000
--- a/spaces/jitubutwal1441/image-to-story/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Image To Story
-emoji: 🦀
-colorFrom: green
-colorTo: red
-sdk: streamlit
-sdk_version: 1.26.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/jmesikto/whisper-webui/src/vad.py b/spaces/jmesikto/whisper-webui/src/vad.py
deleted file mode 100644
index 9b5ae606a9efdcc34dada47d0613bb8194d2f269..0000000000000000000000000000000000000000
--- a/spaces/jmesikto/whisper-webui/src/vad.py
+++ /dev/null
@@ -1,560 +0,0 @@
-from abc import ABC, abstractmethod
-from collections import Counter, deque
-import time
-
-from typing import Any, Deque, Iterator, List, Dict
-
-from pprint import pprint
-from src.hooks.progressListener import ProgressListener
-from src.hooks.subTaskProgressListener import SubTaskProgressListener
-from src.hooks.whisperProgressHook import create_progress_listener_handle
-from src.modelCache import GLOBAL_MODEL_CACHE, ModelCache
-
-from src.segments import merge_timestamps
-from src.whisper.abstractWhisperContainer import AbstractWhisperCallback
-
-# Workaround for https://github.com/tensorflow/tensorflow/issues/48797
-try:
- import tensorflow as tf
-except ModuleNotFoundError:
- # Error handling
- pass
-
-import torch
-
-import ffmpeg
-import numpy as np
-
-from src.utils import format_timestamp
-from enum import Enum
-
-class NonSpeechStrategy(Enum):
- """
-    Ignore non-speech segments.
- """
- SKIP = 1
- """
- Just treat non-speech segments as speech.
- """
- CREATE_SEGMENT = 2
- """
- Expand speech segments into subsequent non-speech segments.
- """
- EXPAND_SEGMENT = 3
-
-# Defaults for Silero
-SPEECH_TRESHOLD = 0.3
-
-# Minimum size of segments to process
-MIN_SEGMENT_DURATION = 1
-
-# The maximum time for texts from old segments to be used in the next segment
-MAX_PROMPT_WINDOW = 0 # seconds (0 = disabled)
-PROMPT_NO_SPEECH_PROB = 0.1 # Do not pass the text from segments with a no speech probability higher than this
-
-VAD_MAX_PROCESSING_CHUNK = 60 * 60 # 60 minutes of audio
-
-class TranscriptionConfig(ABC):
- def __init__(self, non_speech_strategy: NonSpeechStrategy = NonSpeechStrategy.SKIP,
- segment_padding_left: float = None, segment_padding_right = None, max_silent_period: float = None,
- max_merge_size: float = None, max_prompt_window: float = None, initial_segment_index = -1):
- self.non_speech_strategy = non_speech_strategy
- self.segment_padding_left = segment_padding_left
- self.segment_padding_right = segment_padding_right
- self.max_silent_period = max_silent_period
- self.max_merge_size = max_merge_size
- self.max_prompt_window = max_prompt_window
- self.initial_segment_index = initial_segment_index
-
-class PeriodicTranscriptionConfig(TranscriptionConfig):
- def __init__(self, periodic_duration: float, non_speech_strategy: NonSpeechStrategy = NonSpeechStrategy.SKIP,
- segment_padding_left: float = None, segment_padding_right = None, max_silent_period: float = None,
- max_merge_size: float = None, max_prompt_window: float = None, initial_segment_index = -1):
- super().__init__(non_speech_strategy, segment_padding_left, segment_padding_right, max_silent_period, max_merge_size, max_prompt_window, initial_segment_index)
- self.periodic_duration = periodic_duration
-
-class AbstractTranscription(ABC):
- def __init__(self, sampling_rate: int = 16000):
- self.sampling_rate = sampling_rate
-
-    def get_audio_segment(self, path: str, start_time: str = None, duration: str = None):
-        return load_audio(path, self.sampling_rate, start_time, duration)
-
- def is_transcribe_timestamps_fast(self):
- """
- Determine if get_transcribe_timestamps is fast enough to not need parallelization.
- """
- return False
-
- @abstractmethod
- def get_transcribe_timestamps(self, audio: str, config: TranscriptionConfig, start_time: float, end_time: float):
- """
- Get the start and end timestamps of the sections that should be transcribed by this VAD method.
-
- Parameters
- ----------
- audio: str
- The audio file.
- config: TranscriptionConfig
- The transcription configuration.
-
- Returns
- -------
- A list of start and end timestamps, in fractional seconds.
- """
- return
-
- def get_merged_timestamps(self, timestamps: List[Dict[str, Any]], config: TranscriptionConfig, total_duration: float):
- """
- Get the start and end timestamps of the sections that should be transcribed by this VAD method,
- after merging the given segments using the specified configuration.
-
- Parameters
- ----------
-        timestamps: List[Dict[str, Any]]
-            The start/end timestamps returned by get_transcribe_timestamps.
-        config: TranscriptionConfig
-            The transcription configuration.
-        total_duration: float
-            The total duration of the audio, in fractional seconds.
-
- Returns
- -------
- A list of start and end timestamps, in fractional seconds.
- """
- merged = merge_timestamps(timestamps, config.max_silent_period, config.max_merge_size,
- config.segment_padding_left, config.segment_padding_right)
-
- if config.non_speech_strategy != NonSpeechStrategy.SKIP:
- # Expand segments to include the gaps between them
- if (config.non_speech_strategy == NonSpeechStrategy.CREATE_SEGMENT):
-                # When we have a prompt window, we create speech segments between each segment if we exceed the merge size
- merged = self.fill_gaps(merged, total_duration=total_duration, max_expand_size=config.max_merge_size)
- elif config.non_speech_strategy == NonSpeechStrategy.EXPAND_SEGMENT:
- # With no prompt window, it is better to just expand the segments (this effectively passes the prompt to the next segment)
- merged = self.expand_gaps(merged, total_duration=total_duration)
- else:
- raise Exception("Unknown non-speech strategy: " + str(config.non_speech_strategy))
-
- print("Transcribing non-speech:")
- pprint(merged)
- return merged
-
- def transcribe(self, audio: str, whisperCallable: AbstractWhisperCallback, config: TranscriptionConfig,
- progressListener: ProgressListener = None):
- """
-        Transcribe the given audio file.
-
- Parameters
- ----------
- audio: str
- The audio file.
- whisperCallable: WhisperCallback
- A callback object to call to transcribe each segment.
-
- Returns
- -------
-        A dictionary with the transcribed 'text', its 'segments', and the detected 'language'.
- """
-
- try:
- max_audio_duration = self.get_audio_duration(audio, config)
- timestamp_segments = self.get_transcribe_timestamps(audio, config, 0, max_audio_duration)
-
- # Get speech timestamps from full audio file
- merged = self.get_merged_timestamps(timestamp_segments, config, max_audio_duration)
-
- # A deque of transcribed segments that is passed to the next segment as a prompt
- prompt_window = deque()
-
- print("Processing timestamps:")
- pprint(merged)
-
- result = {
- 'text': "",
- 'segments': [],
- 'language': ""
- }
- languageCounter = Counter()
- detected_language = None
-
- segment_index = config.initial_segment_index
-
- # Calculate progress
- progress_start_offset = merged[0]['start'] if len(merged) > 0 else 0
- progress_total_duration = sum([segment['end'] - segment['start'] for segment in merged])
-
- # For each time segment, run whisper
- for segment in merged:
- segment_index += 1
- segment_start = segment['start']
- segment_end = segment['end']
- segment_expand_amount = segment.get('expand_amount', 0)
- segment_gap = segment.get('gap', False)
-
- segment_duration = segment_end - segment_start
-
- if segment_duration < MIN_SEGMENT_DURATION:
- continue
-
- # Audio to run on Whisper
- segment_audio = self.get_audio_segment(audio, start_time = str(segment_start), duration = str(segment_duration))
- # Previous segments to use as a prompt
- segment_prompt = ' '.join([segment['text'] for segment in prompt_window]) if len(prompt_window) > 0 else None
-
- # Detected language
- detected_language = languageCounter.most_common(1)[0][0] if len(languageCounter) > 0 else None
-
- print("Running whisper from ", format_timestamp(segment_start), " to ", format_timestamp(segment_end), ", duration: ",
- segment_duration, "expanded: ", segment_expand_amount, "prompt: ", segment_prompt, "language: ", detected_language)
-
- perf_start_time = time.perf_counter()
-
- scaled_progress_listener = SubTaskProgressListener(progressListener, base_task_total=progress_total_duration,
- sub_task_start=segment_start - progress_start_offset, sub_task_total=segment_duration)
- segment_result = whisperCallable.invoke(segment_audio, segment_index, segment_prompt, detected_language, progress_listener=scaled_progress_listener)
-
- perf_end_time = time.perf_counter()
- print("Whisper took {} seconds".format(perf_end_time - perf_start_time))
-
- adjusted_segments = self.adjust_timestamp(segment_result["segments"], adjust_seconds=segment_start, max_source_time=segment_duration)
-
- # Propagate expand amount to the segments
- if (segment_expand_amount > 0):
- segment_without_expansion = segment_duration - segment_expand_amount
-
- for adjusted_segment in adjusted_segments:
- adjusted_segment_end = adjusted_segment['end']
-
- # Add expand amount if the segment got expanded
- if (adjusted_segment_end > segment_without_expansion):
- adjusted_segment["expand_amount"] = adjusted_segment_end - segment_without_expansion
-
- # Append to output
- result['text'] += segment_result['text']
- result['segments'].extend(adjusted_segments)
-
- # Increment detected language
- if not segment_gap:
- languageCounter[segment_result['language']] += 1
-
- # Update prompt window
- self.__update_prompt_window(prompt_window, adjusted_segments, segment_end, segment_gap, config)
-
- if detected_language is not None:
- result['language'] = detected_language
- finally:
- # Notify progress listener that we are done
- if progressListener is not None:
- progressListener.on_finished()
- return result
-
- def get_audio_duration(self, audio: str, config: TranscriptionConfig):
- return get_audio_duration(audio)
-
- def __update_prompt_window(self, prompt_window: Deque, adjusted_segments: List, segment_end: float, segment_gap: bool, config: TranscriptionConfig):
- if (config.max_prompt_window is not None and config.max_prompt_window > 0):
- # Add segments to the current prompt window (unless it is a speech gap)
- if not segment_gap:
- for segment in adjusted_segments:
- if segment.get('no_speech_prob', 0) <= PROMPT_NO_SPEECH_PROB:
- prompt_window.append(segment)
-
- while (len(prompt_window) > 0):
- first_end_time = prompt_window[0].get('end', 0)
- # Time expanded in the segments should be discounted from the prompt window
- first_expand_time = prompt_window[0].get('expand_amount', 0)
-
- if (first_end_time - first_expand_time < segment_end - config.max_prompt_window):
- prompt_window.popleft()
- else:
- break
-
- def include_gaps(self, segments: Iterator[dict], min_gap_length: float, total_duration: float):
- result = []
- last_end_time = 0
-
- for segment in segments:
- segment_start = float(segment['start'])
- segment_end = float(segment['end'])
-
- if (last_end_time != segment_start):
- delta = segment_start - last_end_time
-
- if (min_gap_length is None or delta >= min_gap_length):
- result.append( { 'start': last_end_time, 'end': segment_start, 'gap': True } )
-
- last_end_time = segment_end
- result.append(segment)
-
- # Also include total duration if specified
- if (total_duration is not None and last_end_time < total_duration):
-            delta = total_duration - last_end_time
-
- if (min_gap_length is None or delta >= min_gap_length):
- result.append( { 'start': last_end_time, 'end': total_duration, 'gap': True } )
-
- return result
-
- # Expand the end time of each segment to the start of the next segment
- def expand_gaps(self, segments: List[Dict[str, Any]], total_duration: float):
- result = []
-
- if len(segments) == 0:
- return result
-
- # Add gap at the beginning if needed
- if (segments[0]['start'] > 0):
- result.append({ 'start': 0, 'end': segments[0]['start'], 'gap': True } )
-
- for i in range(len(segments) - 1):
- current_segment = segments[i]
- next_segment = segments[i + 1]
-
- delta = next_segment['start'] - current_segment['end']
-
- # Expand if the gap actually exists
- if (delta >= 0):
- current_segment = current_segment.copy()
- current_segment['expand_amount'] = delta
- current_segment['end'] = next_segment['start']
-
- result.append(current_segment)
-
- # Add last segment
- last_segment = segments[-1]
- result.append(last_segment)
-
- # Also include total duration if specified
- if (total_duration is not None):
- last_segment = result[-1]
-
- if (last_segment['end'] < total_duration):
- last_segment = last_segment.copy()
- last_segment['end'] = total_duration
- result[-1] = last_segment
-
- return result
-
- def fill_gaps(self, segments: List[Dict[str, Any]], total_duration: float, max_expand_size: float = None):
- result = []
-
- if len(segments) == 0:
- return result
-
- # Add gap at the beginning if needed
- if (segments[0]['start'] > 0):
- result.append({ 'start': 0, 'end': segments[0]['start'], 'gap': True } )
-
- for i in range(len(segments) - 1):
- expanded = False
- current_segment = segments[i]
- next_segment = segments[i + 1]
-
- delta = next_segment['start'] - current_segment['end']
-
- if (max_expand_size is not None and delta <= max_expand_size):
- # Just expand the current segment
- current_segment = current_segment.copy()
- current_segment['expand_amount'] = delta
- current_segment['end'] = next_segment['start']
- expanded = True
-
- result.append(current_segment)
-
- # Add a gap to the next segment if needed
- if (delta >= 0 and not expanded):
- result.append({ 'start': current_segment['end'], 'end': next_segment['start'], 'gap': True } )
-
- # Add last segment
- last_segment = segments[-1]
- result.append(last_segment)
-
- # Also include total duration if specified
- if (total_duration is not None):
- last_segment = result[-1]
-
- delta = total_duration - last_segment['end']
-
- if (delta > 0):
- if (max_expand_size is not None and delta <= max_expand_size):
- # Expand the last segment
- last_segment = last_segment.copy()
- last_segment['expand_amount'] = delta
- last_segment['end'] = total_duration
- result[-1] = last_segment
- else:
- result.append({ 'start': last_segment['end'], 'end': total_duration, 'gap': True } )
-
- return result
-
- def adjust_timestamp(self, segments: Iterator[dict], adjust_seconds: float, max_source_time: float = None):
- result = []
-
- for segment in segments:
- segment_start = float(segment['start'])
- segment_end = float(segment['end'])
-
- # Filter segments?
- if (max_source_time is not None):
- if (segment_start > max_source_time):
- continue
- segment_end = min(max_source_time, segment_end)
-
- new_segment = segment.copy()
-
- # Add to start and end
- new_segment['start'] = segment_start + adjust_seconds
- new_segment['end'] = segment_end + adjust_seconds
- result.append(new_segment)
- return result
-
- def multiply_timestamps(self, timestamps: List[Dict[str, Any]], factor: float):
- result = []
-
- for entry in timestamps:
- start = entry['start']
- end = entry['end']
-
- result.append({
- 'start': start * factor,
- 'end': end * factor
- })
- return result
-
-
-class VadSileroTranscription(AbstractTranscription):
- def __init__(self, sampling_rate: int = 16000, cache: ModelCache = None):
- super().__init__(sampling_rate=sampling_rate)
- self.model = None
- self.cache = cache
- self._initialize_model()
-
- def _initialize_model(self):
- if (self.cache is not None):
- model_key = "VadSileroTranscription"
- self.model, self.get_speech_timestamps = self.cache.get(model_key, self._create_model)
- print("Loaded Silerio model from cache.")
- else:
- self.model, self.get_speech_timestamps = self._create_model()
- print("Created Silerio model")
-
- def _create_model(self):
- model, utils = torch.hub.load(repo_or_dir='snakers4/silero-vad', model='silero_vad')
-
- # Silero does not benefit from multi-threading
- torch.set_num_threads(1) # JIT
- (get_speech_timestamps, _, _, _, _) = utils
-
- return model, get_speech_timestamps
-
- def get_transcribe_timestamps(self, audio: str, config: TranscriptionConfig, start_time: float, end_time: float):
- result = []
-
- print("Getting timestamps from audio file: {}, start: {}, duration: {}".format(audio, start_time, end_time))
- perf_start_time = time.perf_counter()
-
-        # Divide processing of the audio into chunks
- chunk_start = start_time
-
- while (chunk_start < end_time):
- chunk_duration = min(end_time - chunk_start, VAD_MAX_PROCESSING_CHUNK)
-
- print("Processing VAD in chunk from {} to {}".format(format_timestamp(chunk_start), format_timestamp(chunk_start + chunk_duration)))
- wav = self.get_audio_segment(audio, str(chunk_start), str(chunk_duration))
-
- sample_timestamps = self.get_speech_timestamps(wav, self.model, sampling_rate=self.sampling_rate, threshold=SPEECH_TRESHOLD)
- seconds_timestamps = self.multiply_timestamps(sample_timestamps, factor=1 / self.sampling_rate)
- adjusted = self.adjust_timestamp(seconds_timestamps, adjust_seconds=chunk_start, max_source_time=chunk_start + chunk_duration)
-
- #pprint(adjusted)
-
- result.extend(adjusted)
- chunk_start += chunk_duration
-
- perf_end_time = time.perf_counter()
- print("VAD processing took {} seconds".format(perf_end_time - perf_start_time))
-
- return result
-
- def __getstate__(self):
- # We only need the sampling rate
- return { 'sampling_rate': self.sampling_rate }
-
- def __setstate__(self, state):
- self.sampling_rate = state['sampling_rate']
- self.model = None
- # Use the global cache
- self.cache = GLOBAL_MODEL_CACHE
- self._initialize_model()
-
-# A very simple VAD that just marks every N seconds as speech
-class VadPeriodicTranscription(AbstractTranscription):
- def __init__(self, sampling_rate: int = 16000):
- super().__init__(sampling_rate=sampling_rate)
-
- def is_transcribe_timestamps_fast(self):
- # This is a very fast VAD - no need to parallelize it
- return True
-
- def get_transcribe_timestamps(self, audio: str, config: PeriodicTranscriptionConfig, start_time: float, end_time: float):
- result = []
-
- # Generate a timestamp every N seconds
- start_timestamp = start_time
-
- while (start_timestamp < end_time):
- end_timestamp = min(start_timestamp + config.periodic_duration, end_time)
- segment_duration = end_timestamp - start_timestamp
-
- # Minimum duration is 1 second
- if (segment_duration >= 1):
- result.append( { 'start': start_timestamp, 'end': end_timestamp } )
-
- start_timestamp = end_timestamp
-
- return result
-
-def get_audio_duration(file: str):
- return float(ffmpeg.probe(file)["format"]["duration"])
-
-def load_audio(file: str, sample_rate: int = 16000,
- start_time: str = None, duration: str = None):
- """
- Open an audio file and read as mono waveform, resampling as necessary
-
- Parameters
- ----------
- file: str
- The audio file to open
-
- sr: int
- The sample rate to resample the audio if necessary
-
- start_time: str
- The start time, using the standard FFMPEG time duration syntax, or None to disable.
-
- duration: str
- The duration, using the standard FFMPEG time duration syntax, or None to disable.
-
- Returns
- -------
- A NumPy array containing the audio waveform, in float32 dtype.
- """
- try:
- inputArgs = {'threads': 0}
-
- if (start_time is not None):
- inputArgs['ss'] = start_time
- if (duration is not None):
- inputArgs['t'] = duration
-
- # This launches a subprocess to decode audio while down-mixing and resampling as necessary.
- # Requires the ffmpeg CLI and `ffmpeg-python` package to be installed.
- out, _ = (
- ffmpeg.input(file, **inputArgs)
- .output("-", format="s16le", acodec="pcm_s16le", ac=1, ar=sample_rate)
- .run(cmd="ffmpeg", capture_stdout=True, capture_stderr=True)
- )
- except ffmpeg.Error as e:
- raise RuntimeError(f"Failed to load audio: {e.stderr.decode()}")
-
- return np.frombuffer(out, np.int16).flatten().astype(np.float32) / 32768.0
\ No newline at end of file
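
For reference, the Silero step that `VadSileroTranscription` wraps can be exercised on its own via `torch.hub`, as sketched below. The audio path is a placeholder, and the layout of the `utils` tuple (with `read_audio` in third position) is assumed from the snakers4/silero-vad repository.

```python
import torch

# Load the Silero VAD model and its helper functions (downloads on first use)
model, utils = torch.hub.load(repo_or_dir="snakers4/silero-vad", model="silero_vad")
(get_speech_timestamps, _, read_audio, _, _) = utils

torch.set_num_threads(1)  # Silero does not benefit from multi-threading (see _create_model above)

sampling_rate = 16000
wav = read_audio("example.wav", sampling_rate=sampling_rate)  # placeholder file name

# Sample offsets -> seconds, mirroring multiply_timestamps(factor=1 / sampling_rate)
for ts in get_speech_timestamps(wav, model, sampling_rate=sampling_rate, threshold=0.3):
    print(ts["start"] / sampling_rate, ts["end"] / sampling_rate)
```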
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PIL/GbrImagePlugin.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PIL/GbrImagePlugin.py
deleted file mode 100644
index 994a6e8ebb2f0f2e69990a211d7a1ec4f06b7fd1..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PIL/GbrImagePlugin.py
+++ /dev/null
@@ -1,102 +0,0 @@
-#
-# The Python Imaging Library
-#
-# load a GIMP brush file
-#
-# History:
-# 96-03-14 fl Created
-# 16-01-08 es Version 2
-#
-# Copyright (c) Secret Labs AB 1997.
-# Copyright (c) Fredrik Lundh 1996.
-# Copyright (c) Eric Soroos 2016.
-#
-# See the README file for information on usage and redistribution.
-#
-#
-# See https://github.com/GNOME/gimp/blob/mainline/devel-docs/gbr.txt for
-# format documentation.
-#
-# This code interprets version 1 and 2 .gbr files.
-# Version 1 files are obsolete, and should not be used for new
-# brushes.
-# Version 2 files are saved by GIMP v2.8 (at least)
-# Version 3 files have a format specifier of 18 for 16bit floats in
-# the color depth field. This is currently unsupported by Pillow.
-
-from . import Image, ImageFile
-from ._binary import i32be as i32
-
-
-def _accept(prefix):
- return len(prefix) >= 8 and i32(prefix, 0) >= 20 and i32(prefix, 4) in (1, 2)
-
-
-##
-# Image plugin for the GIMP brush format.
-
-
-class GbrImageFile(ImageFile.ImageFile):
- format = "GBR"
- format_description = "GIMP brush file"
-
- def _open(self):
- header_size = i32(self.fp.read(4))
- if header_size < 20:
- msg = "not a GIMP brush"
- raise SyntaxError(msg)
- version = i32(self.fp.read(4))
- if version not in (1, 2):
- msg = f"Unsupported GIMP brush version: {version}"
- raise SyntaxError(msg)
-
- width = i32(self.fp.read(4))
- height = i32(self.fp.read(4))
- color_depth = i32(self.fp.read(4))
- if width <= 0 or height <= 0:
- msg = "not a GIMP brush"
- raise SyntaxError(msg)
- if color_depth not in (1, 4):
- msg = f"Unsupported GIMP brush color depth: {color_depth}"
- raise SyntaxError(msg)
-
- if version == 1:
- comment_length = header_size - 20
- else:
- comment_length = header_size - 28
- magic_number = self.fp.read(4)
- if magic_number != b"GIMP":
- msg = "not a GIMP brush, bad magic number"
- raise SyntaxError(msg)
- self.info["spacing"] = i32(self.fp.read(4))
-
- comment = self.fp.read(comment_length)[:-1]
-
- if color_depth == 1:
- self.mode = "L"
- else:
- self.mode = "RGBA"
-
- self._size = width, height
-
- self.info["comment"] = comment
-
- # Image might not be small
- Image._decompression_bomb_check(self.size)
-
- # Data is an uncompressed block of w * h * bytes/pixel
- self._data_size = width * height * color_depth
-
- def load(self):
- if not self.im:
- self.im = Image.core.new(self.mode, self.size)
- self.frombytes(self.fp.read(self._data_size))
- return Image.Image.load(self)
-
-
-#
-# registry
-
-
-Image.register_open(GbrImageFile.format, GbrImageFile, _accept)
-Image.register_extension(GbrImageFile.format, ".gbr")
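
Because the plugin registers itself with Pillow, a GIMP brush opens like any other image once Pillow is installed. A minimal sketch, assuming a `.gbr` file exists at the placeholder path:

```python
from PIL import Image

with Image.open("brush.gbr") as im:  # placeholder path to a GIMP brush file
    print(im.format)                 # "GBR"
    print(im.mode, im.size)          # "L" for greyscale brushes, "RGBA" for colored ones
    print(im.info.get("spacing"), im.info.get("comment"))
    im.load()                        # reads the uncompressed w * h * bytes/pixel block
```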
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/charset_normalizer/cd.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/charset_normalizer/cd.py
deleted file mode 100644
index 6e56fe84a9e0e63b918141bc27d708b2d915563f..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/charset_normalizer/cd.py
+++ /dev/null
@@ -1,390 +0,0 @@
-import importlib
-from codecs import IncrementalDecoder
-from collections import Counter
-from functools import lru_cache
-from typing import Counter as TypeCounter, Dict, List, Optional, Tuple
-
-from .assets import FREQUENCIES
-from .constant import KO_NAMES, LANGUAGE_SUPPORTED_COUNT, TOO_SMALL_SEQUENCE, ZH_NAMES
-from .md import is_suspiciously_successive_range
-from .models import CoherenceMatches
-from .utils import (
- is_accentuated,
- is_latin,
- is_multi_byte_encoding,
- is_unicode_range_secondary,
- unicode_range,
-)
-
-
-def encoding_unicode_range(iana_name: str) -> List[str]:
- """
- Return associated unicode ranges in a single byte code page.
- """
- if is_multi_byte_encoding(iana_name):
- raise IOError("Function not supported on multi-byte code page")
-
- decoder = importlib.import_module(
- "encodings.{}".format(iana_name)
- ).IncrementalDecoder
-
- p: IncrementalDecoder = decoder(errors="ignore")
- seen_ranges: Dict[str, int] = {}
- character_count: int = 0
-
- for i in range(0x40, 0xFF):
- chunk: str = p.decode(bytes([i]))
-
- if chunk:
- character_range: Optional[str] = unicode_range(chunk)
-
- if character_range is None:
- continue
-
- if is_unicode_range_secondary(character_range) is False:
- if character_range not in seen_ranges:
- seen_ranges[character_range] = 0
- seen_ranges[character_range] += 1
- character_count += 1
-
- return sorted(
- [
- character_range
- for character_range in seen_ranges
- if seen_ranges[character_range] / character_count >= 0.15
- ]
- )
-
-
-def unicode_range_languages(primary_range: str) -> List[str]:
- """
- Return inferred languages used with a unicode range.
- """
- languages: List[str] = []
-
- for language, characters in FREQUENCIES.items():
- for character in characters:
- if unicode_range(character) == primary_range:
- languages.append(language)
- break
-
- return languages
-
-
-@lru_cache()
-def encoding_languages(iana_name: str) -> List[str]:
- """
-    Single-byte encoding language association. Some code pages are heavily linked to particular language(s).
- This function does the correspondence.
- """
- unicode_ranges: List[str] = encoding_unicode_range(iana_name)
- primary_range: Optional[str] = None
-
- for specified_range in unicode_ranges:
- if "Latin" not in specified_range:
- primary_range = specified_range
- break
-
- if primary_range is None:
- return ["Latin Based"]
-
- return unicode_range_languages(primary_range)
-
-
-@lru_cache()
-def mb_encoding_languages(iana_name: str) -> List[str]:
- """
-    Multi-byte encoding language association. Some code pages are heavily linked to particular language(s).
- This function does the correspondence.
- """
- if (
- iana_name.startswith("shift_")
- or iana_name.startswith("iso2022_jp")
- or iana_name.startswith("euc_j")
- or iana_name == "cp932"
- ):
- return ["Japanese"]
- if iana_name.startswith("gb") or iana_name in ZH_NAMES:
- return ["Chinese"]
- if iana_name.startswith("iso2022_kr") or iana_name in KO_NAMES:
- return ["Korean"]
-
- return []
-
-
-@lru_cache(maxsize=LANGUAGE_SUPPORTED_COUNT)
-def get_target_features(language: str) -> Tuple[bool, bool]:
- """
- Determine main aspects from a supported language if it contains accents and if is pure Latin.
- """
- target_have_accents: bool = False
- target_pure_latin: bool = True
-
- for character in FREQUENCIES[language]:
- if not target_have_accents and is_accentuated(character):
- target_have_accents = True
- if target_pure_latin and is_latin(character) is False:
- target_pure_latin = False
-
- return target_have_accents, target_pure_latin
-
-
-def alphabet_languages(
- characters: List[str], ignore_non_latin: bool = False
-) -> List[str]:
- """
-    Return the languages associated with the given characters.
- """
- languages: List[Tuple[str, float]] = []
-
- source_have_accents = any(is_accentuated(character) for character in characters)
-
- for language, language_characters in FREQUENCIES.items():
- target_have_accents, target_pure_latin = get_target_features(language)
-
- if ignore_non_latin and target_pure_latin is False:
- continue
-
- if target_have_accents is False and source_have_accents:
- continue
-
- character_count: int = len(language_characters)
-
- character_match_count: int = len(
- [c for c in language_characters if c in characters]
- )
-
- ratio: float = character_match_count / character_count
-
- if ratio >= 0.2:
- languages.append((language, ratio))
-
- languages = sorted(languages, key=lambda x: x[1], reverse=True)
-
- return [compatible_language[0] for compatible_language in languages]
-
-
-def characters_popularity_compare(
- language: str, ordered_characters: List[str]
-) -> float:
- """
-    Determine if an ordered character list (by occurrence, from most frequent to rarest) matches a particular language.
-    The result is a ratio between 0. (absolutely no correspondence) and 1. (near perfect fit).
-    Beware that this function is not strict on the match, in order to ease the detection. (Meaning a close match is 1.)
- """
- if language not in FREQUENCIES:
- raise ValueError("{} not available".format(language))
-
- character_approved_count: int = 0
- FREQUENCIES_language_set = set(FREQUENCIES[language])
-
- ordered_characters_count: int = len(ordered_characters)
- target_language_characters_count: int = len(FREQUENCIES[language])
-
- large_alphabet: bool = target_language_characters_count > 26
-
- for character, character_rank in zip(
- ordered_characters, range(0, ordered_characters_count)
- ):
- if character not in FREQUENCIES_language_set:
- continue
-
- character_rank_in_language: int = FREQUENCIES[language].index(character)
- expected_projection_ratio: float = (
- target_language_characters_count / ordered_characters_count
- )
- character_rank_projection: int = int(character_rank * expected_projection_ratio)
-
- if (
- large_alphabet is False
- and abs(character_rank_projection - character_rank_in_language) > 4
- ):
- continue
-
- if (
- large_alphabet is True
- and abs(character_rank_projection - character_rank_in_language)
- < target_language_characters_count / 3
- ):
- character_approved_count += 1
- continue
-
- characters_before_source: List[str] = FREQUENCIES[language][
- 0:character_rank_in_language
- ]
- characters_after_source: List[str] = FREQUENCIES[language][
- character_rank_in_language:
- ]
- characters_before: List[str] = ordered_characters[0:character_rank]
- characters_after: List[str] = ordered_characters[character_rank:]
-
- before_match_count: int = len(
- set(characters_before) & set(characters_before_source)
- )
-
- after_match_count: int = len(
- set(characters_after) & set(characters_after_source)
- )
-
- if len(characters_before_source) == 0 and before_match_count <= 4:
- character_approved_count += 1
- continue
-
- if len(characters_after_source) == 0 and after_match_count <= 4:
- character_approved_count += 1
- continue
-
- if (
- before_match_count / len(characters_before_source) >= 0.4
- or after_match_count / len(characters_after_source) >= 0.4
- ):
- character_approved_count += 1
- continue
-
- return character_approved_count / len(ordered_characters)
-
-
-def alpha_unicode_split(decoded_sequence: str) -> List[str]:
- """
- Given a decoded text sequence, return a list of str. Unicode range / alphabet separation.
-    Ex. a text containing English/Latin with a bit of Hebrew will return two items in the resulting list;
-    one containing the Latin letters and the other the Hebrew ones.
- """
- layers: Dict[str, str] = {}
-
- for character in decoded_sequence:
- if character.isalpha() is False:
- continue
-
- character_range: Optional[str] = unicode_range(character)
-
- if character_range is None:
- continue
-
- layer_target_range: Optional[str] = None
-
- for discovered_range in layers:
- if (
- is_suspiciously_successive_range(discovered_range, character_range)
- is False
- ):
- layer_target_range = discovered_range
- break
-
- if layer_target_range is None:
- layer_target_range = character_range
-
- if layer_target_range not in layers:
- layers[layer_target_range] = character.lower()
- continue
-
- layers[layer_target_range] += character.lower()
-
- return list(layers.values())
-
-
-def merge_coherence_ratios(results: List[CoherenceMatches]) -> CoherenceMatches:
- """
-    This function merges results previously given by the function coherence_ratio.
- The return type is the same as coherence_ratio.
- """
- per_language_ratios: Dict[str, List[float]] = {}
- for result in results:
- for sub_result in result:
- language, ratio = sub_result
- if language not in per_language_ratios:
- per_language_ratios[language] = [ratio]
- continue
- per_language_ratios[language].append(ratio)
-
- merge = [
- (
- language,
- round(
- sum(per_language_ratios[language]) / len(per_language_ratios[language]),
- 4,
- ),
- )
- for language in per_language_ratios
- ]
-
- return sorted(merge, key=lambda x: x[1], reverse=True)
-
-
-def filter_alt_coherence_matches(results: CoherenceMatches) -> CoherenceMatches:
- """
- We shall NOT return "English—" in CoherenceMatches because it is an alternative
- of "English". This function only keeps the best match and remove the em-dash in it.
- """
- index_results: Dict[str, List[float]] = dict()
-
- for result in results:
- language, ratio = result
- no_em_name: str = language.replace("—", "")
-
- if no_em_name not in index_results:
- index_results[no_em_name] = []
-
- index_results[no_em_name].append(ratio)
-
- if any(len(index_results[e]) > 1 for e in index_results):
- filtered_results: CoherenceMatches = []
-
- for language in index_results:
- filtered_results.append((language, max(index_results[language])))
-
- return filtered_results
-
- return results
-
-
-@lru_cache(maxsize=2048)
-def coherence_ratio(
- decoded_sequence: str, threshold: float = 0.1, lg_inclusion: Optional[str] = None
-) -> CoherenceMatches:
- """
- Detect ANY language that can be identified in given sequence. The sequence will be analysed by layers.
- A layer = Character extraction by alphabets/ranges.
- """
-
- results: List[Tuple[str, float]] = []
- ignore_non_latin: bool = False
-
- sufficient_match_count: int = 0
-
- lg_inclusion_list = lg_inclusion.split(",") if lg_inclusion is not None else []
- if "Latin Based" in lg_inclusion_list:
- ignore_non_latin = True
- lg_inclusion_list.remove("Latin Based")
-
- for layer in alpha_unicode_split(decoded_sequence):
- sequence_frequencies: TypeCounter[str] = Counter(layer)
- most_common = sequence_frequencies.most_common()
-
- character_count: int = sum(o for c, o in most_common)
-
- if character_count <= TOO_SMALL_SEQUENCE:
- continue
-
- popular_character_ordered: List[str] = [c for c, o in most_common]
-
- for language in lg_inclusion_list or alphabet_languages(
- popular_character_ordered, ignore_non_latin
- ):
- ratio: float = characters_popularity_compare(
- language, popular_character_ordered
- )
-
- if ratio < threshold:
- continue
- elif ratio >= 0.8:
- sufficient_match_count += 1
-
- results.append((language, round(ratio, 4)))
-
- if sufficient_match_count >= 3:
- break
-
- return sorted(
- filter_alt_coherence_matches(results), key=lambda x: x[1], reverse=True
- )
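
In normal use, the coherence machinery above is reached through charset_normalizer's public `from_bytes` entry point rather than called directly. A minimal sketch, with a placeholder French sample encoded as cp1252:

```python
from charset_normalizer import from_bytes

payload = "Ceci est un texte français avec des accents : é, è, ê, à, ç.".encode("cp1252")
best = from_bytes(payload).best()  # CharsetMatch with the highest combined score

print(best.encoding)  # detected code page, e.g. "cp1252" or a compatible superset
print(best.language)  # most probable language, driven by coherence_ratio above
print(str(best))      # the decoded text
```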
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/merge/__init__.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/merge/__init__.py
deleted file mode 100644
index 10eff133fae5d025f940b962c232a39bd0c23a74..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/merge/__init__.py
+++ /dev/null
@@ -1,211 +0,0 @@
-# Copyright 2013 Google, Inc. All Rights Reserved.
-#
-# Google Author(s): Behdad Esfahbod, Roozbeh Pournader
-
-from fontTools import ttLib
-import fontTools.merge.base
-from fontTools.merge.cmap import (
- computeMegaGlyphOrder,
- computeMegaCmap,
- renameCFFCharStrings,
-)
-from fontTools.merge.layout import layoutPreMerge, layoutPostMerge
-from fontTools.merge.options import Options
-import fontTools.merge.tables
-from fontTools.misc.loggingTools import Timer
-from functools import reduce
-import sys
-import logging
-
-
-log = logging.getLogger("fontTools.merge")
-timer = Timer(logger=logging.getLogger(__name__ + ".timer"), level=logging.INFO)
-
-
-class Merger(object):
- """Font merger.
-
- This class merges multiple files into a single OpenType font, taking into
- account complexities such as OpenType layout (``GSUB``/``GPOS``) tables and
- cross-font metrics (e.g. ``hhea.ascent`` is set to the maximum value across
- all the fonts).
-
- If multiple glyphs map to the same Unicode value, and the glyphs are considered
- sufficiently different (that is, they differ in any of paths, widths, or
- height), then subsequent glyphs are renamed and a lookup in the ``locl``
- feature will be created to disambiguate them. For example, if the arguments
- are an Arabic font and a Latin font and both contain a set of parentheses,
- the Latin glyphs will be renamed to ``parenleft#1`` and ``parenright#1``,
-    and a lookup will be inserted into the ``locl`` feature (creating it if
- necessary) under the ``latn`` script to substitute ``parenleft`` with
- ``parenleft#1`` etc.
-
- Restrictions:
-
- - All fonts must have the same units per em.
- - If duplicate glyph disambiguation takes place as described above then the
- fonts must have a ``GSUB`` table.
-
- Attributes:
- options: Currently unused.
- """
-
- def __init__(self, options=None):
-
- if not options:
- options = Options()
-
- self.options = options
-
- def _openFonts(self, fontfiles):
- fonts = [ttLib.TTFont(fontfile) for fontfile in fontfiles]
- for font, fontfile in zip(fonts, fontfiles):
- font._merger__fontfile = fontfile
- font._merger__name = font["name"].getDebugName(4)
- return fonts
-
- def merge(self, fontfiles):
- """Merges fonts together.
-
- Args:
- fontfiles: A list of file names to be merged
-
- Returns:
- A :class:`fontTools.ttLib.TTFont` object. Call the ``save`` method on
- this to write it out to an OTF file.
- """
- #
- # Settle on a mega glyph order.
- #
- fonts = self._openFonts(fontfiles)
- glyphOrders = [list(font.getGlyphOrder()) for font in fonts]
- computeMegaGlyphOrder(self, glyphOrders)
-
- # Take first input file sfntVersion
- sfntVersion = fonts[0].sfntVersion
-
- # Reload fonts and set new glyph names on them.
- fonts = self._openFonts(fontfiles)
- for font, glyphOrder in zip(fonts, glyphOrders):
- font.setGlyphOrder(glyphOrder)
- if "CFF " in font:
- renameCFFCharStrings(self, glyphOrder, font["CFF "])
-
- cmaps = [font["cmap"] for font in fonts]
- self.duplicateGlyphsPerFont = [{} for _ in fonts]
- computeMegaCmap(self, cmaps)
-
- mega = ttLib.TTFont(sfntVersion=sfntVersion)
- mega.setGlyphOrder(self.glyphOrder)
-
- for font in fonts:
- self._preMerge(font)
-
- self.fonts = fonts
-
- allTags = reduce(set.union, (list(font.keys()) for font in fonts), set())
- allTags.remove("GlyphOrder")
-
- for tag in sorted(allTags):
- if tag in self.options.drop_tables:
- continue
-
- with timer("merge '%s'" % tag):
- tables = [font.get(tag, NotImplemented) for font in fonts]
-
- log.info("Merging '%s'.", tag)
- clazz = ttLib.getTableClass(tag)
- table = clazz(tag).merge(self, tables)
- # XXX Clean this up and use: table = mergeObjects(tables)
-
- if table is not NotImplemented and table is not False:
- mega[tag] = table
- log.info("Merged '%s'.", tag)
- else:
- log.info("Dropped '%s'.", tag)
-
- del self.duplicateGlyphsPerFont
- del self.fonts
-
- self._postMerge(mega)
-
- return mega
-
- def mergeObjects(self, returnTable, logic, tables):
- # Right now we don't use self at all. Will use in the future
- # for options and logging.
-
- allKeys = set.union(
- set(),
- *(vars(table).keys() for table in tables if table is not NotImplemented),
- )
- for key in allKeys:
- try:
- mergeLogic = logic[key]
- except KeyError:
- try:
- mergeLogic = logic["*"]
- except KeyError:
- raise Exception(
- "Don't know how to merge key %s of class %s"
- % (key, returnTable.__class__.__name__)
- )
- if mergeLogic is NotImplemented:
- continue
- value = mergeLogic(getattr(table, key, NotImplemented) for table in tables)
- if value is not NotImplemented:
- setattr(returnTable, key, value)
-
- return returnTable
-
- def _preMerge(self, font):
- layoutPreMerge(font)
-
- def _postMerge(self, font):
- layoutPostMerge(font)
-
- if "OS/2" in font:
- # https://github.com/fonttools/fonttools/issues/2538
- # TODO: Add an option to disable this?
- font["OS/2"].recalcAvgCharWidth(font)
-
-
-__all__ = ["Options", "Merger", "main"]
-
-
-@timer("make one with everything (TOTAL TIME)")
-def main(args=None):
- """Merge multiple fonts into one"""
- from fontTools import configLogger
-
- if args is None:
- args = sys.argv[1:]
-
- options = Options()
- args = options.parse_opts(args, ignore_unknown=["output-file"])
- outfile = "merged.ttf"
- fontfiles = []
- for g in args:
- if g.startswith("--output-file="):
- outfile = g[14:]
- continue
- fontfiles.append(g)
-
- if len(args) < 1:
- print("usage: pyftmerge font...", file=sys.stderr)
- return 1
-
- configLogger(level=logging.INFO if options.verbose else logging.WARNING)
- if options.timing:
- timer.logger.setLevel(logging.DEBUG)
- else:
- timer.logger.disabled = True
-
- merger = Merger(options=options)
- font = merger.merge(fontfiles)
- with timer("compile and save font"):
- font.save(outfile)
-
-
-if __name__ == "__main__":
- sys.exit(main())
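
Programmatic use of the merger mirrors what `main()` above does for the `pyftmerge` command line. A minimal sketch with placeholder font paths; both inputs must share the same units per em:

```python
from fontTools.merge import Merger

merger = Merger()                                            # default Options()
merged = merger.merge(["LatinFont.ttf", "ArabicFont.ttf"])   # placeholder file names
merged.save("merged.ttf")                                    # TTFont.save, as in main() above
```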
diff --git a/spaces/johnslegers/stable-diffusion-gui-test/ldmlib/models/diffusion/ddim.py b/spaces/johnslegers/stable-diffusion-gui-test/ldmlib/models/diffusion/ddim.py
deleted file mode 100644
index 844cb10346f94b03859b263ae601bd181b24bbe1..0000000000000000000000000000000000000000
--- a/spaces/johnslegers/stable-diffusion-gui-test/ldmlib/models/diffusion/ddim.py
+++ /dev/null
@@ -1,241 +0,0 @@
-"""SAMPLING ONLY."""
-
-import torch
-import numpy as np
-from tqdm import tqdm
-from functools import partial
-
-from ldmlib.modules.diffusionmodules.util import make_ddim_sampling_parameters, make_ddim_timesteps, noise_like, \
- extract_into_tensor
-
-
-class DDIMSampler(object):
- def __init__(self, model, schedule="linear", **kwargs):
- super().__init__()
- self.model = model
- self.ddpm_num_timesteps = model.num_timesteps
- self.schedule = schedule
-
- def register_buffer(self, name, attr):
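-        # Not an nn.Module buffer: this simply moves tensors to CUDA (if needed) and stores them as plain attributes.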
- if type(attr) == torch.Tensor:
- if attr.device != torch.device("cuda"):
- attr = attr.to(torch.device("cuda"))
- setattr(self, name, attr)
-
- def make_schedule(self, ddim_num_steps, ddim_discretize="uniform", ddim_eta=0., verbose=True):
- self.ddim_timesteps = make_ddim_timesteps(ddim_discr_method=ddim_discretize, num_ddim_timesteps=ddim_num_steps,
- num_ddpm_timesteps=self.ddpm_num_timesteps,verbose=verbose)
- alphas_cumprod = self.model.alphas_cumprod
- assert alphas_cumprod.shape[0] == self.ddpm_num_timesteps, 'alphas have to be defined for each timestep'
- to_torch = lambda x: x.clone().detach().to(torch.float32).to(self.model.device)
-
- self.register_buffer('betas', to_torch(self.model.betas))
- self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod))
- self.register_buffer('alphas_cumprod_prev', to_torch(self.model.alphas_cumprod_prev))
-
- # calculations for diffusion q(x_t | x_{t-1}) and others
- self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod.cpu())))
- self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod.cpu())))
- self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod.cpu())))
- self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu())))
- self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu() - 1)))
-
- # ddim sampling parameters
- ddim_sigmas, ddim_alphas, ddim_alphas_prev = make_ddim_sampling_parameters(alphacums=alphas_cumprod.cpu(),
- ddim_timesteps=self.ddim_timesteps,
- eta=ddim_eta,verbose=verbose)
- self.register_buffer('ddim_sigmas', ddim_sigmas)
- self.register_buffer('ddim_alphas', ddim_alphas)
- self.register_buffer('ddim_alphas_prev', ddim_alphas_prev)
- self.register_buffer('ddim_sqrt_one_minus_alphas', np.sqrt(1. - ddim_alphas))
- sigmas_for_original_sampling_steps = ddim_eta * torch.sqrt(
- (1 - self.alphas_cumprod_prev) / (1 - self.alphas_cumprod) * (
- 1 - self.alphas_cumprod / self.alphas_cumprod_prev))
- self.register_buffer('ddim_sigmas_for_original_num_steps', sigmas_for_original_sampling_steps)
-
- @torch.no_grad()
- def sample(self,
- S,
- batch_size,
- shape,
- conditioning=None,
- callback=None,
- normals_sequence=None,
- img_callback=None,
- quantize_x0=False,
- eta=0.,
- mask=None,
- x0=None,
- temperature=1.,
- noise_dropout=0.,
- score_corrector=None,
- corrector_kwargs=None,
- verbose=True,
- x_T=None,
- log_every_t=100,
- unconditional_guidance_scale=1.,
- unconditional_conditioning=None,
- # this has to come in the same format as the conditioning, # e.g. as encoded tokens, ...
- **kwargs
- ):
- if conditioning is not None:
- if isinstance(conditioning, dict):
- cbs = conditioning[list(conditioning.keys())[0]].shape[0]
- if cbs != batch_size:
- print(f"Warning: Got {cbs} conditionings but batch-size is {batch_size}")
- else:
- if conditioning.shape[0] != batch_size:
- print(f"Warning: Got {conditioning.shape[0]} conditionings but batch-size is {batch_size}")
-
- self.make_schedule(ddim_num_steps=S, ddim_eta=eta, verbose=verbose)
- # sampling
- C, H, W = shape
- size = (batch_size, C, H, W)
- print(f'Data shape for DDIM sampling is {size}, eta {eta}')
-
- samples, intermediates = self.ddim_sampling(conditioning, size,
- callback=callback,
- img_callback=img_callback,
- quantize_denoised=quantize_x0,
- mask=mask, x0=x0,
- ddim_use_original_steps=False,
- noise_dropout=noise_dropout,
- temperature=temperature,
- score_corrector=score_corrector,
- corrector_kwargs=corrector_kwargs,
- x_T=x_T,
- log_every_t=log_every_t,
- unconditional_guidance_scale=unconditional_guidance_scale,
- unconditional_conditioning=unconditional_conditioning,
- )
- return samples, intermediates
-
- @torch.no_grad()
- def ddim_sampling(self, cond, shape,
- x_T=None, ddim_use_original_steps=False,
- callback=None, timesteps=None, quantize_denoised=False,
- mask=None, x0=None, img_callback=None, log_every_t=100,
- temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None,
- unconditional_guidance_scale=1., unconditional_conditioning=None,):
- device = self.model.betas.device
- b = shape[0]
- if x_T is None:
- img = torch.randn(shape, device=device)
- else:
- img = x_T
-
- if timesteps is None:
- timesteps = self.ddpm_num_timesteps if ddim_use_original_steps else self.ddim_timesteps
- elif timesteps is not None and not ddim_use_original_steps:
- subset_end = int(min(timesteps / self.ddim_timesteps.shape[0], 1) * self.ddim_timesteps.shape[0]) - 1
- timesteps = self.ddim_timesteps[:subset_end]
-
- intermediates = {'x_inter': [img], 'pred_x0': [img]}
- time_range = reversed(range(0,timesteps)) if ddim_use_original_steps else np.flip(timesteps)
- total_steps = timesteps if ddim_use_original_steps else timesteps.shape[0]
- print(f"Running DDIM Sampling with {total_steps} timesteps")
-
- iterator = tqdm(time_range, desc='DDIM Sampler', total=total_steps)
-
- for i, step in enumerate(iterator):
- index = total_steps - i - 1
- ts = torch.full((b,), step, device=device, dtype=torch.long)
-
- if mask is not None:
- assert x0 is not None
- img_orig = self.model.q_sample(x0, ts) # TODO: deterministic forward pass?
- img = img_orig * mask + (1. - mask) * img
-
- outs = self.p_sample_ddim(img, cond, ts, index=index, use_original_steps=ddim_use_original_steps,
- quantize_denoised=quantize_denoised, temperature=temperature,
- noise_dropout=noise_dropout, score_corrector=score_corrector,
- corrector_kwargs=corrector_kwargs,
- unconditional_guidance_scale=unconditional_guidance_scale,
- unconditional_conditioning=unconditional_conditioning)
- img, pred_x0 = outs
- if callback: callback(i)
- if img_callback: img_callback(pred_x0, i)
-
- if index % log_every_t == 0 or index == total_steps - 1:
- intermediates['x_inter'].append(img)
- intermediates['pred_x0'].append(pred_x0)
-
- return img, intermediates
-
- @torch.no_grad()
- def p_sample_ddim(self, x, c, t, index, repeat_noise=False, use_original_steps=False, quantize_denoised=False,
- temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None,
- unconditional_guidance_scale=1., unconditional_conditioning=None):
- b, *_, device = *x.shape, x.device
-
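-        # Classifier-free guidance: predict noise for the unconditional and conditional inputs in one
-        # batched pass, then push the estimate away from the unconditional prediction by the guidance scale.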
- if unconditional_conditioning is None or unconditional_guidance_scale == 1.:
- e_t = self.model.apply_model(x, t, c)
- else:
- x_in = torch.cat([x] * 2)
- t_in = torch.cat([t] * 2)
- c_in = torch.cat([unconditional_conditioning, c])
- e_t_uncond, e_t = self.model.apply_model(x_in, t_in, c_in).chunk(2)
- e_t = e_t_uncond + unconditional_guidance_scale * (e_t - e_t_uncond)
-
- if score_corrector is not None:
- assert self.model.parameterization == "eps"
- e_t = score_corrector.modify_score(self.model, e_t, x, t, c, **corrector_kwargs)
-
- alphas = self.model.alphas_cumprod if use_original_steps else self.ddim_alphas
- alphas_prev = self.model.alphas_cumprod_prev if use_original_steps else self.ddim_alphas_prev
- sqrt_one_minus_alphas = self.model.sqrt_one_minus_alphas_cumprod if use_original_steps else self.ddim_sqrt_one_minus_alphas
- sigmas = self.model.ddim_sigmas_for_original_num_steps if use_original_steps else self.ddim_sigmas
- # select parameters corresponding to the currently considered timestep
- a_t = torch.full((b, 1, 1, 1), alphas[index], device=device)
- a_prev = torch.full((b, 1, 1, 1), alphas_prev[index], device=device)
- sigma_t = torch.full((b, 1, 1, 1), sigmas[index], device=device)
- sqrt_one_minus_at = torch.full((b, 1, 1, 1), sqrt_one_minus_alphas[index],device=device)
-
- # current prediction for x_0
- pred_x0 = (x - sqrt_one_minus_at * e_t) / a_t.sqrt()
- if quantize_denoised:
- pred_x0, _, *_ = self.model.first_stage_model.quantize(pred_x0)
- # direction pointing to x_t
- dir_xt = (1. - a_prev - sigma_t**2).sqrt() * e_t
- noise = sigma_t * noise_like(x.shape, device, repeat_noise) * temperature
- if noise_dropout > 0.:
- noise = torch.nn.functional.dropout(noise, p=noise_dropout)
- x_prev = a_prev.sqrt() * pred_x0 + dir_xt + noise
- return x_prev, pred_x0
-
- @torch.no_grad()
- def stochastic_encode(self, x0, t, use_original_steps=False, noise=None):
- # fast, but does not allow for exact reconstruction
- # t serves as an index to gather the correct alphas
- if use_original_steps:
- sqrt_alphas_cumprod = self.sqrt_alphas_cumprod
- sqrt_one_minus_alphas_cumprod = self.sqrt_one_minus_alphas_cumprod
- else:
- sqrt_alphas_cumprod = torch.sqrt(self.ddim_alphas)
- sqrt_one_minus_alphas_cumprod = self.ddim_sqrt_one_minus_alphas
-
- if noise is None:
- noise = torch.randn_like(x0)
- return (extract_into_tensor(sqrt_alphas_cumprod, t, x0.shape) * x0 +
- extract_into_tensor(sqrt_one_minus_alphas_cumprod, t, x0.shape) * noise)
-
- @torch.no_grad()
- def decode(self, x_latent, cond, t_start, unconditional_guidance_scale=1.0, unconditional_conditioning=None,
- use_original_steps=False):
-
- timesteps = np.arange(self.ddpm_num_timesteps) if use_original_steps else self.ddim_timesteps
- timesteps = timesteps[:t_start]
-
- time_range = np.flip(timesteps)
- total_steps = timesteps.shape[0]
- print(f"Running DDIM Sampling with {total_steps} timesteps")
-
- iterator = tqdm(time_range, desc='Decoding image', total=total_steps)
- x_dec = x_latent
- for i, step in enumerate(iterator):
- index = total_steps - i - 1
- ts = torch.full((x_latent.shape[0],), step, device=x_latent.device, dtype=torch.long)
- x_dec, _ = self.p_sample_ddim(x_dec, cond, ts, index=index, use_original_steps=use_original_steps,
- unconditional_guidance_scale=unconditional_guidance_scale,
- unconditional_conditioning=unconditional_conditioning)
- return x_dec
diff --git a/spaces/johnson906/recipedia/src/utils/metrics.py b/spaces/johnson906/recipedia/src/utils/metrics.py
deleted file mode 100644
index 7f16675b38b6960940b9f507e321464005ac83e1..0000000000000000000000000000000000000000
--- a/spaces/johnson906/recipedia/src/utils/metrics.py
+++ /dev/null
@@ -1,78 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
-
-import sys
-import time
-import math
-import numpy as np
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from torch.nn.modules.loss import _WeightedLoss
-device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
-map_loc = None if torch.cuda.is_available() else 'cpu'
-
-
-class MaskedCrossEntropyCriterion(_WeightedLoss):
-
- def __init__(self, ignore_index=[-100], reduce=None):
- super(MaskedCrossEntropyCriterion, self).__init__()
- self.padding_idx = ignore_index
- self.reduce = reduce
-
- def forward(self, outputs, targets):
- lprobs = nn.functional.log_softmax(outputs, dim=-1)
- lprobs = lprobs.view(-1, lprobs.size(-1))
-
- for idx in self.padding_idx:
- # remove padding idx from targets to allow gathering without error (padded entries will be suppressed later)
- targets[targets == idx] = 0
-
- nll_loss = -lprobs.gather(dim=-1, index=targets.unsqueeze(1))
- if self.reduce:
- nll_loss = nll_loss.sum()
-
- return nll_loss.squeeze()
-
-
-def softIoU(out, target, e=1e-6, sum_axis=1):
-
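-    # Soft Jaccard index: intersection over union computed on continuous predictions, stabilized by e.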
- num = (out*target).sum(sum_axis, True)
- den = (out+target-out*target).sum(sum_axis, True) + e
- iou = num / den
-
- return iou
-
-
-def update_error_types(error_types, y_pred, y_true):
-
- error_types['tp_i'] += (y_pred * y_true).sum(0).cpu().data.numpy()
- error_types['fp_i'] += (y_pred * (1-y_true)).sum(0).cpu().data.numpy()
- error_types['fn_i'] += ((1-y_pred) * y_true).sum(0).cpu().data.numpy()
- error_types['tn_i'] += ((1-y_pred) * (1-y_true)).sum(0).cpu().data.numpy()
-
- error_types['tp_all'] += (y_pred * y_true).sum().item()
- error_types['fp_all'] += (y_pred * (1-y_true)).sum().item()
- error_types['fn_all'] += ((1-y_pred) * y_true).sum().item()
-
-
-def compute_metrics(ret_metrics, error_types, metric_names, eps=1e-10, weights=None):
-
- if 'accuracy' in metric_names:
- ret_metrics['accuracy'].append(np.mean((error_types['tp_i'] + error_types['tn_i']) / (error_types['tp_i'] + error_types['fp_i'] + error_types['fn_i'] + error_types['tn_i'])))
- if 'jaccard' in metric_names:
- ret_metrics['jaccard'].append(error_types['tp_all'] / (error_types['tp_all'] + error_types['fp_all'] + error_types['fn_all'] + eps))
- if 'dice' in metric_names:
- ret_metrics['dice'].append(2*error_types['tp_all'] / (2*(error_types['tp_all'] + error_types['fp_all'] + error_types['fn_all']) + eps))
- if 'f1' in metric_names:
- pre = error_types['tp_i'] / (error_types['tp_i'] + error_types['fp_i'] + eps)
- rec = error_types['tp_i'] / (error_types['tp_i'] + error_types['fn_i'] + eps)
- f1_perclass = 2*(pre * rec) / (pre + rec + eps)
- if 'f1_ingredients' not in ret_metrics.keys():
- ret_metrics['f1_ingredients'] = [np.average(f1_perclass, weights=weights)]
- else:
- ret_metrics['f1_ingredients'].append(np.average(f1_perclass, weights=weights))
-
- pre = error_types['tp_all'] / (error_types['tp_all'] + error_types['fp_all'] + eps)
- rec = error_types['tp_all'] / (error_types['tp_all'] + error_types['fn_all'] + eps)
- f1 = 2*(pre * rec) / (pre + rec + eps)
- ret_metrics['f1'].append(f1)
diff --git a/spaces/jonatanklosko/chai/priv/hero_icons/LICENSE.md b/spaces/jonatanklosko/chai/priv/hero_icons/LICENSE.md
deleted file mode 100644
index 1ac3e409b71e2f568457d2c0ae4a1cbc8eeaea68..0000000000000000000000000000000000000000
--- a/spaces/jonatanklosko/chai/priv/hero_icons/LICENSE.md
+++ /dev/null
@@ -1,21 +0,0 @@
-MIT License
-
-Copyright (c) 2020 Refactoring UI Inc.
-
-Permission is hereby granted, free of charge, to any person obtaining a copy
-of this software and associated documentation files (the "Software"), to deal
-in the Software without restriction, including without limitation the rights
-to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-copies of the Software, and to permit persons to whom the Software is
-furnished to do so, subject to the following conditions:
-
-The above copyright notice and this permission notice shall be included in all
-copies or substantial portions of the Software.
-
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-SOFTWARE.
\ No newline at end of file
diff --git a/spaces/jordonpeter01/MusicGen/setup.py b/spaces/jordonpeter01/MusicGen/setup.py
deleted file mode 100644
index 78a172b7c90003b689bde40b49cc8fe1fb8107d4..0000000000000000000000000000000000000000
--- a/spaces/jordonpeter01/MusicGen/setup.py
+++ /dev/null
@@ -1,65 +0,0 @@
-"""
- Copyright (c) Meta Platforms, Inc. and affiliates.
- All rights reserved.
-
- This source code is licensed under the license found in the
- LICENSE file in the root directory of this source tree.
-
-"""
-
-from pathlib import Path
-
-from setuptools import setup, find_packages
-
-
-NAME = 'audiocraft'
-DESCRIPTION = 'Audio research library for PyTorch'
-
-URL = 'https://github.com/fairinternal/audiocraft'
-AUTHOR = 'FAIR Speech & Audio'
-EMAIL = 'defossez@meta.com'
-REQUIRES_PYTHON = '>=3.8.0'
-
-for line in open('audiocraft/__init__.py'):
- line = line.strip()
- if '__version__' in line:
- context = {}
- exec(line, context)
- VERSION = context['__version__']
-
-HERE = Path(__file__).parent
-
-try:
- with open(HERE / "README.md", encoding='utf-8') as f:
- long_description = '\n' + f.read()
-except FileNotFoundError:
- long_description = DESCRIPTION
-
-REQUIRED = [i.strip() for i in open(HERE / 'requirements.txt') if not i.startswith('#')]
-
-setup(
- name=NAME,
- version=VERSION,
- description=DESCRIPTION,
- author_email=EMAIL,
- long_description=long_description,
- long_description_content_type='text/markdown',
- author=AUTHOR,
- url=URL,
- python_requires=REQUIRES_PYTHON,
- install_requires=REQUIRED,
- extras_require={
- 'dev': ['coverage', 'flake8', 'mypy', 'pdoc3', 'pytest'],
- },
- packages=find_packages(),
- package_data={'audiocraft': ['py.typed']},
- include_package_data=True,
- license='MIT License',
- classifiers=[
- # Trove classifiers
- # Full list: https://pypi.python.org/pypi?%3Aaction=list_classifiers
- 'License :: OSI Approved :: MIT License',
- 'Topic :: Multimedia :: Sound/Audio',
- 'Topic :: Scientific/Engineering :: Artificial Intelligence',
- ],
-)
diff --git a/spaces/jskalbg/ChatDev01/chatdev/documents.py b/spaces/jskalbg/ChatDev01/chatdev/documents.py
deleted file mode 100644
index e37cd21a82fe8a6d92a2b0fd743182310ae2d0ab..0000000000000000000000000000000000000000
--- a/spaces/jskalbg/ChatDev01/chatdev/documents.py
+++ /dev/null
@@ -1,47 +0,0 @@
-import re
-import os
-import time
-from colorama import Fore
-
-
-class Documents():
- def __init__(self, generated_content = "", parse = True, predifined_filename = None):
- self.directory: str = None
- self.generated_content = generated_content
- self.docbooks = {}
-
- if generated_content != "":
- if parse:
- regex = r"```\n(.*?)```"
- matches = re.finditer(regex, self.generated_content, re.DOTALL)
- for match in matches:
- filename = "requirements.txt"
- doc = match.group(1)
- self.docbooks[filename] = doc
- else:
-                self.docbooks[predefined_filename] = self.generated_content
-
-    def _update_docs(self, generated_content, parse=True, predefined_filename=""):
-        new_docs = Documents(generated_content, parse, predefined_filename)
- for key in new_docs.docbooks.keys():
- if key not in self.docbooks.keys() or self.docbooks[key] != new_docs.docbooks[key]:
- print("{} updated.".format(key))
- print(Fore.WHITE + "------Old:\n{}\n------New:\n{}".format(self.docbooks[key] if key in self.docbooks.keys() else "# None", new_docs.docbooks[key]))
- self.docbooks[key] = new_docs.docbooks[key]
-
-
- def _rewrite_docs(self):
- directory = self.directory
- if not os.path.exists(directory):
- os.mkdir(directory)
- print("{} Created.".format(directory))
- for filename in self.docbooks.keys():
- with open(os.path.join(directory, filename), "w", encoding="utf-8") as writer:
- writer.write(self.docbooks[filename])
-                print(os.path.join(directory, filename), "Written")
-
- def _get_docs(self):
- content = ""
- for filename in self.docbooks.keys():
- content += "{}\n```\n{}\n```\n\n".format(filename, self.docbooks[filename])
- return content
diff --git a/spaces/kadirnar/Anime4k/utils.py b/spaces/kadirnar/Anime4k/utils.py
deleted file mode 100644
index 3dfd97dda0161c8af821830b99e9bc3e99f84633..0000000000000000000000000000000000000000
--- a/spaces/kadirnar/Anime4k/utils.py
+++ /dev/null
@@ -1,140 +0,0 @@
-import pathlib
-import ffmpeg
-import tempfile
-from pyanime4k.ac import AC, Parameters, ProcessorType, Codec
-from yt_dlp import YoutubeDL
-
-def url_download(url):
- outtmpl = url[-5:] + '.mp4'
- ydl_opts = {'format': 'bestvideo[ext=mp4]+bestaudio[ext=mp4]/mp4+best[height<=144]', 'outtmpl': outtmpl}
- with YoutubeDL(ydl_opts) as ydl:
- ydl.extract_info(url, download=True)
-
- return outtmpl
-
-def _sanitize_input_paths(input_paths):
- """ sanitize input file paths
-
- Args:
- input_paths (any): input paths variable to sanitize
- """
- sanitized_list = []
-
- # if input is single file in string format
- # convert it into pathlib.Path object
- if isinstance(input_paths, str):
- sanitized_list.append(pathlib.Path(input_paths))
-
- # if the input is single file instead of a list
- # convert it into a list
- elif isinstance(input_paths, pathlib.Path):
- sanitized_list.append(input_paths)
-
- # if the input is already a list
- # make sure all elements are path objects
- elif isinstance(input_paths, list):
- for path in input_paths:
-
- # if the path is not a pathlib.Path object
- # convert it into an object
- if not isinstance(path, pathlib.Path):
- sanitized_list.append(pathlib.Path(path))
-
- # otherwise, the path is clean
- else:
- sanitized_list.append(path)
-
-    # return the sanitized list
- return sanitized_list
-
-
-
-def migrate_audio_streams(upscaled_video: str, original_video: str, output_path: str):
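-    # Mux the upscaled video stream with the original file's audio stream into output_path via ffmpeg stream copy.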
- upscaled_video = pathlib.Path(upscaled_video)
- original_video = pathlib.Path(original_video)
- output_path = pathlib.Path(output_path)
- upscaled_input = ffmpeg.input(str(upscaled_video.absolute()))
- original_input = ffmpeg.input(str(original_video.absolute()))
-
- # find upscaled video stream and original audio stream
- upscaled_video = upscaled_input.video
- original_audio = original_input.audio
- # create output file with selected streams
- output = ffmpeg.output(upscaled_video, original_audio,
- str(output_path.absolute()), c="copy")
-
- ffmpeg.run(output, overwrite_output=True)
-
-
-def upscale_videos(input_paths: list, output_suffix: str = "_output", output_path: pathlib.Path = None, parameters: Parameters = Parameters(), GPU_mode: bool = False, ACNet: bool = True, codec: Codec = Codec.MP4V):
- """ upscale a list of video files with Anime4k
-
- Args:
- input_paths (list): list of input file paths
- output_suffix (str, optional): output files suffix. Defaults to "_output".
- output_path (pathlib.Path, optional): parent directory of output paths. Defaults to None.
- parameters (Parameters, optional): custom arguments passed to Anime4KCPP.
- GPU_mode (bool, optional): enable GPU mode. Defaults to False.
- ACNet (bool, optional): enable ACNet mode. Defaults to True.
-        codec (Codec, optional): codec for video encoding. Defaults to MP4V
-
- Raises:
- FileExistsError: when output path exists and isn't a directory
- ACError
- """
-
- # sanitize input list
- input_paths = _sanitize_input_paths(input_paths)
-
- # if destination path unspecified
- if output_path is None:
-
- # destination path is first input file's parent directory
- output_path = input_paths[0].parent
-
- # if destination path doesn't exist
- if not output_path.exists():
- # create directory and its parents if necessary
- output_path.mkdir(parents=True, exist_ok=True)
-
- # else if it already exists but isn't a directory
- elif not output_path.is_dir():
- raise FileExistsError(
- 'destination path already exists and isn\'t a directory')
-
- # set parameters to video mode
- parameters.videoMode = True
-
- # create anime4k object
- if GPU_mode:
- if ACNet:
- ac_object = AC(False, True, type=ProcessorType.GPUCNN,
- parameters=parameters)
- else:
- ac_object = AC(True, False, type=ProcessorType.GPU,
- parameters=parameters)
- else:
- if ACNet:
- ac_object = AC(False, False, type=ProcessorType.CPUCNN,
- parameters=parameters)
- else:
- ac_object = AC(False, False, type=ProcessorType.CPU,
- parameters=parameters)
-
- # process each of the files in the list
- for path in input_paths:
-
- # create temporary directory to save the upscaled video
- temporary_directory = pathlib.Path(tempfile.mkdtemp())
- temporary_video_file_path = temporary_directory.joinpath('temp.mp4')
- # process and save video file to temp/temp.mp4
-
- ac_object.load_video(str(path))
- ac_object.set_save_video_info(str(temporary_video_file_path), codec)
- ac_object.process_with_progress()
- ac_object.save_video()
- migrate_audio_streams(upscaled_video=temporary_video_file_path,
- original_video=path,
- output_path=(output_path.joinpath(path.stem + output_suffix + path.suffix)))
- return temporary_video_file_path
-
\ No newline at end of file
diff --git a/spaces/kaicheng/ChatGPT_ad/readme/README_ja.md b/spaces/kaicheng/ChatGPT_ad/readme/README_ja.md
deleted file mode 100644
index fc56eec0b81c22ff0a49e3960aa52ffd7d6dc5cb..0000000000000000000000000000000000000000
--- a/spaces/kaicheng/ChatGPT_ad/readme/README_ja.md
+++ /dev/null
@@ -1,126 +0,0 @@
-
-川虎 Chat 🐯 Chuanhu Chat
-
-A lightweight, user-friendly web UI for LLMs such as ChatGPT/ChatGLM/LLaMA
-
- Streaming output / unlimited conversation turns / chat history / preset prompts / chat with files
- Web search / LaTeX rendering / table rendering / code highlighting
- Auto dark mode / adaptive web interface / WeChat-like theme
- Multi-parameter tuning / multiple API keys / multi-user support
- GPT-4 support / local deployment of LLMs
-
-Video tutorial · 2.0 introduction · 3.0 introduction & tutorial || Online trial · One-click deployment
-
-
-## Usage tips
-
-- You can use the system prompt to control ChatGPT more precisely.
-- To use a prompt template, choose a prompt template collection and then pick a specific prompt from the dropdown menu. If the answer is unsatisfactory, retry with the `🔄Regenerate` button.
-- To insert a line break in the input box, press Shift + Enter.
-- To quickly switch between entries in your input history, press the ↑ and ↓ keys in the input box.
-- To deploy the program on a server, change the last line of the program to `demo.launch(server_name="0.0.0.0", server_port=<your port number>)` (see the sketch after this list).
-- To get a shareable link, change the last line of the program to `demo.launch(share=True)`. Note that the program must be running for the public link to be reachable.
-- When using it on Hugging Face Spaces: for faster and safer use, we recommend using **Duplicate Space** and running the program in your own space.
-
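-A minimal sketch of the launch-line change mentioned in the tips above (the port number `7860` is only an example; pick whatever fits your server):
-
-```python
-# Last line of ChuanhuChatbot.py — listen on all interfaces for server deployment
-demo.launch(server_name="0.0.0.0", server_port=7860)
-# Or, to get a temporary public link instead:
-# demo.launch(share=True)
-```
-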
-## Installation
-
-```shell
-git clone https://github.com/GaiZhenbiao/ChuanhuChatGPT.git
-cd ChuanhuChatGPT
-pip install -r requirements.txt
-```
-
-Next, copy `config_example.json`, rename it to `config.json`, and fill in your API key and other settings in that file.
-
-```shell
-python ChuanhuChatbot.py
-```
-
-A browser window will open, and you can start chatting with ChatGPT.
-
-> **Note**
->
-> For detailed instructions, please check the [wiki page](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用教程).
-
-## Troubleshooting
-
-If you run into a problem, it is usually best to first pull in the latest changes of this project manually. The steps are:
-
-1. Click `Download ZIP` on the web page to download the latest code archive, or
- ```shell
- git pull https://github.com/GaiZhenbiao/ChuanhuChatGPT.git main -f
- ```
-2. New dependencies may have been introduced, so try reinstalling the dependencies.
- ```
- pip install -r requirements.txt
- ```
-3. Update Gradio
- ```
- pip install gradio --upgrade --force-reinstall
- ```
-
-In general, the steps above will resolve most problems.
-
-If the problem still isn't solved, please refer to this page: [Frequently Asked Questions (FAQ)](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/常见问题)
-
-That page lists almost every problem you might encounter, along with its solution, so please read it carefully.
-
-## More Information
-
-For more detailed information, please see the [wiki](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki):
-
-- [How to contribute a translation](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/Localization)
-- [How to make a contribution](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/贡献指南)
-- [How to cite the project](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用许可#如何引用该项目)
-- [Project changelog](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/更新日志)
-- [Project license](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用许可)
-
-## Starchart
-
-[](https://star-history.com/#GaiZhenbiao/ChuanhuChatGPT&Date)
-
-## Contributors
-
-
-
-
-
-## Sponsor
-
-🐯 If you find this project helpful, feel free to buy me a Coke or a coffee~
-
-
-
-
diff --git a/spaces/keithhon/Real-Time-Voice-Cloning/synthesizer/utils/numbers.py b/spaces/keithhon/Real-Time-Voice-Cloning/synthesizer/utils/numbers.py
deleted file mode 100644
index 75020a0bd732830f603d7c7d250c9e087033cc24..0000000000000000000000000000000000000000
--- a/spaces/keithhon/Real-Time-Voice-Cloning/synthesizer/utils/numbers.py
+++ /dev/null
@@ -1,68 +0,0 @@
-import re
-import inflect
-
-_inflect = inflect.engine()
-_comma_number_re = re.compile(r"([0-9][0-9\,]+[0-9])")
-_decimal_number_re = re.compile(r"([0-9]+\.[0-9]+)")
-_pounds_re = re.compile(r"£([0-9\,]*[0-9]+)")
-_dollars_re = re.compile(r"\$([0-9\.\,]*[0-9]+)")
-_ordinal_re = re.compile(r"[0-9]+(st|nd|rd|th)")
-_number_re = re.compile(r"[0-9]+")
-
-
-def _remove_commas(m):
- return m.group(1).replace(",", "")
-
-
-def _expand_decimal_point(m):
- return m.group(1).replace(".", " point ")
-
-
-def _expand_dollars(m):
- match = m.group(1)
- parts = match.split(".")
- if len(parts) > 2:
- return match + " dollars" # Unexpected format
- dollars = int(parts[0]) if parts[0] else 0
- cents = int(parts[1]) if len(parts) > 1 and parts[1] else 0
- if dollars and cents:
- dollar_unit = "dollar" if dollars == 1 else "dollars"
- cent_unit = "cent" if cents == 1 else "cents"
- return "%s %s, %s %s" % (dollars, dollar_unit, cents, cent_unit)
- elif dollars:
- dollar_unit = "dollar" if dollars == 1 else "dollars"
- return "%s %s" % (dollars, dollar_unit)
- elif cents:
- cent_unit = "cent" if cents == 1 else "cents"
- return "%s %s" % (cents, cent_unit)
- else:
- return "zero dollars"
-
-
-def _expand_ordinal(m):
- return _inflect.number_to_words(m.group(0))
-
-
-def _expand_number(m):
- num = int(m.group(0))
- if num > 1000 and num < 3000:
- if num == 2000:
- return "two thousand"
- elif num > 2000 and num < 2010:
- return "two thousand " + _inflect.number_to_words(num % 100)
- elif num % 100 == 0:
- return _inflect.number_to_words(num // 100) + " hundred"
- else:
- return _inflect.number_to_words(num, andword="", zero="oh", group=2).replace(", ", " ")
- else:
- return _inflect.number_to_words(num, andword="")
-
-
-def normalize_numbers(text):
- text = re.sub(_comma_number_re, _remove_commas, text)
- text = re.sub(_pounds_re, r"\1 pounds", text)
- text = re.sub(_dollars_re, _expand_dollars, text)
- text = re.sub(_decimal_number_re, _expand_decimal_point, text)
- text = re.sub(_ordinal_re, _expand_ordinal, text)
- text = re.sub(_number_re, _expand_number, text)
- return text
diff --git a/spaces/keras-io/integrated_gradients/app.py b/spaces/keras-io/integrated_gradients/app.py
deleted file mode 100644
index 48cd09982a6e33a1fe3e77debfe9879def86bc9f..0000000000000000000000000000000000000000
--- a/spaces/keras-io/integrated_gradients/app.py
+++ /dev/null
@@ -1,400 +0,0 @@
-import gradio as gr
-import numpy as np
-import matplotlib.pyplot as plt
-from scipy import ndimage
-from IPython.display import Image
-
-import tensorflow as tf
-from tensorflow import keras
-from tensorflow.keras import layers
-from tensorflow.keras.applications import xception
-
-# Size of the input image
-img_size = (299, 299, 3)
-
-# Load Xception model with imagenet weights
-model = xception.Xception(weights="imagenet")
-
-# The local path to our target image
-img_path = keras.utils.get_file("elephant.jpg", "https://i.imgur.com/Bvro0YD.png")
-
-def get_gradients(img_input, top_pred_idx):
- """Computes the gradients of outputs w.r.t input image.
-
- Args:
- img_input: 4D image tensor
- top_pred_idx: Predicted label for the input image
-
- Returns:
- Gradients of the predictions w.r.t img_input
- """
- images = tf.cast(img_input, tf.float32)
-
- with tf.GradientTape() as tape:
- tape.watch(images)
- preds = model(images)
- top_class = preds[:, top_pred_idx]
-
- grads = tape.gradient(top_class, images)
- return grads
-
-
-def get_integrated_gradients(img_input, top_pred_idx, baseline=None, num_steps=50):
- """Computes Integrated Gradients for a predicted label.
-
- Args:
- img_input (ndarray): Original image
- top_pred_idx: Predicted label for the input image
- baseline (ndarray): The baseline image to start with for interpolation
- num_steps: Number of interpolation steps between the baseline
- and the input used in the computation of integrated gradients. These
- steps along determine the integral approximation error. By default,
- num_steps is set to 50.
-
- Returns:
- Integrated gradients w.r.t input image
- """
- # If baseline is not provided, start with a black image
- # having same size as the input image.
- if baseline is None:
- baseline = np.zeros(img_size).astype(np.float32)
- else:
- baseline = baseline.astype(np.float32)
-
- # 1. Do interpolation.
- img_input = img_input.astype(np.float32)
- interpolated_image = [
- baseline + (step / num_steps) * (img_input - baseline)
- for step in range(num_steps + 1)
- ]
- interpolated_image = np.array(interpolated_image).astype(np.float32)
-
- # 2. Preprocess the interpolated images
- interpolated_image = xception.preprocess_input(interpolated_image)
-
- # 3. Get the gradients
- grads = []
- for i, img in enumerate(interpolated_image):
- img = tf.expand_dims(img, axis=0)
- grad = get_gradients(img, top_pred_idx=top_pred_idx)
- grads.append(grad[0])
- grads = tf.convert_to_tensor(grads, dtype=tf.float32)
-
- # 4. Approximate the integral using the trapezoidal rule
- grads = (grads[:-1] + grads[1:]) / 2.0
- avg_grads = tf.reduce_mean(grads, axis=0)
-
- # 5. Calculate integrated gradients and return
- integrated_grads = (img_input - baseline) * avg_grads
- return integrated_grads
-
-
-def random_baseline_integrated_gradients(
- img_input, top_pred_idx, num_steps=50, num_runs=2
-):
- """Generates a number of random baseline images.
-
- Args:
- img_input (ndarray): 3D image
- top_pred_idx: Predicted label for the input image
- num_steps: Number of interpolation steps between the baseline
- and the input used in the computation of integrated gradients. These
-            steps alone determine the integral approximation error. By default,
- num_steps is set to 50.
- num_runs: number of baseline images to generate
-
- Returns:
- Averaged integrated gradients for `num_runs` baseline images
- """
- # 1. List to keep track of Integrated Gradients (IG) for all the images
- integrated_grads = []
-
- # 2. Get the integrated gradients for all the baselines
- for run in range(num_runs):
- baseline = np.random.random(img_size) * 255
- igrads = get_integrated_gradients(
- img_input=img_input,
- top_pred_idx=top_pred_idx,
- baseline=baseline,
- num_steps=num_steps,
- )
- integrated_grads.append(igrads)
-
- # 3. Return the average integrated gradients for the image
- integrated_grads = tf.convert_to_tensor(integrated_grads)
- return tf.reduce_mean(integrated_grads, axis=0)
-
-class GradVisualizer:
- """Plot gradients of the outputs w.r.t an input image."""
-
- def __init__(self, positive_channel=None, negative_channel=None):
- if positive_channel is None:
- self.positive_channel = [0, 255, 0]
- else:
- self.positive_channel = positive_channel
-
- if negative_channel is None:
- self.negative_channel = [255, 0, 0]
- else:
- self.negative_channel = negative_channel
-
- def apply_polarity(self, attributions, polarity):
- if polarity == "positive":
- return np.clip(attributions, 0, 1)
- else:
- return np.clip(attributions, -1, 0)
-
- def apply_linear_transformation(
- self,
- attributions,
- clip_above_percentile=99.9,
- clip_below_percentile=70.0,
- lower_end=0.2,
- ):
- # 1. Get the thresholds
- m = self.get_thresholded_attributions(
- attributions, percentage=100 - clip_above_percentile
- )
- e = self.get_thresholded_attributions(
- attributions, percentage=100 - clip_below_percentile
- )
-
- # 2. Transform the attributions by a linear function f(x) = a*x + b such that
- # f(m) = 1.0 and f(e) = lower_end
- transformed_attributions = (1 - lower_end) * (np.abs(attributions) - e) / (
- m - e
- ) + lower_end
-
- # 3. Make sure that the sign of transformed attributions is the same as original attributions
- transformed_attributions *= np.sign(attributions)
-
- # 4. Only keep values that are bigger than the lower_end
- transformed_attributions *= transformed_attributions >= lower_end
-
- # 5. Clip values and return
- transformed_attributions = np.clip(transformed_attributions, 0.0, 1.0)
- return transformed_attributions
-
- def get_thresholded_attributions(self, attributions, percentage):
- if percentage == 100.0:
- return np.min(attributions)
-
- # 1. Flatten the attributions
- flatten_attr = attributions.flatten()
-
- # 2. Get the sum of the attributions
- total = np.sum(flatten_attr)
-
- # 3. Sort the attributions from largest to smallest.
- sorted_attributions = np.sort(np.abs(flatten_attr))[::-1]
-
- # 4. Calculate the percentage of the total sum that each attribution
- # and the values about it contribute.
- cum_sum = 100.0 * np.cumsum(sorted_attributions) / total
-
- # 5. Threshold the attributions by the percentage
- indices_to_consider = np.where(cum_sum >= percentage)[0][0]
-
- # 6. Select the desired attributions and return
- attributions = sorted_attributions[indices_to_consider]
- return attributions
-
- def binarize(self, attributions, threshold=0.001):
- return attributions > threshold
-
- def morphological_cleanup_fn(self, attributions, structure=np.ones((4, 4))):
- closed = ndimage.grey_closing(attributions, structure=structure)
- opened = ndimage.grey_opening(closed, structure=structure)
- return opened
-
- def draw_outlines(
- self, attributions, percentage=90, connected_component_structure=np.ones((3, 3))
- ):
- # 1. Binarize the attributions.
- attributions = self.binarize(attributions)
-
- # 2. Fill the gaps
- attributions = ndimage.binary_fill_holes(attributions)
-
- # 3. Compute connected components
- connected_components, num_comp = ndimage.measurements.label(
- attributions, structure=connected_component_structure
- )
-
- # 4. Sum up the attributions for each component
- total = np.sum(attributions[connected_components > 0])
- component_sums = []
- for comp in range(1, num_comp + 1):
- mask = connected_components == comp
- component_sum = np.sum(attributions[mask])
- component_sums.append((component_sum, mask))
-
- # 5. Compute the percentage of top components to keep
- sorted_sums_and_masks = sorted(component_sums, key=lambda x: x[0], reverse=True)
- sorted_sums = list(zip(*sorted_sums_and_masks))[0]
- cumulative_sorted_sums = np.cumsum(sorted_sums)
- cutoff_threshold = percentage * total / 100
- cutoff_idx = np.where(cumulative_sorted_sums >= cutoff_threshold)[0][0]
- if cutoff_idx > 2:
- cutoff_idx = 2
-
- # 6. Set the values for the kept components
- border_mask = np.zeros_like(attributions)
- for i in range(cutoff_idx + 1):
- border_mask[sorted_sums_and_masks[i][1]] = 1
-
- # 7. Make the mask hollow and show only the border
- eroded_mask = ndimage.binary_erosion(border_mask, iterations=1)
- border_mask[eroded_mask] = 0
-
- # 8. Return the outlined mask
- return border_mask
-
- def process_grads(
- self,
- image,
- attributions,
- polarity="positive",
- clip_above_percentile=99.9,
- clip_below_percentile=0,
- morphological_cleanup=False,
- structure=np.ones((3, 3)),
- outlines=False,
- outlines_component_percentage=90,
- overlay=True,
- ):
- if polarity not in ["positive", "negative"]:
- raise ValueError(
- f""" Allowed polarity values: 'positive' or 'negative'
- but provided {polarity}"""
- )
- if clip_above_percentile < 0 or clip_above_percentile > 100:
- raise ValueError("clip_above_percentile must be in [0, 100]")
-
- if clip_below_percentile < 0 or clip_below_percentile > 100:
- raise ValueError("clip_below_percentile must be in [0, 100]")
-
- # 1. Apply polarity
- if polarity == "positive":
- attributions = self.apply_polarity(attributions, polarity=polarity)
- channel = self.positive_channel
- else:
- attributions = self.apply_polarity(attributions, polarity=polarity)
- attributions = np.abs(attributions)
- channel = self.negative_channel
-
- # 2. Take average over the channels
- attributions = np.average(attributions, axis=2)
-
- # 3. Apply linear transformation to the attributions
- attributions = self.apply_linear_transformation(
- attributions,
- clip_above_percentile=clip_above_percentile,
- clip_below_percentile=clip_below_percentile,
- lower_end=0.0,
- )
-
- # 4. Cleanup
- if morphological_cleanup:
- attributions = self.morphological_cleanup_fn(
- attributions, structure=structure
- )
- # 5. Draw the outlines
- if outlines:
- attributions = self.draw_outlines(
- attributions, percentage=outlines_component_percentage
- )
-
- # 6. Expand the channel axis and convert to RGB
- attributions = np.expand_dims(attributions, 2) * channel
-
- # 7.Superimpose on the original image
- if overlay:
- attributions = np.clip((attributions * 0.8 + image), 0, 255)
- return attributions
-
- def visualize(
- self,
- image,
- gradients,
- integrated_gradients,
- polarity="positive",
- clip_above_percentile=99.9,
- clip_below_percentile=0,
- morphological_cleanup=False,
- structure=np.ones((3, 3)),
- outlines=False,
- outlines_component_percentage=90,
- overlay=True,
- figsize=(15, 8),
- ):
- # 1. Make two copies of the original image
- img1 = np.copy(image)
- img2 = np.copy(image)
-
- # 2. Process the normal gradients
- grads_attr = self.process_grads(
- image=img1,
- attributions=gradients,
- polarity=polarity,
- clip_above_percentile=clip_above_percentile,
- clip_below_percentile=clip_below_percentile,
- morphological_cleanup=morphological_cleanup,
- structure=structure,
- outlines=outlines,
- outlines_component_percentage=outlines_component_percentage,
- overlay=overlay,
- )
-
- # 3. Process the integrated gradients
- igrads_attr = self.process_grads(
- image=img2,
- attributions=integrated_gradients,
- polarity=polarity,
- clip_above_percentile=clip_above_percentile,
- clip_below_percentile=clip_below_percentile,
- morphological_cleanup=morphological_cleanup,
- structure=structure,
- outlines=outlines,
- outlines_component_percentage=outlines_component_percentage,
- overlay=overlay,
- )
-
- return igrads_attr.astype(np.uint8)
-
-def classify_image(image):
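-    # Predict the top ImageNet class for the input, then overlay its integrated-gradients attribution map.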
- img = np.expand_dims(image, axis=0)
- orig_img = np.copy(img[0]).astype(np.uint8)
- img_processed = tf.cast(xception.preprocess_input(img), dtype=tf.float32)
- preds = model.predict(img_processed)
- top_pred_idx = tf.argmax(preds[0])
- print("Predicted:", top_pred_idx, xception.decode_predictions(preds, top=1)[0])
- grads = get_gradients(img_processed, top_pred_idx=top_pred_idx)
- igrads = random_baseline_integrated_gradients(
- np.copy(orig_img), top_pred_idx=top_pred_idx, num_steps=50, num_runs=2)
- vis = GradVisualizer()
- img_grads = vis.visualize(
- image=orig_img,
- gradients=grads[0].numpy(),
- integrated_gradients=igrads.numpy(),
- clip_above_percentile=99,
- clip_below_percentile=0,
- )
- return img_grads
-
-image = gr.inputs.Image(shape=(299,299))
-label = gr.outputs.Image()
-
-iface = gr.Interface(classify_image,image,label,
- #outputs=[
- # gr.outputs.Textbox(label="Engine issue"),
- # gr.outputs.Textbox(label="Engine issue score")],
- examples=["elephant.jpg"],
- title="Model interpretability with Integrated Gradients",
- description = "Model interpretability with Integrated Gradients.",
- article = "Author: Jónathan Heras. Based on the keras example from A_K_Nain"
-# examples = ["sample.csv"],
-)
-
-
-iface.launch()
diff --git a/spaces/kevinwang676/M4Singer/utils/pitch_utils.py b/spaces/kevinwang676/M4Singer/utils/pitch_utils.py
deleted file mode 100644
index f7fd166abd3a03bac5909e498669b482447435cf..0000000000000000000000000000000000000000
--- a/spaces/kevinwang676/M4Singer/utils/pitch_utils.py
+++ /dev/null
@@ -1,76 +0,0 @@
-#########
-# world
-##########
-import librosa
-import numpy as np
-import torch
-
-gamma = 0
-mcepInput = 3 # 0 for dB, 3 for magnitude
-alpha = 0.45
-en_floor = 10 ** (-80 / 20)
-FFT_SIZE = 2048
-
-
-f0_bin = 256
-f0_max = 1100.0
-f0_min = 50.0
-f0_mel_min = 1127 * np.log(1 + f0_min / 700)
-f0_mel_max = 1127 * np.log(1 + f0_max / 700)
-
-
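-# Quantize f0 (Hz) into coarse bins 1..255 on a mel-like scale; unvoiced frames (f0 = 0) map to bin 1.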
-def f0_to_coarse(f0):
- is_torch = isinstance(f0, torch.Tensor)
- f0_mel = 1127 * (1 + f0 / 700).log() if is_torch else 1127 * np.log(1 + f0 / 700)
- f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * (f0_bin - 2) / (f0_mel_max - f0_mel_min) + 1
-
- f0_mel[f0_mel <= 1] = 1
- f0_mel[f0_mel > f0_bin - 1] = f0_bin - 1
-    f0_coarse = (f0_mel + 0.5).long() if is_torch else np.rint(f0_mel).astype(int)  # np.int was removed in NumPy 1.24
- assert f0_coarse.max() <= 255 and f0_coarse.min() >= 1, (f0_coarse.max(), f0_coarse.min())
- return f0_coarse
-
-
-def norm_f0(f0, uv, hparams):
- is_torch = isinstance(f0, torch.Tensor)
- if hparams['pitch_norm'] == 'standard':
- f0 = (f0 - hparams['f0_mean']) / hparams['f0_std']
- if hparams['pitch_norm'] == 'log':
- f0 = torch.log2(f0) if is_torch else np.log2(f0)
- if uv is not None and hparams['use_uv']:
- f0[uv > 0] = 0
- return f0
-
-
-def norm_interp_f0(f0, hparams):
- is_torch = isinstance(f0, torch.Tensor)
- if is_torch:
- device = f0.device
- f0 = f0.data.cpu().numpy()
- uv = f0 == 0
- f0 = norm_f0(f0, uv, hparams)
- if sum(uv) == len(f0):
- f0[uv] = 0
- elif sum(uv) > 0:
- f0[uv] = np.interp(np.where(uv)[0], np.where(~uv)[0], f0[~uv])
- uv = torch.FloatTensor(uv)
- f0 = torch.FloatTensor(f0)
- if is_torch:
- f0 = f0.to(device)
- return f0, uv
-
-
-def denorm_f0(f0, uv, hparams, pitch_padding=None, min=None, max=None):
- if hparams['pitch_norm'] == 'standard':
- f0 = f0 * hparams['f0_std'] + hparams['f0_mean']
- if hparams['pitch_norm'] == 'log':
- f0 = 2 ** f0
- if min is not None:
- f0 = f0.clamp(min=min)
- if max is not None:
- f0 = f0.clamp(max=max)
- if uv is not None and hparams['use_uv']:
- f0[uv > 0] = 0
- if pitch_padding is not None:
- f0[pitch_padding] = 0
- return f0
diff --git a/spaces/kevinwang676/VITS2-Mandarin/text/japanese.py b/spaces/kevinwang676/VITS2-Mandarin/text/japanese.py
deleted file mode 100644
index 375e4d50872d5c68ee57ca17470a2ca425425eba..0000000000000000000000000000000000000000
--- a/spaces/kevinwang676/VITS2-Mandarin/text/japanese.py
+++ /dev/null
@@ -1,153 +0,0 @@
-import re
-from unidecode import unidecode
-import pyopenjtalk
-
-
-# Regular expression matching Japanese without punctuation marks:
-_japanese_characters = re.compile(
- r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]')
-
-# Regular expression matching non-Japanese characters or punctuation marks:
-_japanese_marks = re.compile(
- r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]')
-
-# List of (symbol, Japanese) pairs for marks:
-_symbols_to_japanese = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('%', 'パーセント')
-]]
-
-# List of (romaji, ipa) pairs for marks:
-_romaji_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('ts', 'ʦ'),
- ('u', 'ɯ'),
- ('j', 'ʥ'),
- ('y', 'j'),
- ('ni', 'n^i'),
- ('nj', 'n^'),
- ('hi', 'çi'),
- ('hj', 'ç'),
- ('f', 'ɸ'),
- ('I', 'i*'),
- ('U', 'ɯ*'),
- ('r', 'ɾ')
-]]
-
-# List of (romaji, ipa2) pairs for marks:
-_romaji_to_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('u', 'ɯ'),
- ('ʧ', 'tʃ'),
- ('j', 'dʑ'),
- ('y', 'j'),
- ('ni', 'n^i'),
- ('nj', 'n^'),
- ('hi', 'çi'),
- ('hj', 'ç'),
- ('f', 'ɸ'),
- ('I', 'i*'),
- ('U', 'ɯ*'),
- ('r', 'ɾ')
-]]
-
-# List of (consonant, sokuon) pairs:
-_real_sokuon = [(re.compile('%s' % x[0]), x[1]) for x in [
- (r'Q([↑↓]*[kg])', r'k#\1'),
- (r'Q([↑↓]*[tdjʧ])', r't#\1'),
- (r'Q([↑↓]*[sʃ])', r's\1'),
- (r'Q([↑↓]*[pb])', r'p#\1')
-]]
-
-# List of (consonant, hatsuon) pairs:
-_real_hatsuon = [(re.compile('%s' % x[0]), x[1]) for x in [
- (r'N([↑↓]*[pbm])', r'm\1'),
- (r'N([↑↓]*[ʧʥj])', r'n^\1'),
- (r'N([↑↓]*[tdn])', r'n\1'),
- (r'N([↑↓]*[kg])', r'ŋ\1')
-]]
-
-
-def symbols_to_japanese(text):
- for regex, replacement in _symbols_to_japanese:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def japanese_to_romaji_with_accent(text):
- '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html'''
- text = symbols_to_japanese(text)
- sentences = re.split(_japanese_marks, text)
- marks = re.findall(_japanese_marks, text)
- text = ''
- for i, sentence in enumerate(sentences):
- if re.match(_japanese_characters, sentence):
- if text != '':
- text += ' '
- labels = pyopenjtalk.extract_fullcontext(sentence)
- for n, label in enumerate(labels):
- phoneme = re.search(r'\-([^\+]*)\+', label).group(1)
- if phoneme not in ['sil', 'pau']:
- text += phoneme.replace('ch', 'ʧ').replace('sh',
- 'ʃ').replace('cl', 'Q')
- else:
- continue
- # n_moras = int(re.search(r'/F:(\d+)_', label).group(1))
- a1 = int(re.search(r"/A:(\-?[0-9]+)\+", label).group(1))
- a2 = int(re.search(r"\+(\d+)\+", label).group(1))
- a3 = int(re.search(r"\+(\d+)/", label).group(1))
- if re.search(r'\-([^\+]*)\+', labels[n + 1]).group(1) in ['sil', 'pau']:
- a2_next = -1
- else:
- a2_next = int(
- re.search(r"\+(\d+)\+", labels[n + 1]).group(1))
- # Accent phrase boundary
- if a3 == 1 and a2_next == 1:
- text += ' '
- # Falling
- elif a1 == 0 and a2_next == a2 + 1:
- text += '↓'
- # Rising
- elif a2 == 1 and a2_next == 2:
- text += '↑'
- if i < len(marks):
- text += unidecode(marks[i]).replace(' ', '')
- return text
-
-
-def get_real_sokuon(text):
- for regex, replacement in _real_sokuon:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def get_real_hatsuon(text):
- for regex, replacement in _real_hatsuon:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def japanese_to_ipa(text):
- text = japanese_to_romaji_with_accent(text).replace('...', '…')
- text = re.sub(
- r'([aiueo])\1+', lambda x: x.group(0)[0]+'ː'*(len(x.group(0))-1), text)
- text = get_real_sokuon(text)
- text = get_real_hatsuon(text)
- for regex, replacement in _romaji_to_ipa:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def japanese_to_ipa2(text):
- text = japanese_to_romaji_with_accent(text).replace('...', '…')
- text = get_real_sokuon(text)
- text = get_real_hatsuon(text)
- for regex, replacement in _romaji_to_ipa2:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def japanese_to_ipa3(text):
- text = japanese_to_ipa2(text).replace('n^', 'ȵ').replace(
- 'ʃ', 'ɕ').replace('*', '\u0325').replace('#', '\u031a')
- text = re.sub(
- r'([aiɯeo])\1+', lambda x: x.group(0)[0]+'ː'*(len(x.group(0))-1), text)
- text = re.sub(r'((?:^|\s)(?:ts|tɕ|[kpt]))', r'\1ʰ', text)
- return text
diff --git a/spaces/kevinwang676/Voice-Changer/vc_infer_pipeline.py b/spaces/kevinwang676/Voice-Changer/vc_infer_pipeline.py
deleted file mode 100644
index 7261742c30f64df435ed3fdebaafd969e9563d98..0000000000000000000000000000000000000000
--- a/spaces/kevinwang676/Voice-Changer/vc_infer_pipeline.py
+++ /dev/null
@@ -1,363 +0,0 @@
-import numpy as np, parselmouth, torch, pdb
-from time import time as ttime
-import torch.nn.functional as F
-import scipy.signal as signal
-import pyworld, os, traceback, faiss,librosa
-from scipy import signal
-from functools import lru_cache
-
-bh, ah = signal.butter(N=5, Wn=48, btype="high", fs=16000)
-
-input_audio_path2wav={}
-@lru_cache
-def cache_harvest_f0(input_audio_path,fs,f0max,f0min,frame_period):
- audio=input_audio_path2wav[input_audio_path]
- f0, t = pyworld.harvest(
- audio,
- fs=fs,
- f0_ceil=f0max,
- f0_floor=f0min,
- frame_period=frame_period,
- )
- f0 = pyworld.stonemask(audio, f0, t, fs)
- return f0
-
-def change_rms(data1, sr1, data2, sr2, rate):  # data1/sr1: input audio, data2/sr2: output audio, rate: weight given to data2
- # print(data1.max(),data2.max())
-    rms1 = librosa.feature.rms(y=data1, frame_length=sr1//2*2, hop_length=sr1//2)  # one RMS value every half second
- rms2 = librosa.feature.rms(y=data2, frame_length=sr2//2*2, hop_length=sr2//2)
- rms1=torch.from_numpy(rms1)
- rms1=F.interpolate(rms1.unsqueeze(0), size=data2.shape[0],mode='linear').squeeze()
- rms2=torch.from_numpy(rms2)
- rms2=F.interpolate(rms2.unsqueeze(0), size=data2.shape[0],mode='linear').squeeze()
- rms2=torch.max(rms2,torch.zeros_like(rms2)+1e-6)
- data2*=(torch.pow(rms1,torch.tensor(1-rate))*torch.pow(rms2,torch.tensor(rate-1))).numpy()
- return data2
-
-class VC(object):
- def __init__(self, tgt_sr, config):
- self.x_pad, self.x_query, self.x_center, self.x_max, self.is_half = (
- config.x_pad,
- config.x_query,
- config.x_center,
- config.x_max,
- config.is_half,
- )
-        self.sr = 16000  # HuBERT input sample rate
-        self.window = 160  # samples per frame
-        self.t_pad = self.sr * self.x_pad  # padding added before and after each chunk
-        self.t_pad_tgt = tgt_sr * self.x_pad
-        self.t_pad2 = self.t_pad * 2
-        self.t_query = self.sr * self.x_query  # search window around each candidate split point
-        self.t_center = self.sr * self.x_center  # spacing between candidate split points
-        self.t_max = self.sr * self.x_max  # length threshold below which no splitting is needed
- self.device = config.device
-
- def get_f0(self, input_audio_path,x, p_len, f0_up_key, f0_method,filter_radius, inp_f0=None):
- global input_audio_path2wav
- time_step = self.window / self.sr * 1000
- f0_min = 50
- f0_max = 1100
- f0_mel_min = 1127 * np.log(1 + f0_min / 700)
- f0_mel_max = 1127 * np.log(1 + f0_max / 700)
- if f0_method == "pm":
- f0 = (
- parselmouth.Sound(x, self.sr)
- .to_pitch_ac(
- time_step=time_step / 1000,
- voicing_threshold=0.6,
- pitch_floor=f0_min,
- pitch_ceiling=f0_max,
- )
- .selected_array["frequency"]
- )
- pad_size = (p_len - len(f0) + 1) // 2
- if pad_size > 0 or p_len - len(f0) - pad_size > 0:
- f0 = np.pad(
- f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant"
- )
- elif f0_method == "harvest":
- input_audio_path2wav[input_audio_path]=x.astype(np.double)
- f0=cache_harvest_f0(input_audio_path,self.sr,f0_max,f0_min,10)
- if(filter_radius>2):
- f0 = signal.medfilt(f0, 3)
- f0 *= pow(2, f0_up_key / 12)
- # with open("test.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()]))
-        tf0 = self.sr // self.window  # number of f0 frames per second
- if inp_f0 is not None:
- delta_t = np.round(
- (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1
- ).astype("int16")
- replace_f0 = np.interp(
- list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1]
- )
- shape = f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)].shape[0]
- f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)] = replace_f0[
- :shape
- ]
- # with open("test_opt.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()]))
- f0bak = f0.copy()
- f0_mel = 1127 * np.log(1 + f0 / 700)
- f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (
- f0_mel_max - f0_mel_min
- ) + 1
- f0_mel[f0_mel <= 1] = 1
- f0_mel[f0_mel > 255] = 255
- f0_coarse = np.rint(f0_mel).astype(int)
- return f0_coarse, f0bak # 1-0
-
- def vc(
- self,
- model,
- net_g,
- sid,
- audio0,
- pitch,
- pitchf,
- times,
- index,
- big_npy,
- index_rate,
- version,
- ): # ,file_index,file_big_npy
- feats = torch.from_numpy(audio0)
- if self.is_half:
- feats = feats.half()
- else:
- feats = feats.float()
- if feats.dim() == 2: # double channels
- feats = feats.mean(-1)
- assert feats.dim() == 1, feats.dim()
- feats = feats.view(1, -1)
- padding_mask = torch.BoolTensor(feats.shape).to(self.device).fill_(False)
-
- inputs = {
- "source": feats.to(self.device),
- "padding_mask": padding_mask,
- "output_layer": 9 if version == "v1" else 12,
- }
- t0 = ttime()
- with torch.no_grad():
- logits = model.extract_features(**inputs)
- feats = model.final_proj(logits[0])if version=="v1"else logits[0]
-
- if (
- isinstance(index, type(None)) == False
- and isinstance(big_npy, type(None)) == False
- and index_rate != 0
- ):
- npy = feats[0].cpu().numpy()
- if self.is_half:
- npy = npy.astype("float32")
-
- # _, I = index.search(npy, 1)
- # npy = big_npy[I.squeeze()]
-
- score, ix = index.search(npy, k=8)
- weight = np.square(1 / score)
- weight /= weight.sum(axis=1, keepdims=True)
- npy = np.sum(big_npy[ix] * np.expand_dims(weight, axis=2), axis=1)
-
- if self.is_half:
- npy = npy.astype("float16")
- feats = (
- torch.from_numpy(npy).unsqueeze(0).to(self.device) * index_rate
- + (1 - index_rate) * feats
- )
-
- feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1)
- t1 = ttime()
- p_len = audio0.shape[0] // self.window
- if feats.shape[1] < p_len:
- p_len = feats.shape[1]
- if pitch != None and pitchf != None:
- pitch = pitch[:, :p_len]
- pitchf = pitchf[:, :p_len]
- p_len = torch.tensor([p_len], device=self.device).long()
- with torch.no_grad():
- if pitch != None and pitchf != None:
- audio1 = (
- (net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0])
- .data.cpu()
- .float()
- .numpy()
- )
- else:
- audio1 = (
- (net_g.infer(feats, p_len, sid)[0][0, 0])
- .data.cpu()
- .float()
- .numpy()
- )
- del feats, p_len, padding_mask
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
- t2 = ttime()
- times[0] += t1 - t0
- times[2] += t2 - t1
- return audio1
-
- def pipeline(
- self,
- model,
- net_g,
- sid,
- audio,
- input_audio_path,
- times,
- f0_up_key,
- f0_method,
- file_index,
- # file_big_npy,
- index_rate,
- if_f0,
- filter_radius,
- tgt_sr,
- resample_sr,
- rms_mix_rate,
- version,
- f0_file=None,
- ):
- if (
- file_index != ""
- # and file_big_npy != ""
- # and os.path.exists(file_big_npy) == True
- and os.path.exists(file_index) == True
- and index_rate != 0
- ):
- try:
- index = faiss.read_index(file_index)
- # big_npy = np.load(file_big_npy)
- big_npy = index.reconstruct_n(0, index.ntotal)
- except:
- traceback.print_exc()
- index = big_npy = None
- else:
- index = big_npy = None
- audio = signal.filtfilt(bh, ah, audio)
- audio_pad = np.pad(audio, (self.window // 2, self.window // 2), mode="reflect")
- opt_ts = []
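-        # For long inputs, choose split points near low-amplitude regions so chunks can be converted independently.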
- if audio_pad.shape[0] > self.t_max:
- audio_sum = np.zeros_like(audio)
- for i in range(self.window):
- audio_sum += audio_pad[i : i - self.window]
- for t in range(self.t_center, audio.shape[0], self.t_center):
- opt_ts.append(
- t
- - self.t_query
- + np.where(
- np.abs(audio_sum[t - self.t_query : t + self.t_query])
- == np.abs(audio_sum[t - self.t_query : t + self.t_query]).min()
- )[0][0]
- )
- s = 0
- audio_opt = []
- t = None
- t1 = ttime()
- audio_pad = np.pad(audio, (self.t_pad, self.t_pad), mode="reflect")
- p_len = audio_pad.shape[0] // self.window
- inp_f0 = None
- if hasattr(f0_file, "name") == True:
- try:
- with open(f0_file.name, "r") as f:
- lines = f.read().strip("\n").split("\n")
- inp_f0 = []
- for line in lines:
- inp_f0.append([float(i) for i in line.split(",")])
- inp_f0 = np.array(inp_f0, dtype="float32")
- except:
- traceback.print_exc()
- sid = torch.tensor(sid, device=self.device).unsqueeze(0).long()
- pitch, pitchf = None, None
- if if_f0 == 1:
- pitch, pitchf = self.get_f0(input_audio_path, audio_pad, p_len, f0_up_key, f0_method, filter_radius, inp_f0)
- pitch = pitch[:p_len]
- pitchf = pitchf[:p_len]
- if self.device == "mps":
- pitchf = pitchf.astype(np.float32)
- pitch = torch.tensor(pitch, device=self.device).unsqueeze(0).long()
- pitchf = torch.tensor(pitchf, device=self.device).unsqueeze(0).float()
- t2 = ttime()
- times[1] += t2 - t1
- for t in opt_ts:
- t = t // self.window * self.window
- if if_f0 == 1:
- audio_opt.append(
- self.vc(
- model,
- net_g,
- sid,
- audio_pad[s : t + self.t_pad2 + self.window],
- pitch[:, s // self.window : (t + self.t_pad2) // self.window],
- pitchf[:, s // self.window : (t + self.t_pad2) // self.window],
- times,
- index,
- big_npy,
- index_rate,
- version,
- )[self.t_pad_tgt : -self.t_pad_tgt]
- )
- else:
- audio_opt.append(
- self.vc(
- model,
- net_g,
- sid,
- audio_pad[s : t + self.t_pad2 + self.window],
- None,
- None,
- times,
- index,
- big_npy,
- index_rate,
- version,
- )[self.t_pad_tgt : -self.t_pad_tgt]
- )
- s = t
- if if_f0 == 1:
- audio_opt.append(
- self.vc(
- model,
- net_g,
- sid,
- audio_pad[t:],
- pitch[:, t // self.window :] if t is not None else pitch,
- pitchf[:, t // self.window :] if t is not None else pitchf,
- times,
- index,
- big_npy,
- index_rate,
- version,
- )[self.t_pad_tgt : -self.t_pad_tgt]
- )
- else:
- audio_opt.append(
- self.vc(
- model,
- net_g,
- sid,
- audio_pad[t:],
- None,
- None,
- times,
- index,
- big_npy,
- index_rate,
- version,
- )[self.t_pad_tgt : -self.t_pad_tgt]
- )
- audio_opt = np.concatenate(audio_opt)
- if rms_mix_rate != 1:
- audio_opt = change_rms(audio, 16000, audio_opt, tgt_sr, rms_mix_rate)
- if resample_sr >= 16000 and tgt_sr != resample_sr:
- audio_opt = librosa.resample(
- audio_opt, orig_sr=tgt_sr, target_sr=resample_sr
- )
- audio_max = np.abs(audio_opt).max() / 0.99
- max_int16 = 32768
- if audio_max > 1:
- max_int16 /= audio_max
- audio_opt = (audio_opt * max_int16).astype(np.int16)
- del pitch, pitchf, sid
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
- return audio_opt
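For clarity, the retrieval blending above (`index.search(npy, k=8)` followed by inverse-squared-distance weighting and mixing by `index_rate`) can be reproduced in isolation. A minimal NumPy-only sketch, with made-up arrays standing in for a real FAISS index, the training-feature matrix, and the HuBERT frame features:

```python
import numpy as np

# Made-up stand-ins: `score` holds squared L2 distances and `ix` the ids of the
# 8 nearest training feature vectors per frame (what index.search would return).
score = np.array([[0.5, 1.0, 2.0, 2.0, 3.0, 4.0, 4.0, 5.0]])   # (frames, 8)
ix = np.array([[3, 7, 1, 0, 5, 2, 6, 4]])                       # (frames, 8)
big_npy = np.random.rand(8, 256).astype("float32")              # training features
feats = np.random.rand(1, 256).astype("float32")                # current frame features
index_rate = 0.75

# Inverse-squared-distance weights, normalised per frame.
weight = np.square(1 / score)
weight /= weight.sum(axis=1, keepdims=True)
retrieved = np.sum(big_npy[ix] * np.expand_dims(weight, axis=2), axis=1)

# Blend the retrieved features back into the original features by index_rate.
blended = index_rate * retrieved + (1 - index_rate) * feats     # (frames, 256)
```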
diff --git a/spaces/kevinwang676/VoiceChangers/src/face3d/models/arcface_torch/eval/__init__.py b/spaces/kevinwang676/VoiceChangers/src/face3d/models/arcface_torch/eval/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/kingabzpro/savtadepth/heroku/DVC-heroku-deployment.md b/spaces/kingabzpro/savtadepth/heroku/DVC-heroku-deployment.md
deleted file mode 100644
index 602a3aad0fd8f5fa01d85be45ffd1f36b457265b..0000000000000000000000000000000000000000
--- a/spaces/kingabzpro/savtadepth/heroku/DVC-heroku-deployment.md
+++ /dev/null
@@ -1,21 +0,0 @@
-We need to give Heroku the ability to pull in data from DVC upon app start up. We will install a [buildpack](https://elements.heroku.com/buildpacks/heroku/heroku-buildpack-apt) that allows the installation of apt-files and then define the Aptfile that contains a path to DVC. I.e., in the CLI run:
-
-```
-heroku buildpacks:add --index 1 heroku-community/apt
-```
-
-Then in your root project folder create a file called `Aptfile` that specifies the release of DVC you want installed, https://github.com/iterative/dvc/releases/download/2.8.3/dvc_2.8.3_amd64.deb
-
-Add the following code block to your **streamlit_app.py**:
-
-```python
-import os
-
-if "DYNO" in os.environ and os.path.isdir(".dvc"):
- os.system("dvc config core.no_scm true")
- if os.system(f"dvc pull {model} {image}") != 0:
- exit("dvc pull failed")
- os.system("rm -r .dvc .apt/usr/lib/dvc")
-```
-
-Reference: [Heroku ML](https://github.com/GuilhermeBrejeiro/deploy_ML_model_Heroku_FastAPI)
\ No newline at end of file
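For reference, with the apt buildpack added above, the `Aptfile` would contain just the DVC release already quoted, one URL per line (a minimal sketch; adjust the version to the release you want):

```
https://github.com/iterative/dvc/releases/download/2.8.3/dvc_2.8.3_amd64.deb
```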
diff --git a/spaces/koajoel/PolyFormer/bert/tokenization_bert.py b/spaces/koajoel/PolyFormer/bert/tokenization_bert.py
deleted file mode 100644
index 972e1733163522359750dddedf6dea885085b2ca..0000000000000000000000000000000000000000
--- a/spaces/koajoel/PolyFormer/bert/tokenization_bert.py
+++ /dev/null
@@ -1,545 +0,0 @@
-# coding=utf-8
-# Copyright 2018 The Google AI Language Team Authors and The HuggingFace Inc. team.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""Tokenization classes."""
-
-
-import collections
-import logging
-import os
-import unicodedata
-from typing import List, Optional
-
-from .tokenization_utils import PreTrainedTokenizer, _is_control, _is_punctuation, _is_whitespace
-
-
-logger = logging.getLogger(__name__)
-
-VOCAB_FILES_NAMES = {"vocab_file": "vocab.txt"}
-
-PRETRAINED_VOCAB_FILES_MAP = {
- "vocab_file": {
- "bert-base-uncased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt",
- "bert-large-uncased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-vocab.txt",
- "bert-base-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-vocab.txt",
- "bert-large-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-vocab.txt",
- "bert-base-multilingual-uncased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-uncased-vocab.txt",
- "bert-base-multilingual-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-cased-vocab.txt",
- "bert-base-chinese": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-chinese-vocab.txt",
- "bert-base-german-cased": "https://int-deepset-models-bert.s3.eu-central-1.amazonaws.com/pytorch/bert-base-german-cased-vocab.txt",
- "bert-large-uncased-whole-word-masking": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-whole-word-masking-vocab.txt",
- "bert-large-cased-whole-word-masking": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-whole-word-masking-vocab.txt",
- "bert-large-uncased-whole-word-masking-finetuned-squad": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-whole-word-masking-finetuned-squad-vocab.txt",
- "bert-large-cased-whole-word-masking-finetuned-squad": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-whole-word-masking-finetuned-squad-vocab.txt",
- "bert-base-cased-finetuned-mrpc": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-finetuned-mrpc-vocab.txt",
- "bert-base-german-dbmdz-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-cased-vocab.txt",
- "bert-base-german-dbmdz-uncased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-uncased-vocab.txt",
- "TurkuNLP/bert-base-finnish-cased-v1": "https://s3.amazonaws.com/models.huggingface.co/bert/TurkuNLP/bert-base-finnish-cased-v1/vocab.txt",
- "TurkuNLP/bert-base-finnish-uncased-v1": "https://s3.amazonaws.com/models.huggingface.co/bert/TurkuNLP/bert-base-finnish-uncased-v1/vocab.txt",
- "wietsedv/bert-base-dutch-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/wietsedv/bert-base-dutch-cased/vocab.txt",
- }
-}
-
-PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES = {
- "bert-base-uncased": 512,
- "bert-large-uncased": 512,
- "bert-base-cased": 512,
- "bert-large-cased": 512,
- "bert-base-multilingual-uncased": 512,
- "bert-base-multilingual-cased": 512,
- "bert-base-chinese": 512,
- "bert-base-german-cased": 512,
- "bert-large-uncased-whole-word-masking": 512,
- "bert-large-cased-whole-word-masking": 512,
- "bert-large-uncased-whole-word-masking-finetuned-squad": 512,
- "bert-large-cased-whole-word-masking-finetuned-squad": 512,
- "bert-base-cased-finetuned-mrpc": 512,
- "bert-base-german-dbmdz-cased": 512,
- "bert-base-german-dbmdz-uncased": 512,
- "TurkuNLP/bert-base-finnish-cased-v1": 512,
- "TurkuNLP/bert-base-finnish-uncased-v1": 512,
- "wietsedv/bert-base-dutch-cased": 512,
-}
-
-PRETRAINED_INIT_CONFIGURATION = {
- "bert-base-uncased": {"do_lower_case": True},
- "bert-large-uncased": {"do_lower_case": True},
- "bert-base-cased": {"do_lower_case": False},
- "bert-large-cased": {"do_lower_case": False},
- "bert-base-multilingual-uncased": {"do_lower_case": True},
- "bert-base-multilingual-cased": {"do_lower_case": False},
- "bert-base-chinese": {"do_lower_case": False},
- "bert-base-german-cased": {"do_lower_case": False},
- "bert-large-uncased-whole-word-masking": {"do_lower_case": True},
- "bert-large-cased-whole-word-masking": {"do_lower_case": False},
- "bert-large-uncased-whole-word-masking-finetuned-squad": {"do_lower_case": True},
- "bert-large-cased-whole-word-masking-finetuned-squad": {"do_lower_case": False},
- "bert-base-cased-finetuned-mrpc": {"do_lower_case": False},
- "bert-base-german-dbmdz-cased": {"do_lower_case": False},
- "bert-base-german-dbmdz-uncased": {"do_lower_case": True},
- "TurkuNLP/bert-base-finnish-cased-v1": {"do_lower_case": False},
- "TurkuNLP/bert-base-finnish-uncased-v1": {"do_lower_case": True},
- "wietsedv/bert-base-dutch-cased": {"do_lower_case": False},
-}
-
-
-def load_vocab(vocab_file):
- """Loads a vocabulary file into a dictionary."""
- vocab = collections.OrderedDict()
- with open(vocab_file, "r", encoding="utf-8") as reader:
- tokens = reader.readlines()
- for index, token in enumerate(tokens):
- token = token.rstrip("\n")
- vocab[token] = index
- return vocab
-
-
-def whitespace_tokenize(text):
- """Runs basic whitespace cleaning and splitting on a piece of text."""
- text = text.strip()
- if not text:
- return []
- tokens = text.split()
- return tokens
-
-
-class BertTokenizer(PreTrainedTokenizer):
- r"""
- Constructs a BERT tokenizer. Based on WordPiece.
-
- This tokenizer inherits from :class:`~transformers.PreTrainedTokenizer` which contains most of the methods. Users
- should refer to the superclass for more information regarding methods.
-
- Args:
- vocab_file (:obj:`string`):
- File containing the vocabulary.
- do_lower_case (:obj:`bool`, `optional`, defaults to :obj:`True`):
- Whether to lowercase the input when tokenizing.
- do_basic_tokenize (:obj:`bool`, `optional`, defaults to :obj:`True`):
- Whether to do basic tokenization before WordPiece.
- never_split (:obj:`Iterable`, `optional`, defaults to :obj:`None`):
- Collection of tokens which will never be split during tokenization. Only has an effect when
- :obj:`do_basic_tokenize=True`
- unk_token (:obj:`string`, `optional`, defaults to "[UNK]"):
- The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
- token instead.
- sep_token (:obj:`string`, `optional`, defaults to "[SEP]"):
- The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences
- for sequence classification or for a text and a question for question answering.
- It is also used as the last token of a sequence built with special tokens.
- pad_token (:obj:`string`, `optional`, defaults to "[PAD]"):
- The token used for padding, for example when batching sequences of different lengths.
- cls_token (:obj:`string`, `optional`, defaults to "[CLS]"):
- The classifier token which is used when doing sequence classification (classification of the whole
- sequence instead of per-token classification). It is the first token of the sequence when built with
- special tokens.
- mask_token (:obj:`string`, `optional`, defaults to "[MASK]"):
- The token used for masking values. This is the token used when training this model with masked language
- modeling. This is the token which the model will try to predict.
- tokenize_chinese_chars (:obj:`bool`, `optional`, defaults to :obj:`True`):
- Whether to tokenize Chinese characters.
- This should likely be deactivated for Japanese:
- see: https://github.com/huggingface/transformers/issues/328
- """
-
- vocab_files_names = VOCAB_FILES_NAMES
- pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP
- pretrained_init_configuration = PRETRAINED_INIT_CONFIGURATION
- max_model_input_sizes = PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES
-
- def __init__(
- self,
- vocab_file,
- do_lower_case=True,
- do_basic_tokenize=True,
- never_split=None,
- unk_token="[UNK]",
- sep_token="[SEP]",
- pad_token="[PAD]",
- cls_token="[CLS]",
- mask_token="[MASK]",
- tokenize_chinese_chars=True,
- **kwargs
- ):
- super().__init__(
- unk_token=unk_token,
- sep_token=sep_token,
- pad_token=pad_token,
- cls_token=cls_token,
- mask_token=mask_token,
- **kwargs,
- )
-
- if not os.path.isfile(vocab_file):
- raise ValueError(
- "Can't find a vocabulary file at path '{}'. To load the vocabulary from a Google pretrained "
- "model use `tokenizer = BertTokenizer.from_pretrained(PRETRAINED_MODEL_NAME)`".format(vocab_file)
- )
- self.vocab = load_vocab(vocab_file)
- self.ids_to_tokens = collections.OrderedDict([(ids, tok) for tok, ids in self.vocab.items()])
- self.do_basic_tokenize = do_basic_tokenize
- if do_basic_tokenize:
- self.basic_tokenizer = BasicTokenizer(
- do_lower_case=do_lower_case, never_split=never_split, tokenize_chinese_chars=tokenize_chinese_chars
- )
- self.wordpiece_tokenizer = WordpieceTokenizer(vocab=self.vocab, unk_token=self.unk_token)
-
- @property
- def vocab_size(self):
- return len(self.vocab)
-
- def get_vocab(self):
- return dict(self.vocab, **self.added_tokens_encoder)
-
- def _tokenize(self, text):
- split_tokens = []
- if self.do_basic_tokenize:
- for token in self.basic_tokenizer.tokenize(text, never_split=self.all_special_tokens):
-
- # If the token is part of the never_split set
- if token in self.basic_tokenizer.never_split:
- split_tokens.append(token)
- else:
- split_tokens += self.wordpiece_tokenizer.tokenize(token)
- else:
- split_tokens = self.wordpiece_tokenizer.tokenize(text)
- return split_tokens
-
- def _convert_token_to_id(self, token):
- """ Converts a token (str) in an id using the vocab. """
- return self.vocab.get(token, self.vocab.get(self.unk_token))
-
- def _convert_id_to_token(self, index):
- """Converts an index (integer) in a token (str) using the vocab."""
- return self.ids_to_tokens.get(index, self.unk_token)
-
- def convert_tokens_to_string(self, tokens):
- """ Converts a sequence of tokens (string) in a single string. """
- out_string = " ".join(tokens).replace(" ##", "").strip()
- return out_string
-
- def build_inputs_with_special_tokens(
- self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
- ) -> List[int]:
- """
- Build model inputs from a sequence or a pair of sequence for sequence classification tasks
- by concatenating and adding special tokens.
- A BERT sequence has the following format:
-
- - single sequence: ``[CLS] X [SEP]``
- - pair of sequences: ``[CLS] A [SEP] B [SEP]``
-
- Args:
- token_ids_0 (:obj:`List[int]`):
- List of IDs to which the special tokens will be added
- token_ids_1 (:obj:`List[int]`, `optional`, defaults to :obj:`None`):
- Optional second list of IDs for sequence pairs.
-
- Returns:
- :obj:`List[int]`: list of `input IDs <../glossary.html#input-ids>`__ with the appropriate special tokens.
- """
- if token_ids_1 is None:
- return [self.cls_token_id] + token_ids_0 + [self.sep_token_id]
- cls = [self.cls_token_id]
- sep = [self.sep_token_id]
- return cls + token_ids_0 + sep + token_ids_1 + sep
-
- def get_special_tokens_mask(
- self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False
- ) -> List[int]:
- """
- Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding
- special tokens using the tokenizer ``prepare_for_model`` method.
-
- Args:
- token_ids_0 (:obj:`List[int]`):
- List of ids.
- token_ids_1 (:obj:`List[int]`, `optional`, defaults to :obj:`None`):
- Optional second list of IDs for sequence pairs.
- already_has_special_tokens (:obj:`bool`, `optional`, defaults to :obj:`False`):
- Set to True if the token list is already formatted with special tokens for the model
-
- Returns:
- :obj:`List[int]`: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
- """
-
- if already_has_special_tokens:
- if token_ids_1 is not None:
- raise ValueError(
- "You should not supply a second sequence if the provided sequence of "
- "ids is already formated with special tokens for the model."
- )
- return list(map(lambda x: 1 if x in [self.sep_token_id, self.cls_token_id] else 0, token_ids_0))
-
- if token_ids_1 is not None:
- return [1] + ([0] * len(token_ids_0)) + [1] + ([0] * len(token_ids_1)) + [1]
- return [1] + ([0] * len(token_ids_0)) + [1]
-
- def create_token_type_ids_from_sequences(
- self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
- ) -> List[int]:
- """
- Creates a mask from the two sequences passed to be used in a sequence-pair classification task.
- A BERT sequence pair mask has the following format:
-
- ::
-
- 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
- | first sequence | second sequence |
-
- if token_ids_1 is None, only returns the first portion of the mask (0's).
-
- Args:
- token_ids_0 (:obj:`List[int]`):
- List of ids.
- token_ids_1 (:obj:`List[int]`, `optional`, defaults to :obj:`None`):
- Optional second list of IDs for sequence pairs.
-
- Returns:
- :obj:`List[int]`: List of `token type IDs <../glossary.html#token-type-ids>`_ according to the given
- sequence(s).
- """
- sep = [self.sep_token_id]
- cls = [self.cls_token_id]
- if token_ids_1 is None:
- return len(cls + token_ids_0 + sep) * [0]
- return len(cls + token_ids_0 + sep) * [0] + len(token_ids_1 + sep) * [1]
-
- def save_vocabulary(self, vocab_path):
- """
- Save the tokenizer vocabulary to a directory or file.
-
- Args:
- vocab_path (:obj:`str`):
- The directory in which to save the vocabulary.
-
- Returns:
- :obj:`Tuple(str)`: Paths to the files saved.
- """
- index = 0
- if os.path.isdir(vocab_path):
- vocab_file = os.path.join(vocab_path, VOCAB_FILES_NAMES["vocab_file"])
- else:
- vocab_file = vocab_path
- with open(vocab_file, "w", encoding="utf-8") as writer:
- for token, token_index in sorted(self.vocab.items(), key=lambda kv: kv[1]):
- if index != token_index:
- logger.warning(
- "Saving vocabulary to {}: vocabulary indices are not consecutive."
- " Please check that the vocabulary is not corrupted!".format(vocab_file)
- )
- index = token_index
- writer.write(token + "\n")
- index += 1
- return (vocab_file,)
-
-
-class BasicTokenizer(object):
- """Runs basic tokenization (punctuation splitting, lower casing, etc.)."""
-
- def __init__(self, do_lower_case=True, never_split=None, tokenize_chinese_chars=True):
- """ Constructs a BasicTokenizer.
-
- Args:
- **do_lower_case**: Whether to lower case the input.
- **never_split**: (`optional`) list of str
- Kept for backward compatibility purposes.
- Now implemented directly at the base class level (see :func:`PreTrainedTokenizer.tokenize`)
- List of tokens not to split.
- **tokenize_chinese_chars**: (`optional`) boolean (default True)
- Whether to tokenize Chinese characters.
- This should likely be deactivated for Japanese:
- see: https://github.com/huggingface/pytorch-pretrained-BERT/issues/328
- """
- if never_split is None:
- never_split = []
- self.do_lower_case = do_lower_case
- self.never_split = set(never_split)
- self.tokenize_chinese_chars = tokenize_chinese_chars
-
- def tokenize(self, text, never_split=None):
- """ Basic Tokenization of a piece of text.
- Split on "white spaces" only, for sub-word tokenization, see WordPieceTokenizer.
-
- Args:
- **never_split**: (`optional`) list of str
- Kept for backward compatibility purposes.
- Now implemented directly at the base class level (see :func:`PreTrainedTokenizer.tokenize`)
- List of tokens not to split.
- """
- # union() returns a new set by concatenating the two sets.
- never_split = self.never_split.union(set(never_split)) if never_split else self.never_split
-
- # This was added on November 1st, 2018 for the multilingual and Chinese
- # models. This is also applied to the English models now, but it doesn't
- # matter since the English models were not trained on any Chinese data
- # and generally don't have any Chinese data in them (there are Chinese
- # characters in the vocabulary because Wikipedia does have some Chinese
- # words in the English Wikipedia.).
- if self.tokenize_chinese_chars:
- text = self._tokenize_chinese_chars(text)
- orig_tokens = whitespace_tokenize(text)
- split_tokens = []
- for token in orig_tokens:
- if self.do_lower_case and token not in never_split:
- token = token.lower()
- token = self._run_strip_accents(token)
- split_tokens.extend(self._run_split_on_punc(token, never_split))
-
- output_tokens = whitespace_tokenize(" ".join(split_tokens))
- return output_tokens
-
- def _run_strip_accents(self, text):
- """Strips accents from a piece of text."""
- text = unicodedata.normalize("NFD", text)
- output = []
- for char in text:
- cat = unicodedata.category(char)
- if cat == "Mn":
- continue
- output.append(char)
- return "".join(output)
-
- def _run_split_on_punc(self, text, never_split=None):
- """Splits punctuation on a piece of text."""
- if never_split is not None and text in never_split:
- return [text]
- chars = list(text)
- i = 0
- start_new_word = True
- output = []
- while i < len(chars):
- char = chars[i]
- if _is_punctuation(char):
- output.append([char])
- start_new_word = True
- else:
- if start_new_word:
- output.append([])
- start_new_word = False
- output[-1].append(char)
- i += 1
-
- return ["".join(x) for x in output]
-
- def _tokenize_chinese_chars(self, text):
- """Adds whitespace around any CJK character."""
- output = []
- for char in text:
- cp = ord(char)
- if self._is_chinese_char(cp):
- output.append(" ")
- output.append(char)
- output.append(" ")
- else:
- output.append(char)
- return "".join(output)
-
- def _is_chinese_char(self, cp):
- """Checks whether CP is the codepoint of a CJK character."""
- # This defines a "chinese character" as anything in the CJK Unicode block:
- # https://en.wikipedia.org/wiki/CJK_Unified_Ideographs_(Unicode_block)
- #
- # Note that the CJK Unicode block is NOT all Japanese and Korean characters,
- # despite its name. The modern Korean Hangul alphabet is a different block,
- # as is Japanese Hiragana and Katakana. Those alphabets are used to write
- # space-separated words, so they are not treated specially and handled
- # like all of the other languages.
- if (
- (cp >= 0x4E00 and cp <= 0x9FFF)
- or (cp >= 0x3400 and cp <= 0x4DBF) #
- or (cp >= 0x20000 and cp <= 0x2A6DF) #
- or (cp >= 0x2A700 and cp <= 0x2B73F) #
- or (cp >= 0x2B740 and cp <= 0x2B81F) #
- or (cp >= 0x2B820 and cp <= 0x2CEAF) #
- or (cp >= 0xF900 and cp <= 0xFAFF)
- or (cp >= 0x2F800 and cp <= 0x2FA1F) #
- ): #
- return True
-
- return False
-
- def _clean_text(self, text):
- """Performs invalid character removal and whitespace cleanup on text."""
- output = []
- for char in text:
- cp = ord(char)
- if cp == 0 or cp == 0xFFFD or _is_control(char):
- continue
- if _is_whitespace(char):
- output.append(" ")
- else:
- output.append(char)
- return "".join(output)
-
-
-class WordpieceTokenizer(object):
- """Runs WordPiece tokenization."""
-
- def __init__(self, vocab, unk_token, max_input_chars_per_word=100):
- self.vocab = vocab
- self.unk_token = unk_token
- self.max_input_chars_per_word = max_input_chars_per_word
-
- def tokenize(self, text):
- """Tokenizes a piece of text into its word pieces.
-
- This uses a greedy longest-match-first algorithm to perform tokenization
- using the given vocabulary.
-
- For example:
- input = "unaffable"
- output = ["un", "##aff", "##able"]
-
- Args:
- text: A single token or whitespace separated tokens. This should have
- already been passed through `BasicTokenizer`.
-
- Returns:
- A list of wordpiece tokens.
- """
-
- output_tokens = []
- for token in whitespace_tokenize(text):
- chars = list(token)
- if len(chars) > self.max_input_chars_per_word:
- output_tokens.append(self.unk_token)
- continue
-
- is_bad = False
- start = 0
- sub_tokens = []
- while start < len(chars):
- end = len(chars)
- cur_substr = None
- while start < end:
- substr = "".join(chars[start:end])
- if start > 0:
- substr = "##" + substr
- if substr in self.vocab:
- cur_substr = substr
- break
- end -= 1
- if cur_substr is None:
- is_bad = True
- break
- sub_tokens.append(cur_substr)
- start = end
-
- if is_bad:
- output_tokens.append(self.unk_token)
- else:
- output_tokens.extend(sub_tokens)
- return output_tokens
-
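As a quick illustration of the greedy longest-match-first behaviour documented in `WordpieceTokenizer.tokenize`, the sketch below drives the class with a toy vocabulary (the vocabulary entries are invented for the example, and it assumes the module above is importable):

```python
# Toy vocabulary -- illustrative only, not from a real BERT checkpoint.
toy_vocab = {"[UNK]": 0, "un": 1, "##aff": 2, "##able": 3, "runn": 4, "##ing": 5}

tokenizer = WordpieceTokenizer(vocab=toy_vocab, unk_token="[UNK]")
print(tokenizer.tokenize("unaffable"))  # ['un', '##aff', '##able']
print(tokenizer.tokenize("running"))    # ['runn', '##ing']
print(tokenizer.tokenize("xyzzy"))      # ['[UNK]'] -- no matching word pieces
```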
diff --git a/spaces/koajoel/PolyFormer/fairseq/examples/roberta/multiprocessing_bpe_encoder.py b/spaces/koajoel/PolyFormer/fairseq/examples/roberta/multiprocessing_bpe_encoder.py
deleted file mode 100644
index 43fe0451bf4d5762d734314075b1402c2a8db2bb..0000000000000000000000000000000000000000
--- a/spaces/koajoel/PolyFormer/fairseq/examples/roberta/multiprocessing_bpe_encoder.py
+++ /dev/null
@@ -1,130 +0,0 @@
-#!/usr/bin/env python
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import argparse
-import contextlib
-import sys
-from collections import Counter
-from multiprocessing import Pool
-
-from fairseq.data.encoders.gpt2_bpe import get_encoder
-
-
-def main():
- """
- Helper script to encode raw text with the GPT-2 BPE using multiple processes.
-
- The encoder.json and vocab.bpe files can be obtained here:
- - https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/encoder.json
- - https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/vocab.bpe
- """
- parser = argparse.ArgumentParser()
- parser.add_argument(
- "--encoder-json",
- help="path to encoder.json",
- )
- parser.add_argument(
- "--vocab-bpe",
- type=str,
- help="path to vocab.bpe",
- )
- parser.add_argument(
- "--inputs",
- nargs="+",
- default=["-"],
- help="input files to filter/encode",
- )
- parser.add_argument(
- "--outputs",
- nargs="+",
- default=["-"],
- help="path to save encoded outputs",
- )
- parser.add_argument(
- "--keep-empty",
- action="store_true",
- help="keep empty lines",
- )
- parser.add_argument("--workers", type=int, default=20)
- args = parser.parse_args()
-
- assert len(args.inputs) == len(
- args.outputs
- ), "number of input and output paths should match"
-
- with contextlib.ExitStack() as stack:
- inputs = [
- stack.enter_context(open(input, "r", encoding="utf-8"))
- if input != "-"
- else sys.stdin
- for input in args.inputs
- ]
- outputs = [
- stack.enter_context(open(output, "w", encoding="utf-8"))
- if output != "-"
- else sys.stdout
- for output in args.outputs
- ]
-
- encoder = MultiprocessingEncoder(args)
- pool = Pool(args.workers, initializer=encoder.initializer)
- encoded_lines = pool.imap(encoder.encode_lines, zip(*inputs), 100)
-
- stats = Counter()
- for i, (filt, enc_lines) in enumerate(encoded_lines, start=1):
- if filt == "PASS":
- for enc_line, output_h in zip(enc_lines, outputs):
- print(enc_line, file=output_h)
- else:
- stats["num_filtered_" + filt] += 1
- if i % 10000 == 0:
- print("processed {} lines".format(i), file=sys.stderr)
-
- for k, v in stats.most_common():
- print("[{}] filtered {} lines".format(k, v), file=sys.stderr)
-
-
-class MultiprocessingEncoder(object):
- def __init__(self, args):
- self.args = args
-
- def initializer(self):
- global bpe
- bpe = get_encoder(self.args.encoder_json, self.args.vocab_bpe)
-
- def encode(self, line):
- global bpe
- ids = bpe.encode(line)
- return list(map(str, ids))
-
- def decode(self, tokens):
- global bpe
- return bpe.decode(tokens)
-
- def encode_lines(self, lines):
- """
- Encode a set of lines. All lines will be encoded together.
- """
- enc_lines = []
- for line in lines:
- line = line.strip()
- if len(line) == 0 and not self.args.keep_empty:
- return ["EMPTY", None]
- tokens = self.encode(line)
- enc_lines.append(" ".join(tokens))
- return ["PASS", enc_lines]
-
- def decode_lines(self, lines):
- dec_lines = []
- for line in lines:
- tokens = map(int, line.strip().split())
- dec_lines.append(self.decode(tokens))
- return ["PASS", dec_lines]
-
-
-if __name__ == "__main__":
- main()
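A hedged usage sketch of the encoder class above, used directly rather than through the multiprocessing pool (it assumes fairseq is installed and that `encoder.json` / `vocab.bpe` were downloaded from the URLs listed in `main()`'s docstring; the file paths are placeholders):

```python
from argparse import Namespace

# Placeholder paths to the downloaded GPT-2 BPE assets.
args = Namespace(encoder_json="encoder.json", vocab_bpe="vocab.bpe", keep_empty=False)

encoder = MultiprocessingEncoder(args)
encoder.initializer()                        # loads the GPT-2 BPE into the module-global `bpe`
status, enc_lines = encoder.encode_lines(["Hello world!"])
print(status, enc_lines)                     # e.g. PASS ['15496 995 0']
```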
diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/pens/cocoaPen.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/pens/cocoaPen.py
deleted file mode 100644
index 5369c3097187b6929df58e93284199a1729ea275..0000000000000000000000000000000000000000
--- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/pens/cocoaPen.py
+++ /dev/null
@@ -1,26 +0,0 @@
-from fontTools.pens.basePen import BasePen
-
-
-__all__ = ["CocoaPen"]
-
-
-class CocoaPen(BasePen):
- def __init__(self, glyphSet, path=None):
- BasePen.__init__(self, glyphSet)
- if path is None:
- from AppKit import NSBezierPath
-
- path = NSBezierPath.bezierPath()
- self.path = path
-
- def _moveTo(self, p):
- self.path.moveToPoint_(p)
-
- def _lineTo(self, p):
- self.path.lineToPoint_(p)
-
- def _curveToOne(self, p1, p2, p3):
- self.path.curveToPoint_controlPoint1_controlPoint2_(p3, p1, p2)
-
- def _closePath(self):
- self.path.closePath()
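A brief, hedged sketch of how this pen is typically driven via the fontTools pen protocol (it requires macOS with PyObjC for AppKit; the font path and glyph name are placeholders):

```python
from fontTools.ttLib import TTFont

font = TTFont("MyFont.ttf")        # placeholder font file
glyph_set = font.getGlyphSet()

pen = CocoaPen(glyph_set)
glyph_set["A"].draw(pen)           # the glyph outline is replayed into pen.path
ns_path = pen.path                 # an AppKit NSBezierPath ready for drawing
```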
diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/themes/base.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/themes/base.py
deleted file mode 100644
index 2c33b8079af6cb9d8d16fae9a8c430ecda8cc9e1..0000000000000000000000000000000000000000
--- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/themes/base.py
+++ /dev/null
@@ -1,1807 +0,0 @@
-from __future__ import annotations
-
-import json
-import re
-import tempfile
-import textwrap
-from pathlib import Path
-from typing import Iterable
-
-import huggingface_hub
-import requests
-import semantic_version as semver
-from gradio_client.documentation import document, set_documentation_group
-from huggingface_hub import CommitOperationAdd
-
-from gradio.themes.utils import (
- colors,
- fonts,
- get_matching_version,
- get_theme_assets,
- sizes,
-)
-from gradio.themes.utils.readme_content import README_CONTENT
-
-set_documentation_group("themes")
-
-
-class ThemeClass:
- def __init__(self):
- self._stylesheets = []
- self.name = None
-
- def _get_theme_css(self):
- css = {}
- dark_css = {}
-
- for attr, val in self.__dict__.items():
- if attr.startswith("_"):
- continue
- if val is None:
- if attr.endswith("_dark"):
- dark_css[attr[:-5]] = None
- continue
- else:
- raise ValueError(
- f"Cannot set '{attr}' to None - only dark mode variables can be None."
- )
- val = str(val)
- pattern = r"(\*)([\w_]+)(\b)"
-
- def repl_func(match):
- full_match = match.group(0)
- if full_match.startswith("*") and full_match.endswith("_dark"):
- raise ValueError(
- f"Cannot refer '{attr}' to '{val}' - dark variable references are automatically used for dark mode attributes, so do not use the _dark suffix in the value."
- )
- if (
- attr.endswith("_dark")
- and full_match.startswith("*")
- and attr[:-5] == full_match[1:]
- ):
- raise ValueError(
- f"Cannot refer '{attr}' to '{val}' - if dark and light mode values are the same, set dark mode version to None."
- )
-
- word = match.group(2)
- word = word.replace("_", "-")
- return f"var(--{word})"
-
- val = re.sub(pattern, repl_func, val)
-
- attr = attr.replace("_", "-")
-
- if attr.endswith("-dark"):
- attr = attr[:-5]
- dark_css[attr] = val
- else:
- css[attr] = val
-
- for attr, val in css.items():
- if attr not in dark_css:
- dark_css[attr] = val
-
- css_code = (
- ":root {\n"
- + "\n".join([f" --{attr}: {val};" for attr, val in css.items()])
- + "\n}"
- )
- dark_css_code = (
- ".dark {\n"
- + "\n".join([f" --{attr}: {val};" for attr, val in dark_css.items()])
- + "\n}"
- )
-
- return f"{css_code}\n{dark_css_code}"
-
- def to_dict(self):
- """Convert the theme into a python dictionary."""
- schema = {"theme": {}}
- for prop in dir(self):
- if (
- not prop.startswith("_")
- or prop.startswith("_font")
- or prop == "_stylesheets"
- or prop == "name"
- ) and isinstance(getattr(self, prop), (list, str)):
- schema["theme"][prop] = getattr(self, prop)
- return schema
-
- @classmethod
- def load(cls, path: str) -> ThemeClass:
- """Load a theme from a json file.
-
- Parameters:
- path: The filepath to read.
- """
- with open(path) as fp:
- return cls.from_dict(json.load(fp, object_hook=fonts.as_font))
-
- @classmethod
- def from_dict(cls, theme: dict[str, dict[str, str]]) -> ThemeClass:
- """Create a theme instance from a dictionary representation.
-
- Parameters:
- theme: The dictionary representation of the theme.
- """
- new_theme = cls()
- for prop, value in theme["theme"].items():
- setattr(new_theme, prop, value)
-
- # For backwards compatibility, load attributes in base theme not in the loaded theme from the base theme.
- base = Base()
- for attr in base.__dict__:
- if not attr.startswith("_") and not hasattr(new_theme, attr):
- setattr(new_theme, attr, getattr(base, attr))
-
- return new_theme
-
- def dump(self, filename: str):
- """Write the theme to a json file.
-
- Parameters:
- filename: The path to write the theme to.
- """
- Path(filename).write_text(json.dumps(self.to_dict(), cls=fonts.FontEncoder))
-
- @classmethod
- def from_hub(cls, repo_name: str, hf_token: str | None = None):
- """Load a theme from the hub.
-
- This DOES NOT require a HuggingFace account for downloading publicly available themes.
-
- Parameters:
- repo_name: string of the form "<author_name>/<theme_repo>@<semantic version expression>". If a semantic version expression is omitted, the latest version will be fetched.
- hf_token: HuggingFace Token. Only needed to download private themes.
- """
- if "@" not in repo_name:
- name, version = repo_name, None
- else:
- name, version = repo_name.split("@")
-
- api = huggingface_hub.HfApi(token=hf_token)
-
- try:
- space_info = api.space_info(name)
- except requests.HTTPError as e:
- raise ValueError(f"The space {name} does not exist") from e
-
- assets = get_theme_assets(space_info)
- matching_version = get_matching_version(assets, version)
-
- if not matching_version:
- raise ValueError(
- f"Cannot find a matching version for expression {version} "
- f"from files {[f.filename for f in assets]}"
- )
-
- theme_file = huggingface_hub.hf_hub_download(
- repo_id=name,
- repo_type="space",
- filename=f"themes/theme_schema@{matching_version.version}.json",
- )
- theme = cls.load(theme_file)
- theme.name = name
- return theme
-
- @staticmethod
- def _get_next_version(space_info: huggingface_hub.hf_api.SpaceInfo) -> str:
- assets = get_theme_assets(space_info)
- latest_version = max(assets, key=lambda asset: asset.version).version
- return str(latest_version.next_patch())
-
- @staticmethod
- def _theme_version_exists(
- space_info: huggingface_hub.hf_api.SpaceInfo, version: str
- ) -> bool:
- assets = get_theme_assets(space_info)
- return any(a.version == semver.Version(version) for a in assets)
-
- def push_to_hub(
- self,
- repo_name: str,
- org_name: str | None = None,
- version: str | None = None,
- hf_token: str | None = None,
- theme_name: str | None = None,
- description: str | None = None,
- private: bool = False,
- ):
- """Upload a theme to the HuggingFace hub.
-
- This requires a HuggingFace account.
-
- Parameters:
- repo_name: The name of the repository to store the theme assets, e.g. 'my_theme' or 'sunset'.
- org_name: The name of the org to save the space in. If None (the default), the username corresponding to the logged in user, or hf_token, is used.
- version: A semantic version tag for the theme. Bumping the version tag lets you publish updates to a theme without changing the look of applications that already loaded your theme.
- hf_token: API token for your HuggingFace account.
- theme_name: Name for the theme. If None, defaults to repo_name.
- description: A long form description of your theme.
- """
-
- from gradio import __version__
-
- api = huggingface_hub.HfApi()
-
- if not hf_token:
- try:
- author = huggingface_hub.whoami()["name"]
- except OSError as e:
- raise ValueError(
- "In order to push to hub, log in via `huggingface-cli login` "
- "or provide a theme_token to push_to_hub. For more information "
- "see https://huggingface.co/docs/huggingface_hub/quick-start#login"
- ) from e
- else:
- author = huggingface_hub.whoami(token=hf_token)["name"]
-
- space_id = f"{org_name or author}/{repo_name}"
-
- try:
- space_info = api.space_info(space_id)
- except requests.HTTPError:
- space_info = None
-
- space_exists = space_info is not None
-
- # If no version, set the version to next patch release
- if not version:
- version = self._get_next_version(space_info) if space_exists else "0.0.1"
- else:
- _ = semver.Version(version)
-
- if space_exists and self._theme_version_exists(space_info, version):
- raise ValueError(
- f"The space {space_id} already has a "
- f"theme with version {version}. See: themes/theme_schema@{version}.json. "
- "To manually override this version, use the HuggingFace hub UI."
- )
-
- theme_name = theme_name or repo_name
-
- with tempfile.NamedTemporaryFile(
- mode="w", delete=False, suffix=".json"
- ) as css_file:
- contents = self.to_dict()
- contents["version"] = version
- json.dump(contents, css_file, cls=fonts.FontEncoder)
- with tempfile.NamedTemporaryFile(mode="w", delete=False) as readme_file:
- readme_content = README_CONTENT.format(
- theme_name=theme_name,
- description=description or "Add a description of this theme here!",
- author=author,
- gradio_version=__version__,
- )
- readme_file.write(textwrap.dedent(readme_content))
- with tempfile.NamedTemporaryFile(mode="w", delete=False) as app_file:
- contents = (Path(__file__).parent / "app.py").read_text()
- contents = re.sub(
- r"theme=gr.themes.Default\(\)",
- f"theme='{space_id}'",
- contents,
- )
- contents = re.sub(r"{THEME}", theme_name or repo_name, contents)
- contents = re.sub(r"{AUTHOR}", org_name or author, contents)
- contents = re.sub(r"{SPACE_NAME}", repo_name, contents)
- app_file.write(contents)
-
- operations = [
- CommitOperationAdd(
- path_in_repo=f"themes/theme_schema@{version}.json",
- path_or_fileobj=css_file.name,
- ),
- CommitOperationAdd(
- path_in_repo="README.md", path_or_fileobj=readme_file.name
- ),
- CommitOperationAdd(path_in_repo="app.py", path_or_fileobj=app_file.name),
- ]
-
- huggingface_hub.create_repo(
- space_id,
- repo_type="space",
- space_sdk="gradio",
- token=hf_token,
- exist_ok=True,
- private=private,
- )
-
- api.create_commit(
- repo_id=space_id,
- commit_message="Updating theme",
- repo_type="space",
- operations=operations,
- token=hf_token,
- )
- url = f"https://huggingface.co/spaces/{space_id}"
- print(f"See your theme here! {url}")
- return url
-
-
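The `from_hub` / `push_to_hub` pair above is the round-trip for sharing themes; a hedged usage sketch (the Space name is illustrative, and publishing requires a logged-in Hugging Face account or an `hf_token`):

```python
import gradio as gr

# Load a published theme from a Hugging Face Space (illustrative repo name).
theme = gr.themes.Base.from_hub("gradio/seafoam")

# Build a customised theme locally and, optionally, publish it.
my_theme = gr.themes.Base(primary_hue="green")
# my_theme.push_to_hub(repo_name="my_green_theme", version="0.0.1")
```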
-@document("push_to_hub", "from_hub", "load", "dump", "from_dict", "to_dict")
-class Base(ThemeClass):
- def __init__(
- self,
- *,
- primary_hue: colors.Color | str = colors.blue,
- secondary_hue: colors.Color | str = colors.blue,
- neutral_hue: colors.Color | str = colors.gray,
- text_size: sizes.Size | str = sizes.text_md,
- spacing_size: sizes.Size | str = sizes.spacing_md,
- radius_size: sizes.Size | str = sizes.radius_md,
- font: fonts.Font
- | str
- | Iterable[fonts.Font | str] = (
- fonts.GoogleFont("Source Sans Pro"),
- "ui-sans-serif",
- "system-ui",
- "sans-serif",
- ),
- font_mono: fonts.Font
- | str
- | Iterable[fonts.Font | str] = (
- fonts.GoogleFont("IBM Plex Mono"),
- "ui-monospace",
- "Consolas",
- "monospace",
- ),
- ):
- """
- Parameters:
- primary_hue: The primary hue of the theme. Load a preset, like gradio.themes.colors.green (or just the string "green"), or pass your own gradio.themes.utils.Color object.
- secondary_hue: The secondary hue of the theme. Load a preset, like gradio.themes.colors.green (or just the string "green"), or pass your own gradio.themes.utils.Color object.
- neutral_hue: The neutral hue of the theme. Load a preset, like gradio.themes.colors.green (or just the string "green"), or pass your own gradio.themes.utils.Color object.
- text_size: The size of the text. Load a preset, like gradio.themes.sizes.text_sm (or just the string "sm"), or pass your own gradio.themes.utils.Size object.
- spacing_size: The size of the spacing. Load a preset, like gradio.themes.sizes.spacing_sm (or just the string "sm"), or pass your own gradio.themes.utils.Size object.
- radius_size: The radius size of corners. Load a preset, like gradio.themes.sizes.radius_sm (or just the string "sm"), or pass your own gradio.themes.utils.Size object.
- font: The primary font to use for the theme. Pass a string for a system font, or a gradio.themes.font.GoogleFont object to load a font from Google Fonts. Pass a list of fonts for fallbacks.
- font_mono: The monospace font to use for the theme, applies to code. Pass a string for a system font, or a gradio.themes.font.GoogleFont object to load a font from Google Fonts. Pass a list of fonts for fallbacks.
- """
-
- self.name = "base"
-
- def expand_shortcut(shortcut, mode="color", prefix=None):
- if not isinstance(shortcut, str):
- return shortcut
- if mode == "color":
- for color in colors.Color.all:
- if color.name == shortcut:
- return color
- raise ValueError(f"Color shortcut {shortcut} not found.")
- elif mode == "size":
- for size in sizes.Size.all:
- if size.name == f"{prefix}_{shortcut}":
- return size
- raise ValueError(f"Size shortcut {shortcut} not found.")
-
- primary_hue = expand_shortcut(primary_hue, mode="color")
- secondary_hue = expand_shortcut(secondary_hue, mode="color")
- neutral_hue = expand_shortcut(neutral_hue, mode="color")
- text_size = expand_shortcut(text_size, mode="size", prefix="text")
- spacing_size = expand_shortcut(spacing_size, mode="size", prefix="spacing")
- radius_size = expand_shortcut(radius_size, mode="size", prefix="radius")
-
- # Hue ranges
- self.primary_50 = primary_hue.c50
- self.primary_100 = primary_hue.c100
- self.primary_200 = primary_hue.c200
- self.primary_300 = primary_hue.c300
- self.primary_400 = primary_hue.c400
- self.primary_500 = primary_hue.c500
- self.primary_600 = primary_hue.c600
- self.primary_700 = primary_hue.c700
- self.primary_800 = primary_hue.c800
- self.primary_900 = primary_hue.c900
- self.primary_950 = primary_hue.c950
-
- self.secondary_50 = secondary_hue.c50
- self.secondary_100 = secondary_hue.c100
- self.secondary_200 = secondary_hue.c200
- self.secondary_300 = secondary_hue.c300
- self.secondary_400 = secondary_hue.c400
- self.secondary_500 = secondary_hue.c500
- self.secondary_600 = secondary_hue.c600
- self.secondary_700 = secondary_hue.c700
- self.secondary_800 = secondary_hue.c800
- self.secondary_900 = secondary_hue.c900
- self.secondary_950 = secondary_hue.c950
-
- self.neutral_50 = neutral_hue.c50
- self.neutral_100 = neutral_hue.c100
- self.neutral_200 = neutral_hue.c200
- self.neutral_300 = neutral_hue.c300
- self.neutral_400 = neutral_hue.c400
- self.neutral_500 = neutral_hue.c500
- self.neutral_600 = neutral_hue.c600
- self.neutral_700 = neutral_hue.c700
- self.neutral_800 = neutral_hue.c800
- self.neutral_900 = neutral_hue.c900
- self.neutral_950 = neutral_hue.c950
-
- # Spacing
- self.spacing_xxs = spacing_size.xxs
- self.spacing_xs = spacing_size.xs
- self.spacing_sm = spacing_size.sm
- self.spacing_md = spacing_size.md
- self.spacing_lg = spacing_size.lg
- self.spacing_xl = spacing_size.xl
- self.spacing_xxl = spacing_size.xxl
-
- self.radius_xxs = radius_size.xxs
- self.radius_xs = radius_size.xs
- self.radius_sm = radius_size.sm
- self.radius_md = radius_size.md
- self.radius_lg = radius_size.lg
- self.radius_xl = radius_size.xl
- self.radius_xxl = radius_size.xxl
-
- self.text_xxs = text_size.xxs
- self.text_xs = text_size.xs
- self.text_sm = text_size.sm
- self.text_md = text_size.md
- self.text_lg = text_size.lg
- self.text_xl = text_size.xl
- self.text_xxl = text_size.xxl
-
- # Font
- if not isinstance(font, Iterable):
- font = [font]
- self._font = [
- fontfam if isinstance(fontfam, fonts.Font) else fonts.Font(fontfam)
- for fontfam in font
- ]
- if not isinstance(font_mono, Iterable):
- font_mono = [font_mono]
- self._font_mono = [
- fontfam if isinstance(fontfam, fonts.Font) else fonts.Font(fontfam)
- for fontfam in font_mono
- ]
- self.font = ", ".join(str(font) for font in self._font)
- self.font_mono = ", ".join(str(font) for font in self._font_mono)
-
- self._stylesheets = []
- for font in self._font + self._font_mono:
- font_stylesheet = font.stylesheet()
- if font_stylesheet:
- self._stylesheets.append(font_stylesheet)
-
- self.set()
-
- def set(
- self,
- *,
- # Body Attributes: These set the values for the entire body of the app.
- body_background_fill=None,
- body_background_fill_dark=None,
- body_text_color=None,
- body_text_color_dark=None,
- body_text_size=None,
- body_text_color_subdued=None,
- body_text_color_subdued_dark=None,
- body_text_weight=None,
- embed_radius=None,
- # Element Colors: These set the colors for common elements.
- background_fill_primary=None,
- background_fill_primary_dark=None,
- background_fill_secondary=None,
- background_fill_secondary_dark=None,
- border_color_accent=None,
- border_color_accent_dark=None,
- border_color_primary=None,
- border_color_primary_dark=None,
- color_accent=None,
- color_accent_soft=None,
- color_accent_soft_dark=None,
- # Text: This sets the text styling for text elements.
- link_text_color=None,
- link_text_color_dark=None,
- link_text_color_active=None,
- link_text_color_active_dark=None,
- link_text_color_hover=None,
- link_text_color_hover_dark=None,
- link_text_color_visited=None,
- link_text_color_visited_dark=None,
- prose_text_size=None,
- prose_text_weight=None,
- prose_header_text_weight=None,
- # Shadows: These set the high-level shadow rendering styles. These variables are often referenced by other component-specific shadow variables.
- shadow_drop=None,
- shadow_drop_lg=None,
- shadow_inset=None,
- shadow_spread=None,
- shadow_spread_dark=None,
- # Layout Atoms: These set the style for common layout elements, such as the blocks that wrap components.
- block_background_fill=None,
- block_background_fill_dark=None,
- block_border_color=None,
- block_border_color_dark=None,
- block_border_width=None,
- block_border_width_dark=None,
- block_info_text_color=None,
- block_info_text_color_dark=None,
- block_info_text_size=None,
- block_info_text_weight=None,
- block_label_background_fill=None,
- block_label_background_fill_dark=None,
- block_label_border_color=None,
- block_label_border_color_dark=None,
- block_label_border_width=None,
- block_label_border_width_dark=None,
- block_label_shadow=None,
- block_label_text_color=None,
- block_label_text_color_dark=None,
- block_label_margin=None,
- block_label_padding=None,
- block_label_radius=None,
- block_label_right_radius=None,
- block_label_text_size=None,
- block_label_text_weight=None,
- block_padding=None,
- block_radius=None,
- block_shadow=None,
- block_shadow_dark=None,
- block_title_background_fill=None,
- block_title_background_fill_dark=None,
- block_title_border_color=None,
- block_title_border_color_dark=None,
- block_title_border_width=None,
- block_title_border_width_dark=None,
- block_title_text_color=None,
- block_title_text_color_dark=None,
- block_title_padding=None,
- block_title_radius=None,
- block_title_text_size=None,
- block_title_text_weight=None,
- container_radius=None,
- form_gap_width=None,
- layout_gap=None,
- panel_background_fill=None,
- panel_background_fill_dark=None,
- panel_border_color=None,
- panel_border_color_dark=None,
- panel_border_width=None,
- panel_border_width_dark=None,
- section_header_text_size=None,
- section_header_text_weight=None,
- # Component Atoms: These set the style for elements within components.
- chatbot_code_background_color=None,
- chatbot_code_background_color_dark=None,
- checkbox_background_color=None,
- checkbox_background_color_dark=None,
- checkbox_background_color_focus=None,
- checkbox_background_color_focus_dark=None,
- checkbox_background_color_hover=None,
- checkbox_background_color_hover_dark=None,
- checkbox_background_color_selected=None,
- checkbox_background_color_selected_dark=None,
- checkbox_border_color=None,
- checkbox_border_color_dark=None,
- checkbox_border_color_focus=None,
- checkbox_border_color_focus_dark=None,
- checkbox_border_color_hover=None,
- checkbox_border_color_hover_dark=None,
- checkbox_border_color_selected=None,
- checkbox_border_color_selected_dark=None,
- checkbox_border_radius=None,
- checkbox_border_width=None,
- checkbox_border_width_dark=None,
- checkbox_check=None,
- radio_circle=None,
- checkbox_shadow=None,
- checkbox_label_background_fill=None,
- checkbox_label_background_fill_dark=None,
- checkbox_label_background_fill_hover=None,
- checkbox_label_background_fill_hover_dark=None,
- checkbox_label_background_fill_selected=None,
- checkbox_label_background_fill_selected_dark=None,
- checkbox_label_border_color=None,
- checkbox_label_border_color_dark=None,
- checkbox_label_border_color_hover=None,
- checkbox_label_border_color_hover_dark=None,
- checkbox_label_border_width=None,
- checkbox_label_border_width_dark=None,
- checkbox_label_gap=None,
- checkbox_label_padding=None,
- checkbox_label_shadow=None,
- checkbox_label_text_size=None,
- checkbox_label_text_weight=None,
- checkbox_label_text_color=None,
- checkbox_label_text_color_dark=None,
- checkbox_label_text_color_selected=None,
- checkbox_label_text_color_selected_dark=None,
- error_background_fill=None,
- error_background_fill_dark=None,
- error_border_color=None,
- error_border_color_dark=None,
- error_border_width=None,
- error_border_width_dark=None,
- error_text_color=None,
- error_text_color_dark=None,
- input_background_fill=None,
- input_background_fill_dark=None,
- input_background_fill_focus=None,
- input_background_fill_focus_dark=None,
- input_background_fill_hover=None,
- input_background_fill_hover_dark=None,
- input_border_color=None,
- input_border_color_dark=None,
- input_border_color_focus=None,
- input_border_color_focus_dark=None,
- input_border_color_hover=None,
- input_border_color_hover_dark=None,
- input_border_width=None,
- input_border_width_dark=None,
- input_padding=None,
- input_placeholder_color=None,
- input_placeholder_color_dark=None,
- input_radius=None,
- input_shadow=None,
- input_shadow_dark=None,
- input_shadow_focus=None,
- input_shadow_focus_dark=None,
- input_text_size=None,
- input_text_weight=None,
- loader_color=None,
- loader_color_dark=None,
- slider_color=None,
- slider_color_dark=None,
- stat_background_fill=None,
- stat_background_fill_dark=None,
- table_border_color=None,
- table_border_color_dark=None,
- table_even_background_fill=None,
- table_even_background_fill_dark=None,
- table_odd_background_fill=None,
- table_odd_background_fill_dark=None,
- table_radius=None,
- table_row_focus=None,
- table_row_focus_dark=None,
- # Buttons: These set the style for buttons.
- button_border_width=None,
- button_border_width_dark=None,
- button_shadow=None,
- button_shadow_active=None,
- button_shadow_hover=None,
- button_transition=None,
- button_large_padding=None,
- button_large_radius=None,
- button_large_text_size=None,
- button_large_text_weight=None,
- button_small_padding=None,
- button_small_radius=None,
- button_small_text_size=None,
- button_small_text_weight=None,
- button_primary_background_fill=None,
- button_primary_background_fill_dark=None,
- button_primary_background_fill_hover=None,
- button_primary_background_fill_hover_dark=None,
- button_primary_border_color=None,
- button_primary_border_color_dark=None,
- button_primary_border_color_hover=None,
- button_primary_border_color_hover_dark=None,
- button_primary_text_color=None,
- button_primary_text_color_dark=None,
- button_primary_text_color_hover=None,
- button_primary_text_color_hover_dark=None,
- button_secondary_background_fill=None,
- button_secondary_background_fill_dark=None,
- button_secondary_background_fill_hover=None,
- button_secondary_background_fill_hover_dark=None,
- button_secondary_border_color=None,
- button_secondary_border_color_dark=None,
- button_secondary_border_color_hover=None,
- button_secondary_border_color_hover_dark=None,
- button_secondary_text_color=None,
- button_secondary_text_color_dark=None,
- button_secondary_text_color_hover=None,
- button_secondary_text_color_hover_dark=None,
- button_cancel_background_fill=None,
- button_cancel_background_fill_dark=None,
- button_cancel_background_fill_hover=None,
- button_cancel_background_fill_hover_dark=None,
- button_cancel_border_color=None,
- button_cancel_border_color_dark=None,
- button_cancel_border_color_hover=None,
- button_cancel_border_color_hover_dark=None,
- button_cancel_text_color=None,
- button_cancel_text_color_dark=None,
- button_cancel_text_color_hover=None,
- button_cancel_text_color_hover_dark=None,
- ) -> Base:
- """
- Parameters:
- body_background_fill: The background of the entire app.
- body_background_fill_dark: The background of the entire app in dark mode.
- body_text_color: The default text color.
- body_text_color_dark: The default text color in dark mode.
- body_text_size: The default text size.
- body_text_color_subdued: The text color used for softer, less important text.
- body_text_color_subdued_dark: The text color used for softer, less important text in dark mode.
- body_text_weight: The default text weight.
- embed_radius: The corner radius used for embedding when the app is embedded within a page.
- background_fill_primary: The background primarily used for items placed directly on the page.
- background_fill_primary_dark: The background primarily used for items placed directly on the page in dark mode.
- background_fill_secondary: The background primarily used for items placed on top of another item.
- background_fill_secondary_dark: The background primarily used for items placed on top of another item in dark mode.
- border_color_accent: The border color used for accented items.
- border_color_accent_dark: The border color used for accented items in dark mode.
- border_color_primary: The border color primarily used for items placed directly on the page.
- border_color_primary_dark: The border color primarily used for items placed directly on the page in dark mode.
- color_accent: The color used for accented items.
- color_accent_soft: The softer color used for accented items.
- color_accent_soft_dark: The softer color used for accented items in dark mode.
- link_text_color: The text color used for links.
- link_text_color_dark: The text color used for links in dark mode.
- link_text_color_active: The text color used for links when they are active.
- link_text_color_active_dark: The text color used for links when they are active in dark mode.
- link_text_color_hover: The text color used for links when they are hovered over.
- link_text_color_hover_dark: The text color used for links when they are hovered over in dark mode.
- link_text_color_visited: The text color used for links when they have been visited.
- link_text_color_visited_dark: The text color used for links when they have been visited in dark mode.
- prose_text_size: The text size used for markdown and other prose.
- prose_text_weight: The text weight used for markdown and other prose.
- prose_header_text_weight: The text weight of a header used for markdown and other prose.
- shadow_drop: Drop shadow used by other shadowed items.
- shadow_drop_lg: Larger drop shadow used by other shadowed items.
- shadow_inset: Inset shadow used by other shadowed items.
- shadow_spread: Size of shadow spread used by shadowed items.
- shadow_spread_dark: Size of shadow spread used by shadowed items in dark mode.
- block_background_fill: The background around an item.
- block_background_fill_dark: The background around an item in dark mode.
- block_border_color: The border color around an item.
- block_border_color_dark: The border color around an item in dark mode.
- block_border_width: The border width around an item.
- block_border_width_dark: The border width around an item in dark mode.
- block_info_text_color: The color of the info text.
- block_info_text_color_dark: The color of the info text in dark mode.
- block_info_text_size: The size of the info text.
- block_info_text_weight: The weight of the info text.
- block_label_background_fill: The background of the title label of a media element (e.g. image).
- block_label_background_fill_dark: The background of the title label of a media element (e.g. image) in dark mode.
- block_label_border_color: The border color of the title label of a media element (e.g. image).
- block_label_border_color_dark: The border color of the title label of a media element (e.g. image) in dark mode.
- block_label_border_width: The border width of the title label of a media element (e.g. image).
- block_label_border_width_dark: The border width of the title label of a media element (e.g. image) in dark mode.
- block_label_shadow: The shadow of the title label of a media element (e.g. image).
- block_label_text_color: The text color of the title label of a media element (e.g. image).
- block_label_text_color_dark: The text color of the title label of a media element (e.g. image) in dark mode.
- block_label_margin: The margin of the title label of a media element (e.g. image) from its surrounding container.
- block_label_padding: The padding of the title label of a media element (e.g. image).
- block_label_radius: The corner radius of the title label of a media element (e.g. image).
- block_label_right_radius: The corner radius of a right-aligned helper label.
- block_label_text_size: The text size of the title label of a media element (e.g. image).
- block_label_text_weight: The text weight of the title label of a media element (e.g. image).
- block_padding: The padding around an item.
- block_radius: The corner radius around an item.
- block_shadow: The shadow under an item.
- block_shadow_dark: The shadow under an item in dark mode.
- block_title_background_fill: The background of the title of a form element (e.g. textbox).
- block_title_background_fill_dark: The background of the title of a form element (e.g. textbox) in dark mode.
- block_title_border_color: The border color of the title of a form element (e.g. textbox).
- block_title_border_color_dark: The border color of the title of a form element (e.g. textbox) in dark mode.
- block_title_border_width: The border width of the title of a form element (e.g. textbox).
- block_title_border_width_dark: The border width of the title of a form element (e.g. textbox) in dark mode.
- block_title_text_color: The text color of the title of a form element (e.g. textbox).
- block_title_text_color_dark: The text color of the title of a form element (e.g. textbox) in dark mode.
- block_title_padding: The padding of the title of a form element (e.g. textbox).
- block_title_radius: The corner radius of the title of a form element (e.g. textbox).
- block_title_text_size: The text size of the title of a form element (e.g. textbox).
- block_title_text_weight: The text weight of the title of a form element (e.g. textbox).
- container_radius: The corner radius of a layout component that holds other content.
- form_gap_width: The border gap between form elements (e.g. consecutive textboxes).
- layout_gap: The gap between items within a row or column.
- panel_background_fill: The background of a panel.
- panel_background_fill_dark: The background of a panel in dark mode.
- panel_border_color: The border color of a panel.
- panel_border_color_dark: The border color of a panel in dark mode.
- panel_border_width: The border width of a panel.
- panel_border_width_dark: The border width of a panel in dark mode.
- section_header_text_size: The text size of a section header (e.g. tab name).
- section_header_text_weight: The text weight of a section header (e.g. tab name).
- chatbot_code_background_color: The background color of code blocks in the chatbot.
- chatbot_code_background_color_dark: The background color of code blocks in the chatbot in dark mode.
- checkbox_background_color: The background of a checkbox square or radio circle.
- checkbox_background_color_dark: The background of a checkbox square or radio circle in dark mode.
- checkbox_background_color_focus: The background of a checkbox square or radio circle when focused.
- checkbox_background_color_focus_dark: The background of a checkbox square or radio circle when focused in dark mode.
- checkbox_background_color_hover: The background of a checkbox square or radio circle when hovered over.
- checkbox_background_color_hover_dark: The background of a checkbox square or radio circle when hovered over in dark mode.
- checkbox_background_color_selected: The background of a checkbox square or radio circle when selected.
- checkbox_background_color_selected_dark: The background of a checkbox square or radio circle when selected in dark mode.
- checkbox_border_color: The border color of a checkbox square or radio circle.
- checkbox_border_color_dark: The border color of a checkbox square or radio circle in dark mode.
- checkbox_border_color_focus: The border color of a checkbox square or radio circle when focused.
- checkbox_border_color_focus_dark: The border color of a checkbox square or radio circle when focused in dark mode.
- checkbox_border_color_hover: The border color of a checkbox square or radio circle when hovered over.
- checkbox_border_color_hover_dark: The border color of a checkbox square or radio circle when hovered over in dark mode.
- checkbox_border_color_selected: The border color of a checkbox square or radio circle when selected.
- checkbox_border_color_selected_dark: The border color of a checkbox square or radio circle when selected in dark mode.
- checkbox_border_radius: The corner radius of a checkbox square.
- checkbox_border_width: The border width of a checkbox square or radio circle.
- checkbox_border_width_dark: The border width of a checkbox square or radio circle in dark mode.
- checkbox_check: The checkmark visual of a checkbox square.
- radio_circle: The circle visual of a radio circle.
- checkbox_shadow: The shadow of a checkbox square or radio circle.
- checkbox_label_background_fill: The background of the surrounding button of a checkbox or radio element.
- checkbox_label_background_fill_dark: The background of the surrounding button of a checkbox or radio element in dark mode.
- checkbox_label_background_fill_hover: The background of the surrounding button of a checkbox or radio element when hovered over.
- checkbox_label_background_fill_hover_dark: The background of the surrounding button of a checkbox or radio element when hovered over in dark mode.
- checkbox_label_background_fill_selected: The background of the surrounding button of a checkbox or radio element when selected.
- checkbox_label_background_fill_selected_dark: The background of the surrounding button of a checkbox or radio element when selected in dark mode.
- checkbox_label_border_color: The border color of the surrounding button of a checkbox or radio element.
- checkbox_label_border_color_dark: The border color of the surrounding button of a checkbox or radio element in dark mode.
- checkbox_label_border_color_hover: The border color of the surrounding button of a checkbox or radio element when hovered over.
- checkbox_label_border_color_hover_dark: The border color of the surrounding button of a checkbox or radio element when hovered over in dark mode.
- checkbox_label_border_width: The border width of the surrounding button of a checkbox or radio element.
- checkbox_label_border_width_dark: The border width of the surrounding button of a checkbox or radio element in dark mode.
- checkbox_label_gap: The gap between consecutive checkbox or radio elements.
- checkbox_label_padding: The padding of the surrounding button of a checkbox or radio element.
- checkbox_label_shadow: The shadow of the surrounding button of a checkbox or radio element.
- checkbox_label_text_size: The text size of the label accompanying a checkbox or radio element.
- checkbox_label_text_weight: The text weight of the label accompanying a checkbox or radio element.
- checkbox_label_text_color: The text color of the label accompanying a checkbox or radio element.
- checkbox_label_text_color_dark: The text color of the label accompanying a checkbox or radio element in dark mode.
- checkbox_label_text_color_selected: The text color of the label accompanying a checkbox or radio element when selected.
- checkbox_label_text_color_selected_dark: The text color of the label accompanying a checkbox or radio element when selected in dark mode.
- error_background_fill: The background of an error message.
- error_background_fill_dark: The background of an error message in dark mode.
- error_border_color: The border color of an error message.
- error_border_color_dark: The border color of an error message in dark mode.
- error_border_width: The border width of an error message.
- error_border_width_dark: The border width of an error message in dark mode.
- error_text_color: The text color of an error message.
- error_text_color_dark: The text color of an error message in dark mode.
- input_background_fill: The background of an input field.
- input_background_fill_dark: The background of an input field in dark mode.
- input_background_fill_focus: The background of an input field when focused.
- input_background_fill_focus_dark: The background of an input field when focused in dark mode.
- input_background_fill_hover: The background of an input field when hovered over.
- input_background_fill_hover_dark: The background of an input field when hovered over in dark mode.
- input_border_color: The border color of an input field.
- input_border_color_dark: The border color of an input field in dark mode.
- input_border_color_focus: The border color of an input field when focused.
- input_border_color_focus_dark: The border color of an input field when focused in dark mode.
- input_border_color_hover: The border color of an input field when hovered over.
- input_border_color_hover_dark: The border color of an input field when hovered over in dark mode.
- input_border_width: The border width of an input field.
- input_border_width_dark: The border width of an input field in dark mode.
- input_padding: The padding of an input field.
- input_placeholder_color: The placeholder text color of an input field.
- input_placeholder_color_dark: The placeholder text color of an input field in dark mode.
- input_radius: The corner radius of an input field.
- input_shadow: The shadow of an input field.
- input_shadow_dark: The shadow of an input field in dark mode.
- input_shadow_focus: The shadow of an input field when focused.
- input_shadow_focus_dark: The shadow of an input field when focused in dark mode.
- input_text_size: The text size of an input field.
- input_text_weight: The text weight of an input field.
- loader_color: The color of the loading animation while a request is pending.
- loader_color_dark: The color of the loading animation while a request is pending in dark mode.
- slider_color: The color of the slider in a range element.
- slider_color_dark: The color of the slider in a range element in dark mode.
- stat_background_fill: The background used for stats visuals (e.g. confidence bars in label).
- stat_background_fill_dark: The background used for stats visuals (e.g. confidence bars in label) in dark mode.
- table_border_color: The border color of a table.
- table_border_color_dark: The border color of a table in dark mode.
- table_even_background_fill: The background of even rows in a table.
- table_even_background_fill_dark: The background of even rows in a table in dark mode.
- table_odd_background_fill: The background of odd rows in a table.
- table_odd_background_fill_dark: The background of odd rows in a table in dark mode.
- table_radius: The corner radius of a table.
- table_row_focus: The background of a focused row in a table.
- table_row_focus_dark: The background of a focused row in a table in dark mode.
- button_border_width: The border width of a button.
- button_border_width_dark: The border width of a button in dark mode.
- button_cancel_background_fill: The background of a button of "cancel" variant.
- button_cancel_background_fill_dark: The background of a button of "cancel" variant in dark mode.
- button_cancel_background_fill_hover: The background of a button of "cancel" variant when hovered over.
- button_cancel_background_fill_hover_dark: The background of a button of "cancel" variant when hovered over in dark mode.
- button_cancel_border_color: The border color of a button of "cancel" variant.
- button_cancel_border_color_dark: The border color of a button of "cancel" variant in dark mode.
- button_cancel_border_color_hover: The border color of a button of "cancel" variant when hovered over.
- button_cancel_border_color_hover_dark: The border color of a button of "cancel" variant when hovered over in dark mode.
- button_cancel_text_color: The text color of a button of "cancel" variant.
- button_cancel_text_color_dark: The text color of a button of "cancel" variant in dark mode.
- button_cancel_text_color_hover: The text color of a button of "cancel" variant when hovered over.
- button_cancel_text_color_hover_dark: The text color of a button of "cancel" variant when hovered over in dark mode.
- button_large_padding: The padding of a button with the default "large" size.
- button_large_radius: The corner radius of a button with the default "large" size.
- button_large_text_size: The text size of a button with the default "large" size.
- button_large_text_weight: The text weight of a button with the default "large" size.
- button_primary_background_fill: The background of a button of "primary" variant.
- button_primary_background_fill_dark: The background of a button of "primary" variant in dark mode.
- button_primary_background_fill_hover: The background of a button of "primary" variant when hovered over.
- button_primary_background_fill_hover_dark: The background of a button of "primary" variant when hovered over in dark mode.
- button_primary_border_color: The border color of a button of "primary" variant.
- button_primary_border_color_dark: The border color of a button of "primary" variant in dark mode.
- button_primary_border_color_hover: The border color of a button of "primary" variant when hovered over.
- button_primary_border_color_hover_dark: The border color of a button of "primary" variant when hovered over in dark mode.
- button_primary_text_color: The text color of a button of "primary" variant.
- button_primary_text_color_dark: The text color of a button of "primary" variant in dark mode.
- button_primary_text_color_hover: The text color of a button of "primary" variant when hovered over.
- button_primary_text_color_hover_dark: The text color of a button of "primary" variant when hovered over in dark mode.
- button_secondary_background_fill: The background of a button of default "secondary" variant.
- button_secondary_background_fill_dark: The background of a button of default "secondary" variant in dark mode.
- button_secondary_background_fill_hover: The background of a button of default "secondary" variant when hovered over.
- button_secondary_background_fill_hover_dark: The background of a button of default "secondary" variant when hovered over in dark mode.
- button_secondary_border_color: The border color of a button of default "secondary" variant.
- button_secondary_border_color_dark: The border color of a button of default "secondary" variant in dark mode.
- button_secondary_border_color_hover: The border color of a button of default "secondary" variant when hovered over.
- button_secondary_border_color_hover_dark: The border color of a button of default "secondary" variant when hovered over in dark mode.
- button_secondary_text_color: The text color of a button of default "secondary" variant.
- button_secondary_text_color_dark: The text color of a button of default "secondary" variant in dark mode.
- button_secondary_text_color_hover: The text color of a button of default "secondary" variant when hovered over.
- button_secondary_text_color_hover_dark: The text color of a button of default "secondary" variant when hovered over in dark mode.
- button_shadow: The shadow under a button.
- button_shadow_active: The shadow under a button when pressed.
- button_shadow_hover: The shadow under a button when hovered over.
- button_small_padding: The padding of a button set to "small" size.
- button_small_radius: The corner radius of a button set to "small" size.
- button_small_text_size: The text size of a button set to "small" size.
- button_small_text_weight: The text weight of a button set to "small" size.
- button_transition: The transition animation duration of a button between regular, hover, and focused states.
- """
-
- # Body
- self.body_background_fill = body_background_fill or getattr(
- self, "body_background_fill", "*background_fill_primary"
- )
- self.body_background_fill_dark = body_background_fill_dark or getattr(
- self, "body_background_fill_dark", "*background_fill_primary"
- )
- self.body_text_color = body_text_color or getattr(
- self, "body_text_color", "*neutral_800"
- )
- self.body_text_color_dark = body_text_color_dark or getattr(
- self, "body_text_color_dark", "*neutral_100"
- )
- self.body_text_size = body_text_size or getattr(
- self, "body_text_size", "*text_md"
- )
- self.body_text_weight = body_text_weight or getattr(
- self, "body_text_weight", "400"
- )
- self.embed_radius = embed_radius or getattr(self, "embed_radius", "*radius_lg")
- # Core Colors
- self.color_accent = color_accent or getattr(
- self, "color_accent", "*primary_500"
- )
- self.color_accent_soft = color_accent_soft or getattr(
- self, "color_accent_soft", "*primary_50"
- )
- self.color_accent_soft_dark = color_accent_soft_dark or getattr(
- self, "color_accent_soft_dark", "*neutral_700"
- )
- self.background_fill_primary = background_fill_primary or getattr(
- self, "background_primary", "white"
- )
- self.background_fill_primary_dark = background_fill_primary_dark or getattr(
- self, "background_primary_dark", "*neutral_950"
- )
- self.background_fill_secondary = background_fill_secondary or getattr(
- self, "background_secondary", "*neutral_50"
- )
- self.background_fill_secondary_dark = background_fill_secondary_dark or getattr(
- self, "background_secondary_dark", "*neutral_900"
- )
- self.border_color_accent = border_color_accent or getattr(
- self, "border_color_accent", "*primary_300"
- )
- self.border_color_accent_dark = border_color_accent_dark or getattr(
- self, "border_color_accent_dark", "*neutral_600"
- )
- self.border_color_primary = border_color_primary or getattr(
- self, "border_color_primary", "*neutral_200"
- )
- self.border_color_primary_dark = border_color_primary_dark or getattr(
- self, "border_color_primary_dark", "*neutral_700"
- )
- # Text Colors
- self.link_text_color = link_text_color or getattr(
- self, "link_text_color", "*secondary_600"
- )
- self.link_text_color_active = link_text_color_active or getattr(
- self, "link_text_color_active", "*secondary_600"
- )
- self.link_text_color_active_dark = link_text_color_active_dark or getattr(
- self, "link_text_color_active_dark", "*secondary_500"
- )
- self.link_text_color_dark = link_text_color_dark or getattr(
- self, "link_text_color_dark", "*secondary_500"
- )
- self.link_text_color_hover = link_text_color_hover or getattr(
- self, "link_text_color_hover", "*secondary_700"
- )
- self.link_text_color_hover_dark = link_text_color_hover_dark or getattr(
- self, "link_text_color_hover_dark", "*secondary_400"
- )
- self.link_text_color_visited = link_text_color_visited or getattr(
- self, "link_text_color_visited", "*secondary_500"
- )
- self.link_text_color_visited_dark = link_text_color_visited_dark or getattr(
- self, "link_text_color_visited_dark", "*secondary_600"
- )
- self.body_text_color_subdued = body_text_color_subdued or getattr(
- self, "body_text_color_subdued", "*neutral_400"
- )
- self.body_text_color_subdued_dark = body_text_color_subdued_dark or getattr(
- self, "body_text_color_subdued_dark", "*neutral_400"
- )
- # Shadows
- self.shadow_drop = shadow_drop or getattr(
- self, "shadow_drop", "rgba(0,0,0,0.05) 0px 1px 2px 0px"
- )
- self.shadow_drop_lg = shadow_drop_lg or getattr(
- self,
- "shadow_drop_lg",
- "0 1px 3px 0 rgb(0 0 0 / 0.1), 0 1px 2px -1px rgb(0 0 0 / 0.1)",
- )
- self.shadow_inset = shadow_inset or getattr(
- self, "shadow_inset", "rgba(0,0,0,0.05) 0px 2px 4px 0px inset"
- )
- self.shadow_spread = shadow_spread or getattr(self, "shadow_spread", "3px")
- self.shadow_spread_dark = shadow_spread_dark or getattr(
- self, "shadow_spread_dark", "1px"
- )
- # Layout Atoms
- self.block_background_fill = block_background_fill or getattr(
- self, "block_background_fill", "*background_fill_primary"
- )
- self.block_background_fill_dark = block_background_fill_dark or getattr(
- self, "block_background_fill_dark", "*neutral_800"
- )
- self.block_border_color = block_border_color or getattr(
- self, "block_border_color", "*border_color_primary"
- )
- self.block_border_color_dark = block_border_color_dark or getattr(
- self, "block_border_color_dark", "*border_color_primary"
- )
- self.block_border_width = block_border_width or getattr(
- self, "block_border_width", "1px"
- )
- self.block_border_width_dark = block_border_width_dark or getattr(
- self, "block_border_width_dark", None
- )
- self.block_info_text_color = block_info_text_color or getattr(
- self, "block_info_text_color", "*body_text_color_subdued"
- )
- self.block_info_text_color_dark = block_info_text_color_dark or getattr(
- self, "block_info_text_color_dark", "*body_text_color_subdued"
- )
- self.block_info_text_size = block_info_text_size or getattr(
- self, "block_info_text_size", "*text_sm"
- )
- self.block_info_text_weight = block_info_text_weight or getattr(
- self, "block_info_text_weight", "400"
- )
- self.block_label_background_fill = block_label_background_fill or getattr(
- self, "block_label_background_fill", "*background_fill_primary"
- )
- self.block_label_background_fill_dark = (
- block_label_background_fill_dark
- or getattr(
- self, "block_label_background_fill_dark", "*background_fill_secondary"
- )
- )
- self.block_label_border_color = block_label_border_color or getattr(
- self, "block_label_border_color", "*border_color_primary"
- )
- self.block_label_border_color_dark = block_label_border_color_dark or getattr(
- self, "block_label_border_color_dark", "*border_color_primary"
- )
- self.block_label_border_width = block_label_border_width or getattr(
- self, "block_label_border_width", "1px"
- )
- self.block_label_border_width_dark = block_label_border_width_dark or getattr(
- self, "block_label_border_width_dark", None
- )
- self.block_label_shadow = block_label_shadow or getattr(
- self, "block_label_shadow", "*block_shadow"
- )
- self.block_label_text_color = block_label_text_color or getattr(
- self, "block_label_text_color", "*neutral_500"
- )
- self.block_label_text_color_dark = block_label_text_color_dark or getattr(
- self, "block_label_text_color_dark", "*neutral_200"
- )
- self.block_label_margin = block_label_margin or getattr(
- self, "block_label_margin", "0"
- )
- self.block_label_padding = block_label_padding or getattr(
- self, "block_label_padding", "*spacing_sm *spacing_lg"
- )
- self.block_label_radius = block_label_radius or getattr(
- self,
- "block_label_radius",
- "calc(*radius_lg - 1px) 0 calc(*radius_lg - 1px) 0",
- )
- self.block_label_right_radius = block_label_right_radius or getattr(
- self,
- "block_label_right_radius",
- "0 calc(*radius_lg - 1px) 0 calc(*radius_lg - 1px)",
- )
- self.block_label_text_size = block_label_text_size or getattr(
- self, "block_label_text_size", "*text_sm"
- )
- self.block_label_text_weight = block_label_text_weight or getattr(
- self, "block_label_text_weight", "400"
- )
- self.block_padding = block_padding or getattr(
- self, "block_padding", "*spacing_xl calc(*spacing_xl + 2px)"
- )
- self.block_radius = block_radius or getattr(self, "block_radius", "*radius_lg")
- self.block_shadow = block_shadow or getattr(self, "block_shadow", "none")
- self.block_shadow_dark = block_shadow_dark or getattr(
- self, "block_shadow_dark", None
- )
- self.block_title_background_fill = block_title_background_fill or getattr(
- self, "block_title_background_fill", "none"
- )
- self.block_title_background_fill_dark = (
- block_title_background_fill_dark
- or getattr(self, "block_title_background_fill_dark", None)
- )
- self.block_title_border_color = block_title_border_color or getattr(
- self, "block_title_border_color", "none"
- )
- self.block_title_border_color_dark = block_title_border_color_dark or getattr(
- self, "block_title_border_color_dark", None
- )
- self.block_title_border_width = block_title_border_width or getattr(
- self, "block_title_border_width", "0px"
- )
- self.block_title_border_width_dark = block_title_border_width_dark or getattr(
- self, "block_title_border_width_dark", None
- )
- self.block_title_text_color = block_title_text_color or getattr(
- self, "block_title_text_color", "*neutral_500"
- )
- self.block_title_text_color_dark = block_title_text_color_dark or getattr(
- self, "block_title_text_color_dark", "*neutral_200"
- )
- self.block_title_padding = block_title_padding or getattr(
- self, "block_title_padding", "0"
- )
- self.block_title_radius = block_title_radius or getattr(
- self, "block_title_radius", "none"
- )
- self.block_title_text_size = block_title_text_size or getattr(
- self, "block_title_text_size", "*text_md"
- )
- self.block_title_text_weight = block_title_text_weight or getattr(
- self, "block_title_text_weight", "400"
- )
- self.container_radius = container_radius or getattr(
- self, "container_radius", "*radius_lg"
- )
- self.form_gap_width = form_gap_width or getattr(self, "form_gap_width", "0px")
- self.layout_gap = layout_gap or getattr(self, "layout_gap", "*spacing_xxl")
- self.panel_background_fill = panel_background_fill or getattr(
- self, "panel_background_fill", "*background_fill_secondary"
- )
- self.panel_background_fill_dark = panel_background_fill_dark or getattr(
- self, "panel_background_fill_dark", "*background_fill_secondary"
- )
- self.panel_border_color = panel_border_color or getattr(
- self, "panel_border_color", "*border_color_primary"
- )
- self.panel_border_color_dark = panel_border_color_dark or getattr(
- self, "panel_border_color_dark", "*border_color_primary"
- )
- self.panel_border_width = panel_border_width or getattr(
- self, "panel_border_width", "0"
- )
- self.panel_border_width_dark = panel_border_width_dark or getattr(
- self, "panel_border_width_dark", None
- )
- self.section_header_text_size = section_header_text_size or getattr(
- self, "section_header_text_size", "*text_md"
- )
- self.section_header_text_weight = section_header_text_weight or getattr(
- self, "section_header_text_weight", "400"
- )
- # Component Atoms
- self.chatbot_code_background_color = chatbot_code_background_color or getattr(
- self, "chatbot_code_background_color", "*neutral_100"
- )
- self.chatbot_code_background_color_dark = (
- chatbot_code_background_color_dark
- or getattr(self, "chatbot_code_background_color_dark", "*neutral_800")
- )
- self.checkbox_background_color = checkbox_background_color or getattr(
- self, "checkbox_background_color", "*background_fill_primary"
- )
- self.checkbox_background_color_dark = checkbox_background_color_dark or getattr(
- self, "checkbox_background_color_dark", "*neutral_800"
- )
- self.checkbox_background_color_focus = (
- checkbox_background_color_focus
- or getattr(
- self, "checkbox_background_color_focus", "*checkbox_background_color"
- )
- )
- self.checkbox_background_color_focus_dark = (
- checkbox_background_color_focus_dark
- or getattr(
- self,
- "checkbox_background_color_focus_dark",
- "*checkbox_background_color",
- )
- )
- self.checkbox_background_color_hover = (
- checkbox_background_color_hover
- or getattr(
- self, "checkbox_background_color_hover", "*checkbox_background_color"
- )
- )
- self.checkbox_background_color_hover_dark = (
- checkbox_background_color_hover_dark
- or getattr(
- self,
- "checkbox_background_color_hover_dark",
- "*checkbox_background_color",
- )
- )
- self.checkbox_background_color_selected = (
- checkbox_background_color_selected
- or getattr(self, "checkbox_background_color_selected", "*secondary_600")
- )
- self.checkbox_background_color_selected_dark = (
- checkbox_background_color_selected_dark
- or getattr(
- self, "checkbox_background_color_selected_dark", "*secondary_600"
- )
- )
- self.checkbox_border_color = checkbox_border_color or getattr(
- self, "checkbox_border_color", "*neutral_300"
- )
- self.checkbox_border_color_dark = checkbox_border_color_dark or getattr(
- self, "checkbox_border_color_dark", "*neutral_700"
- )
- self.checkbox_border_color_focus = checkbox_border_color_focus or getattr(
- self, "checkbox_border_color_focus", "*secondary_500"
- )
- self.checkbox_border_color_focus_dark = (
- checkbox_border_color_focus_dark
- or getattr(self, "checkbox_border_color_focus_dark", "*secondary_500")
- )
- self.checkbox_border_color_hover = checkbox_border_color_hover or getattr(
- self, "checkbox_border_color_hover", "*neutral_300"
- )
- self.checkbox_border_color_hover_dark = (
- checkbox_border_color_hover_dark
- or getattr(self, "checkbox_border_color_hover_dark", "*neutral_600")
- )
- self.checkbox_border_color_selected = checkbox_border_color_selected or getattr(
- self, "checkbox_border_color_selected", "*secondary_600"
- )
- self.checkbox_border_color_selected_dark = (
- checkbox_border_color_selected_dark
- or getattr(self, "checkbox_border_color_selected_dark", "*secondary_600")
- )
- self.checkbox_border_radius = checkbox_border_radius or getattr(
- self, "checkbox_border_radius", "*radius_sm"
- )
- self.checkbox_border_width = checkbox_border_width or getattr(
- self, "checkbox_border_width", "*input_border_width"
- )
- self.checkbox_border_width_dark = checkbox_border_width_dark or getattr(
- self, "checkbox_border_width_dark", "*input_border_width"
- )
- self.checkbox_label_background_fill = checkbox_label_background_fill or getattr(
- self, "checkbox_label_background_fill", "*button_secondary_background_fill"
- )
- self.checkbox_label_background_fill_dark = (
- checkbox_label_background_fill_dark
- or getattr(
- self,
- "checkbox_label_background_fill_dark",
- "*button_secondary_background_fill",
- )
- )
- self.checkbox_label_background_fill_hover = (
- checkbox_label_background_fill_hover
- or getattr(
- self,
- "checkbox_label_background_fill_hover",
- "*button_secondary_background_fill_hover",
- )
- )
- self.checkbox_label_background_fill_hover_dark = (
- checkbox_label_background_fill_hover_dark
- or getattr(
- self,
- "checkbox_label_background_fill_hover_dark",
- "*button_secondary_background_fill_hover",
- )
- )
- self.checkbox_label_background_fill_selected = (
- checkbox_label_background_fill_selected
- or getattr(
- self,
- "checkbox_label_background_fill_selected",
- "*checkbox_label_background_fill",
- )
- )
- self.checkbox_label_background_fill_selected_dark = (
- checkbox_label_background_fill_selected_dark
- or getattr(
- self,
- "checkbox_label_background_fill_selected_dark",
- "*checkbox_label_background_fill",
- )
- )
- self.checkbox_label_border_color = checkbox_label_border_color or getattr(
- self, "checkbox_label_border_color", "*border_color_primary"
- )
- self.checkbox_label_border_color_dark = (
- checkbox_label_border_color_dark
- or getattr(
- self, "checkbox_label_border_color_dark", "*border_color_primary"
- )
- )
- self.checkbox_label_border_color_hover = (
- checkbox_label_border_color_hover
- or getattr(
- self,
- "checkbox_label_border_color_hover",
- "*checkbox_label_border_color",
- )
- )
- self.checkbox_label_border_color_hover_dark = (
- checkbox_label_border_color_hover_dark
- or getattr(
- self,
- "checkbox_label_border_color_hover_dark",
- "*checkbox_label_border_color",
- )
- )
- self.checkbox_label_border_width = checkbox_label_border_width or getattr(
- self, "checkbox_label_border_width", "*input_border_width"
- )
- self.checkbox_label_border_width_dark = (
- checkbox_label_border_width_dark
- or getattr(self, "checkbox_label_border_width_dark", "*input_border_width")
- )
- self.checkbox_label_gap = checkbox_label_gap or getattr(
- self, "checkbox_label_gap", "*spacing_lg"
- )
- self.checkbox_label_padding = checkbox_label_padding or getattr(
- self, "checkbox_label_padding", "*spacing_md calc(2 * *spacing_md)"
- )
- self.checkbox_label_shadow = checkbox_label_shadow or getattr(
- self, "checkbox_label_shadow", "none"
- )
- self.checkbox_label_text_size = checkbox_label_text_size or getattr(
- self, "checkbox_label_text_size", "*text_md"
- )
- self.checkbox_label_text_weight = checkbox_label_text_weight or getattr(
- self, "checkbox_label_text_weight", "400"
- )
- self.checkbox_check = checkbox_check or getattr(
- self,
- "checkbox_check",
- """url("data:image/svg+xml,%3csvg viewBox='0 0 16 16' fill='white' xmlns='http://www.w3.org/2000/svg'%3e%3cpath d='M12.207 4.793a1 1 0 010 1.414l-5 5a1 1 0 01-1.414 0l-2-2a1 1 0 011.414-1.414L6.5 9.086l4.293-4.293a1 1 0 011.414 0z'/%3e%3c/svg%3e")""",
- )
- self.radio_circle = radio_circle or getattr(
- self,
- "radio_circle",
- """url("data:image/svg+xml,%3csvg viewBox='0 0 16 16' fill='white' xmlns='http://www.w3.org/2000/svg'%3e%3ccircle cx='8' cy='8' r='3'/%3e%3c/svg%3e")""",
- )
- self.checkbox_shadow = checkbox_shadow or getattr(
- self, "checkbox_shadow", "*input_shadow"
- )
- self.checkbox_label_text_color = checkbox_label_text_color or getattr(
- self, "checkbox_label_text_color", "*body_text_color"
- )
- self.checkbox_label_text_color_dark = checkbox_label_text_color_dark or getattr(
- self, "checkbox_label_text_color_dark", "*body_text_color"
- )
- self.checkbox_label_text_color_selected = (
- checkbox_label_text_color_selected
- or getattr(
- self, "checkbox_label_text_color_selected", "*checkbox_label_text_color"
- )
- )
- self.checkbox_label_text_color_selected_dark = (
- checkbox_label_text_color_selected_dark
- or getattr(
- self,
- "checkbox_label_text_color_selected_dark",
- "*checkbox_label_text_color",
- )
- )
- self.error_background_fill = error_background_fill or getattr(
- self, "error_background_fill", colors.red.c100
- )
- self.error_background_fill_dark = error_background_fill_dark or getattr(
- self, "error_background_fill_dark", "*background_fill_primary"
- )
- self.error_border_color = error_border_color or getattr(
- self, "error_border_color", colors.red.c200
- )
- self.error_border_color_dark = error_border_color_dark or getattr(
- self, "error_border_color_dark", "*border_color_primary"
- )
- self.error_border_width = error_border_width or getattr(
- self, "error_border_width", "1px"
- )
- self.error_border_width_dark = error_border_width_dark or getattr(
- self, "error_border_width_dark", None
- )
- self.error_text_color = error_text_color or getattr(
- self, "error_text_color", colors.red.c500
- )
- self.error_text_color_dark = error_text_color_dark or getattr(
- self, "error_text_color_dark", colors.red.c500
- )
- self.input_background_fill = input_background_fill or getattr(
- self, "input_background_fill", "*neutral_100"
- )
- self.input_background_fill_dark = input_background_fill_dark or getattr(
- self, "input_background_fill_dark", "*neutral_700"
- )
- self.input_background_fill_focus = input_background_fill_focus or getattr(
- self, "input_background_fill_focus", "*secondary_500"
- )
- self.input_background_fill_focus_dark = (
- input_background_fill_focus_dark
- or getattr(self, "input_background_fill_focus_dark", "*secondary_600")
- )
- self.input_background_fill_hover = input_background_fill_hover or getattr(
- self, "input_background_fill_hover", "*input_background_fill"
- )
- self.input_background_fill_hover_dark = (
- input_background_fill_hover_dark
- or getattr(
- self, "input_background_fill_hover_dark", "*input_background_fill"
- )
- )
- self.input_border_color = input_border_color or getattr(
- self, "input_border_color", "*border_color_primary"
- )
- self.input_border_color_dark = input_border_color_dark or getattr(
- self, "input_border_color_dark", "*border_color_primary"
- )
- self.input_border_color_focus = input_border_color_focus or getattr(
- self, "input_border_color_focus", "*secondary_300"
- )
- self.input_border_color_focus_dark = input_border_color_focus_dark or getattr(
- self, "input_border_color_focus_dark", "*neutral_700"
- )
- self.input_border_color_hover = input_border_color_hover or getattr(
- self, "input_border_color_hover", "*input_border_color"
- )
- self.input_border_color_hover_dark = input_border_color_hover_dark or getattr(
- self, "input_border_color_hover_dark", "*input_border_color"
- )
- self.input_border_width = input_border_width or getattr(
- self, "input_border_width", "0px"
- )
- self.input_border_width_dark = input_border_width_dark or getattr(
- self, "input_border_width_dark", None
- )
- self.input_padding = input_padding or getattr(
- self, "input_padding", "*spacing_xl"
- )
- self.input_placeholder_color = input_placeholder_color or getattr(
- self, "input_placeholder_color", "*neutral_400"
- )
- self.input_placeholder_color_dark = input_placeholder_color_dark or getattr(
- self, "input_placeholder_color_dark", "*neutral_500"
- )
- self.input_radius = input_radius or getattr(self, "input_radius", "*radius_lg")
- self.input_shadow = input_shadow or getattr(self, "input_shadow", "none")
- self.input_shadow_dark = input_shadow_dark or getattr(
- self, "input_shadow_dark", None
- )
- self.input_shadow_focus = input_shadow_focus or getattr(
- self, "input_shadow_focus", "*input_shadow"
- )
- self.input_shadow_focus_dark = input_shadow_focus_dark or getattr(
- self, "input_shadow_focus_dark", None
- )
- self.input_text_size = input_text_size or getattr(
- self, "input_text_size", "*text_md"
- )
- self.input_text_weight = input_text_weight or getattr(
- self, "input_text_weight", "400"
- )
- self.loader_color = loader_color or getattr(
- self, "loader_color", "*color_accent"
- )
- self.loader_color_dark = loader_color_dark or getattr(
- self, "loader_color_dark", None
- )
- self.prose_text_size = prose_text_size or getattr(
- self, "prose_text_size", "*text_md"
- )
- self.prose_text_weight = prose_text_weight or getattr(
- self, "prose_text_weight", "400"
- )
- self.prose_header_text_weight = prose_header_text_weight or getattr(
- self, "prose_header_text_weight", "600"
- )
- self.slider_color = slider_color or getattr(self, "slider_color", "auto")
- self.slider_color_dark = slider_color_dark or getattr(
- self, "slider_color_dark", None
- )
- self.stat_background_fill = stat_background_fill or getattr(
- self, "stat_background_fill", "*primary_300"
- )
- self.stat_background_fill_dark = stat_background_fill_dark or getattr(
- self, "stat_background_fill_dark", "*primary_500"
- )
- self.table_border_color = table_border_color or getattr(
- self, "table_border_color", "*neutral_300"
- )
- self.table_border_color_dark = table_border_color_dark or getattr(
- self, "table_border_color_dark", "*neutral_700"
- )
- self.table_even_background_fill = table_even_background_fill or getattr(
- self, "table_even_background_fill", "white"
- )
- self.table_even_background_fill_dark = (
- table_even_background_fill_dark
- or getattr(self, "table_even_background_fill_dark", "*neutral_950")
- )
- self.table_odd_background_fill = table_odd_background_fill or getattr(
- self, "table_odd_background_fill", "*neutral_50"
- )
- self.table_odd_background_fill_dark = table_odd_background_fill_dark or getattr(
- self, "table_odd_background_fill_dark", "*neutral_900"
- )
- self.table_radius = table_radius or getattr(self, "table_radius", "*radius_lg")
- self.table_row_focus = table_row_focus or getattr(
- self, "table_row_focus", "*color_accent_soft"
- )
- self.table_row_focus_dark = table_row_focus_dark or getattr(
- self, "table_row_focus_dark", "*color_accent_soft"
- )
- # Buttons
- self.button_border_width = button_border_width or getattr(
- self, "button_border_width", "*input_border_width"
- )
- self.button_border_width_dark = button_border_width_dark or getattr(
- self, "button_border_width_dark", "*input_border_width"
- )
- self.button_cancel_background_fill = button_cancel_background_fill or getattr(
- self, "button_cancel_background_fill", "*button_secondary_background_fill"
- )
- self.button_cancel_background_fill_dark = (
- button_cancel_background_fill_dark
- or getattr(
- self,
- "button_cancel_background_fill_dark",
- "*button_secondary_background_fill",
- )
- )
- self.button_cancel_background_fill_hover = (
- button_cancel_background_fill_hover
- or getattr(
- self,
- "button_cancel_background_fill_hover",
- "*button_cancel_background_fill",
- )
- )
- self.button_cancel_background_fill_hover_dark = (
- button_cancel_background_fill_hover_dark
- or getattr(
- self,
- "button_cancel_background_fill_hover_dark",
- "*button_cancel_background_fill",
- )
- )
- self.button_cancel_border_color = button_cancel_border_color or getattr(
- self, "button_cancel_border_color", "*button_secondary_border_color"
- )
- self.button_cancel_border_color_dark = (
- button_cancel_border_color_dark
- or getattr(
- self,
- "button_cancel_border_color_dark",
- "*button_secondary_border_color",
- )
- )
- self.button_cancel_border_color_hover = (
- button_cancel_border_color_hover
- or getattr(
- self,
- "button_cancel_border_color_hover",
- "*button_cancel_border_color",
- )
- )
- self.button_cancel_border_color_hover_dark = (
- button_cancel_border_color_hover_dark
- or getattr(
- self,
- "button_cancel_border_color_hover_dark",
- "*button_cancel_border_color",
- )
- )
- self.button_cancel_text_color = button_cancel_text_color or getattr(
- self, "button_cancel_text_color", "*button_secondary_text_color"
- )
- self.button_cancel_text_color_dark = button_cancel_text_color_dark or getattr(
- self, "button_cancel_text_color_dark", "*button_secondary_text_color"
- )
- self.button_cancel_text_color_hover = button_cancel_text_color_hover or getattr(
- self, "button_cancel_text_color_hover", "*button_cancel_text_color"
- )
- self.button_cancel_text_color_hover_dark = (
- button_cancel_text_color_hover_dark
- or getattr(
- self, "button_cancel_text_color_hover_dark", "*button_cancel_text_color"
- )
- )
- self.button_large_padding = button_large_padding or getattr(
- self, "button_large_padding", "*spacing_lg calc(2 * *spacing_lg)"
- )
- self.button_large_radius = button_large_radius or getattr(
- self, "button_large_radius", "*radius_lg"
- )
- self.button_large_text_size = button_large_text_size or getattr(
- self, "button_large_text_size", "*text_lg"
- )
- self.button_large_text_weight = button_large_text_weight or getattr(
- self, "button_large_text_weight", "600"
- )
- self.button_primary_background_fill = button_primary_background_fill or getattr(
- self, "button_primary_background_fill", "*primary_200"
- )
- self.button_primary_background_fill_dark = (
- button_primary_background_fill_dark
- or getattr(self, "button_primary_background_fill_dark", "*primary_700")
- )
- self.button_primary_background_fill_hover = (
- button_primary_background_fill_hover
- or getattr(
- self,
- "button_primary_background_fill_hover",
- "*button_primary_background_fill",
- )
- )
- self.button_primary_background_fill_hover_dark = (
- button_primary_background_fill_hover_dark
- or getattr(
- self,
- "button_primary_background_fill_hover_dark",
- "*button_primary_background_fill",
- )
- )
- self.button_primary_border_color = button_primary_border_color or getattr(
- self, "button_primary_border_color", "*primary_200"
- )
- self.button_primary_border_color_dark = (
- button_primary_border_color_dark
- or getattr(self, "button_primary_border_color_dark", "*primary_600")
- )
- self.button_primary_border_color_hover = (
- button_primary_border_color_hover
- or getattr(
- self,
- "button_primary_border_color_hover",
- "*button_primary_border_color",
- )
- )
- self.button_primary_border_color_hover_dark = (
- button_primary_border_color_hover_dark
- or getattr(
- self,
- "button_primary_border_color_hover_dark",
- "*button_primary_border_color",
- )
- )
- self.button_primary_text_color = button_primary_text_color or getattr(
- self, "button_primary_text_color", "*primary_600"
- )
- self.button_primary_text_color_dark = button_primary_text_color_dark or getattr(
- self, "button_primary_text_color_dark", "white"
- )
- self.button_primary_text_color_hover = (
- button_primary_text_color_hover
- or getattr(
- self, "button_primary_text_color_hover", "*button_primary_text_color"
- )
- )
- self.button_primary_text_color_hover_dark = (
- button_primary_text_color_hover_dark
- or getattr(
- self,
- "button_primary_text_color_hover_dark",
- "*button_primary_text_color",
- )
- )
- self.button_secondary_background_fill = (
- button_secondary_background_fill
- or getattr(self, "button_secondary_background_fill", "*neutral_200")
- )
- self.button_secondary_background_fill_dark = (
- button_secondary_background_fill_dark
- or getattr(self, "button_secondary_background_fill_dark", "*neutral_600")
- )
- self.button_secondary_background_fill_hover = (
- button_secondary_background_fill_hover
- or getattr(
- self,
- "button_secondary_background_fill_hover",
- "*button_secondary_background_fill",
- )
- )
- self.button_secondary_background_fill_hover_dark = (
- button_secondary_background_fill_hover_dark
- or getattr(
- self,
- "button_secondary_background_fill_hover_dark",
- "*button_secondary_background_fill",
- )
- )
- self.button_secondary_border_color = button_secondary_border_color or getattr(
- self, "button_secondary_border_color", "*neutral_200"
- )
- self.button_secondary_border_color_dark = (
- button_secondary_border_color_dark
- or getattr(self, "button_secondary_border_color_dark", "*neutral_600")
- )
- self.button_secondary_border_color_hover = (
- button_secondary_border_color_hover
- or getattr(
- self,
- "button_secondary_border_color_hover",
- "*button_secondary_border_color",
- )
- )
- self.button_secondary_border_color_hover_dark = (
- button_secondary_border_color_hover_dark
- or getattr(
- self,
- "button_secondary_border_color_hover_dark",
- "*button_secondary_border_color",
- )
- )
- self.button_secondary_text_color = button_secondary_text_color or getattr(
- self, "button_secondary_text_color", "*neutral_700"
- )
- self.button_secondary_text_color_dark = (
- button_secondary_text_color_dark
- or getattr(self, "button_secondary_text_color_dark", "white")
- )
- self.button_secondary_text_color_hover = (
- button_secondary_text_color_hover
- or getattr(
- self,
- "button_secondary_text_color_hover",
- "*button_secondary_text_color",
- )
- )
- self.button_secondary_text_color_hover_dark = (
- button_secondary_text_color_hover_dark
- or getattr(
- self,
- "button_secondary_text_color_hover_dark",
- "*button_secondary_text_color",
- )
- )
- self.button_shadow = button_shadow or getattr(self, "button_shadow", "none")
- self.button_shadow_active = button_shadow_active or getattr(
- self, "button_shadow_active", "none"
- )
- self.button_shadow_hover = button_shadow_hover or getattr(
- self, "button_shadow_hover", "none"
- )
- self.button_small_padding = button_small_padding or getattr(
- self, "button_small_padding", "*spacing_sm calc(2 * *spacing_sm)"
- )
- self.button_small_radius = button_small_radius or getattr(
- self, "button_small_radius", "*radius_lg"
- )
- self.button_small_text_size = button_small_text_size or getattr(
- self, "button_small_text_size", "*text_md"
- )
- self.button_small_text_weight = button_small_text_weight or getattr(
- self, "button_small_text_weight", "400"
- )
- self.button_transition = button_transition or getattr(
- self, "button_transition", "background-color 0.2s ease"
- )
- return self
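For reference, a minimal usage sketch of the theme variables documented in the deleted file above (not part of the original diff; it assumes the surrounding class is Gradio's gr.themes.Base and that its public .set() method accepts these variable names as keyword arguments, which is what the kwarg-or-getattr pattern and the trailing "return self" suggest):

# Hypothetical sketch: override a few theme variables. Values prefixed with "*"
# reference other theme variables (e.g. "*primary_200"), not literal CSS values.
import gradio as gr

theme = gr.themes.Base().set(
    body_text_color="*neutral_800",
    button_primary_background_fill="*primary_200",
    button_primary_background_fill_hover="*primary_300",
    checkbox_label_gap="*spacing_lg",
)

with gr.Blocks(theme=theme) as demo:
    gr.Button("Primary action", variant="primary")

if __name__ == "__main__":
    demo.launch()

Because each assignment falls back to getattr(self, name, default), calling .set() with only a few keyword arguments leaves every other variable at its previously configured or default value, so partial overrides compose cleanly.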
diff --git a/spaces/lambdalabs/generative-music-visualizer/torch_utils/ops/filtered_lrelu.cpp b/spaces/lambdalabs/generative-music-visualizer/torch_utils/ops/filtered_lrelu.cpp
deleted file mode 100644
index ff4149b8b46b54d2f400ae10e44d19f20503ba1f..0000000000000000000000000000000000000000
--- a/spaces/lambdalabs/generative-music-visualizer/torch_utils/ops/filtered_lrelu.cpp
+++ /dev/null
@@ -1,300 +0,0 @@
-// Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-//
-// NVIDIA CORPORATION and its licensors retain all intellectual property
-// and proprietary rights in and to this software, related documentation
-// and any modifications thereto. Any use, reproduction, disclosure or
-// distribution of this software and related documentation without an express
-// license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-#include <torch/extension.h>
-#include <ATen/cuda/CUDAContext.h>
-#include <c10/cuda/CUDAGuard.h>
-#include "filtered_lrelu.h"
-
-//------------------------------------------------------------------------
-
-static std::tuple<torch::Tensor, torch::Tensor, int> filtered_lrelu(
- torch::Tensor x, torch::Tensor fu, torch::Tensor fd, torch::Tensor b, torch::Tensor si,
- int up, int down, int px0, int px1, int py0, int py1, int sx, int sy, float gain, float slope, float clamp, bool flip_filters, bool writeSigns)
-{
- // Set CUDA device.
- TORCH_CHECK(x.is_cuda(), "x must reside on CUDA device");
- const at::cuda::OptionalCUDAGuard device_guard(device_of(x));
-
- // Validate arguments.
- TORCH_CHECK(fu.device() == x.device() && fd.device() == x.device() && b.device() == x.device(), "all input tensors must reside on the same device");
- TORCH_CHECK(fu.dtype() == torch::kFloat && fd.dtype() == torch::kFloat, "fu and fd must be float32");
- TORCH_CHECK(b.dtype() == x.dtype(), "x and b must have the same dtype");
- TORCH_CHECK(x.dtype() == torch::kHalf || x.dtype() == torch::kFloat, "x and b must be float16 or float32");
- TORCH_CHECK(x.dim() == 4, "x must be rank 4");
- TORCH_CHECK(x.size(0) * x.size(1) <= INT_MAX && x.size(2) <= INT_MAX && x.size(3) <= INT_MAX, "x is too large");
- TORCH_CHECK(x.numel() > 0, "x is empty");
- TORCH_CHECK((fu.dim() == 1 || fu.dim() == 2) && (fd.dim() == 1 || fd.dim() == 2), "fu and fd must be rank 1 or 2");
- TORCH_CHECK(fu.size(0) <= INT_MAX && fu.size(-1) <= INT_MAX, "fu is too large");
- TORCH_CHECK(fd.size(0) <= INT_MAX && fd.size(-1) <= INT_MAX, "fd is too large");
- TORCH_CHECK(fu.numel() > 0, "fu is empty");
- TORCH_CHECK(fd.numel() > 0, "fd is empty");
- TORCH_CHECK(b.dim() == 1 && b.size(0) == x.size(1), "b must be a vector with the same number of channels as x");
- TORCH_CHECK(up >= 1 && down >= 1, "up and down must be at least 1");
-
- // Figure out how much shared memory is available on the device.
- int maxSharedBytes = 0;
- AT_CUDA_CHECK(cudaDeviceGetAttribute(&maxSharedBytes, cudaDevAttrMaxSharedMemoryPerBlockOptin, x.device().index()));
- int sharedKB = maxSharedBytes >> 10;
-
- // Populate enough launch parameters to check if a CUDA kernel exists.
- filtered_lrelu_kernel_params p;
- p.up = up;
- p.down = down;
- p.fuShape = make_int2((int)fu.size(-1), fu.dim() == 2 ? (int)fu.size(0) : 0); // shape [n, 0] indicates separable filter.
- p.fdShape = make_int2((int)fd.size(-1), fd.dim() == 2 ? (int)fd.size(0) : 0);
- filtered_lrelu_kernel_spec test_spec = choose_filtered_lrelu_kernel<float, int32_t, false, false>(p, sharedKB);
- if (!test_spec.exec)
- {
- // No kernel found - return empty tensors and indicate missing kernel with return code of -1.
- return std::make_tuple(torch::Tensor(), torch::Tensor(), -1);
- }
-
- // Input/output element size.
- int64_t sz = (x.dtype() == torch::kHalf) ? 2 : 4;
-
- // Input sizes.
- int64_t xw = (int)x.size(3);
- int64_t xh = (int)x.size(2);
- int64_t fut_w = (int)fu.size(-1) - 1;
- int64_t fut_h = (int)fu.size(0) - 1;
- int64_t fdt_w = (int)fd.size(-1) - 1;
- int64_t fdt_h = (int)fd.size(0) - 1;
-
- // Logical size of upsampled buffer.
- int64_t cw = xw * up + (px0 + px1) - fut_w;
- int64_t ch = xh * up + (py0 + py1) - fut_h;
- TORCH_CHECK(cw > fdt_w && ch > fdt_h, "upsampled buffer must be at least the size of downsampling filter");
- TORCH_CHECK(cw <= INT_MAX && ch <= INT_MAX, "upsampled buffer is too large");
-
- // Compute output size and allocate.
- int64_t yw = (cw - fdt_w + (down - 1)) / down;
- int64_t yh = (ch - fdt_h + (down - 1)) / down;
- TORCH_CHECK(yw > 0 && yh > 0, "output must be at least 1x1");
- TORCH_CHECK(yw <= INT_MAX && yh <= INT_MAX, "output is too large");
- torch::Tensor y = torch::empty({x.size(0), x.size(1), yh, yw}, x.options(), x.suggest_memory_format());
-
- // Allocate sign tensor.
- torch::Tensor so;
- torch::Tensor s = si;
- bool readSigns = !!s.numel();
- int64_t sw_active = 0; // Active width of sign tensor.
- if (writeSigns)
- {
- sw_active = yw * down - (down - 1) + fdt_w; // Active width in elements.
- int64_t sh = yh * down - (down - 1) + fdt_h; // Height = active height.
- int64_t sw = (sw_active + 15) & ~15; // Width = active width in elements, rounded up to multiple of 16.
- TORCH_CHECK(sh <= INT_MAX && (sw >> 2) <= INT_MAX, "signs is too large");
- s = so = torch::empty({x.size(0), x.size(1), sh, sw >> 2}, x.options().dtype(torch::kUInt8), at::MemoryFormat::Contiguous);
- }
- else if (readSigns)
- sw_active = s.size(3) << 2;
-
- // Validate sign tensor if in use.
- if (readSigns || writeSigns)
- {
- TORCH_CHECK(s.is_contiguous(), "signs must be contiguous");
- TORCH_CHECK(s.dtype() == torch::kUInt8, "signs must be uint8");
- TORCH_CHECK(s.device() == x.device(), "signs must reside on the same device as x");
- TORCH_CHECK(s.dim() == 4, "signs must be rank 4");
- TORCH_CHECK(s.size(0) == x.size(0) && s.size(1) == x.size(1), "signs must have same batch & channels as x");
- TORCH_CHECK(s.size(2) <= INT_MAX && s.size(3) <= INT_MAX, "signs is too large");
- }
-
- // Populate rest of CUDA kernel parameters.
- p.x = x.data_ptr();
- p.y = y.data_ptr();
- p.b = b.data_ptr();
- p.s = (readSigns || writeSigns) ? s.data_ptr<unsigned char>() : 0;
- p.fu = fu.data_ptr<float>();
- p.fd = fd.data_ptr<float>();
- p.pad0 = make_int2(px0, py0);
- p.gain = gain;
- p.slope = slope;
- p.clamp = clamp;
- p.flip = (flip_filters) ? 1 : 0;
- p.xShape = make_int4((int)x.size(3), (int)x.size(2), (int)x.size(1), (int)x.size(0));
- p.yShape = make_int4((int)y.size(3), (int)y.size(2), (int)y.size(1), (int)y.size(0));
- p.sShape = (readSigns || writeSigns) ? make_int2((int)s.size(3), (int)s.size(2)) : make_int2(0, 0); // Width is in bytes. Contiguous.
- p.sOfs = make_int2(sx, sy);
- p.swLimit = (sw_active + 3) >> 2; // Rounded up to bytes.
-
- // x, y, b strides are in bytes.
- p.xStride = make_longlong4(sz * x.stride(3), sz * x.stride(2), sz * x.stride(1), sz * x.stride(0));
- p.yStride = make_longlong4(sz * y.stride(3), sz * y.stride(2), sz * y.stride(1), sz * y.stride(0));
- p.bStride = sz * b.stride(0);
-
- // fu, fd strides are in elements.
- p.fuStride = make_longlong3(fu.stride(-1), fu.dim() == 2 ? fu.stride(0) : 0, 0);
- p.fdStride = make_longlong3(fd.stride(-1), fd.dim() == 2 ? fd.stride(0) : 0, 0);
-
- // Determine if indices don't fit in int32. Support negative strides although Torch currently never produces those.
- bool index64b = false;
- if (std::abs(p.bStride * x.size(1)) > INT_MAX) index64b = true;
- if (std::min(x.size(0) * p.xStride.w, 0ll) + std::min(x.size(1) * p.xStride.z, 0ll) + std::min(x.size(2) * p.xStride.y, 0ll) + std::min(x.size(3) * p.xStride.x, 0ll) < -INT_MAX) index64b = true;
- if (std::max(x.size(0) * p.xStride.w, 0ll) + std::max(x.size(1) * p.xStride.z, 0ll) + std::max(x.size(2) * p.xStride.y, 0ll) + std::max(x.size(3) * p.xStride.x, 0ll) > INT_MAX) index64b = true;
- if (std::min(y.size(0) * p.yStride.w, 0ll) + std::min(y.size(1) * p.yStride.z, 0ll) + std::min(y.size(2) * p.yStride.y, 0ll) + std::min(y.size(3) * p.yStride.x, 0ll) < -INT_MAX) index64b = true;
- if (std::max(y.size(0) * p.yStride.w, 0ll) + std::max(y.size(1) * p.yStride.z, 0ll) + std::max(y.size(2) * p.yStride.y, 0ll) + std::max(y.size(3) * p.yStride.x, 0ll) > INT_MAX) index64b = true;
- if (s.numel() > INT_MAX) index64b = true;
-
- // Choose CUDA kernel.
- filtered_lrelu_kernel_spec spec = { 0 };
- AT_DISPATCH_FLOATING_TYPES_AND_HALF(x.scalar_type(), "filtered_lrelu_cuda", [&]
- {
- if constexpr (sizeof(scalar_t) <= 4) // Exclude doubles. constexpr prevents template instantiation.
- {
- // Choose kernel based on index type, datatype and sign read/write modes.
- if (!index64b && writeSigns && !readSigns) spec = choose_filtered_lrelu_kernel<scalar_t, int32_t, true, false>(p, sharedKB);
- else if (!index64b && !writeSigns && readSigns) spec = choose_filtered_lrelu_kernel<scalar_t, int32_t, false, true>(p, sharedKB);
- else if (!index64b && !writeSigns && !readSigns) spec = choose_filtered_lrelu_kernel<scalar_t, int32_t, false, false>(p, sharedKB);
- else if ( index64b && writeSigns && !readSigns) spec = choose_filtered_lrelu_kernel<scalar_t, int64_t, true, false>(p, sharedKB);
- else if ( index64b && !writeSigns && readSigns) spec = choose_filtered_lrelu_kernel<scalar_t, int64_t, false, true>(p, sharedKB);
- else if ( index64b && !writeSigns && !readSigns) spec = choose_filtered_lrelu_kernel<scalar_t, int64_t, false, false>(p, sharedKB);
- }
- });
- TORCH_CHECK(spec.exec, "internal error - CUDA kernel not found") // This should not happen because we tested earlier that kernel exists.
-
- // Launch CUDA kernel.
- void* args[] = {&p};
- int bx = spec.numWarps * 32;
- int gx = (p.yShape.x - 1) / spec.tileOut.x + 1;
- int gy = (p.yShape.y - 1) / spec.tileOut.y + 1;
- int gz = p.yShape.z * p.yShape.w;
-
- // Repeat multiple horizontal tiles in a CTA?
- if (spec.xrep)
- {
- p.tilesXrep = spec.xrep;
- p.tilesXdim = gx;
-
- gx = (gx + p.tilesXrep - 1) / p.tilesXrep;
- std::swap(gx, gy);
- }
- else
- {
- p.tilesXrep = 0;
- p.tilesXdim = 0;
- }
-
- // Launch filter setup kernel.
- AT_CUDA_CHECK(cudaLaunchKernel(spec.setup, 1, 1024, args, 0, at::cuda::getCurrentCUDAStream()));
-
- // Copy kernels to constant memory.
- if ( writeSigns && !readSigns) AT_CUDA_CHECK((copy_filters<true, false>(at::cuda::getCurrentCUDAStream())));
- else if (!writeSigns && readSigns) AT_CUDA_CHECK((copy_filters<false, true>(at::cuda::getCurrentCUDAStream())));
- else if (!writeSigns && !readSigns) AT_CUDA_CHECK((copy_filters<false, false>(at::cuda::getCurrentCUDAStream())));
-
- // Set cache and shared memory configurations for main kernel.
- AT_CUDA_CHECK(cudaFuncSetCacheConfig(spec.exec, cudaFuncCachePreferShared));
- if (spec.dynamicSharedKB) // Need dynamically allocated shared memory?
- AT_CUDA_CHECK(cudaFuncSetAttribute(spec.exec, cudaFuncAttributeMaxDynamicSharedMemorySize, spec.dynamicSharedKB << 10));
- AT_CUDA_CHECK(cudaFuncSetSharedMemConfig(spec.exec, cudaSharedMemBankSizeFourByte));
-
- // Launch main kernel.
- const int maxSubGz = 65535; // CUDA maximum for block z dimension.
- for (int zofs=0; zofs < gz; zofs += maxSubGz) // Do multiple launches if gz is too big.
- {
- p.blockZofs = zofs;
- int subGz = std::min(maxSubGz, gz - zofs);
- AT_CUDA_CHECK(cudaLaunchKernel(spec.exec, dim3(gx, gy, subGz), bx, args, spec.dynamicSharedKB << 10, at::cuda::getCurrentCUDAStream()));
- }
-
- // Done.
- return std::make_tuple(y, so, 0);
-}
-
-//------------------------------------------------------------------------
-
-static torch::Tensor filtered_lrelu_act(torch::Tensor x, torch::Tensor si, int sx, int sy, float gain, float slope, float clamp, bool writeSigns)
-{
- // Set CUDA device.
- TORCH_CHECK(x.is_cuda(), "x must reside on CUDA device");
- const at::cuda::OptionalCUDAGuard device_guard(device_of(x));
-
- // Validate arguments.
- TORCH_CHECK(x.dim() == 4, "x must be rank 4");
- TORCH_CHECK(x.size(0) * x.size(1) <= INT_MAX && x.size(2) <= INT_MAX && x.size(3) <= INT_MAX, "x is too large");
- TORCH_CHECK(x.numel() > 0, "x is empty");
- TORCH_CHECK(x.dtype() == torch::kHalf || x.dtype() == torch::kFloat || x.dtype() == torch::kDouble, "x must be float16, float32 or float64");
-
- // Output signs if we don't have sign input.
- torch::Tensor so;
- torch::Tensor s = si;
- bool readSigns = !!s.numel();
- if (writeSigns)
- {
- int64_t sw = x.size(3);
- sw = (sw + 15) & ~15; // Round to a multiple of 16 for coalescing.
- s = so = torch::empty({x.size(0), x.size(1), x.size(2), sw >> 2}, x.options().dtype(torch::kUInt8), at::MemoryFormat::Contiguous);
- }
-
- // Validate sign tensor if in use.
- if (readSigns || writeSigns)
- {
- TORCH_CHECK(s.is_contiguous(), "signs must be contiguous");
- TORCH_CHECK(s.dtype() == torch::kUInt8, "signs must be uint8");
- TORCH_CHECK(s.device() == x.device(), "signs must reside on the same device as x");
- TORCH_CHECK(s.dim() == 4, "signs must be rank 4");
- TORCH_CHECK(s.size(0) == x.size(0) && s.size(1) == x.size(1), "signs must have same batch & channels as x");
- TORCH_CHECK(s.size(2) <= INT_MAX && (s.size(3) << 2) <= INT_MAX, "signs tensor is too large");
- }
-
- // Initialize CUDA kernel parameters.
- filtered_lrelu_act_kernel_params p;
- p.x = x.data_ptr();
- p.s = (readSigns || writeSigns) ? s.data_ptr() : 0;
- p.gain = gain;
- p.slope = slope;
- p.clamp = clamp;
- p.xShape = make_int4((int)x.size(3), (int)x.size(2), (int)x.size(1), (int)x.size(0));
- p.xStride = make_longlong4(x.stride(3), x.stride(2), x.stride(1), x.stride(0));
- p.sShape = (readSigns || writeSigns) ? make_int2((int)s.size(3) << 2, (int)s.size(2)) : make_int2(0, 0); // Width is in elements. Contiguous.
- p.sOfs = make_int2(sx, sy);
-
- // Choose CUDA kernel.
- void* func = 0;
- AT_DISPATCH_FLOATING_TYPES_AND_HALF(x.scalar_type(), "filtered_lrelu_act_cuda", [&]
- {
- if (writeSigns)
- func = choose_filtered_lrelu_act_kernel<scalar_t, true, false>();
- else if (readSigns)
- func = choose_filtered_lrelu_act_kernel<scalar_t, false, true>();
- else
- func = choose_filtered_lrelu_act_kernel<scalar_t, false, false>();
- });
- TORCH_CHECK(func, "internal error - CUDA kernel not found");
-
- // Launch CUDA kernel.
- void* args[] = {&p};
- int bx = 128; // 4 warps per block.
-
- // Logical size of launch = writeSigns ? p.s : p.x
- uint32_t gx = writeSigns ? p.sShape.x : p.xShape.x;
- uint32_t gy = writeSigns ? p.sShape.y : p.xShape.y;
- uint32_t gz = p.xShape.z * p.xShape.w; // Same as in p.sShape if signs are in use.
- gx = (gx - 1) / bx + 1;
-
- // Make sure grid y and z dimensions are within CUDA launch limits. Kernel loops internally to do the rest.
- const uint32_t gmax = 65535;
- gy = std::min(gy, gmax);
- gz = std::min(gz, gmax);
-
- // Launch.
- AT_CUDA_CHECK(cudaLaunchKernel(func, dim3(gx, gy, gz), bx, args, 0, at::cuda::getCurrentCUDAStream()));
- return so;
-}
-
-//------------------------------------------------------------------------
-
-PYBIND11_MODULE(TORCH_EXTENSION_NAME, m)
-{
- m.def("filtered_lrelu", &filtered_lrelu); // The whole thing.
- m.def("filtered_lrelu_act_", &filtered_lrelu_act); // Activation and sign tensor handling only. Modifies data tensor in-place.
-}
-
-//------------------------------------------------------------------------
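For context on how the bindings above are consumed, here is a hedged Python sketch of driving the in-place activation entry point. The JIT build step, module name, and source file list are assumptions; only the `filtered_lrelu_act_` argument order (x, si, sx, sy, gain, slope, clamp, writeSigns) and the packed sign-tensor shape come from the C++ code above.

```python
# Hedged sketch only: the build below (module name, sources) is an assumption;
# the call signature of filtered_lrelu_act_ is taken from the C++ source above.
import torch
from torch.utils.cpp_extension import load

plugin = load(
    name="filtered_lrelu_plugin",                          # hypothetical name
    sources=["filtered_lrelu.cpp", "filtered_lrelu.cu"],   # assumed file layout
)

x = torch.randn(2, 4, 32, 32, device="cuda")               # rank-4 CUDA tensor, as required
no_signs = torch.empty(0, dtype=torch.uint8, device="cuda")

# writeSigns=True: x is modified in place and a packed uint8 sign tensor is returned.
# Its last dimension is (W rounded up to a multiple of 16) / 4 bytes, per the code above.
signs = plugin.filtered_lrelu_act_(x, no_signs, 0, 0, 1.41, 0.2, float("inf"), True)
print(signs.shape)  # torch.Size([2, 4, 32, 8])
```

The packed sign bits record which elements were negated or clamped, which is why both entry points share the `sx`/`sy` sign-tensor offsets.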
diff --git a/spaces/leogabraneth/text-generation-webui-main/extensions/Training_PRO/matplotgraph.py b/spaces/leogabraneth/text-generation-webui-main/extensions/Training_PRO/matplotgraph.py
deleted file mode 100644
index 5e607526925445134fc1715a1fab6bb4af99112d..0000000000000000000000000000000000000000
--- a/spaces/leogabraneth/text-generation-webui-main/extensions/Training_PRO/matplotgraph.py
+++ /dev/null
@@ -1,62 +0,0 @@
-import os
-import json
-
-def create_graph(lora_path, lora_name):
- try:
- import matplotlib.pyplot as plt
- from matplotlib.ticker import ScalarFormatter
-
- peft_model_path = f'{lora_path}/training_graph.json'
- image_model_path = f'{lora_path}/training_graph.png'
- # Check if the JSON file exists
- if os.path.exists(peft_model_path):
- # Load data from JSON file
- with open(peft_model_path, 'r') as file:
- data = json.load(file)
- # Extract x, y1, and y2 values
- x = [item['epoch'] for item in data]
- y1 = [item['learning_rate'] for item in data]
- y2 = [item['loss'] for item in data]
-
- # Create the line chart
- fig, ax1 = plt.subplots(figsize=(10, 6))
-
-
- # Plot y1 (learning rate) on the first y-axis
- ax1.plot(x, y1, 'b-', label='Learning Rate')
- ax1.set_xlabel('Epoch')
- ax1.set_ylabel('Learning Rate', color='b')
- ax1.tick_params('y', colors='b')
-
- # Create a second y-axis
- ax2 = ax1.twinx()
-
- # Plot y2 (loss) on the second y-axis
- ax2.plot(x, y2, 'r-', label='Loss')
- ax2.set_ylabel('Loss', color='r')
- ax2.tick_params('y', colors='r')
-
- # Set the y-axis formatter to display numbers in scientific notation
- ax1.yaxis.set_major_formatter(ScalarFormatter(useMathText=True))
- ax1.ticklabel_format(style='sci', axis='y', scilimits=(0,0))
-
- # Add grid
- ax1.grid(True)
-
- # Combine the legends for both plots
- lines, labels = ax1.get_legend_handles_labels()
- lines2, labels2 = ax2.get_legend_handles_labels()
- ax2.legend(lines + lines2, labels + labels2, loc='best')
-
- # Set the title
- plt.title(f'{lora_name} LR and Loss vs Epoch')
-
- # Save the chart as an image
- plt.savefig(image_model_path)
-
- print(f"Graph saved in {image_model_path}")
- else:
- print(f"File 'training_graph.json' does not exist in the {lora_path}")
-
- except ImportError:
- print("matplotlib is not installed. Please install matplotlib to create PNG graphs")
\ No newline at end of file
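A minimal sketch of the `training_graph.json` layout that `create_graph` expects, inferred from the list comprehensions above; the directory, LoRA name, and numbers are made up, and the import path assumes the webui root is on `sys.path`.

```python
# Minimal sketch, assuming the list-of-dicts layout read by create_graph() above.
import json
import os

from extensions.Training_PRO.matplotgraph import create_graph  # assumed import path

lora_path = "loras/my-lora"            # hypothetical output directory
os.makedirs(lora_path, exist_ok=True)

records = [
    {"epoch": 0.5, "learning_rate": 3e-4, "loss": 2.10},
    {"epoch": 1.0, "learning_rate": 2e-4, "loss": 1.72},
    {"epoch": 1.5, "learning_rate": 1e-4, "loss": 1.55},
]
with open(f"{lora_path}/training_graph.json", "w") as f:
    json.dump(records, f)

create_graph(lora_path, "my-lora")     # writes loras/my-lora/training_graph.png
```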
diff --git a/spaces/leogabraneth/text-generation-webui-main/extensions/superboogav2/chromadb.py b/spaces/leogabraneth/text-generation-webui-main/extensions/superboogav2/chromadb.py
deleted file mode 100644
index 0da2d8f90c623b43ecd49b3dcf20919b8e2a1434..0000000000000000000000000000000000000000
--- a/spaces/leogabraneth/text-generation-webui-main/extensions/superboogav2/chromadb.py
+++ /dev/null
@@ -1,376 +0,0 @@
-import threading
-import chromadb
-import posthog
-import torch
-import math
-
-import numpy as np
-import extensions.superboogav2.parameters as parameters
-
-from chromadb.config import Settings
-from sentence_transformers import SentenceTransformer
-
-from modules.logging_colors import logger
-from modules.text_generation import encode, decode
-
-logger.debug('Intercepting all calls to posthog.')
-posthog.capture = lambda *args, **kwargs: None
-
-
-class Collecter():
- def __init__(self):
- pass
-
- def add(self, texts: list[str], texts_with_context: list[str], starting_indices: list[int]):
- pass
-
- def get(self, search_strings: list[str], n_results: int) -> list[str]:
- pass
-
- def clear(self):
- pass
-
-
-class Embedder():
- def __init__(self):
- pass
-
- def embed(self, text: str) -> list[torch.Tensor]:
- pass
-
-class Info:
- def __init__(self, start_index, text_with_context, distance, id):
- self.text_with_context = text_with_context
- self.start_index = start_index
- self.distance = distance
- self.id = id
-
- def calculate_distance(self, other_info):
- if parameters.get_new_dist_strategy() == parameters.DIST_MIN_STRATEGY:
- # Min
- return min(self.distance, other_info.distance)
- elif parameters.get_new_dist_strategy() == parameters.DIST_HARMONIC_STRATEGY:
- # Harmonic mean
- return 2 * (self.distance * other_info.distance) / (self.distance + other_info.distance)
- elif parameters.get_new_dist_strategy() == parameters.DIST_GEOMETRIC_STRATEGY:
- # Geometric mean
- return (self.distance * other_info.distance) ** 0.5
- elif parameters.get_new_dist_strategy() == parameters.DIST_ARITHMETIC_STRATEGY:
- # Arithmetic mean
- return (self.distance + other_info.distance) / 2
- else: # Min is default
- return min(self.distance, other_info.distance)
-
- def merge_with(self, other_info):
- s1 = self.text_with_context
- s2 = other_info.text_with_context
- s1_start = self.start_index
- s2_start = other_info.start_index
-
- new_dist = self.calculate_distance(other_info)
-
- if self.should_merge(s1, s2, s1_start, s2_start):
- if s1_start <= s2_start:
- if s1_start + len(s1) >= s2_start + len(s2): # if s1 completely covers s2
- return Info(s1_start, s1, new_dist, self.id)
- else:
- overlap = max(0, s1_start + len(s1) - s2_start)
- return Info(s1_start, s1 + s2[overlap:], new_dist, self.id)
- else:
- if s2_start + len(s2) >= s1_start + len(s1): # if s2 completely covers s1
- return Info(s2_start, s2, new_dist, other_info.id)
- else:
- overlap = max(0, s2_start + len(s2) - s1_start)
- return Info(s2_start, s2 + s1[overlap:], new_dist, other_info.id)
-
- return None
-
- @staticmethod
- def should_merge(s1, s2, s1_start, s2_start):
- # Check if s1 and s2 are adjacent or overlapping
- s1_end = s1_start + len(s1)
- s2_end = s2_start + len(s2)
-
- return not (s1_end < s2_start or s2_end < s1_start)
-
-class ChromaCollector(Collecter):
- def __init__(self, embedder: Embedder):
- super().__init__()
- self.chroma_client = chromadb.Client(Settings(anonymized_telemetry=False))
- self.embedder = embedder
- self.collection = self.chroma_client.create_collection(name="context", embedding_function=self.embedder.embed)
- self.ids = []
- self.id_to_info = {}
- self.embeddings_cache = {}
- self.lock = threading.Lock() # Locking so the server doesn't break.
-
- def add(self, texts: list[str], texts_with_context: list[str], starting_indices: list[int], metadatas: list[dict] = None):
- with self.lock:
- assert metadatas is None or len(metadatas) == len(texts), "metadatas must be None or have the same length as texts"
-
- if len(texts) == 0:
- return
-
- new_ids = self._get_new_ids(len(texts))
-
- (existing_texts, existing_embeddings, existing_ids, existing_metas), \
- (non_existing_texts, non_existing_ids, non_existing_metas) = self._split_texts_by_cache_hit(texts, new_ids, metadatas)
-
- # If there are any already existing texts, add them all at once.
- if existing_texts:
- logger.info(f'Adding {len(existing_embeddings)} cached embeddings.')
- args = {'embeddings': existing_embeddings, 'documents': existing_texts, 'ids': existing_ids}
- if metadatas is not None:
- args['metadatas'] = existing_metas
- self.collection.add(**args)
-
- # If there are any non-existing texts, compute their embeddings all at once. Each call to embed has significant overhead.
- if non_existing_texts:
- non_existing_embeddings = self.embedder.embed(non_existing_texts).tolist()
- for text, embedding in zip(non_existing_texts, non_existing_embeddings):
- self.embeddings_cache[text] = embedding
-
- logger.info(f'Adding {len(non_existing_embeddings)} new embeddings.')
- args = {'embeddings': non_existing_embeddings, 'documents': non_existing_texts, 'ids': non_existing_ids}
- if metadatas is not None:
- args['metadatas'] = non_existing_metas
- self.collection.add(**args)
-
- # Create a dictionary that maps each ID to its context and starting index
- new_info = {
- id_: {'text_with_context': context, 'start_index': start_index}
- for id_, context, start_index in zip(new_ids, texts_with_context, starting_indices)
- }
-
- self.id_to_info.update(new_info)
- self.ids.extend(new_ids)
-
-
- def _split_texts_by_cache_hit(self, texts: list[str], new_ids: list[str], metadatas: list[dict]):
- existing_texts, non_existing_texts = [], []
- existing_embeddings = []
- existing_ids, non_existing_ids = [], []
- existing_metas, non_existing_metas = [], []
-
- for i, text in enumerate(texts):
- id_ = new_ids[i]
- metadata = metadatas[i] if metadatas is not None else None
- embedding = self.embeddings_cache.get(text)
- if embedding:
- existing_texts.append(text)
- existing_embeddings.append(embedding)
- existing_ids.append(id_)
- existing_metas.append(metadata)
- else:
- non_existing_texts.append(text)
- non_existing_ids.append(id_)
- non_existing_metas.append(metadata)
-
- return (existing_texts, existing_embeddings, existing_ids, existing_metas), \
- (non_existing_texts, non_existing_ids, non_existing_metas)
-
-
- def _get_new_ids(self, num_new_ids: int):
- if self.ids:
- max_existing_id = max(int(id_) for id_ in self.ids)
- else:
- max_existing_id = -1
-
- return [str(i + max_existing_id + 1) for i in range(num_new_ids)]
-
-
- def _find_min_max_start_index(self):
- max_index, min_index = 0, float('inf')
- for _, val in self.id_to_info.items():
- if val['start_index'] > max_index:
- max_index = val['start_index']
- if val['start_index'] < min_index:
- min_index = val['start_index']
- return min_index, max_index
-
-
- # NB: Does not make sense to weigh excerpts from different documents.
- # But let's say that's the user's problem. Perfect world scenario:
- # Apply time weighing to different documents. For each document, then, add
- # separate time weighing.
- def _apply_sigmoid_time_weighing(self, infos: list[Info], document_len: int, time_steepness: float, time_power: float):
- sigmoid = lambda x: 1 / (1 + np.exp(-x))
-
- weights = sigmoid(time_steepness * np.linspace(-10, 10, document_len))
-
- # Scale to [0,time_power] and shift it up to [1-time_power, 1]
- weights = weights - min(weights)
- weights = weights * (time_power / max(weights))
- weights = weights + (1 - time_power)
-
- # Reverse the weights
- weights = weights[::-1]
-
- for info in infos:
- index = info.start_index
- info.distance *= weights[index]
-
-
- def _filter_outliers_by_median_distance(self, infos: list[Info], significant_level: float):
- # Ensure there are infos to filter
- if not infos:
- return []
-
- # Find info with minimum distance
- min_info = min(infos, key=lambda x: x.distance)
-
- # Calculate median distance among infos
- median_distance = np.median([inf.distance for inf in infos])
-
- # Filter out infos that have a distance significantly greater than the median
- filtered_infos = [inf for inf in infos if inf.distance <= significant_level * median_distance]
-
- # Always include the info with minimum distance
- if min_info not in filtered_infos:
- filtered_infos.append(min_info)
-
- return filtered_infos
-
-
- def _merge_infos(self, infos: list[Info]):
- merged_infos = []
- current_info = infos[0]
-
- for next_info in infos[1:]:
- merged = current_info.merge_with(next_info)
- if merged is not None:
- current_info = merged
- else:
- merged_infos.append(current_info)
- current_info = next_info
-
- merged_infos.append(current_info)
- return merged_infos
-
-
- # Main function for retrieving chunks by distance. It performs merging, time weighing, and mean filtering.
- def _get_documents_ids_distances(self, search_strings: list[str], n_results: int):
- n_results = min(len(self.ids), n_results)
- if n_results == 0:
- return [], [], []
-
- if isinstance(search_strings, str):
- search_strings = [search_strings]
-
- infos = []
- min_start_index, max_start_index = self._find_min_max_start_index()
-
- for search_string in search_strings:
- result = self.collection.query(query_texts=search_string, n_results=math.ceil(n_results / len(search_strings)), include=['distances'])
- curr_infos = [Info(start_index=self.id_to_info[id]['start_index'],
- text_with_context=self.id_to_info[id]['text_with_context'],
- distance=distance, id=id)
- for id, distance in zip(result['ids'][0], result['distances'][0])]
-
- self._apply_sigmoid_time_weighing(infos=curr_infos, document_len=max_start_index - min_start_index + 1, time_steepness=parameters.get_time_steepness(), time_power=parameters.get_time_power())
- curr_infos = self._filter_outliers_by_median_distance(curr_infos, parameters.get_significant_level())
- infos.extend(curr_infos)
-
- infos.sort(key=lambda x: x.start_index)
- infos = self._merge_infos(infos)
-
- texts_with_context = [inf.text_with_context for inf in infos]
- ids = [inf.id for inf in infos]
- distances = [inf.distance for inf in infos]
-
- return texts_with_context, ids, distances
-
-
- # Get chunks by similarity
- def get(self, search_strings: list[str], n_results: int) -> list[str]:
- with self.lock:
- documents, _, _ = self._get_documents_ids_distances(search_strings, n_results)
- return documents
-
-
- # Get ids by similarity
- def get_ids(self, search_strings: list[str], n_results: int) -> list[str]:
- with self.lock:
- _, ids, _ = self._get_documents_ids_distances(search_strings, n_results)
- return ids
-
-
- # Cutoff token count
- def _get_documents_up_to_token_count(self, documents: list[str], max_token_count: int):
- # TODO: Move to caller; We add delimiters there which might go over the limit.
- current_token_count = 0
- return_documents = []
-
- for doc in documents:
- doc_tokens = encode(doc)[0]
- doc_token_count = len(doc_tokens)
- if current_token_count + doc_token_count > max_token_count:
- # If adding this document would exceed the max token count,
- # truncate the document to fit within the limit.
- remaining_tokens = max_token_count - current_token_count
-
- truncated_doc = decode(doc_tokens[:remaining_tokens], skip_special_tokens=True)
- return_documents.append(truncated_doc)
- break
- else:
- return_documents.append(doc)
- current_token_count += doc_token_count
-
- return return_documents
-
-
- # Get chunks by similarity and then sort by ids
- def get_sorted_by_ids(self, search_strings: list[str], n_results: int, max_token_count: int) -> list[str]:
- with self.lock:
- documents, ids, _ = self._get_documents_ids_distances(search_strings, n_results)
- sorted_docs = [x for _, x in sorted(zip(ids, documents))]
-
- return self._get_documents_up_to_token_count(sorted_docs, max_token_count)
-
-
- # Get chunks by similarity and then sort by distance (lowest distance is last).
- def get_sorted_by_dist(self, search_strings: list[str], n_results: int, max_token_count: int) -> list[str]:
- with self.lock:
- documents, _, distances = self._get_documents_ids_distances(search_strings, n_results)
- sorted_docs = [doc for doc, _ in sorted(zip(documents, distances), key=lambda x: x[1])] # sorted lowest -> highest
-
- # If a document is truncated or completely skipped, it will be one with a high distance.
- return_documents = self._get_documents_up_to_token_count(sorted_docs, max_token_count)
- return_documents.reverse() # highest -> lowest
-
- return return_documents
-
-
- def delete(self, ids_to_delete: list[str], where: dict):
- with self.lock:
- ids_to_delete = self.collection.get(ids=ids_to_delete, where=where)['ids']
- self.collection.delete(ids=ids_to_delete, where=where)
-
- # Remove the deleted ids from self.ids and self.id_to_info
- ids_set = set(ids_to_delete)
- self.ids = [id_ for id_ in self.ids if id_ not in ids_set]
- for id_ in ids_to_delete:
- self.id_to_info.pop(id_, None)
-
- logger.info(f'Successfully deleted {len(ids_to_delete)} records from chromaDB.')
-
-
- def clear(self):
- with self.lock:
- self.chroma_client.reset()
- self.collection = self.chroma_client.create_collection("context", embedding_function=self.embedder.embed)
- self.ids = []
- self.id_to_info = {}
-
- logger.info('Successfully cleared all records and reset chromaDB.')
-
-
-class SentenceTransformerEmbedder(Embedder):
- def __init__(self) -> None:
- logger.debug('Creating Sentence Embedder...')
- self.model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")
- self.embed = self.model.encode
-
-
-def make_collector():
- return ChromaCollector(SentenceTransformerEmbedder())
\ No newline at end of file
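A hedged usage sketch of the collector defined above; it assumes the code runs inside text-generation-webui (the module imports `modules.*` and the extension's `parameters`), and the texts, offsets, and query are illustrative only.

```python
# Illustrative sketch of the ChromaCollector API defined above; must run inside
# text-generation-webui with the superboogav2 parameters loaded.
from extensions.superboogav2.chromadb import make_collector

collector = make_collector()

texts = [
    "The mitochondria is the powerhouse of the cell.",
    "Python lists are resizable arrays.",
    "Chroma stores embeddings for similarity search.",
]
# Here the context windows equal the chunks themselves; starting_indices are the
# (made-up) character offsets of each chunk in the source document.
collector.add(texts, texts_with_context=texts, starting_indices=[0, 48, 83])

# Up to 2 chunks most similar to the query, merged and outlier-filtered as above.
print(collector.get(["how does chroma store embeddings?"], n_results=2))
```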
diff --git a/spaces/lilucheng/sourcedetection/common/utils/misc.py b/spaces/lilucheng/sourcedetection/common/utils/misc.py
deleted file mode 100644
index c15b9d203169cc55bc15e2c81349d1d6fe923a24..0000000000000000000000000000000000000000
--- a/spaces/lilucheng/sourcedetection/common/utils/misc.py
+++ /dev/null
@@ -1,37 +0,0 @@
-from tqdm import tqdm
-
-#--------------------------------------------------------
-
-# just a list of a mapping
-#
-apply = lambda f, a: list(map(f, a))
-
-def apply_inplace(f, a, show_progress = False):
-
- idxs = range(len(a))
-
- if show_progress:
- idxs = tqdm(idxs)
-
- for k in idxs:
- a[k] = f(a[k])
-
-# 'safe cast': cast `val` to type `T` if possible, otherwise return `None`
-#
-def cast(val, T):
-
- try:
- return T(val)
- except:
- return None
-
- # return a 'standardized' string length based on the actual length `s_length`
-#
-def standardized_string_length(s_length):
-
- for std_length in [256, 65535]:
-
- if s_length <= std_length:
- return std_length
-
- raise Exception(f'String too long (len = {s_length})')
\ No newline at end of file
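A short usage sketch of the helpers above; the import path simply mirrors the file's location (`common/utils/misc.py`) and assumes the repo root is on `sys.path`, while the printed results follow directly from the code.

```python
# Usage sketch for the helpers above; import path assumed from the repo layout.
from common.utils.misc import apply, apply_inplace, cast, standardized_string_length

print(apply(str.upper, ["a", "b"]))              # ['A', 'B']

print([cast(v, int) for v in ["1", "2", "x"]])   # [1, 2, None] -- 'x' fails the cast

data = [1, 2, 3]
apply_inplace(lambda v: v * 10, data)            # mutates the list in place
print(data)                                      # [10, 20, 30]

print(standardized_string_length(40))            # 256   (smallest bucket that fits)
print(standardized_string_length(1000))          # 65535
```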
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Microsoft Office 365 Product Key 2019 Fix Cracked.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Microsoft Office 365 Product Key 2019 Fix Cracked.md
deleted file mode 100644
index 678b13bab38697421c42a0c82f43d9804c64eb68..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/Microsoft Office 365 Product Key 2019 Fix Cracked.md
+++ /dev/null
@@ -1,8 +0,0 @@
-Microsoft Office 365 Product Key 2019 Cracked
Download Zip ↔ https://bytlly.com/2uGvOZ
-
-Feb 6, 2022 - Office 2019 is Microsoft's recently released office automation software, giving you an office that is an expert in document processing. Office 2019 offers you a new, modern and powerful look and feel, improved security, and a more efficient and thoughtful workflow that can be tailored to your needs.
-Like the rest of the software, Office 2019 has been released with new features that make it more sophisticated, but nevertheless easy to understand
-Office 2019 is a suite of software that contains the essential tools used to work with documents in the office. 8a78ff9644
-
-
-
diff --git a/spaces/lithiumice/SadTalker/src/utils/paste_pic.py b/spaces/lithiumice/SadTalker/src/utils/paste_pic.py
deleted file mode 100644
index a05a55caeea190d2af32f2341e3a96d1fc417b09..0000000000000000000000000000000000000000
--- a/spaces/lithiumice/SadTalker/src/utils/paste_pic.py
+++ /dev/null
@@ -1,50 +0,0 @@
-import cv2, os
-import numpy as np
-from tqdm import tqdm
-import uuid
-
-from src.utils.videoio import save_video_with_watermark
-
-def paste_pic(video_path, pic_path, crop_info, new_audio_path, full_video_path):
-
- full_img = cv2.imread(pic_path)
- frame_h = full_img.shape[0]
- frame_w = full_img.shape[1]
-
- video_stream = cv2.VideoCapture(video_path)
- fps = video_stream.get(cv2.CAP_PROP_FPS)
- crop_frames = []
- while 1:
- still_reading, frame = video_stream.read()
- if not still_reading:
- video_stream.release()
- break
- crop_frames.append(frame)
-
- if len(crop_info) != 3:
- print("you didn't crop the image")
- return
- else:
- r_w, r_h = crop_info[0]
- clx, cly, crx, cry = crop_info[1]
- lx, ly, rx, ry = crop_info[2]
- lx, ly, rx, ry = int(lx), int(ly), int(rx), int(ry)
- # oy1, oy2, ox1, ox2 = cly+ly, cly+ry, clx+lx, clx+rx
- # oy1, oy2, ox1, ox2 = cly+ly, cly+ry, clx+lx, clx+rx
- oy1, oy2, ox1, ox2 = cly, cry, clx, crx
-
-
- tmp_path = str(uuid.uuid4())+'.mp4'
- out_tmp = cv2.VideoWriter(tmp_path, cv2.VideoWriter_fourcc(*'MP4V'), fps, (frame_w, frame_h))
- for crop_frame in tqdm(crop_frames, 'seamlessClone:'):
- p = cv2.resize(crop_frame.astype(np.uint8), (crx-clx, cry - cly))
-
- mask = 255*np.ones(p.shape, p.dtype)
- location = ((ox1+ox2) // 2, (oy1+oy2) // 2)
- gen_img = cv2.seamlessClone(p, full_img, mask, location, cv2.NORMAL_CLONE)
- out_tmp.write(gen_img)
-
- out_tmp.release()
-
- save_video_with_watermark(tmp_path, new_audio_path, full_video_path)
- os.remove(tmp_path)
diff --git a/spaces/ljh1212/ljhai/README.md b/spaces/ljh1212/ljhai/README.md
deleted file mode 100644
index 5f78a160ab8fa766e47eb27e6450fd525803450d..0000000000000000000000000000000000000000
--- a/spaces/ljh1212/ljhai/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Ljhai
-emoji: 🔥
-colorFrom: yellow
-colorTo: yellow
-sdk: docker
-pinned: false
-license: mit
-app_port: 8080
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/logasja/LowKey/align/first_stage.py b/spaces/logasja/LowKey/align/first_stage.py
deleted file mode 100644
index 0781bdc5870832d120a2108b9e2f333dac6e4566..0000000000000000000000000000000000000000
--- a/spaces/logasja/LowKey/align/first_stage.py
+++ /dev/null
@@ -1,97 +0,0 @@
-import torch
-from torch.autograd import Variable
-import math
-from PIL import Image
-import numpy as np
-from align.box_utils import nms, _preprocess
-
-
-def run_first_stage(image, net, scale, threshold):
- """Run P-Net, generate bounding boxes, and do NMS.
-
- Arguments:
- image: an instance of PIL.Image.
- net: an instance of pytorch's nn.Module, P-Net.
- scale: a float number,
- scale width and height of the image by this number.
- threshold: a float number,
- threshold on the probability of a face when generating
- bounding boxes from predictions of the net.
-
- Returns:
- a float numpy array of shape [n_boxes, 9],
- bounding boxes with scores and offsets (4 + 1 + 4).
- """
-
- # scale the image and convert it to a float array
- width, height = image.size
- sw, sh = math.ceil(width*scale), math.ceil(height*scale)
- img = image.resize((sw, sh), Image.BILINEAR)
- img = np.asarray(img, 'float32')
-
- img = Variable(torch.FloatTensor(_preprocess(img)), volatile = True)
- output = net(img)
- probs = output[1].data.numpy()[0, 1, :, :]
- offsets = output[0].data.numpy()
- # probs: probability of a face at each sliding window
- # offsets: transformations to true bounding boxes
-
- boxes = _generate_bboxes(probs, offsets, scale, threshold)
- if len(boxes) == 0:
- return None
-
- keep = nms(boxes[:, 0:5], overlap_threshold = 0.5)
- return boxes[keep]
-
-
-def _generate_bboxes(probs, offsets, scale, threshold):
- """Generate bounding boxes at places
- where there is probably a face.
-
- Arguments:
- probs: a float numpy array of shape [n, m].
- offsets: a float numpy array of shape [1, 4, n, m].
- scale: a float number,
- width and height of the image were scaled by this number.
- threshold: a float number.
-
- Returns:
- a float numpy array of shape [n_boxes, 9]
- """
-
- # applying P-Net is equivalent, in some sense, to
- # moving 12x12 window with stride 2
- stride = 2
- cell_size = 12
-
- # indices of boxes where there is probably a face
- inds = np.where(probs > threshold)
-
- if inds[0].size == 0:
- return np.array([])
-
- # transformations of bounding boxes
- tx1, ty1, tx2, ty2 = [offsets[0, i, inds[0], inds[1]] for i in range(4)]
- # they are defined as:
- # w = x2 - x1 + 1
- # h = y2 - y1 + 1
- # x1_true = x1 + tx1*w
- # x2_true = x2 + tx2*w
- # y1_true = y1 + ty1*h
- # y2_true = y2 + ty2*h
-
- offsets = np.array([tx1, ty1, tx2, ty2])
- score = probs[inds[0], inds[1]]
-
- # P-Net is applied to scaled images
- # so we need to rescale bounding boxes back
- bounding_boxes = np.vstack([
- np.round((stride*inds[1] + 1.0)/scale),
- np.round((stride*inds[0] + 1.0)/scale),
- np.round((stride*inds[1] + 1.0 + cell_size)/scale),
- np.round((stride*inds[0] + 1.0 + cell_size)/scale),
- score, offsets
- ])
- # why one is added?
-
- return bounding_boxes.T
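The box coordinates returned by `_generate_bboxes` come from sliding a 12x12 window with stride 2 over the scaled image and mapping the grid indices back to original-image coordinates. A small numeric sketch of that mapping, with made-up grid indices:

```python
# Numeric sketch of the window-to-box mapping used in _generate_bboxes above.
# A detection at grid cell (row=3, col=5) on an image scaled by 0.5 maps back to
# original-image coordinates like this (grid indices are illustrative).
stride, cell_size, scale = 2, 12, 0.5
row, col = 3, 5

x1 = round((stride * col + 1.0) / scale)              # 22
y1 = round((stride * row + 1.0) / scale)              # 14
x2 = round((stride * col + 1.0 + cell_size) / scale)  # 46
y2 = round((stride * row + 1.0 + cell_size) / scale)  # 38
print(x1, y1, x2, y2)  # 22 14 46 38
```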
diff --git a/spaces/ltgoslo/ssa-perin/mtool/codec/pmb.py b/spaces/ltgoslo/ssa-perin/mtool/codec/pmb.py
deleted file mode 100644
index e4abe8814534dbde4d18f06951c658fd31a52a73..0000000000000000000000000000000000000000
--- a/spaces/ltgoslo/ssa-perin/mtool/codec/pmb.py
+++ /dev/null
@@ -1,219 +0,0 @@
-from operator import itemgetter;
-import os.path;
-import re;
-import sys;
-
-from graph import Graph;
-
-conditions = {"APX": "≈", "EQU": "=", "LEQ": "≤", "LES": "<", "NEQ": "≠",
- "SXN": "«", "SXP": "»", "SXY": "≖", "SZN": "\\", "SZP": "/",
- "STI": "⊍", "STO": "⊍", "SY1": "∥", "SY2": "⚮",
- "TAB": "⋈", "TPR": "≺"};
-
-#
-# in parsing the clauses, patterns are ordered by specificity
-#
-id_matcher = re.compile(r'^%%% bin/boxer --input (?:[^/]+/)?p([0-9]+)/d([0-9]+)/');
-referent_matcher = re.compile(r'^(b[0-9]+) REF ([enpstx][0-9]+) +%(?: .* \[([0-9]+)\.\.\.([0-9]+)\])?$');
-condition_matcher = re.compile(r'^(b[0-9]+) (EQU|NEQ|APX|LE[SQ]|TPR|TAB|S[ZX][PN]|ST[IO]|SY[12]|SXY) ([enpstx][0-9]+|"[^"]+") ([enpstx][0-9]+|"[^"]+") +%(?: .* \[([0-9]+)\.\.\.([0-9]+)\])?$');
-role_matcher = re.compile(r'^(b[0-9]+) ([^ ]+) ([enpstx][0-9]+) ([enpstx][0-9]+|"[^"]+") +%(?: .* \[([0-9]+)\.\.\.([0-9]+)\])?$');
-concept_matcher = re.compile(r'^(b[0-9]+) ([^ ]+) ("[^ ]+") ([enpstx][0-9]+) +%(?: .* \[([0-9]+)\.\.\.([0-9]+)\])?$');
-discourse_matcher = re.compile(r'^(b[0-9]+) ([^ ]+) (b[0-9]+)(?: (b[0-9]+))? +%(?: .* \[[0-9]+\.\.\.[0-9]+\])?$');
-empty_matcher = re.compile(r'^ *%(?: .* \[[0-9]+\.\.\.[0-9]+\])?$');
-
-def read(fp, text = None, full = False, reify = False, trace = 0, strict = 0):
-
- def finish(graph, mapping, finis, scopes):
- if reify:
- for box, referent, node in finis:
- #
- # in full reification mode, or when the corresponding box cannot be
- # easily inferred for a reified role (including when the source node is
- # a constant, as e.g. in a 'future' temporal discourse conditions),
- # add an explicit box membership edge.
- #
- if full \
- or referent[0] == referent[-1] == "\"" \
- or box not in scopes[referent]:
- graph.add_edge(mapping[box].id, node.id, "∈");
- else:
- for referent in scopes:
- if len(scopes[referent]) > 1:
- print("pbm.read(): [graph #{}] stray referent ‘{}’ in boxes {}."
- "".format(graph.id, referent, scopes[referent]),
- file=sys.stderr);
- #
- # after the fact, mark all boxes that structurally are roots as top nodes.
- #
- for node in graph.nodes:
- if node.type == 0 and node.is_root(): node.is_top = True;
-
- graph = None; id = None; sentence = None;
- mapping = dict(); scopes = dict(); finis = list();
- i = 0;
- header = 3;
- for line in fp:
- line = line.rstrip(); i += 1;
- if trace: print("{}: {}".format(i, line));
- #
- # to support newline-separated concatenations of clause files (a format not
- # used in the native PMB 3.0 release),
- #
- if len(line) == 0:
- finish(graph, mapping, finis, scopes);
- yield graph, None;
- graph = None; id = None;
- mapping = dict(); scopes = dict(); finis = list();
- header = 3;
- continue;
- #
- # each block of clauses is preceded by three comment lines, which we use to
- # extract the sentence identifier and underlying string.
- #
- if header:
- if header == 3: pass;
- elif header == 2:
- match = id_matcher.match(line);
- if match is None:
- raise Exception("pbm.read(): "
- "[line {}] missing identifier in ‘{}’; exit."
- "".format(i, line));
- part, document = match.groups();
- id = "{:02d}{:04d}".format(int(part), int(document));
- elif header == 1:
- if text is not None and id in text: sentence = text[id];
- else: sentence = line[5:-1];
- graph = Graph(id, flavor = 2, framework = "drg");
- graph.add_input(sentence);
- header -= 1;
- continue;
- #
- # from here onwards, we are looking at genuine, contentful clauses. from
- # inspecting some of the files, it appears they are organized according to
- # surface (reading) order, and we cannot assume that discourse referents
- # are 'introduced' (in some box) prior to their first occurrence in e.g. a
- # role or concept clause.
- #
- anchor = None;
- match = referent_matcher.match(line);
- if match is not None:
- box, referent, start, end = match.groups();
- if referent in scopes:
- if strict and box not in scopes[referent] and reify:
- raise Exception("pbm.read(): "
- "[line {}] stray referent ‘{}’ in box ‘{}’ "
- "(instead of ‘{}’); exit."
- "".format(i, referent, box, scopes[referent]));
- else: scopes[referent] = {box};
- if box not in mapping: mapping[box] = graph.add_node(type = 0);
- if start is not None and end is not None:
- anchor = {"from": int(start), "to": int(end)};
- if referent not in mapping:
- mapping[referent] \
- = graph.add_node(anchors = [anchor] if anchor else None);
- else:
- node = mapping[referent];
- node.add_anchor(anchor);
- graph.add_edge(mapping[box].id, mapping[referent].id, "∈");
- else:
- match = condition_matcher.match(line);
- if match is not None:
- box, condition, source, target, start, end = match.groups();
- condition = conditions[condition];
- if source[0] == "\"" and source[-1] == "\"" and source not in mapping:
- if start is not None and end is not None:
- anchor = {"from": int(start), "to": int(end)};
- mapping[source] \
- = graph.add_node(label = source,
- anchors = [anchor] if anchor else None);
- elif source not in mapping: mapping[source] = graph.add_node();
- if target[0] == "\"" and target[-1] == "\"" and target not in mapping:
- if start is not None and end is not None:
- anchor = {"from": int(start), "to": int(end)};
- mapping[target] \
- = graph.add_node(label = target,
- anchors = [anchor] if anchor else None);
- elif target not in mapping: mapping[target] = graph.add_node();
- if reify:
- if box not in mapping: mapping[box] = graph.add_node(type = 0);
- node = graph.add_node(label = condition, type = 3);
- finis.append((box, source, node));
- graph.add_edge(mapping[source].id, node.id, None);
- graph.add_edge(node.id, mapping[target].id, None);
- else:
- if source in scopes: scopes[source].add(box);
- else: scopes[source] = {box};
- graph.add_edge(mapping[source].id, mapping[target].id, condition);
- else:
- match = role_matcher.match(line);
- if match is not None:
- box, role, source, target, start, end = match.groups();
- if source not in mapping: mapping[source] = graph.add_node();
- if target[0] == "\"" and target[-1] == "\"" and target not in mapping:
- if start is not None and end is not None:
- anchor = {"from": int(start), "to": int(end)};
- mapping[target] \
- = graph.add_node(label = target,
- anchors = [anchor] if anchor else None);
- elif target not in mapping: mapping[target] = graph.add_node();
- if reify:
- if box not in mapping: mapping[box] = graph.add_node(type = 0);
- node = graph.add_node(label = role, type = 2);
- finis.append((box, source, node));
- graph.add_edge(mapping[source].id, node.id, None);
- graph.add_edge(node.id, mapping[target].id, None);
- else:
- if source in scopes: scopes[source].add(box);
- else: scopes[source] = {box};
- graph.add_edge(mapping[source].id, mapping[target].id, role);
- else:
- match = concept_matcher.match(line);
- if match is not None:
- box, lemma, sense, referent, start, end = match.groups();
- if referent in scopes:
- if strict and box not in scopes[referent] and reify:
- raise Exception("pbm.read(): "
- "[line {}] stray referent ‘{}’ in box ‘{}’ "
- "(instead of ‘{}’); exit."
- "".format(i, referent, box, scopes[referent]));
- else: scopes[referent] = {box};
- if start is not None and end is not None:
- anchor = {"from": int(start), "to": int(end)};
- if referent not in mapping:
- mapping[referent] = node \
- = graph.add_node(anchors = [anchor] if anchor else None);
- else:
- node = mapping[referent];
- node.add_anchor(anchor);
- if strict and node.label is not None:
- raise Exception("pbm.read(): "
- "[line {}] duplicate label ‘{}’ on referent ‘{}’ "
- "(instead of ‘{}’); exit."
- "".format(i, lemma, referent, node.label));
- node.label = lemma;
- if sense[0] == sense[-1] == "\"": sense = sense[1:-1];
- node.set_property("sense", sense);
- else:
- match = discourse_matcher.match(line);
- if match is not None:
- top, relation, one, two = match.groups();
- if one not in mapping: mapping[one] = graph.add_node(type = 0);
- if two is not None:
- if trace > 1: print("ternary discourse relation");
- if two not in mapping: mapping[two] = graph.add_node(type = 0);
- graph.add_edge(mapping[one].id, mapping[two].id, relation);
- else:
- if top not in mapping: mapping[top] = graph.add_node(type = 0);
- graph.add_edge(mapping[top].id, mapping[one].id, relation);
- elif empty_matcher.search(line) is None:
- raise Exception("pmb.read(): [line {}] invalid clause ‘{}’."
- "".format(i, line));
- #
- # finally, as we reach an end of file (without an empty line terminating the
- # preceding block of clauses, as is the standard format in PMB), finalize the
- # graph and return it.
- #
- if graph is not None:
- finish(graph, mapping, finis, scopes);
- yield graph, None;
-
diff --git a/spaces/lunarflu/HF-QA-Demo-3/tests/discord_bot/client/test_utils.py b/spaces/lunarflu/HF-QA-Demo-3/tests/discord_bot/client/test_utils.py
deleted file mode 100644
index effbac21e5f863d5bf17e16b45469ce2d22affa5..0000000000000000000000000000000000000000
--- a/spaces/lunarflu/HF-QA-Demo-3/tests/discord_bot/client/test_utils.py
+++ /dev/null
@@ -1,69 +0,0 @@
-import pytest
-import os
-from discord_bot.client.utils import ( \
- find_max_split_index, \
- find_max_split_index_from_sequence, \
- split_text_into_chunks
-)
-
-
-@pytest.fixture(scope='module')
-def test_chunk() -> str:
- return 't. , \n .'
-
-
-@pytest.fixture(scope='module')
-def test_text() -> str:
- with open('tests/discord_bot/client/lorem_ipsum.txt', 'r') as f:
- text = f.read()
- assert text is not None, 'test text is empty'
- return text
-
-
-def test_find_max_splitting_index(test_chunk: str):
- index = find_max_split_index(test_chunk, char='\n')
- assert index == 6, 'index should be 6'
- index = find_max_split_index(test_chunk, char='. ')
- assert index == 3, 'index should be 3'
- index = find_max_split_index(test_chunk, char='.')
- assert index == 8, 'index should be 8'
-
-
-def test_find_max_split_index_from_sequence(test_chunk: str):
- index = find_max_split_index_from_sequence(
- test_chunk,
- split_characters=['\n']
- )
- assert index == 6, 'index should be 6'
- index = find_max_split_index_from_sequence(
- test_chunk,
- split_characters=['.', ', ', '\n']
- )
- assert index == 8, 'index should be 8'
-
-
-def test_split_text_into_chunks_with_split_characters(test_text: str):
- max_chunk_size = 250
- chunks = split_text_into_chunks(
- test_text,
- split_characters=['. ', ', ', '\n'],
- min_size=20,
- max_size=max_chunk_size
- )
- for chunk in chunks:
- assert len(chunk) > 0, 'Chunk length is zero'
- assert len(chunk) <= max_chunk_size, 'Chunk length exceeds maximum limit'
-
-
-def test_split_text_into_chunks_without_split_characters():
- test_text = 'a' * 1000
- max_chunk_size = 250
- chunks = split_text_into_chunks(
- test_text,
- split_characters=[],
- min_size=20,
- max_size=max_chunk_size
- )
- for chunk in chunks:
- assert len(chunk) == max_chunk_size, \
- 'Chunk length is too small'
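A hedged usage sketch of the chunking helper exercised by these tests; the import path is the one used in the test file, the sample text and limits are made up, and the exact split positions depend on the helper's implementation.

```python
# Hedged sketch of the helper exercised above; text and limits are illustrative.
from discord_bot.client.utils import split_text_into_chunks

message = ("First sentence. Second clause, then more text.\n"
           "Another paragraph that keeps going. ") * 30

chunks = split_text_into_chunks(
    message,
    split_characters=['. ', ', ', '\n'],
    min_size=20,
    max_size=250,
)

# Mirrors the assertions above: every chunk is non-empty, fits in a Discord-sized
# message, and prefers to end on one of the split characters.
assert all(0 < len(chunk) <= 250 for chunk in chunks)
```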
diff --git a/spaces/lunarring/latentblending/ldm/data/util.py b/spaces/lunarring/latentblending/ldm/data/util.py
deleted file mode 100644
index 5b60ceb2349e3bd7900ff325740e2022d2903b1c..0000000000000000000000000000000000000000
--- a/spaces/lunarring/latentblending/ldm/data/util.py
+++ /dev/null
@@ -1,24 +0,0 @@
-import torch
-
-from ldm.modules.midas.api import load_midas_transform
-
-
-class AddMiDaS(object):
- def __init__(self, model_type):
- super().__init__()
- self.transform = load_midas_transform(model_type)
-
- def pt2np(self, x):
- x = ((x + 1.0) * .5).detach().cpu().numpy()
- return x
-
- def np2pt(self, x):
- x = torch.from_numpy(x) * 2 - 1.
- return x
-
- def __call__(self, sample):
- # sample['jpg'] is tensor hwc in [-1, 1] at this point
- x = self.pt2np(sample['jpg'])
- x = self.transform({"image": x})["image"]
- sample['midas_in'] = x
- return sample
\ No newline at end of file
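A hedged sketch of applying `AddMiDaS` to a sample dict; it assumes the bundled `ldm.modules.midas` package is importable and that `"dpt_hybrid"` is one of the model types its transform loader accepts, with a random tensor standing in for a real HWC image.

```python
# Hedged sketch: "dpt_hybrid" is assumed to be an accepted model type for
# load_midas_transform; the random tensor stands in for a real image.
import torch
from ldm.data.util import AddMiDaS

add_midas = AddMiDaS(model_type="dpt_hybrid")

sample = {"jpg": torch.rand(384, 384, 3) * 2 - 1}          # HWC tensor in [-1, 1]
sample = add_midas(sample)
print(type(sample["midas_in"]), sample["midas_in"].shape)  # MiDaS-ready input array
```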
diff --git a/spaces/ma-xu/LIVE/pybind11/setup.py b/spaces/ma-xu/LIVE/pybind11/setup.py
deleted file mode 100644
index 577a6b6c37c9d284b0d5b7453de62aaa71c50869..0000000000000000000000000000000000000000
--- a/spaces/ma-xu/LIVE/pybind11/setup.py
+++ /dev/null
@@ -1,130 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
-
-# Setup script for PyPI; use CMakeFile.txt to build extension modules
-
-from setuptools import setup
-from distutils.command.install_headers import install_headers
-from distutils.command.build_py import build_py
-from pybind11 import __version__
-import os
-
-package_data = [
- 'include/pybind11/detail/class.h',
- 'include/pybind11/detail/common.h',
- 'include/pybind11/detail/descr.h',
- 'include/pybind11/detail/init.h',
- 'include/pybind11/detail/internals.h',
- 'include/pybind11/detail/typeid.h',
- 'include/pybind11/attr.h',
- 'include/pybind11/buffer_info.h',
- 'include/pybind11/cast.h',
- 'include/pybind11/chrono.h',
- 'include/pybind11/common.h',
- 'include/pybind11/complex.h',
- 'include/pybind11/eigen.h',
- 'include/pybind11/embed.h',
- 'include/pybind11/eval.h',
- 'include/pybind11/functional.h',
- 'include/pybind11/iostream.h',
- 'include/pybind11/numpy.h',
- 'include/pybind11/operators.h',
- 'include/pybind11/options.h',
- 'include/pybind11/pybind11.h',
- 'include/pybind11/pytypes.h',
- 'include/pybind11/stl.h',
- 'include/pybind11/stl_bind.h',
-]
-
-# Prevent installation of pybind11 headers by setting
-# PYBIND11_USE_CMAKE.
-if os.environ.get('PYBIND11_USE_CMAKE'):
- headers = []
-else:
- headers = package_data
-
-
-class InstallHeaders(install_headers):
- """Use custom header installer because the default one flattens subdirectories"""
- def run(self):
- if not self.distribution.headers:
- return
-
- for header in self.distribution.headers:
- subdir = os.path.dirname(os.path.relpath(header, 'include/pybind11'))
- install_dir = os.path.join(self.install_dir, subdir)
- self.mkpath(install_dir)
-
- (out, _) = self.copy_file(header, install_dir)
- self.outfiles.append(out)
-
-
-# Install the headers inside the package as well
-class BuildPy(build_py):
- def build_package_data(self):
- build_py.build_package_data(self)
- for header in package_data:
- target = os.path.join(self.build_lib, 'pybind11', header)
- self.mkpath(os.path.dirname(target))
- self.copy_file(header, target, preserve_mode=False)
-
- def get_outputs(self, include_bytecode=1):
- outputs = build_py.get_outputs(self, include_bytecode=include_bytecode)
- for header in package_data:
- target = os.path.join(self.build_lib, 'pybind11', header)
- outputs.append(target)
- return outputs
-
-
-setup(
- name='pybind11',
- version=__version__,
- description='Seamless operability between C++11 and Python',
- author='Wenzel Jakob',
- author_email='wenzel.jakob@epfl.ch',
- url='https://github.com/pybind/pybind11',
- download_url='https://github.com/pybind/pybind11/tarball/v' + __version__,
- packages=['pybind11'],
- license='BSD',
- headers=headers,
- zip_safe=False,
- cmdclass=dict(install_headers=InstallHeaders, build_py=BuildPy),
- classifiers=[
- 'Development Status :: 5 - Production/Stable',
- 'Intended Audience :: Developers',
- 'Topic :: Software Development :: Libraries :: Python Modules',
- 'Topic :: Utilities',
- 'Programming Language :: C++',
- 'Programming Language :: Python :: 2.7',
- 'Programming Language :: Python :: 3',
- 'Programming Language :: Python :: 3.2',
- 'Programming Language :: Python :: 3.3',
- 'Programming Language :: Python :: 3.4',
- 'Programming Language :: Python :: 3.5',
- 'Programming Language :: Python :: 3.6',
- 'License :: OSI Approved :: BSD License'
- ],
- keywords='C++11, Python bindings',
- long_description="""pybind11 is a lightweight header-only library that
-exposes C++ types in Python and vice versa, mainly to create Python bindings of
-existing C++ code. Its goals and syntax are similar to the excellent
-Boost.Python by David Abrahams: to minimize boilerplate code in traditional
-extension modules by inferring type information using compile-time
-introspection.
-
-The main issue with Boost.Python-and the reason for creating such a similar
-project-is Boost. Boost is an enormously large and complex suite of utility
-libraries that works with almost every C++ compiler in existence. This
-compatibility has its cost: arcane template tricks and workarounds are
-necessary to support the oldest and buggiest of compiler specimens. Now that
-C++11-compatible compilers are widely available, this heavy machinery has
-become an excessively large and unnecessary dependency.
-
-Think of this library as a tiny self-contained version of Boost.Python with
-everything stripped away that isn't relevant for binding generation. Without
-comments, the core header files only require ~4K lines of code and depend on
-Python (2.7 or 3.x, or PyPy2.7 >= 5.7) and the C++ standard library. This
-compact implementation was possible thanks to some of the new C++11 language
-features (specifically: tuples, lambda functions and variadic templates). Since
-its creation, this library has grown beyond Boost.Python in many ways, leading
-to dramatically simpler binding code in many common situations.""")
diff --git a/spaces/ma-xu/LIVE/pybind11/tests/test_kwargs_and_defaults.py b/spaces/ma-xu/LIVE/pybind11/tests/test_kwargs_and_defaults.py
deleted file mode 100644
index 5257e0cd3061707f0dd1b79de54a0c6cdae81cd1..0000000000000000000000000000000000000000
--- a/spaces/ma-xu/LIVE/pybind11/tests/test_kwargs_and_defaults.py
+++ /dev/null
@@ -1,192 +0,0 @@
-# -*- coding: utf-8 -*-
-import pytest
-
-import env # noqa: F401
-
-from pybind11_tests import kwargs_and_defaults as m
-
-
-def test_function_signatures(doc):
- assert doc(m.kw_func0) == "kw_func0(arg0: int, arg1: int) -> str"
- assert doc(m.kw_func1) == "kw_func1(x: int, y: int) -> str"
- assert doc(m.kw_func2) == "kw_func2(x: int = 100, y: int = 200) -> str"
- assert doc(m.kw_func3) == "kw_func3(data: str = 'Hello world!') -> None"
- assert doc(m.kw_func4) == "kw_func4(myList: List[int] = [13, 17]) -> str"
- assert doc(m.kw_func_udl) == "kw_func_udl(x: int, y: int = 300) -> str"
- assert doc(m.kw_func_udl_z) == "kw_func_udl_z(x: int, y: int = 0) -> str"
- assert doc(m.args_function) == "args_function(*args) -> tuple"
- assert doc(m.args_kwargs_function) == "args_kwargs_function(*args, **kwargs) -> tuple"
- assert doc(m.KWClass.foo0) == \
- "foo0(self: m.kwargs_and_defaults.KWClass, arg0: int, arg1: float) -> None"
- assert doc(m.KWClass.foo1) == \
- "foo1(self: m.kwargs_and_defaults.KWClass, x: int, y: float) -> None"
-
-
-def test_named_arguments(msg):
- assert m.kw_func0(5, 10) == "x=5, y=10"
-
- assert m.kw_func1(5, 10) == "x=5, y=10"
- assert m.kw_func1(5, y=10) == "x=5, y=10"
- assert m.kw_func1(y=10, x=5) == "x=5, y=10"
-
- assert m.kw_func2() == "x=100, y=200"
- assert m.kw_func2(5) == "x=5, y=200"
- assert m.kw_func2(x=5) == "x=5, y=200"
- assert m.kw_func2(y=10) == "x=100, y=10"
- assert m.kw_func2(5, 10) == "x=5, y=10"
- assert m.kw_func2(x=5, y=10) == "x=5, y=10"
-
- with pytest.raises(TypeError) as excinfo:
- # noinspection PyArgumentList
- m.kw_func2(x=5, y=10, z=12)
- assert excinfo.match(
- r'(?s)^kw_func2\(\): incompatible.*Invoked with: kwargs: ((x=5|y=10|z=12)(, |$))' + '{3}$')
-
- assert m.kw_func4() == "{13 17}"
- assert m.kw_func4(myList=[1, 2, 3]) == "{1 2 3}"
-
- assert m.kw_func_udl(x=5, y=10) == "x=5, y=10"
- assert m.kw_func_udl_z(x=5) == "x=5, y=0"
-
-
-def test_arg_and_kwargs():
- args = 'arg1_value', 'arg2_value', 3
- assert m.args_function(*args) == args
-
- args = 'a1', 'a2'
- kwargs = dict(arg3='a3', arg4=4)
- assert m.args_kwargs_function(*args, **kwargs) == (args, kwargs)
-
-
-def test_mixed_args_and_kwargs(msg):
- mpa = m.mixed_plus_args
- mpk = m.mixed_plus_kwargs
- mpak = m.mixed_plus_args_kwargs
- mpakd = m.mixed_plus_args_kwargs_defaults
-
- assert mpa(1, 2.5, 4, 99.5, None) == (1, 2.5, (4, 99.5, None))
- assert mpa(1, 2.5) == (1, 2.5, ())
- with pytest.raises(TypeError) as excinfo:
- assert mpa(1)
- assert msg(excinfo.value) == """
- mixed_plus_args(): incompatible function arguments. The following argument types are supported:
- 1. (arg0: int, arg1: float, *args) -> tuple
-
- Invoked with: 1
- """ # noqa: E501 line too long
- with pytest.raises(TypeError) as excinfo:
- assert mpa()
- assert msg(excinfo.value) == """
- mixed_plus_args(): incompatible function arguments. The following argument types are supported:
- 1. (arg0: int, arg1: float, *args) -> tuple
-
- Invoked with:
- """ # noqa: E501 line too long
-
- assert mpk(-2, 3.5, pi=3.14159, e=2.71828) == (-2, 3.5, {'e': 2.71828, 'pi': 3.14159})
- assert mpak(7, 7.7, 7.77, 7.777, 7.7777, minusseven=-7) == (
- 7, 7.7, (7.77, 7.777, 7.7777), {'minusseven': -7})
- assert mpakd() == (1, 3.14159, (), {})
- assert mpakd(3) == (3, 3.14159, (), {})
- assert mpakd(j=2.71828) == (1, 2.71828, (), {})
- assert mpakd(k=42) == (1, 3.14159, (), {'k': 42})
- assert mpakd(1, 1, 2, 3, 5, 8, then=13, followedby=21) == (
- 1, 1, (2, 3, 5, 8), {'then': 13, 'followedby': 21})
- # Arguments specified both positionally and via kwargs should fail:
- with pytest.raises(TypeError) as excinfo:
- assert mpakd(1, i=1)
- assert msg(excinfo.value) == """
- mixed_plus_args_kwargs_defaults(): incompatible function arguments. The following argument types are supported:
- 1. (i: int = 1, j: float = 3.14159, *args, **kwargs) -> tuple
-
- Invoked with: 1; kwargs: i=1
- """ # noqa: E501 line too long
- with pytest.raises(TypeError) as excinfo:
- assert mpakd(1, 2, j=1)
- assert msg(excinfo.value) == """
- mixed_plus_args_kwargs_defaults(): incompatible function arguments. The following argument types are supported:
- 1. (i: int = 1, j: float = 3.14159, *args, **kwargs) -> tuple
-
- Invoked with: 1, 2; kwargs: j=1
- """ # noqa: E501 line too long
-
-
-def test_keyword_only_args(msg):
- assert m.kwonly_all(i=1, j=2) == (1, 2)
- assert m.kwonly_all(j=1, i=2) == (2, 1)
-
- with pytest.raises(TypeError) as excinfo:
- assert m.kwonly_all(i=1) == (1,)
- assert "incompatible function arguments" in str(excinfo.value)
-
- with pytest.raises(TypeError) as excinfo:
- assert m.kwonly_all(1, 2) == (1, 2)
- assert "incompatible function arguments" in str(excinfo.value)
-
- assert m.kwonly_some(1, k=3, j=2) == (1, 2, 3)
-
- assert m.kwonly_with_defaults(z=8) == (3, 4, 5, 8)
- assert m.kwonly_with_defaults(2, z=8) == (2, 4, 5, 8)
- assert m.kwonly_with_defaults(2, j=7, k=8, z=9) == (2, 7, 8, 9)
- assert m.kwonly_with_defaults(2, 7, z=9, k=8) == (2, 7, 8, 9)
-
- assert m.kwonly_mixed(1, j=2) == (1, 2)
- assert m.kwonly_mixed(j=2, i=3) == (3, 2)
- assert m.kwonly_mixed(i=2, j=3) == (2, 3)
-
- assert m.kwonly_plus_more(4, 5, k=6, extra=7) == (4, 5, 6, {'extra': 7})
- assert m.kwonly_plus_more(3, k=5, j=4, extra=6) == (3, 4, 5, {'extra': 6})
- assert m.kwonly_plus_more(2, k=3, extra=4) == (2, -1, 3, {'extra': 4})
-
- with pytest.raises(TypeError) as excinfo:
- assert m.kwonly_mixed(i=1) == (1,)
- assert "incompatible function arguments" in str(excinfo.value)
-
- with pytest.raises(RuntimeError) as excinfo:
- m.register_invalid_kwonly(m)
- assert msg(excinfo.value) == """
- arg(): cannot specify an unnamed argument after an kwonly() annotation
- """
-
-
-@pytest.mark.xfail("env.PYPY and env.PY2", reason="PyPy2 doesn't double count")
-def test_args_refcount():
- """Issue/PR #1216 - py::args elements get double-inc_ref()ed when combined with regular
- arguments"""
- refcount = m.arg_refcount_h
-
- myval = 54321
- expected = refcount(myval)
- assert m.arg_refcount_h(myval) == expected
- assert m.arg_refcount_o(myval) == expected + 1
- assert m.arg_refcount_h(myval) == expected
- assert refcount(myval) == expected
-
- assert m.mixed_plus_args(1, 2.0, "a", myval) == (1, 2.0, ("a", myval))
- assert refcount(myval) == expected
-
- assert m.mixed_plus_kwargs(3, 4.0, a=1, b=myval) == (3, 4.0, {"a": 1, "b": myval})
- assert refcount(myval) == expected
-
- assert m.args_function(-1, myval) == (-1, myval)
- assert refcount(myval) == expected
-
- assert m.mixed_plus_args_kwargs(5, 6.0, myval, a=myval) == (5, 6.0, (myval,), {"a": myval})
- assert refcount(myval) == expected
-
- assert m.args_kwargs_function(7, 8, myval, a=1, b=myval) == \
- ((7, 8, myval), {"a": 1, "b": myval})
- assert refcount(myval) == expected
-
- exp3 = refcount(myval, myval, myval)
- assert m.args_refcount(myval, myval, myval) == (exp3, exp3, exp3)
- assert refcount(myval) == expected
-
- # This function takes the first arg as a `py::object` and the rest as a `py::args`. Unlike the
- # previous case, when we have both positional and `py::args` we need to construct a new tuple
- # for the `py::args`; in the previous case, we could simply inc_ref and pass on Python's input
- # tuple without having to inc_ref the individual elements, but here we can't, hence the extra
- # refs.
- assert m.mixed_args_refcount(myval, myval, myval) == (exp3 + 3, exp3 + 3, exp3 + 3)
-
- assert m.class_default_argument() == ""
diff --git a/spaces/ma-xu/LIVE/pybind11/tools/FindEigen3.cmake b/spaces/ma-xu/LIVE/pybind11/tools/FindEigen3.cmake
deleted file mode 100644
index 98ab43d9e62e293c0c87e44b6f325579991e8732..0000000000000000000000000000000000000000
--- a/spaces/ma-xu/LIVE/pybind11/tools/FindEigen3.cmake
+++ /dev/null
@@ -1,83 +0,0 @@
-# - Try to find Eigen3 lib
-#
-# This module supports requiring a minimum version, e.g. you can do
-# find_package(Eigen3 3.1.2)
-# to require version 3.1.2 or newer of Eigen3.
-#
-# Once done this will define
-#
-# EIGEN3_FOUND - system has eigen lib with correct version
-# EIGEN3_INCLUDE_DIR - the eigen include directory
-# EIGEN3_VERSION - eigen version
-
-# Copyright (c) 2006, 2007 Montel Laurent,
-# Copyright (c) 2008, 2009 Gael Guennebaud,
-# Copyright (c) 2009 Benoit Jacob
-# Redistribution and use is allowed according to the terms of the 2-clause BSD license.
-
-if(NOT Eigen3_FIND_VERSION)
- if(NOT Eigen3_FIND_VERSION_MAJOR)
- set(Eigen3_FIND_VERSION_MAJOR 2)
- endif(NOT Eigen3_FIND_VERSION_MAJOR)
- if(NOT Eigen3_FIND_VERSION_MINOR)
- set(Eigen3_FIND_VERSION_MINOR 91)
- endif(NOT Eigen3_FIND_VERSION_MINOR)
- if(NOT Eigen3_FIND_VERSION_PATCH)
- set(Eigen3_FIND_VERSION_PATCH 0)
- endif(NOT Eigen3_FIND_VERSION_PATCH)
-
- set(Eigen3_FIND_VERSION
- "${Eigen3_FIND_VERSION_MAJOR}.${Eigen3_FIND_VERSION_MINOR}.${Eigen3_FIND_VERSION_PATCH}")
-endif(NOT Eigen3_FIND_VERSION)
-
-macro(_eigen3_check_version)
- file(READ "${EIGEN3_INCLUDE_DIR}/Eigen/src/Core/util/Macros.h" _eigen3_version_header)
-
- string(REGEX MATCH "define[ \t]+EIGEN_WORLD_VERSION[ \t]+([0-9]+)" _eigen3_world_version_match
- "${_eigen3_version_header}")
- set(EIGEN3_WORLD_VERSION "${CMAKE_MATCH_1}")
- string(REGEX MATCH "define[ \t]+EIGEN_MAJOR_VERSION[ \t]+([0-9]+)" _eigen3_major_version_match
- "${_eigen3_version_header}")
- set(EIGEN3_MAJOR_VERSION "${CMAKE_MATCH_1}")
- string(REGEX MATCH "define[ \t]+EIGEN_MINOR_VERSION[ \t]+([0-9]+)" _eigen3_minor_version_match
- "${_eigen3_version_header}")
- set(EIGEN3_MINOR_VERSION "${CMAKE_MATCH_1}")
-
- set(EIGEN3_VERSION ${EIGEN3_WORLD_VERSION}.${EIGEN3_MAJOR_VERSION}.${EIGEN3_MINOR_VERSION})
- if(${EIGEN3_VERSION} VERSION_LESS ${Eigen3_FIND_VERSION})
- set(EIGEN3_VERSION_OK FALSE)
- else(${EIGEN3_VERSION} VERSION_LESS ${Eigen3_FIND_VERSION})
- set(EIGEN3_VERSION_OK TRUE)
- endif(${EIGEN3_VERSION} VERSION_LESS ${Eigen3_FIND_VERSION})
-
- if(NOT EIGEN3_VERSION_OK)
-
- message(STATUS "Eigen3 version ${EIGEN3_VERSION} found in ${EIGEN3_INCLUDE_DIR}, "
- "but at least version ${Eigen3_FIND_VERSION} is required")
- endif(NOT EIGEN3_VERSION_OK)
-endmacro(_eigen3_check_version)
-
-if(EIGEN3_INCLUDE_DIR)
-
- # in cache already
- _eigen3_check_version()
- set(EIGEN3_FOUND ${EIGEN3_VERSION_OK})
-
-else(EIGEN3_INCLUDE_DIR)
-
- find_path(
- EIGEN3_INCLUDE_DIR
- NAMES signature_of_eigen3_matrix_library
- PATHS ${CMAKE_INSTALL_PREFIX}/include ${KDE4_INCLUDE_DIR}
- PATH_SUFFIXES eigen3 eigen)
-
- if(EIGEN3_INCLUDE_DIR)
- _eigen3_check_version()
- endif(EIGEN3_INCLUDE_DIR)
-
- include(FindPackageHandleStandardArgs)
- find_package_handle_standard_args(Eigen3 DEFAULT_MSG EIGEN3_INCLUDE_DIR EIGEN3_VERSION_OK)
-
- mark_as_advanced(EIGEN3_INCLUDE_DIR)
-
-endif(EIGEN3_INCLUDE_DIR)
diff --git a/spaces/ma-xu/LIVE/thrust/thrust/detail/type_traits/iterator/is_output_iterator.h b/spaces/ma-xu/LIVE/thrust/thrust/detail/type_traits/iterator/is_output_iterator.h
deleted file mode 100644
index d6801305be01b903d7a3b9a8bd45101f709543f4..0000000000000000000000000000000000000000
--- a/spaces/ma-xu/LIVE/thrust/thrust/detail/type_traits/iterator/is_output_iterator.h
+++ /dev/null
@@ -1,66 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include
-#include
-#include
-#include
-#include
-
-namespace thrust
-{
-
-namespace detail
-{
-
-
-template<typename T>
- struct is_void_like
- : thrust::detail::or_<
- thrust::detail::is_void<T>,
- thrust::detail::is_same
- >
-{}; // end is_void_like
-
-
-template<typename T>
-  struct lazy_is_void_like
-    : is_void_like<typename T::type>
-{}; // end lazy_is_void_like
-
-
-// XXX this meta function should first check that T is actually an iterator
-//
-// if thrust::iterator_value<T> is defined and thrust::iterator_value<T>::type == void
-//   return false
-// else
-//   return true
-template<typename T>
-  struct is_output_iterator
-    : eval_if<
-        is_metafunction_defined<thrust::iterator_value<T> >::value,
-        lazy_is_void_like<thrust::iterator_value<T> >,
-        thrust::detail::true_type
-      >::type
-{
-}; // end is_output_iterator
-
-} // end detail
-
-} // end thrust
-
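The header removed here is an internal metafunction, so the diff gives no usage context. As a standalone illustration of the same detection idea (all names below, `has_value_type` and `is_output_only_iterator`, are invented for this sketch and are not Thrust's), the "value_type exists and is void" check can be expressed with the standard library alone:

```cpp
#include <iterator>
#include <type_traits>
#include <vector>

// Detect whether std::iterator_traits<T>::value_type names a type at all.
template <typename T, typename = void>
struct has_value_type : std::false_type {};

template <typename T>
struct has_value_type<T, std::void_t<typename std::iterator_traits<T>::value_type>>
    : std::true_type {};

// If the value_type exists and is void (a write-only iterator such as
// std::back_insert_iterator), classify T as "output only"; if the traits are
// not defined at all, fall back to true, mirroring the eval_if structure of
// the deleted header.
template <typename T, typename Enable = void>
struct is_output_only_iterator : std::true_type {};

template <typename T>
struct is_output_only_iterator<T, std::enable_if_t<has_value_type<T>::value>>
    : std::is_void<typename std::iterator_traits<T>::value_type> {};

static_assert(
    is_output_only_iterator<std::back_insert_iterator<std::vector<int>>>::value,
    "value_type void => output-only iterator");
static_assert(!is_output_only_iterator<int*>::value,
              "int* has value_type int, so it is not output-only");

int main() { return 0; }
```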
diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/cpp/memory.h b/spaces/ma-xu/LIVE/thrust/thrust/system/cpp/memory.h
deleted file mode 100644
index 18b31e758de483d77fc1c84f515e4117575ce852..0000000000000000000000000000000000000000
--- a/spaces/ma-xu/LIVE/thrust/thrust/system/cpp/memory.h
+++ /dev/null
@@ -1,95 +0,0 @@
-/*
- * Copyright 2008-2018 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-/*! \file thrust/system/cpp/memory.h
- * \brief Managing memory associated with Thrust's standard C++ system.
- */
-
-#pragma once
-
-#include
-#include
-#include
-#include
-#include
-#include
-
-namespace thrust
-{
-namespace system
-{
-namespace cpp
-{
-/*! Allocates an area of memory available to Thrust's cpp system.
- * \param n Number of bytes to allocate.
- * \return A cpp::pointer pointing to the beginning of the newly
- * allocated memory. A null cpp::pointer is returned if
- * an error occurs.
- * \note The cpp::pointer returned by this function must be
- * deallocated with \p cpp::free.
- * \see cpp::free
- * \see std::malloc
- */
-inline pointer<void> malloc(std::size_t n);
-
-/*! Allocates a typed area of memory available to Thrust's cpp system.
- * \param n Number of elements to allocate.
- * \return A cpp::pointer pointing to the beginning of the newly
- * allocated elements. A null cpp::pointer is returned if
- * an error occurs.
- * \note The cpp::pointer returned by this function must be
- * deallocated with \p cpp::free.
- * \see cpp::free
- * \see std::malloc
- */
-template<typename T>
-inline pointer<T> malloc(std::size_t n);
-
-/*! Deallocates an area of memory previously allocated by cpp::malloc.
- * \param ptr A cpp::pointer pointing to the beginning of an area
- * of memory previously allocated with cpp::malloc.
- * \see cpp::malloc
- * \see std::free
- */
-inline void free(pointer<void> ptr);
-
-/*! \p cpp::allocator is the default allocator used by the \p cpp system's containers such as
- * cpp::vector if no user-specified allocator is provided. \p cpp::allocator allocates
- * (deallocates) storage with \p cpp::malloc (\p cpp::free).
- */
-template<typename T>
-using allocator = thrust::mr::stateless_resource_allocator<T, thrust::system::cpp::memory_resource>;
-
-} // end cpp
-
-} // end system
-
-/*! \namespace thrust::cpp
- * \brief \p thrust::cpp is a top-level alias for thrust::system::cpp.
- */
-namespace cpp
-{
-
-using thrust::system::cpp::malloc;
-using thrust::system::cpp::free;
-using thrust::system::cpp::allocator;
-
-} // end cpp
-
-} // end thrust
-
-#include
-
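The deleted memory.h only declares the interface. As a rough usage sketch (assuming a Thrust version where these declarations still exist; this is illustrative code, not taken from the diff), the typed malloc/free pair and the default cpp allocator are typically exercised like this:

```cpp
#include <thrust/system/cpp/memory.h>
#include <thrust/system/cpp/vector.h>
#include <thrust/fill.h>

int main() {
    // Raw typed allocation from the cpp system; the tagged pointer lets
    // algorithms dispatch to the host backend without an execution policy.
    thrust::cpp::pointer<int> p = thrust::cpp::malloc<int>(16);
    thrust::fill(p, p + 16, 42);
    thrust::cpp::free(p);  // must be released with cpp::free, not std::free

    // cpp::allocator is the default allocator used by cpp::vector.
    thrust::cpp::vector<int> v(16, 7);
    return (v[0] == 7) ? 0 : 1;
}
```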
diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/cuda/detail/copy_if.h b/spaces/ma-xu/LIVE/thrust/thrust/system/cuda/detail/copy_if.h
deleted file mode 100644
index d441862ab6cec2ef6ed87e21f5f926e81c32a5fd..0000000000000000000000000000000000000000
--- a/spaces/ma-xu/LIVE/thrust/thrust/system/cuda/detail/copy_if.h
+++ /dev/null
@@ -1,857 +0,0 @@
-/******************************************************************************
- * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are met:
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in the
- * documentation and/or other materials provided with the distribution.
- * * Neither the name of the NVIDIA CORPORATION nor the
- * names of its contributors may be used to endorse or promote products
- * derived from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
- * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- * ARE DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY
- * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
- * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
- * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
- * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
- * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- *
- ******************************************************************************/
-#pragma once
-
-
-#if THRUST_DEVICE_COMPILER == THRUST_DEVICE_COMPILER_NVCC
-#include
-
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-
-namespace thrust
-{
-// XXX declare generic copy_if interface
-// to avoid circular dependency from thrust/copy.h
-template