-- Chesskid.com: A kid-friendly platform that offers fun games, puzzles, lessons, videos, and more for children and parents.
-
-How can I play chess online with my friends and family?
-To play chess online with your friends and family, you can use Chess Chess Online Apk's "Play a Friend" option. You can either invite your friends and family by sending them a link or a code, or accept their invitations by entering their link or code. You can also chat with them during the game and send them emojis.
-How can I learn more about chess history and origin?
-To learn more about chess history and origin, you can use Chess Chess Online Apk's "News" option. You can read articles, watch videos, listen to podcasts, and view live games that cover various topics related to chess history and origin. You can also use the app's "Lessons" option to learn about the history of chess openings, endgames, and famous players.
-How can I customize my chess board and pieces in the app?
-To customize your chess board and pieces in the app, you can use Chess Chess Online Apk's "Settings" option. You can choose from different themes, colors, styles, sounds, and animations for your chess board and pieces. You can also adjust the board size, orientation, coordinates, and notation.
-How can I contact the developers of Chess Chess Online Apk for feedback and support?
-To contact the developers of Chess Chess Online Apk for feedback and support, you can use Chess Chess Online Apk's "More" option. You can send them an email, a message, or a review. You can also follow them on social media platforms like Facebook, Twitter, Instagram, and YouTube.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Braindom Mod APK Solve Puzzles and Brain Teasers with Free Rewards.md b/spaces/1phancelerku/anime-remove-background/Braindom Mod APK Solve Puzzles and Brain Teasers with Free Rewards.md
deleted file mode 100644
index 3e0153fb1c78f75da34602a3dcfb78cf563eb20c..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Braindom Mod APK Solve Puzzles and Brain Teasers with Free Rewards.md
+++ /dev/null
@@ -1,85 +0,0 @@
-
-Braindom: Brain Games Test Mod APK - A Fun and Challenging Word Game
-Do you love word games that make you think outside the box? Do you enjoy solving puzzles that test your brain power and creativity? If yes, then you should try Braindom: Brain Games Test, a fun and challenging word game that will keep you entertained for hours.
-braindom brain games test mod apk
Download File ► https://jinyurl.com/2uNPW9
-What is Braindom: Brain Games Test?
-Braindom: Brain Games Test is a word game that combines logic, memory, and vocabulary skills. You will have to answer various questions that range from easy to hard, from simple to absurd, from common sense to brain teasers. You will have to use your imagination, intuition, and knowledge to find the correct answer.
-Features of Braindom: Brain Games Test
-- Hundreds of levels with different themes and difficulties
-The game has hundreds of levels that will challenge your brain in different ways. You will encounter questions about animals, celebrities, movies, music, history, geography, and more. Each level has a different theme and difficulty level, so you will never get bored or stuck.
-- Creative and humorous puzzles that test your logic, memory, and vocabulary
-The game has puzzles that are not only challenging but also funny and witty. You will have to use your logic, memory, and vocabulary skills to solve them. Some puzzles will make you laugh, some will make you think, and some will make you scratch your head. You will have to be smart and clever to find the right answer.
-- Earn coins and hints to help you solve tricky questions
-The game rewards you with coins and hints for every level you complete. You can use coins to buy more hints or skip levels if you are stuck. You can use hints to reveal letters or words in the answer or eliminate wrong options. You can also watch videos or share the game with your friends to get more coins and hints.
-- Play offline or online with friends and family
-The game can be played offline or online with friends and family. You can play offline without an internet connection anytime and anywhere. You can play online with your Facebook friends or other players around the world. You can also chat with them, send them gifts, or challenge them to beat your score.
-Why download Braindom: Brain Games Test Mod APK?
-- Unlimited money to buy more hints and coins
-If you want to enjoy the game without any limitations, you should download Braindom: Brain Games Test Mod APK. This modded version of the game gives you unlimited money to buy more hints and coins. You can use them as much as you want without worrying about running out of them.
-- No ads to interrupt your gameplay
-Another benefit of downloading Braindom: Brain Games Test Mod APK is that it removes all the ads from the game. You will not have to watch any annoying or intrusive ads that interrupt your gameplay. You can play the game smoothly and comfortably without any distractions.
-- Easy installation and compatibility with most devices
-Braindom: Brain Games Test Mod APK is easy to install and compatible with most devices. You just need to download the APK file from a trusted source and follow the simple steps to install it on your device. You do not need to root or jailbreak your device to use the mod. You can enjoy the game on your Android or iOS device without any problems.
-How to download and install Braindom: Brain Games Test Mod APK?
-If you want to download and install Braindom: Brain Games Test Mod APK, you can follow these steps:
-Step 1: Download the APK file from a trusted source
-You can download the APK file from a trusted source such as [APKPure] or [APKMirror]. These are reliable websites that offer safe and secure downloads of modded apps and games. You can search for Braindom: Brain Games Test Mod APK on these websites and click on the download button.
-Step 2: Enable unknown sources on your device settings
-Before you can install the APK file, you need to enable unknown sources on your device settings. This will allow you to install apps and games from sources other than the official app store. To enable unknown sources, you can go to your device settings, then security, then unknown sources, and toggle it on.
-Step 3: Install the APK file and launch the game
-After you have enabled unknown sources, you can install the APK file by locating it in your downloads folder and tapping on it. You will see a prompt asking you to confirm the installation. Tap on install and wait for the process to finish. Once the installation is done, you can launch the game and enjoy it.
-Conclusion
-Braindom: Brain Games Test is a fun and challenging word game that will test your brain power and creativity. You will have to answer various questions that range from easy to hard, from simple to absurd, from common sense to brain teasers. You will have to use your imagination, intuition, and knowledge to find the correct answer.
-If you want to enjoy the game without any limitations, you should download Braindom: Brain Games Test Mod APK. This modded version of the game gives you unlimited money to buy more hints and coins, removes all the ads from the game, and makes it easy to install and compatible with most devices.
-So what are you waiting for? Download Braindom: Brain Games Test Mod APK today and have fun with this amazing word game.
-FAQs
-Here are some frequently asked questions about Braindom: Brain Games Test Mod APK:
-
-Q: Is Braindom: Brain Games Test Mod APK safe to use?
-A: Yes, it is safe to use as long as you download it from a trusted source. However, you should always be careful when downloading modded apps and games as they may contain viruses or malware that can harm your device.
-Q: Do I need an internet connection to play Braindom: Brain Games Test?
-A: No, you do not need an internet connection to play the game. You can play it offline anytime and anywhere. However, if you want to play online with your friends or other players, you will need an internet connection.
-Q: How can I update Braindom: Brain Games Test Mod APK?
-A: You can update the modded version of the game by downloading the latest version of the APK file from the same source where you downloaded it before. You can also check for updates in the game itself by tapping on the settings icon and then checking for updates.
-Q: How can I contact the developers of Braindom: Brain Games Test?
-A: You can contact the developers of the game by sending them an email at [support@matchingham.gs] or by visiting their website at [https://www.matchingham.gs/]. You can also follow them on Facebook at [https://www.facebook.com/matchinghamgames] or on Instagram at [https://www.instagram.com/matchingham.games/].
-Q: How can I rate and review Braindom: Brain Games Test?
-A: You can rate and review the game by going to the official app store where you downloaded it from. You can also rate and review the modded version of the game by going to the website where you downloaded it from. You can share your feedback, suggestions, and opinions with the developers and other players. You can also give the game a thumbs up or a thumbs down on Facebook or Instagram.
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Clash of Clans MOD APK with Unlimited Gems and Troops (v15.297.217).md b/spaces/1phancelerku/anime-remove-background/Download Clash of Clans MOD APK with Unlimited Gems and Troops (v15.297.217).md
deleted file mode 100644
index c77deeb09acb857a2d15b596a0c767283bb4a61d..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Clash of Clans MOD APK with Unlimited Gems and Troops (v15.297.217).md
+++ /dev/null
@@ -1,144 +0,0 @@
-
-Clash of Clans Mod APK Download (Unlimited Gems Troops) Latest Version
-Are you a fan of strategy games? Do you love building your own village, training your own army, and battling with millions of other players online? If yes, then you must have heard of Clash of Clans, one of the most popular and addictive games for both iOS and Android devices.
-clash of clans mod apk download (unlimited gems troops) latest version
Download ✶ https://jinyurl.com/2uNOOV
-But what if we tell you that you can enjoy this game even more with a modded version that gives you unlimited resources, gems, troops, and access to all the features that you normally have to pay for? Sounds amazing, right? Well, that's exactly what Clash of Clans Mod APK is all about.
-In this article, we will tell you everything you need to know about Clash of Clans Mod APK, how to download and install it on your Android device, what are its features, how to play it, and some FAQs that you might have. So, without further ado, let's get started.
- What is Clash of Clans?
-A brief introduction to the game and its features
-Clash of Clans is a strategy game developed by Supercell, a Finnish company that also created other popular games like Hay Day, Boom Beach, and Brawl Stars. The game was released in 2012 for iOS and in 2013 for Android, and since then it has become one of the most downloaded and played games in the world.
-The game is set in a fantasy world where you have to build your own village, train your own troops, and fight with other players in clan wars and clan games. You can also join or create your own clan, where you can chat with other players, donate and receive troops, and participate in clan events.
-The game has various types of resources that you need to collect and spend to upgrade your village and troops. These resources include gold, elixir, dark elixir, gems, and magic items. You can get these resources by raiding other players' villages, completing achievements, winning clan wars and clan games, or buying them with real money.
-The game also has various types of buildings that you can construct and upgrade in your village. These buildings include town hall, barracks, army camps, laboratory, spell factory, gold mines, elixir collectors, dark elixir drills, gold storages, elixir storages, dark elixir storages, walls, cannons, archer towers, mortars, air defenses, wizard towers, hidden teslas, bomb towers, x-bows, inferno towers, eagle artillery, scattershots, air sweepers, air bombs, traps, clan castle, builder's hut, and decorations. You can also unlock and upgrade various types of troops and spells that you can use in battles. These troops and spells include barbarians, archers, giants, goblins, wall breakers, balloons, wizards, healers, dragons, pekkas, minions, hog riders, valkyries, golems, witches, lava hounds, bowlers, miners, baby dragons, electro dragons, yetis, ice golems, headhunters, super troops, lightning spell, healing spell, rage spell, jump spell, freeze spell, clone spell, poison spell, earthquake spell, haste spell, skeleton spell, bat spell, and invisibility spell.
- The benefits of playing Clash of Clans Mod APK
-As you can see, Clash of Clans is a very fun and exciting game that offers a lot of content and features for you to enjoy. However, it can also be very challenging and time-consuming to progress in the game. You need to spend a lot of resources and gems to upgrade your village and troops. You also need to wait for long hours or days for the upgrades to finish. You might also face difficulties in finding suitable opponents or winning battles against stronger players.
-That's why many players look for ways to hack or mod the game to get unlimited resources and gems. This way, they can skip the waiting time and enjoy the game without any limitations or restrictions. They can also experiment with different strategies and tactics without worrying about losing resources or trophies.
-Clash of Clans Mod APK is one of the best and most reliable ways to hack or mod the game. It is a modified version of the original game that gives you access to unlimited resources and gems. It also unlocks all the buildings and upgrades that you normally have to pay for. It also gives you unlimited troops and spells that you can use in battles. It also allows you to customize and personalize your village and troops according to your preferences.
-By playing Clash of Clans Mod APK, you can enjoy the game to the fullest without spending any money or wasting any time. You can build your dream village and army in no time. You can also dominate the leaderboards and impress your friends with your achievements. You can also have more fun and excitement in clan wars and clan games with your unlimited resources and troops.
- How to Download and Install Clash of Clans Mod APK on Android?
-The steps to download and install the modded version of the game
-If you are interested in playing Clash of Clans Mod APK on your Android device, you need to follow these simple steps:
-
-- First of all, you need to download the Clash of Clans Mod APK file from a trusted source. You can find many websites that offer the mod apk file for free. However, you need to be careful as some of them might contain viruses or malware that can harm your device. We recommend you to use this link to download the latest version of Clash of Clans Mod APK safely and securely.
-- Next, you need to enable the installation of apps from unknown sources on your device. To do this, go to Settings > Security > Unknown Sources and toggle it on. This will allow you to install apps that are not from the Google Play Store.
-- Then, you need to locate the downloaded Clash of Clans Mod APK file on your device. You can use a file manager app or your browser's download history to find it. Once you find it, tap on it to start the installation process.
-- Finally, you need to follow the on-screen instructions and grant the necessary permissions to complete the installation process. It might take a few minutes for the app to install depending on your device's performance.
-
-Congratulations! You have successfully installed Clash of Clans Mod APK on your Android device. Now you can launch the app and enjoy the game with unlimited resources and gems.
- The precautions to take before installing the mod apk
-Before you install Clash of Clans Mod APK on your Android device, there are some precautions that you need to take:
-
-- Make sure that you have enough storage space on your device for the mod apk file and its data. The mod apk file is about 200 MB in size while its data is about 2 GB in size.
-- Make sure that you have a stable internet connection for downloading and installing the mod apk file and its data.
-- Make sure that you have backed up your original Clash of Clans game data before installing the mod apk file. This way, you can restore your original game data if anything goes wrong with the mod apk file or if you want to switch back to the original game.
-- Make sure that you have uninstalled the original Clash of Clans game from your device before installing the mod apk file. This is to avoid any conflicts or errors between the two versions of the game.
-- Make sure that you do not use your original Clash of Clans account or Google Play account to log in to the mod apk file. This is to avoid any risk of getting banned or suspended by Supercell for using a modded version of the game. You can create a new account or use a guest account to play the mod apk file.
-
-By following these precautions, you can ensure a smooth and safe installation and gameplay experience with Clash of Clans Mod APK.
- What are the Features of Clash of Clans Mod APK?
-Unlimited resources and gems
-One of the main features of Clash of Clans Mod APK is that it gives you unlimited resources and gems. You can use these resources and gems to upgrade your village and troops without any limitations or restrictions. You can also use them to buy anything you want from the shop, such as magic items, decorations, shields, and more.
-You can also use these resources and gems to instantly finish any upgrade or training process. You don't have to wait for hours or days for the upgrades or training to complete. You can also use them to boost your resource production, troop training, spell brewing, and hero regeneration.
-With unlimited resources and gems, you can enjoy the game without any worries or hassles. You can build your dream village and army in no time. You can also experiment with different combinations and strategies without losing anything.
- Unlimited troops and spells
-Another feature of Clash of Clans Mod APK is that it gives you unlimited troops and spells. You can train as many troops as you want in your barracks and army camps. You can also brew as many spells as you want in your spell factory. You don't have to worry about running out of space or elixir.
-You can also use any type of troop or spell in your battles. You don't have to unlock them or upgrade them first. You can access all the troops and spells that are available in the game, including the super troops and the new invisibility spell.
-With unlimited troops and spells, you can unleash your full potential in battles. You can create powerful armies and devastating spells that can crush any opponent. You can also have more fun and variety in your attacks and defenses.
- Access to all buildings and upgrades
-A third feature of Clash of Clans Mod APK is that it gives you access to all buildings and upgrades. You can build and upgrade any building that you want in your village. You don't have to meet any requirements or prerequisites. You can also skip the town hall levels and jump to the highest level possible.
-You can also access all the buildings and upgrades that are normally exclusive to certain town hall levels or seasons. For example, you can build and upgrade the scattershot, the royal champion, the giga inferno, the giga tesla, the builder base, the otto hut, the battle machine, the super pekka, the mega tesla, and more.
-With access to all buildings and upgrades, you can enhance your village and troops with ease. You can also explore all the features and content that the game has to offer. You can also challenge yourself with different modes and difficulties.
- Customization and personalization options
-A fourth feature of Clash of Clans Mod APK is that it gives you customization and personalization options. You can change the appearance and design of your village and troops according to your preferences. You can also modify the settings and parameters of the game according to your needs.
-You can choose from different themes and skins for your village and troops. You can also change the colors, shapes, sizes, names, icons, sounds, animations, effects, and more. You can also create your own custom themes and skins using various tools and resources.
-You can also adjust the difficulty level, speed, damage, health, range, capacity, cost, cooldown, duration, frequency, and more of your village and troops. You can also enable or disable certain features and functions of the game. You can also use cheats and hacks to manipulate the game in your favor.
-With customization and personalization options, you can make the game more fun and interesting. You can also express your creativity and personality through your village and troops. You can also have more control and flexibility over the game.
- How to Play Clash of Clans Mod APK?
-The basics of building your village and training your troops
-Playing Clash of Clans Mod APK is very similar to playing the original game. You still have to build your village and train your troops. However, with the mod apk, you have unlimited resources and gems, so you don't have to worry about collecting or spending them.
-To build your village, you have to tap on the shop icon on the bottom right corner of the screen. There, you can find all the buildings that you can construct and upgrade in your village. You can also find the decorations and magic items that you can buy and use in your village.
-To train your troops, you have to tap on the barracks icon on the bottom left corner of the screen. There, you can find all the troops that you can train in your barracks and army camps. You can also find the spells that you can brew in your spell factory.
-To build or upgrade a building, or to train a troop or a spell, you just have to tap on it and then tap on the green button that says "Build" or "Train". The building or troop or spell will be instantly built or trained without any waiting time or cost.
-You can also move, rotate, or remove any building or decoration in your village by tapping and holding on it. You can also edit the layout of your village by tapping on the edit mode icon on the top right corner of the screen.
- The strategies to attack and defend in clan wars and clan games
-Another aspect of playing Clash of Clans Mod APK is attacking and defending in clan wars and clan games. You still have to join or create a clan, where you can chat with other players, donate and receive troops, and participate in clan events.
-To join or create a clan, you have to tap on the clan icon on the bottom left corner of the screen. There, you can find all the clans that are available for you to join or create. You can also find the clan chat, clan profile, clan settings, clan war, clan games, and clan perks tabs.
-To attack in a clan war or a clan game, you have to tap on the clan war or clan game icon on the top left corner of the screen. There, you can find all the details and information about the current clan war or clan game. You can also find the map of the enemy clans' villages that you can attack.
-To attack an enemy village, you just have to tap on it and then tap on the red button that says "Attack". You will be taken to the battle screen, where you can deploy your troops and spells on the enemy's territory. You will also see your own village's defenses on the bottom of the screen. You can also use the buttons on the bottom right corner of the screen to zoom in or out, to end the battle, or to surrender.
-To defend your village, you have to make sure that you have a strong and well-designed layout that can withstand enemy attacks. You also have to make sure that you have enough troops in your clan castle that can help you in defending your village. You can also use the shield and guard features that can protect your village from attacks for a certain period of time.
-To win a battle, you have to destroy a higher percentage of the enemy's village than they destroy of yours. Destroying their town hall earns you an extra star. The more stars you get, the more loot and trophies you earn, and the more you help your clan win the clan war or clan game.
- The tips and tricks to enjoy the game to the fullest
-The last aspect of playing Clash of Clans Mod APK is enjoying the game to the fullest. You can do this by following these tips and tricks:
-
-- Experiment with different troops and spells combinations and find out what works best for you. You can also watch replays of other players' attacks and learn from their strategies and mistakes.
-- Join an active and friendly clan that can help you with donations, advice, and support. You can also chat with other players and make new friends. You can also participate in clan events and earn rewards and perks for your clan.
-- Complete achievements and quests that can give you extra resources, gems, and magic items. You can also use these items to boost your progress and performance in the game.
-- Have fun and don't take the game too seriously. Remember that it is just a game and not a real war. Don't get frustrated or angry if you lose a battle or if someone attacks your village. Just learn from your experience and try again.
-
-By following these tips and tricks, you can have more fun and excitement in playing Clash of Clans Mod APK.
- Conclusion
-A summary of the main points and a call to action
-In conclusion, Clash of Clans Mod APK is a modded version of the original game that gives you unlimited resources, gems, troops, and access to all features that you normally have to pay for. It also allows you to customize and personalize your village and troops according to your preferences.
-By playing Clash of Clans Mod APK, you can enjoy the game without any limitations or restrictions. You can build your dream village and army in no time. You can also dominate the leaderboards and impress your friends with your achievements. You can also have more fun and excitement in clan wars and clan games with your unlimited resources and troops.
-If you are interested in playing Clash of Clans Mod APK, you can download it from this link safely and securely. You just have to follow the steps and precautions that we have mentioned in this article. Then, you can launch the app and enjoy the game.
-So, what are you waiting for? Download Clash of Clans Mod APK now and experience the ultimate strategy game like never before.
- FAQs
-Q1. Is Clash of Clans Mod APK safe to use?
-A1. Yes, Clash of Clans Mod APK is safe to use as long as you download it from a trusted source like this link. However, you still need to be careful as some websites might offer fake or malicious mod apk files that can harm your device or steal your data. You also need to follow the precautions that we have mentioned in this article before installing the mod apk file.
- Q2. Do I need to root my device to use Clash of Clans Mod APK?
-A2. No, you don't need to root your device to use Clash of Clans Mod APK. The mod apk file works on both rooted and non-rooted devices without any problems.
- Q3. Can I play Clash of Clans Mod APK with my friends?
-A3. Yes, you can play Clash of Clans Mod APK with your friends as long as they also have the same mod apk file installed on their devices. You can join or create a clan with them and chat with them in the game. You can also attack or defend each other's villages in clan wars and clan games.
- Q4. Will I get banned for using Clash of Clans Mod APK?
-A4. There is a possibility that you might get banned for using Clash of Clans Mod APK as it violates the terms of service of Supercell, the developer of the original game. Supercell has a system that can detect and ban players who use modded or hacked versions of the game. However, you can reduce the risk of getting banned by following these tips:
-
-- Do not use your original Clash of Clans account or Google Play account to log in to the mod apk file. Use a new account or a guest account instead.
-- Do not play the mod apk file on public servers or networks. Use a private server or a VPN service instead.
-- Do not brag or boast about using the mod apk file in the game chat or social media. Keep it a secret and avoid drawing attention to yourself.
-- Do not use the mod apk file excessively or abusively. Use it moderately and responsibly.
-
-By following these tips, you can enjoy the mod apk file without worrying too much about getting banned.
- Q5. How can I update Clash of Clans Mod APK?
-A5. To update Clash of Clans Mod APK, you have to download the latest version of the mod apk file from the same source that you downloaded it from before. You can check this link for the latest updates and news about the mod apk file. You also have to uninstall the previous version of the mod apk file from your device before installing the new version. You don't have to worry about losing your game data as it will be saved on your device's memory.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Endless Run Jungle Escape Mod APK Discover the Secrets of the Jungle.md b/spaces/1phancelerku/anime-remove-background/Endless Run Jungle Escape Mod APK Discover the Secrets of the Jungle.md
deleted file mode 100644
index 317aeef6914103fb34ee85d324679682b8ca65db..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Endless Run Jungle Escape Mod APK Discover the Secrets of the Jungle.md
+++ /dev/null
@@ -1,127 +0,0 @@
-
-Endless Run Jungle Escape Mod APK: A Thrilling Adventure Game
-If you are looking for a fun and exciting game that will keep you on the edge of your seat, then you should try Endless Run Jungle Escape Mod APK. This is a modified version of the original game that offers more features and benefits for the players. In this article, we will tell you everything you need to know about this game, including what it is, how to download and install it, how to play it, tips and tricks, review, and alternatives.
- What is Endless Run Jungle Escape Mod APK?
-Endless Run Jungle Escape is an addictive endless runner game that puts you in the shoes of a charismatic archaeologist trapped in a vast, seemingly endless jungle. You have to run, jump, slide, and dodge obstacles while collecting coins, gems, and power-ups. The game has stunning graphics, smooth controls, and immersive sound effects that will make you feel like you are in a real adventure.
-endless run jungle escape mod apk
Download Zip ↔ https://jinyurl.com/2uNUac
- The original game
-The original game was developed by Huskaimm Com and released in 2019. It has over 10 million downloads on Google Play Store and a rating of 4.1 out of 5 stars. The game is free to play but contains ads and in-app purchases. You can download the original game from Google Play Store or from other sources.
- The modded version
-The modded version is a modified version of the original game that offers more features and benefits for the players. The main difference between the modded version and the original version is that the modded version has unlocked all the characters and props in the game. This means that you can choose any character you want and use any prop you like without spending any money or coins. You can also enjoy unlimited coins, gems, and power-ups in the modded version. The modded version is not available on Google Play Store but you can download it from HappyMod or from other sources.
- Features of Endless Run Jungle Escape Mod APK
-Endless Run Jungle Escape Mod APK has many features that make it more enjoyable and challenging than the original game. Here are some of the features that you can expect from this game:
- Unlocked characters and props
-The modded version has unlocked all the characters and props in the game. You can choose from 22 main roles, each with their own skills and abilities. You can also use different props, such as shields, magnets, wings, rockets, etc., to help you overcome obstacles and enemies. You can customize your character and prop according to your preference.
- Dual handle operation
-Tasks and scores
-The modded version has various tasks and scores that you can complete and achieve. You can collect coins, gems, and power-ups to increase your score and unlock more rewards. You can also complete daily tasks, weekly tasks, and achievements to earn more coins, gems, and items. You can compare your score with other players on the leaderboard and challenge yourself to improve your rank.
- How to download and install Endless Run Jungle Escape Mod APK?
-If you want to download and install Endless Run Jungle Escape Mod APK, you need to follow these simple steps:
- Download from a reliable source
-The first step is to download the modded version from a reliable source. You can use HappyMod or other sources that offer safe and verified APK files. You need to make sure that the file you download is compatible with your device and has the latest version of the game.
- Enable unknown sources
-The second step is to enable unknown sources on your device. This is necessary because the modded version is not from Google Play Store and you need to allow your device to install apps from other sources. To do this, you need to go to your device settings, security, and enable unknown sources.
- Install the APK file
-The third step is to install the APK file on your device. You need to locate the file you downloaded and tap on it to start the installation process. You need to follow the instructions on the screen and wait for the installation to finish. Once it is done, you can launch the game and enjoy it.
- How to play Endless Run Jungle Escape Mod APK?
-If you want to play Endless Run Jungle Escape Mod APK, you need to follow these simple steps:
- Choose your character and prop
-The first step is to choose your character and prop from the unlocked ones. You can select any character you want and use any prop you like. You can also customize your character and prop according to your preference.
- Swipe to move and jump
-The second step is to swipe to move and jump in the game. You need to swipe left or right to switch roads or swipe up or down to turn gravity. You need to avoid obstacles and enemies while running in the jungle.
- Collect coins and gems
-The third step is to collect coins and gems in the game. You need to collect as many coins and gems as possible while running in the jungle. You can use them to upgrade your skills and items or buy new characters and props.
- Tips and tricks for Endless Run Jungle Escape Mod APK
-If you want to master Endless Run Jungle Escape Mod APK, you need to follow these tips and tricks:
- Use the tunnel level
-One of the tips is to use the tunnel level in the game. The tunnel level is a special level that appears randomly in the game. It allows you to run in a tunnel without any obstacles or enemies. You can collect a lot of coins and gems in this level without any risk.
- Switch roads and turn gravity
-Another tip is to switch roads and turn gravity in the game. This will help you avoid obstacles and enemies that are blocking your way. You can also find hidden paths and shortcuts by switching roads and turning gravity.
- Upgrade your skills and items
-A final tip is to upgrade your skills and items in the game. This will help you improve your performance and survival in the game. You can upgrade your skills such as speed, magnet, shield, etc., or your items such as wings, rockets, etc., using coins and gems.
- Review of Endless Run Jungle Escape Mod APK
-Endless Run Jungle Escape Mod APK is a thrilling adventure game that will keep you entertained for hours. Here is a review of this game based on its pros and cons, user ratings, and feedback.
- Pros and cons
-
-Pros | Cons |
-- Unlocked characters and props | - Ads may still appear |
-- Unlimited coins, gems, and power-ups | - May not work on some devices |
-- Dual handle operation | - May cause battery drain |
-players
-
- User ratings and feedback
-Endless Run Jungle Escape Mod APK has received positive ratings and feedback from most of the users who have tried it. The game has a rating of 4.6 out of 5 stars on HappyMod and a rating of 4.1 out of 5 stars on Google Play Store. Here are some of the user reviews from HappyMod:
-
-- "This game is awesome. I love the graphics and the gameplay. It is very addictive and fun. I recommend it to everyone who likes endless runner games."
-- "This is the best mod ever. It has everything unlocked and unlimited. I can play with any character and prop I want. It is very easy to install and use."
-- "This game is amazing. It has a lot of features and challenges. It is very smooth and fast. It is better than the original game."
-
- Alternatives to Endless Run Jungle Escape Mod APK
-If you are looking for alternatives to Endless Run Jungle Escape Mod APK, you can try these other games that are similar in genre and style:
- Temple Run and Temple Run 2
-Temple Run and Temple Run 2 are classic endless runner games that have inspired many other games in this genre. You have to run away from a group of monkeys that are chasing you after you stole a cursed idol from a temple. You have to swipe to turn, jump, slide, and tilt to avoid obstacles and collect coins and power-ups. You can also unlock different characters and abilities as you progress in the game.
- Subway Surfers and Minion Rush
-Subway Surfers and Minion Rush are popular endless runner games that feature colorful graphics and characters. You have to run on the subway tracks or the streets while dodging trains, buses, cars, and other obstacles. You can also collect coins, power-ups, and items that will help you in your run. You can also customize your character and use different gadgets and vehicles.
- Conclusion
-Endless Run Jungle Escape Mod APK is a thrilling adventure game that will keep you entertained for hours. It is a modified version of the original game that offers more features and benefits for the players. You can enjoy unlocked characters and props, unlimited coins, gems, and power-ups, dual handle operation, tasks and scores, and more in this game. You can download and install it easily from a reliable source and play it on your device. You can also follow some tips and tricks to master this game and compare your score with other players. If you are looking for alternatives to this game, you can try Temple Run, Temple Run 2, Subway Surfers, or Minion Rush.
- FAQs
-
-- Q: Is Endless Run Jungle Escape Mod APK safe to download and install?
-- A: Yes, Endless Run Jungle Escape Mod APK is safe to download and install if you use a reliable source that offers verified APK files. You should also scan the file with an antivirus before installing it on your device.
-- Q: What are the requirements to play Endless Run Jungle Escape Mod APK?
-- A: Endless Run Jungle Escape Mod APK requires Android 4.1 or higher to run smoothly on your device. You also need at least 100 MB of free storage space on your device to install it.
-- Q: How can I remove ads from Endless Run Jungle Escape Mod APK?
-- A: Endless Run Jungle Escape Mod APK may still show some ads in the game even though it is a modded version. You can remove ads by turning off your internet connection or using an ad blocker app.
-- Q: How can I get more coins and gems in Endless Run Jungle Escape Mod APK?
-- A: Endless Run Jungle Escape Mod APK gives you unlimited coins, gems, and power-ups in the game so you don't need to worry about running out of them. However, if you want to get more coins and gems, you can collect them while running in the jungle or complete tasks and achievements.
-- Q: How can I update Endless Run Jungle Escape Mod APK?
-- A: Endless Run Jungle Escape Mod APK may not update automatically on your device because it is not from Google Play Store. You need to check for updates manually from the source where you downloaded it or from other sources that offer the latest version of the game.
-
- 401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/232labs/VToonify/vtoonify/model/stylegan/op/conv2d_gradfix.py b/spaces/232labs/VToonify/vtoonify/model/stylegan/op/conv2d_gradfix.py
deleted file mode 100644
index 5e4b83adac8e6a4b1caf522596666e4f5d0ee854..0000000000000000000000000000000000000000
--- a/spaces/232labs/VToonify/vtoonify/model/stylegan/op/conv2d_gradfix.py
+++ /dev/null
@@ -1,227 +0,0 @@
-import contextlib
-import warnings
-
-import torch
-from torch import autograd
-from torch.nn import functional as F
-
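-# `enabled` globally toggles the custom gradfix path (checked in could_use_op below);
-# `weight_gradients_disabled` is flipped by the no_weight_gradients() context manager
-# so that backward passes that run while it is active skip the gradient w.r.t. the
-# convolution weights.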
-enabled = True
-weight_gradients_disabled = False
-
-
-@contextlib.contextmanager
-def no_weight_gradients():
- global weight_gradients_disabled
-
- old = weight_gradients_disabled
- weight_gradients_disabled = True
- yield
- weight_gradients_disabled = old
-
-
-def conv2d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1):
- if could_use_op(input):
- return conv2d_gradfix(
- transpose=False,
- weight_shape=weight.shape,
- stride=stride,
- padding=padding,
- output_padding=0,
- dilation=dilation,
- groups=groups,
- ).apply(input, weight, bias)
-
- return F.conv2d(
- input=input,
- weight=weight,
- bias=bias,
- stride=stride,
- padding=padding,
- dilation=dilation,
- groups=groups,
- )
-
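-# A minimal usage sketch (illustrative, not part of the original file; shapes are
-# arbitrary). conv2d() above is a drop-in replacement for F.conv2d, and backward
-# passes run inside no_weight_gradients() skip the weight gradient:
-#
-#   x = torch.randn(2, 3, 16, 16, device="cuda", requires_grad=True)
-#   w = torch.randn(8, 3, 3, 3, device="cuda", requires_grad=True)
-#   y = conv2d(x, w, padding=1)
-#   with no_weight_gradients():
-#       (grad_x,) = autograd.grad(y.sum(), x, create_graph=True)
-#
-# On CPU, or on PyTorch versions other than 1.7.x/1.8.x, could_use_op() returns
-# False and the call silently falls back to F.conv2d.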
-
-def conv_transpose2d(
- input,
- weight,
- bias=None,
- stride=1,
- padding=0,
- output_padding=0,
- groups=1,
- dilation=1,
-):
- if could_use_op(input):
- return conv2d_gradfix(
- transpose=True,
- weight_shape=weight.shape,
- stride=stride,
- padding=padding,
- output_padding=output_padding,
- groups=groups,
- dilation=dilation,
- ).apply(input, weight, bias)
-
- return F.conv_transpose2d(
- input=input,
- weight=weight,
- bias=bias,
- stride=stride,
- padding=padding,
- output_padding=output_padding,
- dilation=dilation,
- groups=groups,
- )
-
-
-def could_use_op(input):
- if (not enabled) or (not torch.backends.cudnn.enabled):
- return False
-
- if input.device.type != "cuda":
- return False
-
- if any(torch.__version__.startswith(x) for x in ["1.7.", "1.8."]):
- return True
-
- #warnings.warn(
- # f"conv2d_gradfix not supported on PyTorch {torch.__version__}. Falling back to torch.nn.functional.conv2d()."
- #)
-
- return False
-
-
-def ensure_tuple(xs, ndim):
- xs = tuple(xs) if isinstance(xs, (tuple, list)) else (xs,) * ndim
-
- return xs
-
-
-conv2d_gradfix_cache = dict()
-
-
-def conv2d_gradfix(
- transpose, weight_shape, stride, padding, output_padding, dilation, groups
-):
- ndim = 2
- weight_shape = tuple(weight_shape)
- stride = ensure_tuple(stride, ndim)
- padding = ensure_tuple(padding, ndim)
- output_padding = ensure_tuple(output_padding, ndim)
- dilation = ensure_tuple(dilation, ndim)
-
- key = (transpose, weight_shape, stride, padding, output_padding, dilation, groups)
- if key in conv2d_gradfix_cache:
- return conv2d_gradfix_cache[key]
-
- common_kwargs = dict(
- stride=stride, padding=padding, dilation=dilation, groups=groups
- )
-
- def calc_output_padding(input_shape, output_shape):
- if transpose:
- return [0, 0]
-
- return [
- input_shape[i + 2]
- - (output_shape[i + 2] - 1) * stride[i]
- - (1 - 2 * padding[i])
- - dilation[i] * (weight_shape[i + 2] - 1)
- for i in range(ndim)
- ]
-
- class Conv2d(autograd.Function):
- @staticmethod
- def forward(ctx, input, weight, bias):
- if not transpose:
- out = F.conv2d(input=input, weight=weight, bias=bias, **common_kwargs)
-
- else:
- out = F.conv_transpose2d(
- input=input,
- weight=weight,
- bias=bias,
- output_padding=output_padding,
- **common_kwargs,
- )
-
- ctx.save_for_backward(input, weight)
-
- return out
-
- @staticmethod
- def backward(ctx, grad_output):
- input, weight = ctx.saved_tensors
- grad_input, grad_weight, grad_bias = None, None, None
-
- if ctx.needs_input_grad[0]:
- p = calc_output_padding(
- input_shape=input.shape, output_shape=grad_output.shape
- )
- grad_input = conv2d_gradfix(
- transpose=(not transpose),
- weight_shape=weight_shape,
- output_padding=p,
- **common_kwargs,
- ).apply(grad_output, weight, None)
-
- if ctx.needs_input_grad[1] and not weight_gradients_disabled:
- grad_weight = Conv2dGradWeight.apply(grad_output, input)
-
- if ctx.needs_input_grad[2]:
- grad_bias = grad_output.sum((0, 2, 3))
-
- return grad_input, grad_weight, grad_bias
-
- class Conv2dGradWeight(autograd.Function):
- @staticmethod
- def forward(ctx, grad_output, input):
- op = torch._C._jit_get_operation(
- "aten::cudnn_convolution_backward_weight"
- if not transpose
- else "aten::cudnn_convolution_transpose_backward_weight"
- )
- flags = [
- torch.backends.cudnn.benchmark,
- torch.backends.cudnn.deterministic,
- torch.backends.cudnn.allow_tf32,
- ]
- grad_weight = op(
- weight_shape,
- grad_output,
- input,
- padding,
- stride,
- dilation,
- groups,
- *flags,
- )
- ctx.save_for_backward(grad_output, input)
-
- return grad_weight
-
- @staticmethod
- def backward(ctx, grad_grad_weight):
- grad_output, input = ctx.saved_tensors
- grad_grad_output, grad_grad_input = None, None
-
- if ctx.needs_input_grad[0]:
- grad_grad_output = Conv2d.apply(input, grad_grad_weight, None)
-
- if ctx.needs_input_grad[1]:
- p = calc_output_padding(
- input_shape=input.shape, output_shape=grad_output.shape
- )
- grad_grad_input = conv2d_gradfix(
- transpose=(not transpose),
- weight_shape=weight_shape,
- output_padding=p,
- **common_kwargs,
- ).apply(grad_output, grad_grad_weight, None)
-
- return grad_grad_output, grad_grad_input
-
- conv2d_gradfix_cache[key] = Conv2d
-
- return Conv2d
diff --git a/spaces/801artistry/RVC801/lib/uvr5_pack/lib_v5/spec_utils.py b/spaces/801artistry/RVC801/lib/uvr5_pack/lib_v5/spec_utils.py
deleted file mode 100644
index a3fd46d333da7becc7f09f42c084ac7cde661035..0000000000000000000000000000000000000000
--- a/spaces/801artistry/RVC801/lib/uvr5_pack/lib_v5/spec_utils.py
+++ /dev/null
@@ -1,667 +0,0 @@
-import os, librosa
-import numpy as np
-import soundfile as sf
-from tqdm import tqdm
-import json, math, hashlib
-
-
-def crop_center(h1, h2):
- h1_shape = h1.size()
- h2_shape = h2.size()
-
- if h1_shape[3] == h2_shape[3]:
- return h1
- elif h1_shape[3] < h2_shape[3]:
- raise ValueError("h1_shape[3] must be greater than h2_shape[3]")
-
- # s_freq = (h2_shape[2] - h1_shape[2]) // 2
- # e_freq = s_freq + h1_shape[2]
- s_time = (h1_shape[3] - h2_shape[3]) // 2
- e_time = s_time + h2_shape[3]
- h1 = h1[:, :, :, s_time:e_time]
-
- return h1
-
-
-def wave_to_spectrogram(
- wave, hop_length, n_fft, mid_side=False, mid_side_b2=False, reverse=False
-):
- if reverse:
- wave_left = np.flip(np.asfortranarray(wave[0]))
- wave_right = np.flip(np.asfortranarray(wave[1]))
- elif mid_side:
- wave_left = np.asfortranarray(np.add(wave[0], wave[1]) / 2)
- wave_right = np.asfortranarray(np.subtract(wave[0], wave[1]))
- elif mid_side_b2:
- wave_left = np.asfortranarray(np.add(wave[1], wave[0] * 0.5))
- wave_right = np.asfortranarray(np.subtract(wave[0], wave[1] * 0.5))
- else:
- wave_left = np.asfortranarray(wave[0])
- wave_right = np.asfortranarray(wave[1])
-
- spec_left = librosa.stft(wave_left, n_fft, hop_length=hop_length)
- spec_right = librosa.stft(wave_right, n_fft, hop_length=hop_length)
-
- spec = np.asfortranarray([spec_left, spec_right])
-
- return spec
-
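-# A minimal usage sketch (illustrative, not part of the original file; parameter
-# values are arbitrary). `wave` is expected as a stereo array of shape (2, n_samples),
-# e.g. loaded with librosa.load(..., mono=False); the result is a complex spectrogram
-# of shape (2, n_fft // 2 + 1, n_frames):
-#
-#   wave, sr = librosa.load("mix.wav", sr=44100, mono=False)
-#   spec = wave_to_spectrogram(wave, hop_length=1024, n_fft=2048)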
-
-def wave_to_spectrogram_mt(
- wave, hop_length, n_fft, mid_side=False, mid_side_b2=False, reverse=False
-):
- import threading
-
- if reverse:
- wave_left = np.flip(np.asfortranarray(wave[0]))
- wave_right = np.flip(np.asfortranarray(wave[1]))
- elif mid_side:
- wave_left = np.asfortranarray(np.add(wave[0], wave[1]) / 2)
- wave_right = np.asfortranarray(np.subtract(wave[0], wave[1]))
- elif mid_side_b2:
- wave_left = np.asfortranarray(np.add(wave[1], wave[0] * 0.5))
- wave_right = np.asfortranarray(np.subtract(wave[0], wave[1] * 0.5))
- else:
- wave_left = np.asfortranarray(wave[0])
- wave_right = np.asfortranarray(wave[1])
-
- def run_thread(**kwargs):
- global spec_left
- spec_left = librosa.stft(**kwargs)
-
- thread = threading.Thread(
- target=run_thread,
- kwargs={"y": wave_left, "n_fft": n_fft, "hop_length": hop_length},
- )
- thread.start()
- spec_right = librosa.stft(wave_right, n_fft, hop_length=hop_length)
- thread.join()
-
- spec = np.asfortranarray([spec_left, spec_right])
-
- return spec
-
-
-def combine_spectrograms(specs, mp):
- l = min([specs[i].shape[2] for i in specs])
- spec_c = np.zeros(shape=(2, mp.param["bins"] + 1, l), dtype=np.complex64)
- offset = 0
- bands_n = len(mp.param["band"])
-
- for d in range(1, bands_n + 1):
- h = mp.param["band"][d]["crop_stop"] - mp.param["band"][d]["crop_start"]
- spec_c[:, offset : offset + h, :l] = specs[d][
- :, mp.param["band"][d]["crop_start"] : mp.param["band"][d]["crop_stop"], :l
- ]
- offset += h
-
- if offset > mp.param["bins"]:
- raise ValueError("Too much bins")
-
-    # low-pass filter
- if (
- mp.param["pre_filter_start"] > 0
- ): # and mp.param['band'][bands_n]['res_type'] in ['scipy', 'polyphase']:
- if bands_n == 1:
- spec_c = fft_lp_filter(
- spec_c, mp.param["pre_filter_start"], mp.param["pre_filter_stop"]
- )
- else:
- gp = 1
- for b in range(
- mp.param["pre_filter_start"] + 1, mp.param["pre_filter_stop"]
- ):
- g = math.pow(
- 10, -(b - mp.param["pre_filter_start"]) * (3.5 - gp) / 20.0
- )
- gp = g
- spec_c[:, b, :] *= g
-
- return np.asfortranarray(spec_c)
-
-
-def spectrogram_to_image(spec, mode="magnitude"):
- if mode == "magnitude":
- if np.iscomplexobj(spec):
- y = np.abs(spec)
- else:
- y = spec
- y = np.log10(y**2 + 1e-8)
- elif mode == "phase":
- if np.iscomplexobj(spec):
- y = np.angle(spec)
- else:
- y = spec
-
- y -= y.min()
- y *= 255 / y.max()
- img = np.uint8(y)
-
- if y.ndim == 3:
- img = img.transpose(1, 2, 0)
- img = np.concatenate([np.max(img, axis=2, keepdims=True), img], axis=2)
-
- return img
-
-
-def reduce_vocal_aggressively(X, y, softmask):
- v = X - y
- y_mag_tmp = np.abs(y)
- v_mag_tmp = np.abs(v)
-
- v_mask = v_mag_tmp > y_mag_tmp
- y_mag = np.clip(y_mag_tmp - v_mag_tmp * v_mask * softmask, 0, np.inf)
-
- return y_mag * np.exp(1.0j * np.angle(y))
-
-
-def mask_silence(mag, ref, thres=0.2, min_range=64, fade_size=32):
- if min_range < fade_size * 2:
- raise ValueError("min_range must be >= fade_area * 2")
-
- mag = mag.copy()
-
- idx = np.where(ref.mean(axis=(0, 1)) < thres)[0]
- starts = np.insert(idx[np.where(np.diff(idx) != 1)[0] + 1], 0, idx[0])
- ends = np.append(idx[np.where(np.diff(idx) != 1)[0]], idx[-1])
- uninformative = np.where(ends - starts > min_range)[0]
- if len(uninformative) > 0:
- starts = starts[uninformative]
- ends = ends[uninformative]
- old_e = None
- for s, e in zip(starts, ends):
- if old_e is not None and s - old_e < fade_size:
- s = old_e - fade_size * 2
-
- if s != 0:
- weight = np.linspace(0, 1, fade_size)
- mag[:, :, s : s + fade_size] += weight * ref[:, :, s : s + fade_size]
- else:
- s -= fade_size
-
- if e != mag.shape[2]:
- weight = np.linspace(1, 0, fade_size)
- mag[:, :, e - fade_size : e] += weight * ref[:, :, e - fade_size : e]
- else:
- e += fade_size
-
- mag[:, :, s + fade_size : e - fade_size] += ref[
- :, :, s + fade_size : e - fade_size
- ]
- old_e = e
-
- return mag
-
-
-def align_wave_head_and_tail(a, b):
- l = min([a[0].size, b[0].size])
-
- return a[:l, :l], b[:l, :l]
-
-
-def cache_or_load(mix_path, inst_path, mp):
- mix_basename = os.path.splitext(os.path.basename(mix_path))[0]
- inst_basename = os.path.splitext(os.path.basename(inst_path))[0]
-
- cache_dir = "mph{}".format(
- hashlib.sha1(json.dumps(mp.param, sort_keys=True).encode("utf-8")).hexdigest()
- )
- mix_cache_dir = os.path.join("cache", cache_dir)
- inst_cache_dir = os.path.join("cache", cache_dir)
-
- os.makedirs(mix_cache_dir, exist_ok=True)
- os.makedirs(inst_cache_dir, exist_ok=True)
-
- mix_cache_path = os.path.join(mix_cache_dir, mix_basename + ".npy")
- inst_cache_path = os.path.join(inst_cache_dir, inst_basename + ".npy")
-
- if os.path.exists(mix_cache_path) and os.path.exists(inst_cache_path):
- X_spec_m = np.load(mix_cache_path)
- y_spec_m = np.load(inst_cache_path)
- else:
- X_wave, y_wave, X_spec_s, y_spec_s = {}, {}, {}, {}
-
- for d in range(len(mp.param["band"]), 0, -1):
- bp = mp.param["band"][d]
-
- if d == len(mp.param["band"]): # high-end band
- X_wave[d], _ = librosa.load(
- mix_path, bp["sr"], False, dtype=np.float32, res_type=bp["res_type"]
- )
- y_wave[d], _ = librosa.load(
- inst_path,
- bp["sr"],
- False,
- dtype=np.float32,
- res_type=bp["res_type"],
- )
- else: # lower bands
- X_wave[d] = librosa.resample(
- X_wave[d + 1],
- mp.param["band"][d + 1]["sr"],
- bp["sr"],
- res_type=bp["res_type"],
- )
- y_wave[d] = librosa.resample(
- y_wave[d + 1],
- mp.param["band"][d + 1]["sr"],
- bp["sr"],
- res_type=bp["res_type"],
- )
-
- X_wave[d], y_wave[d] = align_wave_head_and_tail(X_wave[d], y_wave[d])
-
- X_spec_s[d] = wave_to_spectrogram(
- X_wave[d],
- bp["hl"],
- bp["n_fft"],
- mp.param["mid_side"],
- mp.param["mid_side_b2"],
- mp.param["reverse"],
- )
- y_spec_s[d] = wave_to_spectrogram(
- y_wave[d],
- bp["hl"],
- bp["n_fft"],
- mp.param["mid_side"],
- mp.param["mid_side_b2"],
- mp.param["reverse"],
- )
-
- del X_wave, y_wave
-
- X_spec_m = combine_spectrograms(X_spec_s, mp)
- y_spec_m = combine_spectrograms(y_spec_s, mp)
-
- if X_spec_m.shape != y_spec_m.shape:
-            raise ValueError("The combined spectrograms have different shapes: " + mix_path)
-
- _, ext = os.path.splitext(mix_path)
-
- np.save(mix_cache_path, X_spec_m)
- np.save(inst_cache_path, y_spec_m)
-
- return X_spec_m, y_spec_m
-
-
-def spectrogram_to_wave(spec, hop_length, mid_side, mid_side_b2, reverse):
- spec_left = np.asfortranarray(spec[0])
- spec_right = np.asfortranarray(spec[1])
-
- wave_left = librosa.istft(spec_left, hop_length=hop_length)
- wave_right = librosa.istft(spec_right, hop_length=hop_length)
-
- if reverse:
- return np.asfortranarray([np.flip(wave_left), np.flip(wave_right)])
- elif mid_side:
- return np.asfortranarray(
- [np.add(wave_left, wave_right / 2), np.subtract(wave_left, wave_right / 2)]
- )
- elif mid_side_b2:
- return np.asfortranarray(
- [
- np.add(wave_right / 1.25, 0.4 * wave_left),
- np.subtract(wave_left / 1.25, 0.4 * wave_right),
- ]
- )
- else:
- return np.asfortranarray([wave_left, wave_right])
-
-
-def spectrogram_to_wave_mt(spec, hop_length, mid_side, reverse, mid_side_b2):
- import threading
-
- spec_left = np.asfortranarray(spec[0])
- spec_right = np.asfortranarray(spec[1])
-
- def run_thread(**kwargs):
- global wave_left
- wave_left = librosa.istft(**kwargs)
-
- thread = threading.Thread(
- target=run_thread, kwargs={"stft_matrix": spec_left, "hop_length": hop_length}
- )
- thread.start()
- wave_right = librosa.istft(spec_right, hop_length=hop_length)
- thread.join()
-
- if reverse:
- return np.asfortranarray([np.flip(wave_left), np.flip(wave_right)])
- elif mid_side:
- return np.asfortranarray(
- [np.add(wave_left, wave_right / 2), np.subtract(wave_left, wave_right / 2)]
- )
- elif mid_side_b2:
- return np.asfortranarray(
- [
- np.add(wave_right / 1.25, 0.4 * wave_left),
- np.subtract(wave_left / 1.25, 0.4 * wave_right),
- ]
- )
- else:
- return np.asfortranarray([wave_left, wave_right])
-
-
-def cmb_spectrogram_to_wave(spec_m, mp, extra_bins_h=None, extra_bins=None):
- wave_band = {}
- bands_n = len(mp.param["band"])
- offset = 0
-
- for d in range(1, bands_n + 1):
- bp = mp.param["band"][d]
- spec_s = np.ndarray(
- shape=(2, bp["n_fft"] // 2 + 1, spec_m.shape[2]), dtype=complex
- )
- h = bp["crop_stop"] - bp["crop_start"]
- spec_s[:, bp["crop_start"] : bp["crop_stop"], :] = spec_m[
- :, offset : offset + h, :
- ]
-
- offset += h
- if d == bands_n: # higher
- if extra_bins_h: # if --high_end_process bypass
- max_bin = bp["n_fft"] // 2
- spec_s[:, max_bin - extra_bins_h : max_bin, :] = extra_bins[
- :, :extra_bins_h, :
- ]
- if bp["hpf_start"] > 0:
- spec_s = fft_hp_filter(spec_s, bp["hpf_start"], bp["hpf_stop"] - 1)
- if bands_n == 1:
- wave = spectrogram_to_wave(
- spec_s,
- bp["hl"],
- mp.param["mid_side"],
- mp.param["mid_side_b2"],
- mp.param["reverse"],
- )
- else:
- wave = np.add(
- wave,
- spectrogram_to_wave(
- spec_s,
- bp["hl"],
- mp.param["mid_side"],
- mp.param["mid_side_b2"],
- mp.param["reverse"],
- ),
- )
- else:
- sr = mp.param["band"][d + 1]["sr"]
- if d == 1: # lower
- spec_s = fft_lp_filter(spec_s, bp["lpf_start"], bp["lpf_stop"])
- wave = librosa.resample(
- spectrogram_to_wave(
- spec_s,
- bp["hl"],
- mp.param["mid_side"],
- mp.param["mid_side_b2"],
- mp.param["reverse"],
- ),
- bp["sr"],
- sr,
- res_type="sinc_fastest",
- )
- else: # mid
- spec_s = fft_hp_filter(spec_s, bp["hpf_start"], bp["hpf_stop"] - 1)
- spec_s = fft_lp_filter(spec_s, bp["lpf_start"], bp["lpf_stop"])
- wave2 = np.add(
- wave,
- spectrogram_to_wave(
- spec_s,
- bp["hl"],
- mp.param["mid_side"],
- mp.param["mid_side_b2"],
- mp.param["reverse"],
- ),
- )
- # wave = librosa.core.resample(wave2, bp['sr'], sr, res_type="sinc_fastest")
- wave = librosa.core.resample(wave2, bp["sr"], sr, res_type="scipy")
-
- return wave.T
-
-
-def fft_lp_filter(spec, bin_start, bin_stop):
- g = 1.0
- for b in range(bin_start, bin_stop):
- g -= 1 / (bin_stop - bin_start)
- spec[:, b, :] = g * spec[:, b, :]
-
- spec[:, bin_stop:, :] *= 0
-
- return spec
-
-
-def fft_hp_filter(spec, bin_start, bin_stop):
- g = 1.0
- for b in range(bin_start, bin_stop, -1):
- g -= 1 / (bin_start - bin_stop)
- spec[:, b, :] = g * spec[:, b, :]
-
- spec[:, 0 : bin_stop + 1, :] *= 0
-
- return spec
-
-
-def mirroring(a, spec_m, input_high_end, mp):
- if "mirroring" == a:
- mirror = np.flip(
- np.abs(
- spec_m[
- :,
- mp.param["pre_filter_start"]
- - 10
- - input_high_end.shape[1] : mp.param["pre_filter_start"]
- - 10,
- :,
- ]
- ),
- 1,
- )
- mirror = mirror * np.exp(1.0j * np.angle(input_high_end))
-
- return np.where(
- np.abs(input_high_end) <= np.abs(mirror), input_high_end, mirror
- )
-
- if "mirroring2" == a:
- mirror = np.flip(
- np.abs(
- spec_m[
- :,
- mp.param["pre_filter_start"]
- - 10
- - input_high_end.shape[1] : mp.param["pre_filter_start"]
- - 10,
- :,
- ]
- ),
- 1,
- )
- mi = np.multiply(mirror, input_high_end * 1.7)
-
- return np.where(np.abs(input_high_end) <= np.abs(mi), input_high_end, mi)
-
-
-def ensembling(a, specs):
- for i in range(1, len(specs)):
- if i == 1:
- spec = specs[0]
-
- ln = min([spec.shape[2], specs[i].shape[2]])
- spec = spec[:, :, :ln]
- specs[i] = specs[i][:, :, :ln]
-
- if "min_mag" == a:
- spec = np.where(np.abs(specs[i]) <= np.abs(spec), specs[i], spec)
- if "max_mag" == a:
- spec = np.where(np.abs(specs[i]) >= np.abs(spec), specs[i], spec)
-
- return spec
-
-
-def stft(wave, nfft, hl):
- wave_left = np.asfortranarray(wave[0])
- wave_right = np.asfortranarray(wave[1])
- spec_left = librosa.stft(wave_left, nfft, hop_length=hl)
- spec_right = librosa.stft(wave_right, nfft, hop_length=hl)
- spec = np.asfortranarray([spec_left, spec_right])
-
- return spec
-
-
-def istft(spec, hl):
- spec_left = np.asfortranarray(spec[0])
- spec_right = np.asfortranarray(spec[1])
-
- wave_left = librosa.istft(spec_left, hop_length=hl)
- wave_right = librosa.istft(spec_right, hop_length=hl)
-    wave = np.asfortranarray([wave_left, wave_right])
-
-    return wave
-
-
-if __name__ == "__main__":
- import cv2
- import sys
- import time
- import argparse
- from model_param_init import ModelParameters
-
- p = argparse.ArgumentParser()
- p.add_argument(
- "--algorithm",
- "-a",
- type=str,
- choices=["invert", "invert_p", "min_mag", "max_mag", "deep", "align"],
- default="min_mag",
- )
- p.add_argument(
- "--model_params",
- "-m",
- type=str,
- default=os.path.join("modelparams", "1band_sr44100_hl512.json"),
- )
- p.add_argument("--output_name", "-o", type=str, default="output")
- p.add_argument("--vocals_only", "-v", action="store_true")
- p.add_argument("input", nargs="+")
- args = p.parse_args()
-
- start_time = time.time()
-
- if args.algorithm.startswith("invert") and len(args.input) != 2:
-        raise ValueError("The invert algorithms require exactly two input files.")
-
- if not args.algorithm.startswith("invert") and len(args.input) < 2:
- raise ValueError("There must be at least two input files.")
-
- wave, specs = {}, {}
- mp = ModelParameters(args.model_params)
-
- for i in range(len(args.input)):
- spec = {}
-
- for d in range(len(mp.param["band"]), 0, -1):
- bp = mp.param["band"][d]
-
- if d == len(mp.param["band"]): # high-end band
- wave[d], _ = librosa.load(
- args.input[i],
- bp["sr"],
- False,
- dtype=np.float32,
- res_type=bp["res_type"],
- )
-
- if len(wave[d].shape) == 1: # mono to stereo
- wave[d] = np.array([wave[d], wave[d]])
- else: # lower bands
- wave[d] = librosa.resample(
- wave[d + 1],
- mp.param["band"][d + 1]["sr"],
- bp["sr"],
- res_type=bp["res_type"],
- )
-
- spec[d] = wave_to_spectrogram(
- wave[d],
- bp["hl"],
- bp["n_fft"],
- mp.param["mid_side"],
- mp.param["mid_side_b2"],
- mp.param["reverse"],
- )
-
- specs[i] = combine_spectrograms(spec, mp)
-
- del wave
-
- if args.algorithm == "deep":
-        d_spec = np.where(np.abs(specs[0]) <= np.abs(specs[1]), specs[0], specs[1])
- v_spec = d_spec - specs[1]
- sf.write(
- os.path.join("{}.wav".format(args.output_name)),
- cmb_spectrogram_to_wave(v_spec, mp),
- mp.param["sr"],
- )
-
- if args.algorithm.startswith("invert"):
- ln = min([specs[0].shape[2], specs[1].shape[2]])
- specs[0] = specs[0][:, :, :ln]
- specs[1] = specs[1][:, :, :ln]
-
- if "invert_p" == args.algorithm:
- X_mag = np.abs(specs[0])
- y_mag = np.abs(specs[1])
- max_mag = np.where(X_mag >= y_mag, X_mag, y_mag)
- v_spec = specs[1] - max_mag * np.exp(1.0j * np.angle(specs[0]))
- else:
- specs[1] = reduce_vocal_aggressively(specs[0], specs[1], 0.2)
- v_spec = specs[0] - specs[1]
-
- if not args.vocals_only:
- X_mag = np.abs(specs[0])
- y_mag = np.abs(specs[1])
- v_mag = np.abs(v_spec)
-
- X_image = spectrogram_to_image(X_mag)
- y_image = spectrogram_to_image(y_mag)
- v_image = spectrogram_to_image(v_mag)
-
- cv2.imwrite("{}_X.png".format(args.output_name), X_image)
- cv2.imwrite("{}_y.png".format(args.output_name), y_image)
- cv2.imwrite("{}_v.png".format(args.output_name), v_image)
-
- sf.write(
- "{}_X.wav".format(args.output_name),
- cmb_spectrogram_to_wave(specs[0], mp),
- mp.param["sr"],
- )
- sf.write(
- "{}_y.wav".format(args.output_name),
- cmb_spectrogram_to_wave(specs[1], mp),
- mp.param["sr"],
- )
-
- sf.write(
- "{}_v.wav".format(args.output_name),
- cmb_spectrogram_to_wave(v_spec, mp),
- mp.param["sr"],
- )
- else:
- if not args.algorithm == "deep":
- sf.write(
- os.path.join("ensembled", "{}.wav".format(args.output_name)),
- cmb_spectrogram_to_wave(ensembling(args.algorithm, specs), mp),
- mp.param["sr"],
- )
-
- if args.algorithm == "align":
- trackalignment = [
- {
- "file1": '"{}"'.format(args.input[0]),
- "file2": '"{}"'.format(args.input[1]),
- }
- ]
-
- for i, e in tqdm(enumerate(trackalignment), desc="Performing Alignment..."):
- os.system(f"python lib/align_tracks.py {e['file1']} {e['file2']}")
-
- # print('Total time: {0:.{1}f}s'.format(time.time() - start_time, 1))
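
As a quick orientation to the two core helpers above, here is a minimal round-trip sketch. It is illustrative only: the audio file name is a placeholder, the snippet assumes it runs inside (or imports from) this module, and it relies on the same older librosa call signatures the module itself uses.

```python
# Round-trip: stereo wave -> mid/side spectrogram -> wave (illustrative sketch).
import librosa
import soundfile as sf

n_fft, hop_length = 2048, 512
wave, sr = librosa.load("input.wav", sr=44100, mono=False)  # placeholder path; expects a stereo file

spec = wave_to_spectrogram(wave, hop_length, n_fft, mid_side=True, mid_side_b2=False, reverse=False)
restored = spectrogram_to_wave(spec, hop_length, True, False, False)  # undoes the mid/side encoding

sf.write("roundtrip.wav", restored.T, sr)  # restored is (2, samples), so transpose for soundfile
```
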
diff --git a/spaces/834188divi/cardiffnlp-twitter-roberta-base-sentiment-latest/README.md b/spaces/834188divi/cardiffnlp-twitter-roberta-base-sentiment-latest/README.md
deleted file mode 100644
index 5997fa01d97217a6febb6302a0c89025a0ad35b9..0000000000000000000000000000000000000000
--- a/spaces/834188divi/cardiffnlp-twitter-roberta-base-sentiment-latest/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Cardiffnlp Twitter Roberta Base Sentiment Latest
-emoji: 📉
-colorFrom: pink
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.39.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/AI-Hobbyist/Hoyo-RVC/onnx_inference_demo.py b/spaces/AI-Hobbyist/Hoyo-RVC/onnx_inference_demo.py
deleted file mode 100644
index 14e75d0eb4a5dc3542ce1ed6d462c70c7f4e5679..0000000000000000000000000000000000000000
--- a/spaces/AI-Hobbyist/Hoyo-RVC/onnx_inference_demo.py
+++ /dev/null
@@ -1,20 +0,0 @@
-import soundfile
-from infer_pack.onnx_inference import OnnxRVC
-
-hop_size = 512
-sampling_rate = 40000  # sampling rate
-f0_up_key = 0  # pitch shift in semitones (up/down)
-sid = 0  # speaker/character ID
-f0_method = "dio"  # F0 extraction algorithm
-model_path = "ShirohaRVC.onnx"  # full path to the model
-vec_name = "vec-256-layer-9"  # expanded internally to f"pretrained/{vec_name}.onnx"; an ONNX vec model is required there
-wav_path = "123.wav"  # input path or BytesIO instance
-out_path = "out.wav"  # output path or BytesIO instance
-
-model = OnnxRVC(
- model_path, vec_path=vec_name, sr=sampling_rate, hop_size=hop_size, device="cuda"
-)
-
-audio = model.inference(wav_path, sid, f0_method=f0_method, f0_up_key=f0_up_key)
-
-soundfile.write(out_path, audio, sampling_rate)
diff --git a/spaces/AIConsultant/MusicGen/audiocraft/grids/compression/__init__.py b/spaces/AIConsultant/MusicGen/audiocraft/grids/compression/__init__.py
deleted file mode 100644
index 5b688528f1f3e4efc0c2a1e9d490f33c4158b3f0..0000000000000000000000000000000000000000
--- a/spaces/AIConsultant/MusicGen/audiocraft/grids/compression/__init__.py
+++ /dev/null
@@ -1,6 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-"""EnCodec grids."""
diff --git a/spaces/AIConsultant/MusicGen/tests/utils/__init__.py b/spaces/AIConsultant/MusicGen/tests/utils/__init__.py
deleted file mode 100644
index 0952fcc3f57e34b3747962e9ebd6fc57aeea63fa..0000000000000000000000000000000000000000
--- a/spaces/AIConsultant/MusicGen/tests/utils/__init__.py
+++ /dev/null
@@ -1,5 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/tts/syntaspeech/syntaspeech.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/tts/syntaspeech/syntaspeech.py
deleted file mode 100644
index a530acad5e5a0bcc547d6a866156cf2c357eeda6..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/tts/syntaspeech/syntaspeech.py
+++ /dev/null
@@ -1,277 +0,0 @@
-import math
-import torch
-import torch.nn.functional as F
-from torch import nn
-from torch.nn import Linear
-
-from text_to_speech.modules.commons.conv import ConvBlocks, ConditionalConvBlocks
-from text_to_speech.modules.commons.layers import Embedding
-from text_to_speech.modules.commons.rel_transformer import RelTransformerEncoder
-from text_to_speech.modules.commons.transformer import MultiheadAttention, FFTBlocks
-from text_to_speech.modules.tts.commons.align_ops import clip_mel2token_to_multiple, build_word_mask, expand_states, mel2ph_to_mel2word
-from text_to_speech.modules.tts.fs import FS_DECODERS, FastSpeech
-from text_to_speech.modules.tts.portaspeech.fvae import SyntaFVAE, FVAE
-from text_to_speech.utils.commons.meters import Timer
-from text_to_speech.utils.nn.seq_utils import group_hidden_by_segs
-from text_to_speech.modules.commons.nar_tts_modules import SyntaDurationPredictor
-
-
-class SinusoidalPosEmb(nn.Module):
- def __init__(self, dim):
- super().__init__()
- self.dim = dim
-
- def forward(self, x):
- """
-
- :param x: [B, T]
- :return: [B, T, H]
- """
- device = x.device
- half_dim = self.dim // 2
- emb = math.log(10000) / (half_dim - 1)
- emb = torch.exp(torch.arange(half_dim, device=device) * -emb)
- emb = x[:, :, None] * emb[None, :]
- emb = torch.cat((emb.sin(), emb.cos()), dim=-1)
- return emb
-
-
-class SyntaSpeech(FastSpeech):
- def __init__(self, ph_dict_size, word_dict_size, hparams, out_dims=None):
- super().__init__(ph_dict_size, hparams, out_dims)
- # build linguistic encoder
- if hparams['num_spk'] > 1:
- self.spk_embed_proj = Embedding(hparams['num_spk'], self.hidden_size)
- if hparams['use_word_encoder']:
- self.word_encoder = RelTransformerEncoder(
- word_dict_size, self.hidden_size, self.hidden_size, self.hidden_size, 2,
- hparams['word_enc_layers'], hparams['enc_ffn_kernel_size'])
- if hparams['dur_level'] == 'word':
- if hparams['word_encoder_type'] == 'rel_fft':
- self.ph2word_encoder = RelTransformerEncoder(
- 0, self.hidden_size, self.hidden_size, self.hidden_size, 2,
- hparams['word_enc_layers'], hparams['enc_ffn_kernel_size'])
- if hparams['word_encoder_type'] == 'fft':
- self.ph2word_encoder = FFTBlocks(
- self.hidden_size, hparams['word_enc_layers'], 1, num_heads=hparams['num_heads'])
- self.sin_pos = SinusoidalPosEmb(self.hidden_size)
- self.enc_pos_proj = nn.Linear(2 * self.hidden_size, self.hidden_size)
- self.dec_query_proj = nn.Linear(2 * self.hidden_size, self.hidden_size)
- self.dec_res_proj = nn.Linear(2 * self.hidden_size, self.hidden_size)
- self.attn = MultiheadAttention(self.hidden_size, 1, encoder_decoder_attention=True, bias=False)
- self.attn.enable_torch_version = False
- if hparams['text_encoder_postnet']:
- self.text_encoder_postnet = ConvBlocks(
- self.hidden_size, self.hidden_size, [1] * 3, 5, layers_in_block=2)
- else:
- self.sin_pos = SinusoidalPosEmb(self.hidden_size)
-
- predictor_hidden = hparams['predictor_hidden'] if hparams['predictor_hidden'] > 0 else self.hidden_size
- self.dur_predictor = SyntaDurationPredictor(
- self.hidden_size,
- n_chans=predictor_hidden,
- n_layers=hparams['dur_predictor_layers'],
- dropout_rate=hparams['predictor_dropout'],
- kernel_size=hparams['dur_predictor_kernel'])
- # build VAE decoder
- if hparams['use_fvae']:
- del self.decoder
- del self.mel_out
- if hparams.get("use_gae_in_prior", True):
- self.fvae = SyntaFVAE(
- c_in_out=self.out_dims,
- hidden_size=hparams['fvae_enc_dec_hidden'], c_latent=hparams['latent_size'],
- kernel_size=hparams['fvae_kernel_size'],
- enc_n_layers=hparams['fvae_enc_n_layers'],
- dec_n_layers=hparams['fvae_dec_n_layers'],
- c_cond=self.hidden_size,
- use_prior_flow=hparams['use_prior_flow'],
- flow_hidden=hparams['prior_flow_hidden'],
- flow_kernel_size=hparams['prior_flow_kernel_size'],
- flow_n_steps=hparams['prior_flow_n_blocks'],
- strides=[hparams['fvae_strides']],
- encoder_type=hparams['fvae_encoder_type'],
- decoder_type=hparams['fvae_decoder_type'],
- )
- else:
- self.fvae = FVAE(
- c_in_out=self.out_dims,
- hidden_size=hparams['fvae_enc_dec_hidden'], c_latent=hparams['latent_size'],
- kernel_size=hparams['fvae_kernel_size'],
- enc_n_layers=hparams['fvae_enc_n_layers'],
- dec_n_layers=hparams['fvae_dec_n_layers'],
- c_cond=self.hidden_size,
- use_prior_flow=hparams['use_prior_flow'],
- flow_hidden=hparams['prior_flow_hidden'],
- flow_kernel_size=hparams['prior_flow_kernel_size'],
- flow_n_steps=hparams['prior_flow_n_blocks'],
- strides=[hparams['fvae_strides']],
- encoder_type=hparams['fvae_encoder_type'],
- decoder_type=hparams['fvae_decoder_type'],
- )
- else:
- self.decoder = FS_DECODERS[hparams['decoder_type']](hparams)
- self.mel_out = Linear(self.hidden_size, self.out_dims, bias=True)
- if hparams['use_pitch_embed']:
- self.pitch_embed = Embedding(300, self.hidden_size, 0)
- if self.hparams['add_word_pos']:
- self.word_pos_proj = Linear(self.hidden_size, self.hidden_size)
-
- def build_embedding(self, dictionary, embed_dim):
- num_embeddings = len(dictionary)
- emb = Embedding(num_embeddings, embed_dim, self.padding_idx)
- return emb
-
- def forward(self, txt_tokens, word_tokens, ph2word, word_len, mel2word=None, mel2ph=None,
- spk_embed=None, spk_id=None, pitch=None, infer=False, tgt_mels=None,
- global_step=None, graph_lst=None, etypes_lst=None, *args, **kwargs):
-
- if self.hparams['use_spk_embed']:
- spk_embed = spk_embed
- elif self.hparams['use_spk_id']:
- spk_embed = self.spk_embed_proj(spk_id)[:, None, :]
- else:
- spk_embed = 0
-
- ret = {}
- style_embed = self.forward_style_embed(spk_embed, spk_id) # speaker embedding, [B, 1, C]
- x, tgt_nonpadding = self.run_text_encoder(
- txt_tokens, word_tokens, ph2word, word_len, mel2word, mel2ph, style_embed, ret, graph_lst=graph_lst, etypes_lst=etypes_lst, **kwargs)
-        x = x + style_embed  # adding the speaker/style embedding here may be necessary for multi-speaker synthesis
- x = x * tgt_nonpadding
- ret['nonpadding'] = tgt_nonpadding
- if self.hparams['use_pitch_embed']:
- x = x + self.pitch_embed(pitch)
- ret['decoder_inp'] = x
- if infer and (mel2ph is None or mel2word is None):
- mel2word = ret['mel2word']
- ret['mel_out_fvae'] = ret['mel_out'] = self.run_decoder(x, tgt_nonpadding, ret, infer, tgt_mels, global_step,
- mel2word=mel2word, ph2word=ph2word, graph_lst=graph_lst, etypes_lst=etypes_lst)
- return ret
-
- def run_text_encoder(self, txt_tokens, word_tokens, ph2word, word_len, mel2word, mel2ph, style_embed, ret, graph_lst, etypes_lst, **kwargs):
-        word2word = torch.arange(word_len)[None, :].to(ph2word.device) + 1  # [1, T_word]
- src_nonpadding = (txt_tokens > 0).float()[:, :, None]
- use_bert = self.hparams.get("use_bert") is True
- if use_bert:
- ph_encoder_out = self.encoder(txt_tokens, bert_feats=kwargs['bert_feats'], ph2word=ph2word,
- graph_lst=graph_lst, etypes_lst=etypes_lst,
- cl_feats=kwargs['cl_feats'], ret=ret) * src_nonpadding + style_embed
- else:
- ph_encoder_out = self.encoder(txt_tokens) * src_nonpadding + style_embed
- if self.hparams['use_word_encoder']:
- word_encoder_out = self.word_encoder(word_tokens) + style_embed
- ph_encoder_out = ph_encoder_out + expand_states(word_encoder_out, ph2word)
-
- dur_input = ph_encoder_out * src_nonpadding
- if self.hparams['dur_level'] == 'word':
- word_encoder_out = 0
- h_ph_gb_word = group_hidden_by_segs(ph_encoder_out, ph2word, word_len)[0]
- word_encoder_out = word_encoder_out + self.ph2word_encoder(h_ph_gb_word)
- if self.hparams['use_word_encoder']:
- word_encoder_out = word_encoder_out + self.word_encoder(word_tokens)
- mel2word = self.forward_dur(dur_input, mel2word, ret, ph2word=ph2word, word_len=word_len, graph_lst=graph_lst, etypes_lst=etypes_lst)
- mel2word = clip_mel2token_to_multiple(mel2word, self.hparams['frames_multiple'])
- ret['mel2word'] = mel2word
- tgt_nonpadding = (mel2word > 0).float()[:, :, None]
- enc_pos = self.get_pos_embed(word2word, ph2word) # [B, T_ph, H]
- dec_pos = self.get_pos_embed(word2word, mel2word) # [B, T_mel, H]
- dec_word_mask = build_word_mask(mel2word, ph2word) # [B, T_mel, T_ph]
- x, weight = self.attention(ph_encoder_out, enc_pos, word_encoder_out, dec_pos, mel2word, dec_word_mask)
- if self.hparams['add_word_pos']:
- x = x + self.word_pos_proj(dec_pos)
- ret['attn'] = weight
- else:
- mel2ph = self.forward_dur(dur_input, mel2ph, ret)
- mel2ph = clip_mel2token_to_multiple(mel2ph, self.hparams['frames_multiple'])
- mel2word = mel2ph_to_mel2word(mel2ph, ph2word)
- x = expand_states(ph_encoder_out, mel2ph)
- if self.hparams['add_word_pos']:
- dec_pos = self.get_pos_embed(word2word, mel2word) # [B, T_mel, H]
- x = x + self.word_pos_proj(dec_pos)
- tgt_nonpadding = (mel2ph > 0).float()[:, :, None]
- if self.hparams['use_word_encoder']:
- x = x + expand_states(word_encoder_out, mel2word)
- return x, tgt_nonpadding
-
- def attention(self, ph_encoder_out, enc_pos, word_encoder_out, dec_pos, mel2word, dec_word_mask):
- ph_kv = self.enc_pos_proj(torch.cat([ph_encoder_out, enc_pos], -1))
- word_enc_out_expend = expand_states(word_encoder_out, mel2word)
- word_enc_out_expend = torch.cat([word_enc_out_expend, dec_pos], -1)
- if self.hparams['text_encoder_postnet']:
- word_enc_out_expend = self.dec_res_proj(word_enc_out_expend)
- word_enc_out_expend = self.text_encoder_postnet(word_enc_out_expend)
- dec_q = x_res = word_enc_out_expend
- else:
- dec_q = self.dec_query_proj(word_enc_out_expend)
- x_res = self.dec_res_proj(word_enc_out_expend)
- ph_kv, dec_q = ph_kv.transpose(0, 1), dec_q.transpose(0, 1)
- x, (weight, _) = self.attn(dec_q, ph_kv, ph_kv, attn_mask=(1 - dec_word_mask) * -1e9)
- x = x.transpose(0, 1)
- x = x + x_res
- return x, weight
-
- def run_decoder(self, x, tgt_nonpadding, ret, infer, tgt_mels=None, global_step=0,
- mel2word=None, ph2word=None, graph_lst=None, etypes_lst=None):
- if not self.hparams['use_fvae']:
- x = self.decoder(x)
- x = self.mel_out(x)
- ret['kl'] = 0
- return x * tgt_nonpadding
- else:
- # x is the phoneme encoding
- x = x.transpose(1, 2) # [B, H, T]
- tgt_nonpadding_BHT = tgt_nonpadding.transpose(1, 2) # [B, H, T]
- if infer:
- z = self.fvae(cond=x, infer=True, mel2word=mel2word, ph2word=ph2word, graph_lst=graph_lst, etypes_lst=etypes_lst)
- else:
- tgt_mels = tgt_mels.transpose(1, 2) # [B, 80, T]
- z, ret['kl'], ret['z_p'], ret['m_q'], ret['logs_q'] = self.fvae(
- tgt_mels, tgt_nonpadding_BHT, cond=x, mel2word=mel2word, ph2word=ph2word, graph_lst=graph_lst, etypes_lst=etypes_lst)
- if global_step < self.hparams['posterior_start_steps']:
- z = torch.randn_like(z)
- x_recon = self.fvae.decoder(z, nonpadding=tgt_nonpadding_BHT, cond=x).transpose(1, 2)
- ret['pre_mel_out'] = x_recon
- return x_recon
-
- def forward_dur(self, dur_input, mel2word, ret, **kwargs):
- """
-
- :param dur_input: [B, T_txt, H]
- :param mel2ph: [B, T_mel]
- :param txt_tokens: [B, T_txt]
- :param ret:
- :return:
- """
- word_len = kwargs['word_len']
- ph2word = kwargs['ph2word']
- graph_lst = kwargs['graph_lst']
- etypes_lst = kwargs['etypes_lst']
- src_padding = dur_input.data.abs().sum(-1) == 0
- dur_input = dur_input.detach() + self.hparams['predictor_grad'] * (dur_input - dur_input.detach())
- dur = self.dur_predictor(dur_input, src_padding, ph2word, graph_lst, etypes_lst)
-
- B, T_ph = ph2word.shape
- dur = torch.zeros([B, word_len.max() + 1]).to(ph2word.device).scatter_add(1, ph2word, dur)
- dur = dur[:, 1:]
- ret['dur'] = dur
- if mel2word is None:
- mel2word = self.length_regulator(dur).detach()
- return mel2word
-
- def get_pos_embed(self, word2word, x2word):
- x_pos = build_word_mask(word2word, x2word).float() # [B, T_word, T_ph]
- x_pos = (x_pos.cumsum(-1) / x_pos.sum(-1).clamp(min=1)[..., None] * x_pos).sum(1)
- x_pos = self.sin_pos(x_pos.float()) # [B, T_ph, H]
- return x_pos
-
- def store_inverse_all(self):
- def remove_weight_norm(m):
- try:
- if hasattr(m, 'store_inverse'):
- m.store_inverse()
- nn.utils.remove_weight_norm(m)
- except ValueError: # this module didn't have weight norm
- return
-
- self.apply(remove_weight_norm)
diff --git a/spaces/AIGC-Audio/Make_An_Audio/ldm/modules/encoders/open_clap/pann_model.py b/spaces/AIGC-Audio/Make_An_Audio/ldm/modules/encoders/open_clap/pann_model.py
deleted file mode 100644
index 109db5f418a0bad32cae2452742589ff52a19b85..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/Make_An_Audio/ldm/modules/encoders/open_clap/pann_model.py
+++ /dev/null
@@ -1,543 +0,0 @@
-# PANNs: Large-Scale Pretrained Audio Neural Networks for Audio Pattern Recognition
-# Reference from https://github.com/qiuqiangkong/audioset_tagging_cnn
-# Some layers are re-designed for CLAP
-import os
-os.environ['NUMBA_CACHE_DIR'] = '/tmp/'
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from torchlibrosa.stft import Spectrogram, LogmelFilterBank
-from torchlibrosa.augmentation import SpecAugmentation
-
-from .utils import do_mixup, interpolate, pad_framewise_output
-from .feature_fusion import iAFF, AFF, DAF
-
-
-def init_layer(layer):
- """Initialize a Linear or Convolutional layer. """
- nn.init.xavier_uniform_(layer.weight)
-
- if hasattr(layer, 'bias'):
- if layer.bias is not None:
- layer.bias.data.fill_(0.)
-
-
-def init_bn(bn):
- """Initialize a Batchnorm layer. """
- bn.bias.data.fill_(0.)
- bn.weight.data.fill_(1.)
-
-
-class ConvBlock(nn.Module):
- def __init__(self, in_channels, out_channels):
-
- super(ConvBlock, self).__init__()
-
- self.conv1 = nn.Conv2d(in_channels=in_channels,
- out_channels=out_channels,
- kernel_size=(3, 3), stride=(1, 1),
- padding=(1, 1), bias=False)
-
- self.conv2 = nn.Conv2d(in_channels=out_channels,
- out_channels=out_channels,
- kernel_size=(3, 3), stride=(1, 1),
- padding=(1, 1), bias=False)
-
- self.bn1 = nn.BatchNorm2d(out_channels)
- self.bn2 = nn.BatchNorm2d(out_channels)
-
- self.init_weight()
-
- def init_weight(self):
- init_layer(self.conv1)
- init_layer(self.conv2)
- init_bn(self.bn1)
- init_bn(self.bn2)
-
-
- def forward(self, input, pool_size=(2, 2), pool_type='avg'):
-
- x = input
- x = F.relu_(self.bn1(self.conv1(x)))
- x = F.relu_(self.bn2(self.conv2(x)))
- if pool_type == 'max':
- x = F.max_pool2d(x, kernel_size=pool_size)
- elif pool_type == 'avg':
- x = F.avg_pool2d(x, kernel_size=pool_size)
- elif pool_type == 'avg+max':
- x1 = F.avg_pool2d(x, kernel_size=pool_size)
- x2 = F.max_pool2d(x, kernel_size=pool_size)
- x = x1 + x2
- else:
- raise Exception('Incorrect argument!')
-
- return x
-
-
-class ConvBlock5x5(nn.Module):
- def __init__(self, in_channels, out_channels):
-
- super(ConvBlock5x5, self).__init__()
-
- self.conv1 = nn.Conv2d(in_channels=in_channels,
- out_channels=out_channels,
- kernel_size=(5, 5), stride=(1, 1),
- padding=(2, 2), bias=False)
-
- self.bn1 = nn.BatchNorm2d(out_channels)
-
- self.init_weight()
-
- def init_weight(self):
- init_layer(self.conv1)
- init_bn(self.bn1)
-
-
- def forward(self, input, pool_size=(2, 2), pool_type='avg'):
-
- x = input
- x = F.relu_(self.bn1(self.conv1(x)))
- if pool_type == 'max':
- x = F.max_pool2d(x, kernel_size=pool_size)
- elif pool_type == 'avg':
- x = F.avg_pool2d(x, kernel_size=pool_size)
- elif pool_type == 'avg+max':
- x1 = F.avg_pool2d(x, kernel_size=pool_size)
- x2 = F.max_pool2d(x, kernel_size=pool_size)
- x = x1 + x2
- else:
- raise Exception('Incorrect argument!')
-
- return x
-
-
-class AttBlock(nn.Module):
- def __init__(self, n_in, n_out, activation='linear', temperature=1.):
- super(AttBlock, self).__init__()
-
- self.activation = activation
- self.temperature = temperature
- self.att = nn.Conv1d(in_channels=n_in, out_channels=n_out, kernel_size=1, stride=1, padding=0, bias=True)
- self.cla = nn.Conv1d(in_channels=n_in, out_channels=n_out, kernel_size=1, stride=1, padding=0, bias=True)
-
- self.bn_att = nn.BatchNorm1d(n_out)
- self.init_weights()
-
- def init_weights(self):
- init_layer(self.att)
- init_layer(self.cla)
- init_bn(self.bn_att)
-
- def forward(self, x):
- # x: (n_samples, n_in, n_time)
- norm_att = torch.softmax(torch.clamp(self.att(x), -10, 10), dim=-1)
- cla = self.nonlinear_transform(self.cla(x))
- x = torch.sum(norm_att * cla, dim=2)
- return x, norm_att, cla
-
- def nonlinear_transform(self, x):
- if self.activation == 'linear':
- return x
- elif self.activation == 'sigmoid':
- return torch.sigmoid(x)
-
-
-class Cnn14(nn.Module):
- def __init__(self, sample_rate, window_size, hop_size, mel_bins, fmin,
- fmax, classes_num, enable_fusion=False, fusion_type='None'):
-
- super(Cnn14, self).__init__()
-
- window = 'hann'
- center = True
- pad_mode = 'reflect'
- ref = 1.0
- amin = 1e-10
- top_db = None
-
- self.enable_fusion = enable_fusion
- self.fusion_type = fusion_type
-
- # Spectrogram extractor
- self.spectrogram_extractor = Spectrogram(n_fft=window_size, hop_length=hop_size,
- win_length=window_size, window=window, center=center, pad_mode=pad_mode,
- freeze_parameters=True)
-
- # Logmel feature extractor
- self.logmel_extractor = LogmelFilterBank(sr=sample_rate, n_fft=window_size,
- n_mels=mel_bins, fmin=fmin, fmax=fmax, ref=ref, amin=amin, top_db=top_db,
- freeze_parameters=True)
-
- # Spec augmenter
- self.spec_augmenter = SpecAugmentation(time_drop_width=64, time_stripes_num=2,
- freq_drop_width=8, freq_stripes_num=2)
-
- self.bn0 = nn.BatchNorm2d(64)
-
- if (self.enable_fusion) and (self.fusion_type == 'channel_map'):
- self.conv_block1 = ConvBlock(in_channels=4, out_channels=64)
- else:
- self.conv_block1 = ConvBlock(in_channels=1, out_channels=64)
- self.conv_block2 = ConvBlock(in_channels=64, out_channels=128)
- self.conv_block3 = ConvBlock(in_channels=128, out_channels=256)
- self.conv_block4 = ConvBlock(in_channels=256, out_channels=512)
- self.conv_block5 = ConvBlock(in_channels=512, out_channels=1024)
- self.conv_block6 = ConvBlock(in_channels=1024, out_channels=2048)
-
- self.fc1 = nn.Linear(2048, 2048, bias=True)
- self.fc_audioset = nn.Linear(2048, classes_num, bias=True)
-
- if (self.enable_fusion) and (self.fusion_type in ['daf_1d','aff_1d','iaff_1d']):
- self.mel_conv1d = nn.Sequential(
- nn.Conv1d(64, 64, kernel_size=5, stride=3, padding=2),
- nn.BatchNorm1d(64) # No Relu
- )
- if self.fusion_type == 'daf_1d':
- self.fusion_model = DAF()
- elif self.fusion_type == 'aff_1d':
- self.fusion_model = AFF(channels=64, type='1D')
- elif self.fusion_type == 'iaff_1d':
- self.fusion_model = iAFF(channels=64, type='1D')
-
- if (self.enable_fusion) and (self.fusion_type in ['daf_2d','aff_2d','iaff_2d']):
- self.mel_conv2d = nn.Sequential(
- nn.Conv2d(1, 64, kernel_size=(5,5), stride=(6, 2), padding=(2,2)),
- nn.BatchNorm2d(64),
- nn.ReLU(inplace=True)
- )
-
- if self.fusion_type == 'daf_2d':
- self.fusion_model = DAF()
- elif self.fusion_type == 'aff_2d':
- self.fusion_model = AFF(channels=64, type='2D')
- elif self.fusion_type == 'iaff_2d':
- self.fusion_model = iAFF(channels=64, type='2D')
- self.init_weight()
-
- def init_weight(self):
- init_bn(self.bn0)
- init_layer(self.fc1)
- init_layer(self.fc_audioset)
-
- def forward(self, input, mixup_lambda=None, device=None):
- """
- Input: (batch_size, data_length)"""
-
- if self.enable_fusion and input["longer"].sum() == 0:
- # if no audio is longer than 10s, then randomly select one audio to be longer
- input["longer"][torch.randint(0, input["longer"].shape[0], (1,))] = True
-
- if not self.enable_fusion:
- x = self.spectrogram_extractor(input['waveform'].to(device=device, non_blocking=True)) # (batch_size, 1, time_steps, freq_bins)
- x = self.logmel_extractor(x) # (batch_size, 1, time_steps, mel_bins)
-
- x = x.transpose(1, 3)
- x = self.bn0(x)
- x = x.transpose(1, 3)
- else:
- longer_list = input["longer"].to(device=device, non_blocking=True)
- x = input["mel_fusion"].to(device=device, non_blocking=True)
- longer_list_idx = torch.where(longer_list)[0]
- x = x.transpose(1, 3)
- x = self.bn0(x)
- x = x.transpose(1, 3)
- if self.fusion_type in ['daf_1d','aff_1d','iaff_1d']:
- new_x = x[:,0:1,:,:].clone().contiguous()
- # local processing
- if len(longer_list_idx) > 0:
- fusion_x_local = x[longer_list_idx,1:,:,:].clone().contiguous()
- FB,FC,FT,FF = fusion_x_local.size()
- fusion_x_local = fusion_x_local.view(FB * FC, FT, FF)
- fusion_x_local = torch.permute(fusion_x_local, (0,2,1)).contiguous()
- fusion_x_local = self.mel_conv1d(fusion_x_local)
- fusion_x_local = fusion_x_local.view(FB,FC,FF,fusion_x_local.size(-1))
- fusion_x_local = torch.permute(fusion_x_local, (0,2,1,3)).contiguous().flatten(2)
- if fusion_x_local.size(-1) < FT:
- fusion_x_local = torch.cat([fusion_x_local, torch.zeros((FB,FF,FT- fusion_x_local.size(-1)), device=device)], dim=-1)
- else:
- fusion_x_local = fusion_x_local[:,:,:FT]
- # 1D fusion
- new_x = new_x.squeeze(1).permute((0,2,1)).contiguous()
- new_x[longer_list_idx] = self.fusion_model(new_x[longer_list_idx], fusion_x_local)
- x = new_x.permute((0,2,1)).contiguous()[:,None,:,:]
- else:
- x = new_x
- elif self.fusion_type in ['daf_2d','aff_2d','iaff_2d','channel_map']:
- x = x # no change
-
- if self.training:
- x = self.spec_augmenter(x)
- # Mixup on spectrogram
- if self.training and mixup_lambda is not None:
- x = do_mixup(x, mixup_lambda)
- if (self.enable_fusion) and (self.fusion_type in ['daf_2d','aff_2d','iaff_2d']):
- global_x = x[:,0:1,:,:]
-
- # global processing
- B, C, H, W = global_x.shape
- global_x = self.conv_block1(global_x, pool_size=(2, 2), pool_type='avg')
- if len(longer_list_idx) > 0:
- local_x = x[longer_list_idx,1:,:,:].contiguous()
- TH = global_x.size(-2)
- # local processing
- B, C, H, W = local_x.shape
- local_x = local_x.view(B*C,1,H,W)
- local_x = self.mel_conv2d(local_x)
- local_x = local_x.view(B,C,local_x.size(1),local_x.size(2),local_x.size(3))
- local_x = local_x.permute((0,2,1,3,4)).contiguous().flatten(2,3)
- TB,TC,_,TW = local_x.size()
- if local_x.size(-2) < TH:
- local_x = torch.cat([local_x, torch.zeros((TB,TC,TH-local_x.size(-2),TW), device=global_x.device)], dim=-2)
- else:
- local_x = local_x[:,:,:TH,:]
-
- global_x[longer_list_idx] = self.fusion_model(global_x[longer_list_idx],local_x)
- x = global_x
- else:
- x = self.conv_block1(x, pool_size=(2, 2), pool_type='avg')
-
- x = F.dropout(x, p=0.2, training=self.training)
- x = self.conv_block2(x, pool_size=(2, 2), pool_type='avg')
- x = F.dropout(x, p=0.2, training=self.training)
- x = self.conv_block3(x, pool_size=(2, 2), pool_type='avg')
- x = F.dropout(x, p=0.2, training=self.training)
- x = self.conv_block4(x, pool_size=(2, 2), pool_type='avg')
- x = F.dropout(x, p=0.2, training=self.training)
- x = self.conv_block5(x, pool_size=(2, 2), pool_type='avg')
- x = F.dropout(x, p=0.2, training=self.training)
- x = self.conv_block6(x, pool_size=(1, 1), pool_type='avg')
- x = F.dropout(x, p=0.2, training=self.training)
- x = torch.mean(x, dim=3)
-
- latent_x1 = F.max_pool1d(x, kernel_size=3, stride=1, padding=1)
- latent_x2 = F.avg_pool1d(x, kernel_size=3, stride=1, padding=1)
- latent_x = latent_x1 + latent_x2
- latent_x = latent_x.transpose(1, 2)
- latent_x = F.relu_(self.fc1(latent_x))
- latent_output = interpolate(latent_x, 32)
-
-
- (x1, _) = torch.max(x, dim=2)
- x2 = torch.mean(x, dim=2)
- x = x1 + x2
- x = F.dropout(x, p=0.5, training=self.training)
- x = F.relu_(self.fc1(x))
- embedding = F.dropout(x, p=0.5, training=self.training)
- clipwise_output = torch.sigmoid(self.fc_audioset(x))
-
- output_dict = {'clipwise_output': clipwise_output, 'embedding': embedding, 'fine_grained_embedding': latent_output}
- return output_dict
-
-
-class Cnn6(nn.Module):
- def __init__(self, sample_rate, window_size, hop_size, mel_bins, fmin,
- fmax, classes_num, enable_fusion=False, fusion_type='None'):
-
- super(Cnn6, self).__init__()
-
- window = 'hann'
- center = True
- pad_mode = 'reflect'
- ref = 1.0
- amin = 1e-10
- top_db = None
-
- self.enable_fusion = enable_fusion
- self.fusion_type = fusion_type
-
- # Spectrogram extractor
- self.spectrogram_extractor = Spectrogram(n_fft=window_size, hop_length=hop_size,
- win_length=window_size, window=window, center=center, pad_mode=pad_mode,
- freeze_parameters=True)
-
- # Logmel feature extractor
- self.logmel_extractor = LogmelFilterBank(sr=sample_rate, n_fft=window_size,
- n_mels=mel_bins, fmin=fmin, fmax=fmax, ref=ref, amin=amin, top_db=top_db,
- freeze_parameters=True)
-
- # Spec augmenter
- self.spec_augmenter = SpecAugmentation(time_drop_width=64, time_stripes_num=2,
- freq_drop_width=8, freq_stripes_num=2)
-
- self.bn0 = nn.BatchNorm2d(64)
-
- self.conv_block1 = ConvBlock5x5(in_channels=1, out_channels=64)
- self.conv_block2 = ConvBlock5x5(in_channels=64, out_channels=128)
- self.conv_block3 = ConvBlock5x5(in_channels=128, out_channels=256)
- self.conv_block4 = ConvBlock5x5(in_channels=256, out_channels=512)
-
- self.fc1 = nn.Linear(512, 512, bias=True)
- self.fc_audioset = nn.Linear(512, classes_num, bias=True)
-
- self.init_weight()
-
- def init_weight(self):
- init_bn(self.bn0)
- init_layer(self.fc1)
- init_layer(self.fc_audioset)
-
- def forward(self, input, mixup_lambda=None, device=None):
- """
- Input: (batch_size, data_length)"""
-
- x = self.spectrogram_extractor(input) # (batch_size, 1, time_steps, freq_bins)
- x = self.logmel_extractor(x) # (batch_size, 1, time_steps, mel_bins)
-
- x = x.transpose(1, 3)
- x = self.bn0(x)
- x = x.transpose(1, 3)
-
- if self.training:
- x = self.spec_augmenter(x)
-
- # Mixup on spectrogram
- if self.training and mixup_lambda is not None:
- x = do_mixup(x, mixup_lambda)
-
- x = self.conv_block1(x, pool_size=(2, 2), pool_type='avg')
- x = F.dropout(x, p=0.2, training=self.training)
- x = self.conv_block2(x, pool_size=(2, 2), pool_type='avg')
- x = F.dropout(x, p=0.2, training=self.training)
- x = self.conv_block3(x, pool_size=(2, 2), pool_type='avg')
- x = F.dropout(x, p=0.2, training=self.training)
- x = self.conv_block4(x, pool_size=(2, 2), pool_type='avg')
- x = F.dropout(x, p=0.2, training=self.training)
- x = torch.mean(x, dim=3)
-
- latent_x1 = F.max_pool1d(x, kernel_size=3, stride=1, padding=1)
- latent_x2 = F.avg_pool1d(x, kernel_size=3, stride=1, padding=1)
- latent_x = latent_x1 + latent_x2
- latent_x = latent_x.transpose(1, 2)
- latent_x = F.relu_(self.fc1(latent_x))
- latent_output = interpolate(latent_x, 16)
-
- (x1, _) = torch.max(x, dim=2)
- x2 = torch.mean(x, dim=2)
- x = x1 + x2
- x = F.dropout(x, p=0.5, training=self.training)
- x = F.relu_(self.fc1(x))
- embedding = F.dropout(x, p=0.5, training=self.training)
- clipwise_output = torch.sigmoid(self.fc_audioset(x))
-
- output_dict = {'clipwise_output': clipwise_output, 'embedding': embedding, 'fine_grained_embedding': latent_output}
-
- return output_dict
-
-
-class Cnn10(nn.Module):
- def __init__(self, sample_rate, window_size, hop_size, mel_bins, fmin,
- fmax, classes_num, enable_fusion=False, fusion_type='None'):
-
- super(Cnn10, self).__init__()
-
- window = 'hann'
- center = True
- pad_mode = 'reflect'
- ref = 1.0
- amin = 1e-10
- top_db = None
-
- self.enable_fusion = enable_fusion
- self.fusion_type = fusion_type
-
- # Spectrogram extractor
- self.spectrogram_extractor = Spectrogram(n_fft=window_size, hop_length=hop_size,
- win_length=window_size, window=window, center=center, pad_mode=pad_mode,
- freeze_parameters=True)
-
- # Logmel feature extractor
- self.logmel_extractor = LogmelFilterBank(sr=sample_rate, n_fft=window_size,
- n_mels=mel_bins, fmin=fmin, fmax=fmax, ref=ref, amin=amin, top_db=top_db,
- freeze_parameters=True)
-
- # Spec augmenter
- self.spec_augmenter = SpecAugmentation(time_drop_width=64, time_stripes_num=2,
- freq_drop_width=8, freq_stripes_num=2)
-
- self.bn0 = nn.BatchNorm2d(64)
-
- self.conv_block1 = ConvBlock(in_channels=1, out_channels=64)
- self.conv_block2 = ConvBlock(in_channels=64, out_channels=128)
- self.conv_block3 = ConvBlock(in_channels=128, out_channels=256)
- self.conv_block4 = ConvBlock(in_channels=256, out_channels=512)
- self.conv_block5 = ConvBlock(in_channels=512, out_channels=1024)
-
- self.fc1 = nn.Linear(1024, 1024, bias=True)
- self.fc_audioset = nn.Linear(1024, classes_num, bias=True)
-
- self.init_weight()
-
- def init_weight(self):
- init_bn(self.bn0)
- init_layer(self.fc1)
- init_layer(self.fc_audioset)
-
- def forward(self, input, mixup_lambda=None, device=None):
- """
- Input: (batch_size, data_length)"""
-
- x = self.spectrogram_extractor(input) # (batch_size, 1, time_steps, freq_bins)
- x = self.logmel_extractor(x) # (batch_size, 1, time_steps, mel_bins)
-
- x = x.transpose(1, 3)
- x = self.bn0(x)
- x = x.transpose(1, 3)
-
- if self.training:
- x = self.spec_augmenter(x)
-
- # Mixup on spectrogram
- if self.training and mixup_lambda is not None:
- x = do_mixup(x, mixup_lambda)
-
- x = self.conv_block1(x, pool_size=(2, 2), pool_type='avg')
- x = F.dropout(x, p=0.2, training=self.training)
- x = self.conv_block2(x, pool_size=(2, 2), pool_type='avg')
- x = F.dropout(x, p=0.2, training=self.training)
- x = self.conv_block3(x, pool_size=(2, 2), pool_type='avg')
- x = F.dropout(x, p=0.2, training=self.training)
- x = self.conv_block4(x, pool_size=(2, 2), pool_type='avg')
- x = F.dropout(x, p=0.2, training=self.training)
- x = self.conv_block5(x, pool_size=(2, 2), pool_type='avg')
- x = F.dropout(x, p=0.2, training=self.training)
- x = torch.mean(x, dim=3)
-
- latent_x1 = F.max_pool1d(x, kernel_size=3, stride=1, padding=1)
- latent_x2 = F.avg_pool1d(x, kernel_size=3, stride=1, padding=1)
- latent_x = latent_x1 + latent_x2
- latent_x = latent_x.transpose(1, 2)
- latent_x = F.relu_(self.fc1(latent_x))
- latent_output = interpolate(latent_x, 32)
-
- (x1, _) = torch.max(x, dim=2)
- x2 = torch.mean(x, dim=2)
- x = x1 + x2
- x = F.dropout(x, p=0.5, training=self.training)
- x = F.relu_(self.fc1(x))
- embedding = F.dropout(x, p=0.5, training=self.training)
- clipwise_output = torch.sigmoid(self.fc_audioset(x))
-
- output_dict = {'clipwise_output': clipwise_output, 'embedding': embedding, 'fine_grained_embedding': latent_output}
-
- return output_dict
-
-
-def create_pann_model(audio_cfg, enable_fusion=False, fusion_type='None'):
- try:
- ModelProto = eval(audio_cfg.model_name)
- model = ModelProto(
- sample_rate = audio_cfg.sample_rate,
- window_size = audio_cfg.window_size,
- hop_size =audio_cfg.hop_size,
- mel_bins = audio_cfg.mel_bins,
- fmin = audio_cfg.fmin,
- fmax = audio_cfg.fmax,
- classes_num = audio_cfg.class_num,
- enable_fusion = enable_fusion,
- fusion_type = fusion_type
- )
- return model
-    except Exception:
-        raise RuntimeError(f'Model class {audio_cfg.model_name} not found, or the audio cfg parameters are insufficient.')
-
diff --git a/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/encoders/open_clap/bert.py b/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/encoders/open_clap/bert.py
deleted file mode 100644
index 005e72dec67e4b1c05063dbd4d024166344fd2c4..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/encoders/open_clap/bert.py
+++ /dev/null
@@ -1,32 +0,0 @@
-from transformers import BertTokenizer, BertModel
-tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
-model = BertModel.from_pretrained("bert-base-uncased")
-text = "Replace me by any text you'd like."
-
-def bert_embeddings(text):
- # text = "Replace me by any text you'd like."
- encoded_input = tokenizer(text, return_tensors='pt')
- output = model(**encoded_input)
- return output
-
-from transformers import RobertaTokenizer, RobertaModel
-
-tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
-model = RobertaModel.from_pretrained('roberta-base')
-text = "Replace me by any text you'd like."
-def Roberta_embeddings(text):
- # text = "Replace me by any text you'd like."
- encoded_input = tokenizer(text, return_tensors='pt')
- output = model(**encoded_input)
- return output
-
-from transformers import BartTokenizer, BartModel
-
-tokenizer = BartTokenizer.from_pretrained('facebook/bart-base')
-model = BartModel.from_pretrained('facebook/bart-base')
-text = "Replace me by any text you'd like."
-def bart_embeddings(text):
- # text = "Replace me by any text you'd like."
- encoded_input = tokenizer(text, return_tensors='pt')
- output = model(**encoded_input)
- return output
\ No newline at end of file
diff --git a/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/encoders/open_clap/tokenizer.py b/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/encoders/open_clap/tokenizer.py
deleted file mode 100644
index 5b4a238b987ce66f2932b11451d916e40816b8a3..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/encoders/open_clap/tokenizer.py
+++ /dev/null
@@ -1,180 +0,0 @@
-""" CLIP tokenizer
-
-Copied from https://github.com/openai/CLIP. Originally MIT License, Copyright (c) 2021 OpenAI.
-"""
-import gzip
-import html
-import os
-from functools import lru_cache
-from typing import Union, List
-
-import ftfy
-import regex as re
-import torch
-
-
-@lru_cache()
-def default_bpe():
- return os.path.join(os.path.dirname(os.path.abspath(__file__)), "bpe_simple_vocab_16e6.txt.gz")
-
-
-@lru_cache()
-def bytes_to_unicode():
- """
-    Returns a list of utf-8 bytes and a corresponding list of unicode strings.
- The reversible bpe codes work on unicode strings.
- This means you need a large # of unicode characters in your vocab if you want to avoid UNKs.
- When you're at something like a 10B token dataset you end up needing around 5K for decent coverage.
-    This is a significant percentage of your normal, say, 32K bpe vocab.
- To avoid that, we want lookup tables between utf-8 bytes and unicode strings.
- And avoids mapping to whitespace/control characters the bpe code barfs on.
- """
- bs = list(range(ord("!"), ord("~")+1))+list(range(ord("¡"), ord("¬")+1))+list(range(ord("®"), ord("ÿ")+1))
- cs = bs[:]
- n = 0
- for b in range(2**8):
- if b not in bs:
- bs.append(b)
- cs.append(2**8+n)
- n += 1
- cs = [chr(n) for n in cs]
- return dict(zip(bs, cs))
-
-
-def get_pairs(word):
- """Return set of symbol pairs in a word.
- Word is represented as tuple of symbols (symbols being variable-length strings).
- """
- pairs = set()
- prev_char = word[0]
- for char in word[1:]:
- pairs.add((prev_char, char))
- prev_char = char
- return pairs
-
-
-def basic_clean(text):
- text = ftfy.fix_text(text)
- text = html.unescape(html.unescape(text))
- return text.strip()
-
-
-def whitespace_clean(text):
- text = re.sub(r'\s+', ' ', text)
- text = text.strip()
- return text
-
-
-class SimpleTokenizer(object):
- def __init__(self, bpe_path: str = default_bpe(), special_tokens=None):
- self.byte_encoder = bytes_to_unicode()
- self.byte_decoder = {v: k for k, v in self.byte_encoder.items()}
- merges = gzip.open(bpe_path).read().decode("utf-8").split('\n')
- merges = merges[1:49152-256-2+1]
- merges = [tuple(merge.split()) for merge in merges]
- vocab = list(bytes_to_unicode().values())
-        vocab = vocab + [v+'</w>' for v in vocab]
- for merge in merges:
- vocab.append(''.join(merge))
- if not special_tokens:
-            special_tokens = ['<start_of_text>', '<end_of_text>']
- else:
-            special_tokens = ['<start_of_text>', '<end_of_text>'] + special_tokens
- vocab.extend(special_tokens)
- self.encoder = dict(zip(vocab, range(len(vocab))))
- self.decoder = {v: k for k, v in self.encoder.items()}
- self.bpe_ranks = dict(zip(merges, range(len(merges))))
- self.cache = {t:t for t in special_tokens}
- special = "|".join(special_tokens)
- self.pat = re.compile(special + r"""|'s|'t|'re|'ve|'m|'ll|'d|[\p{L}]+|[\p{N}]|[^\s\p{L}\p{N}]+""", re.IGNORECASE)
-
- self.vocab_size = len(self.encoder)
- self.all_special_ids = [self.encoder[t] for t in special_tokens]
-
- def bpe(self, token):
- if token in self.cache:
- return self.cache[token]
-        word = tuple(token[:-1]) + ( token[-1] + '</w>',)
- pairs = get_pairs(word)
-
- if not pairs:
-            return token+'</w>'
-
- while True:
- bigram = min(pairs, key = lambda pair: self.bpe_ranks.get(pair, float('inf')))
- if bigram not in self.bpe_ranks:
- break
- first, second = bigram
- new_word = []
- i = 0
- while i < len(word):
- try:
- j = word.index(first, i)
- new_word.extend(word[i:j])
- i = j
- except:
- new_word.extend(word[i:])
- break
-
- if word[i] == first and i < len(word)-1 and word[i+1] == second:
- new_word.append(first+second)
- i += 2
- else:
- new_word.append(word[i])
- i += 1
- new_word = tuple(new_word)
- word = new_word
- if len(word) == 1:
- break
- else:
- pairs = get_pairs(word)
- word = ' '.join(word)
- self.cache[token] = word
- return word
-
- def encode(self, text):
- bpe_tokens = []
- text = whitespace_clean(basic_clean(text)).lower()
- for token in re.findall(self.pat, text):
- token = ''.join(self.byte_encoder[b] for b in token.encode('utf-8'))
- bpe_tokens.extend(self.encoder[bpe_token] for bpe_token in self.bpe(token).split(' '))
- return bpe_tokens
-
- def decode(self, tokens):
- text = ''.join([self.decoder[token] for token in tokens])
-        text = bytearray([self.byte_decoder[c] for c in text]).decode('utf-8', errors="replace").replace('</w>', ' ')
- return text
-
-
-_tokenizer = SimpleTokenizer()
-
-
-def tokenize(texts: Union[str, List[str]], context_length: int = 77) -> torch.LongTensor:
- """
- Returns the tokenized representation of given input string(s)
-
- Parameters
- ----------
- texts : Union[str, List[str]]
- An input string or a list of input strings to tokenize
- context_length : int
- The context length to use; all CLIP models use 77 as the context length
-
- Returns
- -------
- A two-dimensional tensor containing the resulting tokens, shape = [number of input strings, context_length]
- """
- if isinstance(texts, str):
- texts = [texts]
-
-    sot_token = _tokenizer.encoder["<start_of_text>"]
-    eot_token = _tokenizer.encoder["<end_of_text>"]
- all_tokens = [[sot_token] + _tokenizer.encode(text) + [eot_token] for text in texts]
- result = torch.zeros(len(all_tokens), context_length, dtype=torch.long)
-
- for i, tokens in enumerate(all_tokens):
- if len(tokens) > context_length:
- tokens = tokens[:context_length] # Truncate
- result[i, :len(tokens)] = torch.tensor(tokens)
-
- return result
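
For reference, a small usage sketch for `tokenize` above; the import path is a placeholder and the captions are arbitrary, so treat it as illustrative rather than as documented usage.

```python
# Tokenize a batch of captions into fixed-length id tensors (context_length defaults to 77).
from tokenizer import tokenize  # placeholder import path for the module shown above

tokens = tokenize(["a dog barking in the distance", "rain falling on a tin roof"])
print(tokens.shape)  # torch.Size([2, 77])
print(tokens[0, :8])  # starts with the <start_of_text> id; positions after <end_of_text> stay zero-padded
```
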
diff --git a/spaces/Abhilashvj/planogram-compliance/utils/segment/augmentations.py b/spaces/Abhilashvj/planogram-compliance/utils/segment/augmentations.py
deleted file mode 100644
index 3c9b81a25c7b701cc9effad3f5fb86d7b5f98743..0000000000000000000000000000000000000000
--- a/spaces/Abhilashvj/planogram-compliance/utils/segment/augmentations.py
+++ /dev/null
@@ -1,128 +0,0 @@
-# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
-"""
-Image augmentation functions
-"""
-
-import math
-import random
-
-import cv2
-import numpy as np
-
-from ..augmentations import box_candidates
-from ..general import resample_segments, segment2box
-
-
-def mixup(im, labels, segments, im2, labels2, segments2):
- # Applies MixUp augmentation https://arxiv.org/pdf/1710.09412.pdf
- r = np.random.beta(32.0, 32.0) # mixup ratio, alpha=beta=32.0
- im = (im * r + im2 * (1 - r)).astype(np.uint8)
- labels = np.concatenate((labels, labels2), 0)
- segments = np.concatenate((segments, segments2), 0)
- return im, labels, segments
-
-
-def random_perspective(
- im,
- targets=(),
- segments=(),
- degrees=10,
- translate=0.1,
- scale=0.1,
- shear=10,
- perspective=0.0,
- border=(0, 0),
-):
- # torchvision.transforms.RandomAffine(degrees=(-10, 10), translate=(.1, .1), scale=(.9, 1.1), shear=(-10, 10))
- # targets = [cls, xyxy]
-
- height = im.shape[0] + border[0] * 2 # shape(h,w,c)
- width = im.shape[1] + border[1] * 2
-
- # Center
- C = np.eye(3)
- C[0, 2] = -im.shape[1] / 2 # x translation (pixels)
- C[1, 2] = -im.shape[0] / 2 # y translation (pixels)
-
- # Perspective
- P = np.eye(3)
- P[2, 0] = random.uniform(
- -perspective, perspective
- ) # x perspective (about y)
- P[2, 1] = random.uniform(
- -perspective, perspective
- ) # y perspective (about x)
-
- # Rotation and Scale
- R = np.eye(3)
- a = random.uniform(-degrees, degrees)
- # a += random.choice([-180, -90, 0, 90]) # add 90deg rotations to small rotations
- s = random.uniform(1 - scale, 1 + scale)
- # s = 2 ** random.uniform(-scale, scale)
- R[:2] = cv2.getRotationMatrix2D(angle=a, center=(0, 0), scale=s)
-
- # Shear
- S = np.eye(3)
- S[0, 1] = math.tan(
- random.uniform(-shear, shear) * math.pi / 180
- ) # x shear (deg)
- S[1, 0] = math.tan(
- random.uniform(-shear, shear) * math.pi / 180
- ) # y shear (deg)
-
- # Translation
- T = np.eye(3)
- T[0, 2] = (
- random.uniform(0.5 - translate, 0.5 + translate) * width
- ) # x translation (pixels)
- T[1, 2] = (
- random.uniform(0.5 - translate, 0.5 + translate) * height
- ) # y translation (pixels)
-
- # Combined rotation matrix
- M = T @ S @ R @ P @ C # order of operations (right to left) is IMPORTANT
- if (
- (border[0] != 0) or (border[1] != 0) or (M != np.eye(3)).any()
- ): # image changed
- if perspective:
- im = cv2.warpPerspective(
- im, M, dsize=(width, height), borderValue=(114, 114, 114)
- )
- else: # affine
- im = cv2.warpAffine(
- im, M[:2], dsize=(width, height), borderValue=(114, 114, 114)
- )
-
- # Visualize
- # import matplotlib.pyplot as plt
- # ax = plt.subplots(1, 2, figsize=(12, 6))[1].ravel()
- # ax[0].imshow(im[:, :, ::-1]) # base
- # ax[1].imshow(im2[:, :, ::-1]) # warped
-
- # Transform label coordinates
- n = len(targets)
- new_segments = []
- if n:
- new = np.zeros((n, 4))
- segments = resample_segments(segments) # upsample
- for i, segment in enumerate(segments):
- xy = np.ones((len(segment), 3))
- xy[:, :2] = segment
- xy = xy @ M.T # transform
- xy = (
- xy[:, :2] / xy[:, 2:3] if perspective else xy[:, :2]
- ) # perspective rescale or affine
-
- # clip
- new[i] = segment2box(xy, width, height)
- new_segments.append(xy)
-
- # filter candidates
- i = box_candidates(
- box1=targets[:, 1:5].T * s, box2=new.T, area_thr=0.01
- )
- targets = targets[i]
- targets[:, 1:5] = new[i]
- new_segments = np.array(new_segments)[i]
-
- return im, targets, new_segments
diff --git a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/types/Conversation.ts b/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/types/Conversation.ts
deleted file mode 100644
index 5ad6670d27b853d9261dffcaa7b08fc16f739d4f..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/types/Conversation.ts
+++ /dev/null
@@ -1,17 +0,0 @@
-import type { Message } from "./Message";
-import type { Timestamps } from "./Timestamps";
-import type { User } from "./User";
-
-export interface Conversation extends Timestamps {
- sessionId?: string;
- userId?: User["_id"];
-
- model: string;
-
- title: string;
- messages: Message[];
-
- meta?: {
- fromShareId?: string;
- };
-}
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/anchor/Factory.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/anchor/Factory.js
deleted file mode 100644
index 87311e1d1036e0a6ef13b482cc5d17b34945e778..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/anchor/Factory.js
+++ /dev/null
@@ -1,11 +0,0 @@
-import Anchor from "./Anchor.js";
-import ObjectFactory from '../ObjectFactory.js';
-import SetValue from '../../../plugins/utils/object/SetValue.js';
-
-ObjectFactory.register('anchor', function (gameObject, config) {
- return new Anchor(gameObject, config);
-});
-
-SetValue(window, 'RexPlugins.UI.Anchor', Anchor);
-
-export default Anchor;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/HideMethods.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/HideMethods.js
deleted file mode 100644
index 456c4a4d1c8a9f51ecca578be2f31ce6a8d65794..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/HideMethods.js
+++ /dev/null
@@ -1,30 +0,0 @@
-import {
- Show,
- Hide,
- IsShown,
-} from '../utils/Hide.js';
-
-export default {
- show(gameObject) {
- if (gameObject === undefined) {
- gameObject = this;
- }
- Show(gameObject, false);
- return this;
- },
-
- hide(gameObject) {
- if (gameObject === undefined) {
- gameObject = this;
- }
- Hide(gameObject, true);
- return this;
- },
-
- isShow(gameObject) {
- if (gameObject === undefined) {
- gameObject = this;
- }
- return IsShown(gameObject);
- }
-}
\ No newline at end of file
diff --git a/spaces/Alfasign/HuggingGPT-Lite/README.md b/spaces/Alfasign/HuggingGPT-Lite/README.md
deleted file mode 100644
index 8fb42c5aff90ab9b222ef09eade123d693ac6db3..0000000000000000000000000000000000000000
--- a/spaces/Alfasign/HuggingGPT-Lite/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: HuggingGPT - Lite
-emoji: 🎐
-colorFrom: red
-colorTo: gray
-sdk: gradio
-sdk_version: 3.27.0
-app_file: app.py
-pinned: false
-license: mit
-duplicated_from: taesiri/HuggingGPT-Lite
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Amrrs/DragGan-Inversion/PTI/models/StyleCLIP/models/stylegan2/op/fused_act.py b/spaces/Amrrs/DragGan-Inversion/PTI/models/StyleCLIP/models/stylegan2/op/fused_act.py
deleted file mode 100644
index 2d575bc9198e6d46eee040eb374c6d8f64c3363c..0000000000000000000000000000000000000000
--- a/spaces/Amrrs/DragGan-Inversion/PTI/models/StyleCLIP/models/stylegan2/op/fused_act.py
+++ /dev/null
@@ -1,40 +0,0 @@
-import os
-
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-module_path = os.path.dirname(__file__)
-
-
-
-class FusedLeakyReLU(nn.Module):
- def __init__(self, channel, negative_slope=0.2, scale=2 ** 0.5):
- super().__init__()
-
- self.bias = nn.Parameter(torch.zeros(channel))
- self.negative_slope = negative_slope
- self.scale = scale
-
- def forward(self, input):
- return fused_leaky_relu(input, self.bias, self.negative_slope, self.scale)
-
-
-def fused_leaky_relu(input, bias, negative_slope=0.2, scale=2 ** 0.5):
- rest_dim = [1] * (input.ndim - bias.ndim - 1)
- input = input.cuda()
- if input.ndim == 3:
- return (
- F.leaky_relu(
- input + bias.view(1, *rest_dim, bias.shape[0]), negative_slope=negative_slope
- )
- * scale
- )
- else:
- return (
- F.leaky_relu(
- input + bias.view(1, bias.shape[0], *rest_dim), negative_slope=negative_slope
- )
- * scale
- )
-
diff --git a/spaces/Amrrs/DragGan-Inversion/stylegan_human/pti/training/coaches/__init__.py b/spaces/Amrrs/DragGan-Inversion/stylegan_human/pti/training/coaches/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/libra_rcnn/libra_faster_rcnn_x101_64x4d_fpn_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/libra_rcnn/libra_faster_rcnn_x101_64x4d_fpn_1x_coco.py
deleted file mode 100644
index e94553294294fa49952f2dfe0e3c64a5e00bc878..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/libra_rcnn/libra_faster_rcnn_x101_64x4d_fpn_1x_coco.py
+++ /dev/null
@@ -1,13 +0,0 @@
-_base_ = './libra_faster_rcnn_r50_fpn_1x_coco.py'
-model = dict(
- pretrained='open-mmlab://resnext101_64x4d',
- backbone=dict(
- type='ResNeXt',
- depth=101,
- groups=64,
- base_width=4,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=1,
- norm_cfg=dict(type='BN', requires_grad=True),
- style='pytorch'))
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/point_rend/point_rend_r50_caffe_fpn_mstrain_3x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/point_rend/point_rend_r50_caffe_fpn_mstrain_3x_coco.py
deleted file mode 100644
index 169278e5738b0abd4ae5e99594e4adbaaefa2d96..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/point_rend/point_rend_r50_caffe_fpn_mstrain_3x_coco.py
+++ /dev/null
@@ -1,4 +0,0 @@
-_base_ = './point_rend_r50_caffe_fpn_mstrain_1x_coco.py'
-# learning policy
-lr_config = dict(step=[28, 34])
-runner = dict(type='EpochBasedRunner', max_epochs=36)
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/tridentnet/README.md b/spaces/Andy1621/uniformer_image_detection/configs/tridentnet/README.md
deleted file mode 100644
index 8ab7c28962a13696457d1dd1f01fa8382653697f..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/tridentnet/README.md
+++ /dev/null
@@ -1,28 +0,0 @@
-# Scale-Aware Trident Networks for Object Detection
-
-## Introduction
-
-[ALGORITHM]
-
-```
-@InProceedings{li2019scale,
- title={Scale-Aware Trident Networks for Object Detection},
- author={Li, Yanghao and Chen, Yuntao and Wang, Naiyan and Zhang, Zhaoxiang},
- booktitle={The International Conference on Computer Vision (ICCV)},
- year={2019}
-}
-```
-
-## Results and models
-
-We report the test results using only one branch for inference.
-
-| Backbone | Style | mstrain | Lr schd | Mem (GB) | Inf time (fps) | box AP | Download |
-| :-------------: | :-----: | :-----: | :-----: | :------: | :------------: | :----: | :------: |
-| R-50 | caffe | N | 1x | | | 37.7 |[model](https://download.openmmlab.com/mmdetection/v2.0/tridentnet/tridentnet_r50_caffe_1x_coco/tridentnet_r50_caffe_1x_coco_20201230_141838-2ec0b530.pth) | [log](https://download.openmmlab.com/mmdetection/v2.0/tridentnet/tridentnet_r50_caffe_1x_coco/tridentnet_r50_caffe_1x_coco_20201230_141838.log.json) |
-| R-50 | caffe | Y | 1x | | | 37.6 |[model](https://download.openmmlab.com/mmdetection/v2.0/tridentnet/tridentnet_r50_caffe_mstrain_1x_coco/tridentnet_r50_caffe_mstrain_1x_coco_20201230_141839-6ce55ccb.pth) | [log](https://download.openmmlab.com/mmdetection/v2.0/tridentnet/tridentnet_r50_caffe_mstrain_1x_coco/tridentnet_r50_caffe_mstrain_1x_coco_20201230_141839.log.json) |
-| R-50 | caffe | Y | 3x | | | 40.3 |[model](https://download.openmmlab.com/mmdetection/v2.0/tridentnet/tridentnet_r50_caffe_mstrain_3x_coco/tridentnet_r50_caffe_mstrain_3x_coco_20201130_100539-46d227ba.pth) | [log](https://download.openmmlab.com/mmdetection/v2.0/tridentnet/tridentnet_r50_caffe_mstrain_3x_coco/tridentnet_r50_caffe_mstrain_3x_coco_20201130_100539.log.json) |
-
-**Note**
-
-Similar to [Detectron2](https://github.com/facebookresearch/detectron2/tree/master/projects/TridentNet), we haven't implemented the Scale-aware Training Scheme in section 4.2 of the paper.
diff --git a/spaces/Andy1621/uniformer_image_detection/exp/cascade_mask_rcnn_3x_ms_hybrid_small/run.sh b/spaces/Andy1621/uniformer_image_detection/exp/cascade_mask_rcnn_3x_ms_hybrid_small/run.sh
deleted file mode 100644
index fbe76fb398212d2eb93f98007ea28d31cbb65ebe..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/exp/cascade_mask_rcnn_3x_ms_hybrid_small/run.sh
+++ /dev/null
@@ -1,10 +0,0 @@
-#!/usr/bin/env bash
-
-work_path=$(dirname $0)
-PYTHONPATH="$(dirname $0)/../../":$PYTHONPATH \
-python -m torch.distributed.launch --nproc_per_node=8 \
- tools/train.py ${work_path}/config.py \
- --launcher pytorch \
- --cfg-options model.backbone.pretrained_path='your_model_path/uniformer_small_in1k.pth' \
- --work-dir ${work_path}/ckpt \
- 2>&1 | tee -a ${work_path}/log.txt
diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/shared_heads/__init__.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/shared_heads/__init__.py
deleted file mode 100644
index bbe70145b8bf7c304370f725f5afa8db98666679..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/shared_heads/__init__.py
+++ /dev/null
@@ -1,3 +0,0 @@
-from .res_layer import ResLayer
-
-__all__ = ['ResLayer']
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr18_512x512_20k_voc12aug.py b/spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr18_512x512_20k_voc12aug.py
deleted file mode 100644
index f06448b168af4d2dcc5a1f96e4430a7948b7e170..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr18_512x512_20k_voc12aug.py
+++ /dev/null
@@ -1,5 +0,0 @@
-_base_ = [
- '../_base_/models/fcn_hr18.py', '../_base_/datasets/pascal_voc12_aug.py',
- '../_base_/default_runtime.py', '../_base_/schedules/schedule_20k.py'
-]
-model = dict(decode_head=dict(num_classes=21))
diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/openai/tokens.py b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/openai/tokens.py
deleted file mode 100644
index 0338e7f25aaa9d8b82ed8c69ab9cae9996130629..0000000000000000000000000000000000000000
--- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/openai/tokens.py
+++ /dev/null
@@ -1,36 +0,0 @@
-from modules.text_generation import decode, encode
-
-
-def token_count(prompt):
- tokens = encode(prompt)[0]
-
- return {
- 'results': [{
- 'tokens': len(tokens)
- }]
- }
-
-
-def token_encode(input, encoding_format):
- # if isinstance(input, list):
- tokens = encode(input)[0]
-
- return {
- 'results': [{
- 'tokens': tokens,
- 'length': len(tokens),
- }]
- }
-
-
-def token_decode(tokens, encoding_format):
- # if isinstance(input, list):
- # if encoding_format == "base64":
- # tokens = base64_to_float_list(tokens)
- output = decode(tokens)[0]
-
- return {
- 'results': [{
- 'text': output
- }]
- }
diff --git a/spaces/AnnasBlackHat/Image-Similarity/src/similarity/similarity.py b/spaces/AnnasBlackHat/Image-Similarity/src/similarity/similarity.py
deleted file mode 100644
index f979c358dadc83e79bbf839f9fc4cf7c55c7c2c3..0000000000000000000000000000000000000000
--- a/spaces/AnnasBlackHat/Image-Similarity/src/similarity/similarity.py
+++ /dev/null
@@ -1,35 +0,0 @@
-from src.model import simlarity_model as model
-from src.util import image as image_util
-from src.util import matrix
-from .model_implements.mobilenet_v3 import ModelnetV3
-from .model_implements.vit_base import VitBase
-from .model_implements.bit import BigTransfer
-
-
-class Similarity:
- def get_models(self):
- return [
- model.SimilarityModel(name= 'Mobilenet V3', image_size= 224, model_cls = ModelnetV3()),
- model.SimilarityModel(name= 'Big Transfer (BiT)', image_size= 224, model_cls = BigTransfer()),
- model.SimilarityModel(name= 'Vision Transformer', image_size= 224, model_cls = VitBase(), image_input_type='pil'),
- ]
-
- def check_similarity(self, img_urls, model):
- imgs = []
- for url in img_urls:
- if url == "": continue
- imgs.append(image_util.load_image_url(url, required_size=(model.image_size, model.image_size), image_type=model.image_input_type))
-
- features = model.model_cls.extract_feature(imgs)
- results = []
- for i, v in enumerate(features):
- if i == 0: continue
- dist = matrix.cosine(features[0], v)
- print(f'{i} -- distance: {dist}')
- # results.append((imgs[i], f'similarity: {int(dist*100)}%'))
- original_img = image_util.load_image_url(img_urls[i], required_size=None, image_type='pil')
- results.append((original_img, f'similarity: {int(dist*100)}%'))
-
- return results
-
-
\ No newline at end of file
diff --git a/spaces/AnnasBlackHat/Image-Similarity/src/util/image.py b/spaces/AnnasBlackHat/Image-Similarity/src/util/image.py
deleted file mode 100644
index b3b509a4aed05daed121de6722b118ba648edcdb..0000000000000000000000000000000000000000
--- a/spaces/AnnasBlackHat/Image-Similarity/src/util/image.py
+++ /dev/null
@@ -1,13 +0,0 @@
-from PIL import Image
-import numpy as np
-import requests
-
-def load_image_url(url, required_size = (224,224), image_type = 'array'):
- print(f'downloading.. {url}, type: {image_type}')
- img = Image.open(requests.get(url, stream=True).raw)
- img = Image.fromarray(np.array(img))
- if required_size is not None:
- img = img.resize(required_size)
- if image_type == 'array':
- img = (np.expand_dims(np.array(img), 0)/255).astype(np.float32)
- return img
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/datasets/pipelines/__init__.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/datasets/pipelines/__init__.py
deleted file mode 100644
index 8b9046b07bb4ddea7a707a392b42e72db7c9df67..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/datasets/pipelines/__init__.py
+++ /dev/null
@@ -1,16 +0,0 @@
-from .compose import Compose
-from .formating import (Collect, ImageToTensor, ToDataContainer, ToTensor,
- Transpose, to_tensor)
-from .loading import LoadAnnotations, LoadImageFromFile
-from .test_time_aug import MultiScaleFlipAug
-from .transforms import (CLAHE, AdjustGamma, Normalize, Pad,
- PhotoMetricDistortion, RandomCrop, RandomFlip,
- RandomRotate, Rerange, Resize, RGB2Gray, SegRescale)
-
-__all__ = [
- 'Compose', 'to_tensor', 'ToTensor', 'ImageToTensor', 'ToDataContainer',
- 'Transpose', 'Collect', 'LoadAnnotations', 'LoadImageFromFile',
- 'MultiScaleFlipAug', 'Resize', 'RandomFlip', 'Pad', 'RandomCrop',
- 'Normalize', 'SegRescale', 'PhotoMetricDistortion', 'RandomRotate',
- 'AdjustGamma', 'CLAHE', 'Rerange', 'RGB2Gray'
-]
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/ldm/data/util.py b/spaces/Anonymous-sub/Rerender/ControlNet/ldm/data/util.py
deleted file mode 100644
index 5b60ceb2349e3bd7900ff325740e2022d2903b1c..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/ldm/data/util.py
+++ /dev/null
@@ -1,24 +0,0 @@
-import torch
-
-from ldm.modules.midas.api import load_midas_transform
-
-
-class AddMiDaS(object):
- def __init__(self, model_type):
- super().__init__()
- self.transform = load_midas_transform(model_type)
-
- def pt2np(self, x):
- x = ((x + 1.0) * .5).detach().cpu().numpy()
- return x
-
- def np2pt(self, x):
- x = torch.from_numpy(x) * 2 - 1.
- return x
-
- def __call__(self, sample):
- # sample['jpg'] is tensor hwc in [-1, 1] at this point
- x = self.pt2np(sample['jpg'])
- x = self.transform({"image": x})["image"]
- sample['midas_in'] = x
- return sample
\ No newline at end of file
diff --git a/spaces/AquaSuisei/ChatGPTXE/run_Linux.sh b/spaces/AquaSuisei/ChatGPTXE/run_Linux.sh
deleted file mode 100644
index 62af07283093d8e580763d7acfe493c3d88e7b08..0000000000000000000000000000000000000000
--- a/spaces/AquaSuisei/ChatGPTXE/run_Linux.sh
+++ /dev/null
@@ -1,25 +0,0 @@
-#!/bin/bash
-
-# Get the directory where this script is located
-script_dir=$(dirname "$0")
-
-# Change the working directory to the script's directory
-cd "$script_dir"
-
-# Check whether the Git repository has updates
-git remote update
-pwd
-
-if ! git status -uno | grep 'up to date' > /dev/null; then
- # If there are updates, stop the currently running server
- pkill -f ChuanhuChatbot.py
-
- # Pull the latest changes
- git pull
-
- # Install dependencies
- pip3 install -r requirements.txt
-
- # Restart the server
- nohup python3 ChuanhuChatbot.py &
-fi
diff --git a/spaces/Awesimo/jojogan/e4e/criteria/lpips/lpips.py b/spaces/Awesimo/jojogan/e4e/criteria/lpips/lpips.py
deleted file mode 100644
index 1add6acc84c1c04cfcb536cf31ec5acdf24b716b..0000000000000000000000000000000000000000
--- a/spaces/Awesimo/jojogan/e4e/criteria/lpips/lpips.py
+++ /dev/null
@@ -1,35 +0,0 @@
-import torch
-import torch.nn as nn
-
-from criteria.lpips.networks import get_network, LinLayers
-from criteria.lpips.utils import get_state_dict
-
-
-class LPIPS(nn.Module):
- r"""Creates a criterion that measures
- Learned Perceptual Image Patch Similarity (LPIPS).
- Arguments:
- net_type (str): the network type to compare the features:
- 'alex' | 'squeeze' | 'vgg'. Default: 'alex'.
- version (str): the version of LPIPS. Default: 0.1.
- """
- def __init__(self, net_type: str = 'alex', version: str = '0.1'):
-
- assert version in ['0.1'], 'v0.1 is only supported now'
-
- super(LPIPS, self).__init__()
-
- # pretrained network
- self.net = get_network(net_type).to("cuda")
-
- # linear layers
- self.lin = LinLayers(self.net.n_channels_list).to("cuda")
- self.lin.load_state_dict(get_state_dict(net_type, version))
-
- def forward(self, x: torch.Tensor, y: torch.Tensor):
- feat_x, feat_y = self.net(x), self.net(y)
-
- diff = [(fx - fy) ** 2 for fx, fy in zip(feat_x, feat_y)]
- res = [l(d).mean((2, 3), True) for d, l in zip(diff, self.lin)]
-
- return torch.sum(torch.cat(res, 0)) / x.shape[0]
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/configs/common/models/mask_rcnn_fpn.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/configs/common/models/mask_rcnn_fpn.py
deleted file mode 100644
index 744d5306f5b0ba4cf508731bd790bad823b520fa..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/configs/common/models/mask_rcnn_fpn.py
+++ /dev/null
@@ -1,93 +0,0 @@
-from detectron2.config import LazyCall as L
-from detectron2.layers import ShapeSpec
-from detectron2.modeling.meta_arch import GeneralizedRCNN
-from detectron2.modeling.anchor_generator import DefaultAnchorGenerator
-from detectron2.modeling.backbone.fpn import LastLevelMaxPool
-from detectron2.modeling.backbone import BasicStem, FPN, ResNet
-from detectron2.modeling.box_regression import Box2BoxTransform
-from detectron2.modeling.matcher import Matcher
-from detectron2.modeling.poolers import ROIPooler
-from detectron2.modeling.proposal_generator import RPN, StandardRPNHead
-from detectron2.modeling.roi_heads import (
- StandardROIHeads,
- FastRCNNOutputLayers,
- MaskRCNNConvUpsampleHead,
- FastRCNNConvFCHead,
-)
-
-model = L(GeneralizedRCNN)(
- backbone=L(FPN)(
- bottom_up=L(ResNet)(
- stem=L(BasicStem)(in_channels=3, out_channels=64, norm="FrozenBN"),
- stages=L(ResNet.make_default_stages)(
- depth=50,
- stride_in_1x1=True,
- norm="FrozenBN",
- ),
- out_features=["res2", "res3", "res4", "res5"],
- ),
- in_features="${.bottom_up.out_features}",
- out_channels=256,
- top_block=L(LastLevelMaxPool)(),
- ),
- proposal_generator=L(RPN)(
- in_features=["p2", "p3", "p4", "p5", "p6"],
- head=L(StandardRPNHead)(in_channels=256, num_anchors=3),
- anchor_generator=L(DefaultAnchorGenerator)(
- sizes=[[32], [64], [128], [256], [512]],
- aspect_ratios=[0.5, 1.0, 2.0],
- strides=[4, 8, 16, 32, 64],
- offset=0.0,
- ),
- anchor_matcher=L(Matcher)(
- thresholds=[0.3, 0.7], labels=[0, -1, 1], allow_low_quality_matches=True
- ),
- box2box_transform=L(Box2BoxTransform)(weights=[1.0, 1.0, 1.0, 1.0]),
- batch_size_per_image=256,
- positive_fraction=0.5,
- pre_nms_topk=(2000, 1000),
- post_nms_topk=(1000, 1000),
- nms_thresh=0.7,
- ),
- roi_heads=L(StandardROIHeads)(
- num_classes=80,
- batch_size_per_image=512,
- positive_fraction=0.25,
- proposal_matcher=L(Matcher)(
- thresholds=[0.5], labels=[0, 1], allow_low_quality_matches=False
- ),
- box_in_features=["p2", "p3", "p4", "p5"],
- box_pooler=L(ROIPooler)(
- output_size=7,
- scales=(1.0 / 4, 1.0 / 8, 1.0 / 16, 1.0 / 32),
- sampling_ratio=0,
- pooler_type="ROIAlignV2",
- ),
- box_head=L(FastRCNNConvFCHead)(
- input_shape=ShapeSpec(channels=256, height=7, width=7),
- conv_dims=[],
- fc_dims=[1024, 1024],
- ),
- box_predictor=L(FastRCNNOutputLayers)(
- input_shape=ShapeSpec(channels=1024),
- test_score_thresh=0.05,
- box2box_transform=L(Box2BoxTransform)(weights=(10, 10, 5, 5)),
- num_classes="${..num_classes}",
- ),
- mask_in_features=["p2", "p3", "p4", "p5"],
- mask_pooler=L(ROIPooler)(
- output_size=14,
- scales=(1.0 / 4, 1.0 / 8, 1.0 / 16, 1.0 / 32),
- sampling_ratio=0,
- pooler_type="ROIAlignV2",
- ),
- mask_head=L(MaskRCNNConvUpsampleHead)(
- input_shape=ShapeSpec(channels=256, width=14, height=14),
- num_classes="${..num_classes}",
- conv_dims=[256, 256, 256, 256, 256],
- ),
- ),
- pixel_mean=[103.530, 116.280, 123.675],
- pixel_std=[1.0, 1.0, 1.0],
- input_format="BGR",
-)
diff --git a/spaces/Benson/text-generation/Examples/Apk.apkmonk.com.md b/spaces/Benson/text-generation/Examples/Apk.apkmonk.com.md
deleted file mode 100644
index 909d65973f9a4c2b9b673efc3678046d064c0363..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Apk.apkmonk.com.md
+++ /dev/null
@@ -1,45 +0,0 @@
-
-Facebook Data Mode APK Download: How to Save Data and Enjoy Facebook
-Do you love using Facebook but hate how much data it consumes? Do you want to stay connected with your friends and family without worrying about your data plan or network speed? If you answered yes to either of these questions, you might want to try Facebook Data Mode.
-What is Facebook Data Mode?
-Facebook Data Mode is a feature that lets you reduce the amount of data Facebook uses on your Android device. It does this by compressing images, videos, and other media files before they are loaded on screen. It also limits some background activity and notifications that could drain your data.
-apk.apkmonk.com
Download File ✅ https://bltlly.com/2v6Mu2
-By using Data Mode, you can enjoy Facebook without sacrificing your data budget or the quality of your experience. You can still browse your news feed, chat with your friends, watch videos, and more. You can also switch back to regular mode at any time.
-Data Mode is different from Facebook Lite, which is a separate app that offers a simplified version of Facebook for low-end devices or slow networks. Data Mode is built into the main Facebook app and gives you more control over your data usage and preferences.
-How do I download the Facebook Data Mode APK?
-If you want to try Data Mode on your Android device, you need to download and install the latest version of the Facebook app from the Google Play Store or another trusted source. You can also download the Facebook Data Mode APK file from [here]( 1 ) or [here]( 2 ) if you prefer.
-Here are the steps to download and install the Facebook Data Mode APK:
-
-- Download the APK file from one of the links above.
-- Go to your device settings and enable installation from unknown sources.
-- Find the downloaded file in your file manager and tap on it.
-- Follow the on-screen instructions to complete the installation process.
-
-
-However, be careful when downloading APK files from unknown sources, as they could contain malware or viruses that could harm your device or compromise your privacy. Always scan files before installing them, and only download from trusted sources.
-How do I use Facebook Data Mode?
-Using Facebook Data Mode is very easy and convenient. Here are some tips on how to use it:
-
-- To switch between Data Mode and regular mode, tap the icon with three horizontal lines in the top-right corner of the app. Then scroll down and tap Settings & Privacy. Next, tap Data Saver and flip the switch to turn it on or off.
-- To optimize your data usage and performance, you can adjust a few options in the Data Saver menu. For example, you can choose to turn on Data Mode automatically when you are not connected to Wi-Fi, or to always use Data Mode regardless of your network connection. You can also choose to load lower-quality images or videos, or to turn off video autoplay.
-- To access features and functions that are limited or unavailable in Data Mode, you can temporarily switch back to regular mode by tapping the blue banner at the top of the app. For example, you can view high-resolution photos or videos, watch live streams, or use video calls. Keep in mind, however, that this will consume more data than usual.
-
-Data Mode is a great way to save data and enjoy Facebook without compromising your experience. However, it also has some limitations and drawbacks you should keep in mind. For example, Data Mode may not work well with some third-party apps or services that integrate with Facebook, such as Instagram or Messenger. Data Mode can also affect the accuracy or timeliness of some information or notifications you receive from Facebook, such as news updates or friend requests.
-
-Conclusion
-
-If you have any questions or comments about Data Mode, feel free to leave a comment below or contact us through our website. We would love to hear from you and help you out.
-Also, if you liked this article, don't forget to share it with friends and family who might find it useful. And if you want to learn more about Facebook or other related topics, check out our other articles or subscribe to our newsletter for more updates.
-Frequently asked questions
-What is the difference between Facebook Data Mode and Facebook Lite?
-Facebook Data Mode is a feature inside the main Facebook app that lets you reduce the amount of data Facebook uses on your device. Facebook Lite is a standalone app that offers a simplified version of Facebook for low-end devices or slow networks. Data Mode gives you more control over data usage and preferences, while Lite offers a faster, lighter experience.
- How much data can I save by using Facebook Data Mode?
-The amount of data you can save with Data Mode depends on several factors, such as your network connection, your settings, your usage patterns, and the type of content you view or upload. However, according to Facebook, Data Mode can help you save up to 50% of your data compared to regular mode.
-Does Facebook Data Mode affect my privacy or security?
-No, Data Mode does not affect your privacy or security in any way. Data Mode only compresses or limits some of the media files or activities that consume the most data on your device. It does not change or access any personal information or account settings. You can still use all the privacy and security features that Facebook offers in regular mode.
-Can I use Facebook Data Mode on other devices or platforms?
-
-Where can I get more information or support about Facebook Data Mode?
-If you need more information or support about Data Mode, you can visit the [Facebook Help Center] or the [Facebook Community Forum]. You can also contact Facebook directly through its [Contact Us] page or its [Feedback] page.
64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/BernardoOlisan/vqganclip/CLIP/setup.py b/spaces/BernardoOlisan/vqganclip/CLIP/setup.py
deleted file mode 100644
index c9ea7d0d2f3d2fcf66d6f6e2aa0eb1a97a524bb6..0000000000000000000000000000000000000000
--- a/spaces/BernardoOlisan/vqganclip/CLIP/setup.py
+++ /dev/null
@@ -1,21 +0,0 @@
-import os
-
-import pkg_resources
-from setuptools import setup, find_packages
-
-setup(
- name="clip",
- py_modules=["clip"],
- version="1.0",
- description="",
- author="OpenAI",
- packages=find_packages(exclude=["tests*"]),
- install_requires=[
- str(r)
- for r in pkg_resources.parse_requirements(
- open(os.path.join(os.path.dirname(__file__), "requirements.txt"))
- )
- ],
- include_package_data=True,
- extras_require={'dev': ['pytest']},
-)
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/docs/service.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/docs/service.py
deleted file mode 100644
index fa183bc831bf9324469c9093ecee099b44e0ada0..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/docs/service.py
+++ /dev/null
@@ -1,110 +0,0 @@
-# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"). You
-# may not use this file except in compliance with the License. A copy of
-# the License is located at
-#
-# http://aws.amazon.com/apache2.0/
-#
-# or in the "license" file accompanying this file. This file is
-# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
-# ANY KIND, either express or implied. See the License for the specific
-# language governing permissions and limitations under the License.
-from botocore.docs.bcdoc.restdoc import DocumentStructure
-from botocore.docs.client import ClientDocumenter, ClientExceptionsDocumenter
-from botocore.docs.paginator import PaginatorDocumenter
-from botocore.docs.waiter import WaiterDocumenter
-from botocore.exceptions import DataNotFoundError
-
-
-class ServiceDocumenter:
- def __init__(self, service_name, session, root_docs_path):
- self._session = session
- self._service_name = service_name
- self._root_docs_path = root_docs_path
-
- self._client = self._session.create_client(
- service_name,
- region_name='us-east-1',
- aws_access_key_id='foo',
- aws_secret_access_key='bar',
- )
- self._event_emitter = self._client.meta.events
-
- self.sections = [
- 'title',
- 'client-api',
- 'client-exceptions',
- 'paginator-api',
- 'waiter-api',
- ]
-
- def document_service(self):
- """Documents an entire service.
-
- :returns: The reStructured text of the documented service.
- """
- doc_structure = DocumentStructure(
- self._service_name, section_names=self.sections, target='html'
- )
- self.title(doc_structure.get_section('title'))
- self.client_api(doc_structure.get_section('client-api'))
- self.client_exceptions(doc_structure.get_section('client-exceptions'))
- self.paginator_api(doc_structure.get_section('paginator-api'))
- self.waiter_api(doc_structure.get_section('waiter-api'))
- return doc_structure.flush_structure()
-
- def title(self, section):
- section.style.h1(self._client.__class__.__name__)
- self._event_emitter.emit(
- f"docs.title.{self._service_name}", section=section
- )
-
- def table_of_contents(self, section):
- section.style.table_of_contents(title='Table of Contents', depth=2)
-
- def client_api(self, section):
- examples = None
- try:
- examples = self.get_examples(self._service_name)
- except DataNotFoundError:
- pass
-
- ClientDocumenter(
- self._client, self._root_docs_path, examples
- ).document_client(section)
-
- def client_exceptions(self, section):
- ClientExceptionsDocumenter(
- self._client, self._root_docs_path
- ).document_exceptions(section)
-
- def paginator_api(self, section):
- try:
- service_paginator_model = self._session.get_paginator_model(
- self._service_name
- )
- except DataNotFoundError:
- return
- if service_paginator_model._paginator_config:
- paginator_documenter = PaginatorDocumenter(
- self._client, service_paginator_model, self._root_docs_path
- )
- paginator_documenter.document_paginators(section)
-
- def waiter_api(self, section):
- if self._client.waiter_names:
- service_waiter_model = self._session.get_waiter_model(
- self._service_name
- )
- waiter_documenter = WaiterDocumenter(
- self._client, service_waiter_model, self._root_docs_path
- )
- waiter_documenter.document_waiters(section)
-
- def get_examples(self, service_name, api_version=None):
- loader = self._session.get_component('data_loader')
- examples = loader.load_service_model(
- service_name, 'examples-1', api_version
- )
- return examples['examples']
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/dateutil/parser/_parser.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/dateutil/parser/_parser.py
deleted file mode 100644
index 37d1663b2f72447800d9a553929e3de932244289..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/dateutil/parser/_parser.py
+++ /dev/null
@@ -1,1613 +0,0 @@
-# -*- coding: utf-8 -*-
-"""
-This module offers a generic date/time string parser which is able to parse
-most known formats to represent a date and/or time.
-
-This module attempts to be forgiving with regards to unlikely input formats,
-returning a datetime object even for dates which are ambiguous. If an element
-of a date/time stamp is omitted, the following rules are applied:
-
-- If AM or PM is left unspecified, a 24-hour clock is assumed, however, an hour
- on a 12-hour clock (``0 <= hour <= 12``) *must* be specified if AM or PM is
- specified.
-- If a time zone is omitted, a timezone-naive datetime is returned.
-
-If any other elements are missing, they are taken from the
-:class:`datetime.datetime` object passed to the parameter ``default``. If this
-results in a day number exceeding the valid number of days per month, the
-value falls back to the end of the month.
-
-Additional resources about date/time string formats can be found below:
-
-- `A summary of the international standard date and time notation
- `_
-- `W3C Date and Time Formats `_
-- `Time Formats (Planetary Rings Node) `_
-- `CPAN ParseDate module
- `_
-- `Java SimpleDateFormat Class
- `_
-"""
-from __future__ import unicode_literals
-
-import datetime
-import re
-import string
-import time
-import warnings
-
-from calendar import monthrange
-from io import StringIO
-
-import six
-from six import integer_types, text_type
-
-from decimal import Decimal
-
-from warnings import warn
-
-from .. import relativedelta
-from .. import tz
-
-__all__ = ["parse", "parserinfo", "ParserError"]
-
-
-# TODO: pandas.core.tools.datetimes imports this explicitly. Might be worth
-# making public and/or figuring out if there is something we can
-# take off their plate.
-class _timelex(object):
- # Fractional seconds are sometimes split by a comma
- _split_decimal = re.compile("([.,])")
-
- def __init__(self, instream):
- if isinstance(instream, (bytes, bytearray)):
- instream = instream.decode()
-
- if isinstance(instream, text_type):
- instream = StringIO(instream)
- elif getattr(instream, 'read', None) is None:
- raise TypeError('Parser must be a string or character stream, not '
- '{itype}'.format(itype=instream.__class__.__name__))
-
- self.instream = instream
- self.charstack = []
- self.tokenstack = []
- self.eof = False
-
- def get_token(self):
- """
- This function breaks the time string into lexical units (tokens), which
- can be parsed by the parser. Lexical units are demarcated by changes in
- the character set, so any continuous string of letters is considered
- one unit, any continuous string of numbers is considered one unit.
-
- The main complication arises from the fact that dots ('.') can be used
- both as separators (e.g. "Sep.20.2009") or decimal points (e.g.
- "4:30:21.447"). As such, it is necessary to read the full context of
- any dot-separated strings before breaking it into tokens; as such, this
- function maintains a "token stack", for when the ambiguous context
- demands that multiple tokens be parsed at once.
- """
- if self.tokenstack:
- return self.tokenstack.pop(0)
-
- seenletters = False
- token = None
- state = None
-
- while not self.eof:
- # We only realize that we've reached the end of a token when we
- # find a character that's not part of the current token - since
- # that character may be part of the next token, it's stored in the
- # charstack.
- if self.charstack:
- nextchar = self.charstack.pop(0)
- else:
- nextchar = self.instream.read(1)
- while nextchar == '\x00':
- nextchar = self.instream.read(1)
-
- if not nextchar:
- self.eof = True
- break
- elif not state:
- # First character of the token - determines if we're starting
- # to parse a word, a number or something else.
- token = nextchar
- if self.isword(nextchar):
- state = 'a'
- elif self.isnum(nextchar):
- state = '0'
- elif self.isspace(nextchar):
- token = ' '
- break # emit token
- else:
- break # emit token
- elif state == 'a':
- # If we've already started reading a word, we keep reading
- # letters until we find something that's not part of a word.
- seenletters = True
- if self.isword(nextchar):
- token += nextchar
- elif nextchar == '.':
- token += nextchar
- state = 'a.'
- else:
- self.charstack.append(nextchar)
- break # emit token
- elif state == '0':
- # If we've already started reading a number, we keep reading
- # numbers until we find something that doesn't fit.
- if self.isnum(nextchar):
- token += nextchar
- elif nextchar == '.' or (nextchar == ',' and len(token) >= 2):
- token += nextchar
- state = '0.'
- else:
- self.charstack.append(nextchar)
- break # emit token
- elif state == 'a.':
- # If we've seen some letters and a dot separator, continue
- # parsing, and the tokens will be broken up later.
- seenletters = True
- if nextchar == '.' or self.isword(nextchar):
- token += nextchar
- elif self.isnum(nextchar) and token[-1] == '.':
- token += nextchar
- state = '0.'
- else:
- self.charstack.append(nextchar)
- break # emit token
- elif state == '0.':
- # If we've seen at least one dot separator, keep going, we'll
- # break up the tokens later.
- if nextchar == '.' or self.isnum(nextchar):
- token += nextchar
- elif self.isword(nextchar) and token[-1] == '.':
- token += nextchar
- state = 'a.'
- else:
- self.charstack.append(nextchar)
- break # emit token
-
- if (state in ('a.', '0.') and (seenletters or token.count('.') > 1 or
- token[-1] in '.,')):
- l = self._split_decimal.split(token)
- token = l[0]
- for tok in l[1:]:
- if tok:
- self.tokenstack.append(tok)
-
- if state == '0.' and token.count('.') == 0:
- token = token.replace(',', '.')
-
- return token
-
- def __iter__(self):
- return self
-
- def __next__(self):
- token = self.get_token()
- if token is None:
- raise StopIteration
-
- return token
-
- def next(self):
- return self.__next__() # Python 2.x support
-
- @classmethod
- def split(cls, s):
- return list(cls(s))
-
- @classmethod
- def isword(cls, nextchar):
- """ Whether or not the next character is part of a word """
- return nextchar.isalpha()
-
- @classmethod
- def isnum(cls, nextchar):
- """ Whether the next character is part of a number """
- return nextchar.isdigit()
-
- @classmethod
- def isspace(cls, nextchar):
- """ Whether the next character is whitespace """
- return nextchar.isspace()
-
-
-class _resultbase(object):
-
- def __init__(self):
- for attr in self.__slots__:
- setattr(self, attr, None)
-
- def _repr(self, classname):
- l = []
- for attr in self.__slots__:
- value = getattr(self, attr)
- if value is not None:
- l.append("%s=%s" % (attr, repr(value)))
- return "%s(%s)" % (classname, ", ".join(l))
-
- def __len__(self):
- return (sum(getattr(self, attr) is not None
- for attr in self.__slots__))
-
- def __repr__(self):
- return self._repr(self.__class__.__name__)
-
-
-class parserinfo(object):
- """
- Class which handles what inputs are accepted. Subclass this to customize
- the language and acceptable values for each parameter.
-
- :param dayfirst:
- Whether to interpret the first value in an ambiguous 3-integer date
- (e.g. 01/05/09) as the day (``True``) or month (``False``). If
- ``yearfirst`` is set to ``True``, this distinguishes between YDM
- and YMD. Default is ``False``.
-
- :param yearfirst:
- Whether to interpret the first value in an ambiguous 3-integer date
- (e.g. 01/05/09) as the year. If ``True``, the first number is taken
- to be the year, otherwise the last number is taken to be the year.
- Default is ``False``.
- """
-
- # m from a.m/p.m, t from ISO T separator
- JUMP = [" ", ".", ",", ";", "-", "/", "'",
- "at", "on", "and", "ad", "m", "t", "of",
- "st", "nd", "rd", "th"]
-
- WEEKDAYS = [("Mon", "Monday"),
- ("Tue", "Tuesday"), # TODO: "Tues"
- ("Wed", "Wednesday"),
- ("Thu", "Thursday"), # TODO: "Thurs"
- ("Fri", "Friday"),
- ("Sat", "Saturday"),
- ("Sun", "Sunday")]
- MONTHS = [("Jan", "January"),
- ("Feb", "February"), # TODO: "Febr"
- ("Mar", "March"),
- ("Apr", "April"),
- ("May", "May"),
- ("Jun", "June"),
- ("Jul", "July"),
- ("Aug", "August"),
- ("Sep", "Sept", "September"),
- ("Oct", "October"),
- ("Nov", "November"),
- ("Dec", "December")]
- HMS = [("h", "hour", "hours"),
- ("m", "minute", "minutes"),
- ("s", "second", "seconds")]
- AMPM = [("am", "a"),
- ("pm", "p")]
- UTCZONE = ["UTC", "GMT", "Z", "z"]
- PERTAIN = ["of"]
- TZOFFSET = {}
- # TODO: ERA = ["AD", "BC", "CE", "BCE", "Stardate",
- # "Anno Domini", "Year of Our Lord"]
-
- def __init__(self, dayfirst=False, yearfirst=False):
- self._jump = self._convert(self.JUMP)
- self._weekdays = self._convert(self.WEEKDAYS)
- self._months = self._convert(self.MONTHS)
- self._hms = self._convert(self.HMS)
- self._ampm = self._convert(self.AMPM)
- self._utczone = self._convert(self.UTCZONE)
- self._pertain = self._convert(self.PERTAIN)
-
- self.dayfirst = dayfirst
- self.yearfirst = yearfirst
-
- self._year = time.localtime().tm_year
- self._century = self._year // 100 * 100
-
- def _convert(self, lst):
- dct = {}
- for i, v in enumerate(lst):
- if isinstance(v, tuple):
- for v in v:
- dct[v.lower()] = i
- else:
- dct[v.lower()] = i
- return dct
-
- def jump(self, name):
- return name.lower() in self._jump
-
- def weekday(self, name):
- try:
- return self._weekdays[name.lower()]
- except KeyError:
- pass
- return None
-
- def month(self, name):
- try:
- return self._months[name.lower()] + 1
- except KeyError:
- pass
- return None
-
- def hms(self, name):
- try:
- return self._hms[name.lower()]
- except KeyError:
- return None
-
- def ampm(self, name):
- try:
- return self._ampm[name.lower()]
- except KeyError:
- return None
-
- def pertain(self, name):
- return name.lower() in self._pertain
-
- def utczone(self, name):
- return name.lower() in self._utczone
-
- def tzoffset(self, name):
- if name in self._utczone:
- return 0
-
- return self.TZOFFSET.get(name)
-
- def convertyear(self, year, century_specified=False):
- """
- Converts two-digit years to year within [-50, 49]
- range of self._year (current local time)
- """
-
- # Function contract is that the year is always positive
- assert year >= 0
-
- if year < 100 and not century_specified:
- # assume current century to start
- year += self._century
-
- if year >= self._year + 50: # if too far in future
- year -= 100
- elif year < self._year - 50: # if too far in past
- year += 100
-
- return year
-
- def validate(self, res):
- # move to info
- if res.year is not None:
- res.year = self.convertyear(res.year, res.century_specified)
-
- if ((res.tzoffset == 0 and not res.tzname) or
- (res.tzname == 'Z' or res.tzname == 'z')):
- res.tzname = "UTC"
- res.tzoffset = 0
- elif res.tzoffset != 0 and res.tzname and self.utczone(res.tzname):
- res.tzoffset = 0
- return True
-
-
-class _ymd(list):
- def __init__(self, *args, **kwargs):
- super(self.__class__, self).__init__(*args, **kwargs)
- self.century_specified = False
- self.dstridx = None
- self.mstridx = None
- self.ystridx = None
-
- @property
- def has_year(self):
- return self.ystridx is not None
-
- @property
- def has_month(self):
- return self.mstridx is not None
-
- @property
- def has_day(self):
- return self.dstridx is not None
-
- def could_be_day(self, value):
- if self.has_day:
- return False
- elif not self.has_month:
- return 1 <= value <= 31
- elif not self.has_year:
- # Be permissive, assume leap year
- month = self[self.mstridx]
- return 1 <= value <= monthrange(2000, month)[1]
- else:
- month = self[self.mstridx]
- year = self[self.ystridx]
- return 1 <= value <= monthrange(year, month)[1]
-
- def append(self, val, label=None):
- if hasattr(val, '__len__'):
- if val.isdigit() and len(val) > 2:
- self.century_specified = True
- if label not in [None, 'Y']: # pragma: no cover
- raise ValueError(label)
- label = 'Y'
- elif val > 100:
- self.century_specified = True
- if label not in [None, 'Y']: # pragma: no cover
- raise ValueError(label)
- label = 'Y'
-
- super(self.__class__, self).append(int(val))
-
- if label == 'M':
- if self.has_month:
- raise ValueError('Month is already set')
- self.mstridx = len(self) - 1
- elif label == 'D':
- if self.has_day:
- raise ValueError('Day is already set')
- self.dstridx = len(self) - 1
- elif label == 'Y':
- if self.has_year:
- raise ValueError('Year is already set')
- self.ystridx = len(self) - 1
-
- def _resolve_from_stridxs(self, strids):
- """
- Try to resolve the identities of year/month/day elements using
- ystridx, mstridx, and dstridx, if enough of these are specified.
- """
- if len(self) == 3 and len(strids) == 2:
- # we can back out the remaining stridx value
- missing = [x for x in range(3) if x not in strids.values()]
- key = [x for x in ['y', 'm', 'd'] if x not in strids]
- assert len(missing) == len(key) == 1
- key = key[0]
- val = missing[0]
- strids[key] = val
-
- assert len(self) == len(strids) # otherwise this should not be called
- out = {key: self[strids[key]] for key in strids}
- return (out.get('y'), out.get('m'), out.get('d'))
-
- def resolve_ymd(self, yearfirst, dayfirst):
- len_ymd = len(self)
- year, month, day = (None, None, None)
-
- strids = (('y', self.ystridx),
- ('m', self.mstridx),
- ('d', self.dstridx))
-
- strids = {key: val for key, val in strids if val is not None}
- if (len(self) == len(strids) > 0 or
- (len(self) == 3 and len(strids) == 2)):
- return self._resolve_from_stridxs(strids)
-
- mstridx = self.mstridx
-
- if len_ymd > 3:
- raise ValueError("More than three YMD values")
- elif len_ymd == 1 or (mstridx is not None and len_ymd == 2):
- # One member, or two members with a month string
- if mstridx is not None:
- month = self[mstridx]
- # since mstridx is 0 or 1, self[mstridx-1] always
- # looks up the other element
- other = self[mstridx - 1]
- else:
- other = self[0]
-
- if len_ymd > 1 or mstridx is None:
- if other > 31:
- year = other
- else:
- day = other
-
- elif len_ymd == 2:
- # Two members with numbers
- if self[0] > 31:
- # 99-01
- year, month = self
- elif self[1] > 31:
- # 01-99
- month, year = self
- elif dayfirst and self[1] <= 12:
- # 13-01
- day, month = self
- else:
- # 01-13
- month, day = self
-
- elif len_ymd == 3:
- # Three members
- if mstridx == 0:
- if self[1] > 31:
- # Apr-2003-25
- month, year, day = self
- else:
- month, day, year = self
- elif mstridx == 1:
- if self[0] > 31 or (yearfirst and self[2] <= 31):
- # 99-Jan-01
- year, month, day = self
- else:
- # 01-Jan-01
- # Give precedence to day-first, since
- # two-digit years is usually hand-written.
- day, month, year = self
-
- elif mstridx == 2:
- # WTF!?
- if self[1] > 31:
- # 01-99-Jan
- day, year, month = self
- else:
- # 99-01-Jan
- year, day, month = self
-
- else:
- if (self[0] > 31 or
- self.ystridx == 0 or
- (yearfirst and self[1] <= 12 and self[2] <= 31)):
- # 99-01-01
- if dayfirst and self[2] <= 12:
- year, day, month = self
- else:
- year, month, day = self
- elif self[0] > 12 or (dayfirst and self[1] <= 12):
- # 13-01-01
- day, month, year = self
- else:
- # 01-13-01
- month, day, year = self
-
- return year, month, day
-
-
-class parser(object):
- def __init__(self, info=None):
- self.info = info or parserinfo()
-
- def parse(self, timestr, default=None,
- ignoretz=False, tzinfos=None, **kwargs):
- """
- Parse the date/time string into a :class:`datetime.datetime` object.
-
- :param timestr:
- Any date/time string using the supported formats.
-
- :param default:
- The default datetime object, if this is a datetime object and not
- ``None``, elements specified in ``timestr`` replace elements in the
- default object.
-
- :param ignoretz:
- If set ``True``, time zones in parsed strings are ignored and a
- naive :class:`datetime.datetime` object is returned.
-
- :param tzinfos:
- Additional time zone names / aliases which may be present in the
- string. This argument maps time zone names (and optionally offsets
- from those time zones) to time zones. This parameter can be a
- dictionary with timezone aliases mapping time zone names to time
- zones or a function taking two parameters (``tzname`` and
- ``tzoffset``) and returning a time zone.
-
- The timezones to which the names are mapped can be an integer
- offset from UTC in seconds or a :class:`tzinfo` object.
-
- .. doctest::
- :options: +NORMALIZE_WHITESPACE
-
- >>> from dateutil.parser import parse
- >>> from dateutil.tz import gettz
- >>> tzinfos = {"BRST": -7200, "CST": gettz("America/Chicago")}
- >>> parse("2012-01-19 17:21:00 BRST", tzinfos=tzinfos)
- datetime.datetime(2012, 1, 19, 17, 21, tzinfo=tzoffset(u'BRST', -7200))
- >>> parse("2012-01-19 17:21:00 CST", tzinfos=tzinfos)
- datetime.datetime(2012, 1, 19, 17, 21,
- tzinfo=tzfile('/usr/share/zoneinfo/America/Chicago'))
-
- This parameter is ignored if ``ignoretz`` is set.
-
- :param \\*\\*kwargs:
- Keyword arguments as passed to ``_parse()``.
-
- :return:
- Returns a :class:`datetime.datetime` object or, if the
- ``fuzzy_with_tokens`` option is ``True``, returns a tuple, the
- first element being a :class:`datetime.datetime` object, the second
- a tuple containing the fuzzy tokens.
-
- :raises ParserError:
- Raised for invalid or unknown string format, if the provided
- :class:`tzinfo` is not in a valid format, or if an invalid date
- would be created.
-
- :raises TypeError:
- Raised for non-string or character stream input.
-
- :raises OverflowError:
- Raised if the parsed date exceeds the largest valid C integer on
- your system.
- """
-
- if default is None:
- default = datetime.datetime.now().replace(hour=0, minute=0,
- second=0, microsecond=0)
-
- res, skipped_tokens = self._parse(timestr, **kwargs)
-
- if res is None:
- raise ParserError("Unknown string format: %s", timestr)
-
- if len(res) == 0:
- raise ParserError("String does not contain a date: %s", timestr)
-
- try:
- ret = self._build_naive(res, default)
- except ValueError as e:
- six.raise_from(ParserError(str(e) + ": %s", timestr), e)
-
- if not ignoretz:
- ret = self._build_tzaware(ret, res, tzinfos)
-
- if kwargs.get('fuzzy_with_tokens', False):
- return ret, skipped_tokens
- else:
- return ret
-
- class _result(_resultbase):
- __slots__ = ["year", "month", "day", "weekday",
- "hour", "minute", "second", "microsecond",
- "tzname", "tzoffset", "ampm","any_unused_tokens"]
-
- def _parse(self, timestr, dayfirst=None, yearfirst=None, fuzzy=False,
- fuzzy_with_tokens=False):
- """
- Private method which performs the heavy lifting of parsing, called from
- ``parse()``, which passes on its ``kwargs`` to this function.
-
- :param timestr:
- The string to parse.
-
- :param dayfirst:
- Whether to interpret the first value in an ambiguous 3-integer date
- (e.g. 01/05/09) as the day (``True``) or month (``False``). If
- ``yearfirst`` is set to ``True``, this distinguishes between YDM
- and YMD. If set to ``None``, this value is retrieved from the
- current :class:`parserinfo` object (which itself defaults to
- ``False``).
-
- :param yearfirst:
- Whether to interpret the first value in an ambiguous 3-integer date
- (e.g. 01/05/09) as the year. If ``True``, the first number is taken
- to be the year, otherwise the last number is taken to be the year.
- If this is set to ``None``, the value is retrieved from the current
- :class:`parserinfo` object (which itself defaults to ``False``).
-
- :param fuzzy:
- Whether to allow fuzzy parsing, allowing for string like "Today is
- January 1, 2047 at 8:21:00AM".
-
- :param fuzzy_with_tokens:
- If ``True``, ``fuzzy`` is automatically set to True, and the parser
- will return a tuple where the first element is the parsed
- :class:`datetime.datetime` datetimestamp and the second element is
- a tuple containing the portions of the string which were ignored:
-
- .. doctest::
-
- >>> from dateutil.parser import parse
- >>> parse("Today is January 1, 2047 at 8:21:00AM", fuzzy_with_tokens=True)
- (datetime.datetime(2047, 1, 1, 8, 21), (u'Today is ', u' ', u'at '))
-
- """
- if fuzzy_with_tokens:
- fuzzy = True
-
- info = self.info
-
- if dayfirst is None:
- dayfirst = info.dayfirst
-
- if yearfirst is None:
- yearfirst = info.yearfirst
-
- res = self._result()
- l = _timelex.split(timestr) # Splits the timestr into tokens
-
- skipped_idxs = []
-
- # year/month/day list
- ymd = _ymd()
-
- len_l = len(l)
- i = 0
- try:
- while i < len_l:
-
- # Check if it's a number
- value_repr = l[i]
- try:
- value = float(value_repr)
- except ValueError:
- value = None
-
- if value is not None:
- # Numeric token
- i = self._parse_numeric_token(l, i, info, ymd, res, fuzzy)
-
- # Check weekday
- elif info.weekday(l[i]) is not None:
- value = info.weekday(l[i])
- res.weekday = value
-
- # Check month name
- elif info.month(l[i]) is not None:
- value = info.month(l[i])
- ymd.append(value, 'M')
-
- if i + 1 < len_l:
- if l[i + 1] in ('-', '/'):
- # Jan-01[-99]
- sep = l[i + 1]
- ymd.append(l[i + 2])
-
- if i + 3 < len_l and l[i + 3] == sep:
- # Jan-01-99
- ymd.append(l[i + 4])
- i += 2
-
- i += 2
-
- elif (i + 4 < len_l and l[i + 1] == l[i + 3] == ' ' and
- info.pertain(l[i + 2])):
- # Jan of 01
- # In this case, 01 is clearly year
- if l[i + 4].isdigit():
- # Convert it here to become unambiguous
- value = int(l[i + 4])
- year = str(info.convertyear(value))
- ymd.append(year, 'Y')
- else:
- # Wrong guess
- pass
- # TODO: not hit in tests
- i += 4
-
- # Check am/pm
- elif info.ampm(l[i]) is not None:
- value = info.ampm(l[i])
- val_is_ampm = self._ampm_valid(res.hour, res.ampm, fuzzy)
-
- if val_is_ampm:
- res.hour = self._adjust_ampm(res.hour, value)
- res.ampm = value
-
- elif fuzzy:
- skipped_idxs.append(i)
-
- # Check for a timezone name
- elif self._could_be_tzname(res.hour, res.tzname, res.tzoffset, l[i]):
- res.tzname = l[i]
- res.tzoffset = info.tzoffset(res.tzname)
-
- # Check for something like GMT+3, or BRST+3. Notice
- # that it doesn't mean "I am 3 hours after GMT", but
- # "my time +3 is GMT". If found, we reverse the
- # logic so that timezone parsing code will get it
- # right.
- if i + 1 < len_l and l[i + 1] in ('+', '-'):
- l[i + 1] = ('+', '-')[l[i + 1] == '+']
- res.tzoffset = None
- if info.utczone(res.tzname):
- # With something like GMT+3, the timezone
- # is *not* GMT.
- res.tzname = None
-
- # Check for a numbered timezone
- elif res.hour is not None and l[i] in ('+', '-'):
- signal = (-1, 1)[l[i] == '+']
- len_li = len(l[i + 1])
-
- # TODO: check that l[i + 1] is integer?
- if len_li == 4:
- # -0300
- hour_offset = int(l[i + 1][:2])
- min_offset = int(l[i + 1][2:])
- elif i + 2 < len_l and l[i + 2] == ':':
- # -03:00
- hour_offset = int(l[i + 1])
- min_offset = int(l[i + 3]) # TODO: Check that l[i+3] is minute-like?
- i += 2
- elif len_li <= 2:
- # -[0]3
- hour_offset = int(l[i + 1][:2])
- min_offset = 0
- else:
- raise ValueError(timestr)
-
- res.tzoffset = signal * (hour_offset * 3600 + min_offset * 60)
-
- # Look for a timezone name between parenthesis
- if (i + 5 < len_l and
- info.jump(l[i + 2]) and l[i + 3] == '(' and
- l[i + 5] == ')' and
- 3 <= len(l[i + 4]) and
- self._could_be_tzname(res.hour, res.tzname,
- None, l[i + 4])):
- # -0300 (BRST)
- res.tzname = l[i + 4]
- i += 4
-
- i += 1
-
- # Check jumps
- elif not (info.jump(l[i]) or fuzzy):
- raise ValueError(timestr)
-
- else:
- skipped_idxs.append(i)
- i += 1
-
- # Process year/month/day
- year, month, day = ymd.resolve_ymd(yearfirst, dayfirst)
-
- res.century_specified = ymd.century_specified
- res.year = year
- res.month = month
- res.day = day
-
- except (IndexError, ValueError):
- return None, None
-
- if not info.validate(res):
- return None, None
-
- if fuzzy_with_tokens:
- skipped_tokens = self._recombine_skipped(l, skipped_idxs)
- return res, tuple(skipped_tokens)
- else:
- return res, None
-
- def _parse_numeric_token(self, tokens, idx, info, ymd, res, fuzzy):
- # Token is a number
- value_repr = tokens[idx]
- try:
- value = self._to_decimal(value_repr)
- except Exception as e:
- six.raise_from(ValueError('Unknown numeric token'), e)
-
- len_li = len(value_repr)
-
- len_l = len(tokens)
-
- if (len(ymd) == 3 and len_li in (2, 4) and
- res.hour is None and
- (idx + 1 >= len_l or
- (tokens[idx + 1] != ':' and
- info.hms(tokens[idx + 1]) is None))):
- # 19990101T23[59]
- s = tokens[idx]
- res.hour = int(s[:2])
-
- if len_li == 4:
- res.minute = int(s[2:])
-
- elif len_li == 6 or (len_li > 6 and tokens[idx].find('.') == 6):
- # YYMMDD or HHMMSS[.ss]
- s = tokens[idx]
-
- if not ymd and '.' not in tokens[idx]:
- ymd.append(s[:2])
- ymd.append(s[2:4])
- ymd.append(s[4:])
- else:
- # 19990101T235959[.59]
-
- # TODO: Check if res attributes already set.
- res.hour = int(s[:2])
- res.minute = int(s[2:4])
- res.second, res.microsecond = self._parsems(s[4:])
-
- elif len_li in (8, 12, 14):
- # YYYYMMDD
- s = tokens[idx]
- ymd.append(s[:4], 'Y')
- ymd.append(s[4:6])
- ymd.append(s[6:8])
-
- if len_li > 8:
- res.hour = int(s[8:10])
- res.minute = int(s[10:12])
-
- if len_li > 12:
- res.second = int(s[12:])
-
- elif self._find_hms_idx(idx, tokens, info, allow_jump=True) is not None:
- # HH[ ]h or MM[ ]m or SS[.ss][ ]s
- hms_idx = self._find_hms_idx(idx, tokens, info, allow_jump=True)
- (idx, hms) = self._parse_hms(idx, tokens, info, hms_idx)
- if hms is not None:
- # TODO: checking that hour/minute/second are not
- # already set?
- self._assign_hms(res, value_repr, hms)
-
- elif idx + 2 < len_l and tokens[idx + 1] == ':':
- # HH:MM[:SS[.ss]]
- res.hour = int(value)
- value = self._to_decimal(tokens[idx + 2]) # TODO: try/except for this?
- (res.minute, res.second) = self._parse_min_sec(value)
-
- if idx + 4 < len_l and tokens[idx + 3] == ':':
- res.second, res.microsecond = self._parsems(tokens[idx + 4])
-
- idx += 2
-
- idx += 2
-
- elif idx + 1 < len_l and tokens[idx + 1] in ('-', '/', '.'):
- sep = tokens[idx + 1]
- ymd.append(value_repr)
-
- if idx + 2 < len_l and not info.jump(tokens[idx + 2]):
- if tokens[idx + 2].isdigit():
- # 01-01[-01]
- ymd.append(tokens[idx + 2])
- else:
- # 01-Jan[-01]
- value = info.month(tokens[idx + 2])
-
- if value is not None:
- ymd.append(value, 'M')
- else:
- raise ValueError()
-
- if idx + 3 < len_l and tokens[idx + 3] == sep:
- # We have three members
- value = info.month(tokens[idx + 4])
-
- if value is not None:
- ymd.append(value, 'M')
- else:
- ymd.append(tokens[idx + 4])
- idx += 2
-
- idx += 1
- idx += 1
-
- elif idx + 1 >= len_l or info.jump(tokens[idx + 1]):
- if idx + 2 < len_l and info.ampm(tokens[idx + 2]) is not None:
- # 12 am
- hour = int(value)
- res.hour = self._adjust_ampm(hour, info.ampm(tokens[idx + 2]))
- idx += 1
- else:
- # Year, month or day
- ymd.append(value)
- idx += 1
-
- elif info.ampm(tokens[idx + 1]) is not None and (0 <= value < 24):
- # 12am
- hour = int(value)
- res.hour = self._adjust_ampm(hour, info.ampm(tokens[idx + 1]))
- idx += 1
-
- elif ymd.could_be_day(value):
- ymd.append(value)
-
- elif not fuzzy:
- raise ValueError()
-
- return idx
-
- def _find_hms_idx(self, idx, tokens, info, allow_jump):
- len_l = len(tokens)
-
- if idx+1 < len_l and info.hms(tokens[idx+1]) is not None:
-            # There is an "h", "m", or "s" label following this token. We
-            # assign the upcoming label to the current token,
-            # e.g. the "12" in "12h".
- hms_idx = idx + 1
-
- elif (allow_jump and idx+2 < len_l and tokens[idx+1] == ' ' and
- info.hms(tokens[idx+2]) is not None):
- # There is a space and then an "h", "m", or "s" label.
- # e.g. the "12" in "12 h"
- hms_idx = idx + 2
-
- elif idx > 0 and info.hms(tokens[idx-1]) is not None:
-            # There is an "h", "m", or "s" preceding this token. Since neither
- # of the previous cases was hit, there is no label following this
- # token, so we use the previous label.
- # e.g. the "04" in "12h04"
- hms_idx = idx-1
-
- elif (1 < idx == len_l-1 and tokens[idx-1] == ' ' and
- info.hms(tokens[idx-2]) is not None):
- # If we are looking at the final token, we allow for a
- # backward-looking check to skip over a space.
- # TODO: Are we sure this is the right condition here?
- hms_idx = idx - 2
-
- else:
- hms_idx = None
-
- return hms_idx
-
- def _assign_hms(self, res, value_repr, hms):
- # See GH issue #427, fixing float rounding
- value = self._to_decimal(value_repr)
-
- if hms == 0:
- # Hour
- res.hour = int(value)
- if value % 1:
- res.minute = int(60*(value % 1))
-
- elif hms == 1:
- (res.minute, res.second) = self._parse_min_sec(value)
-
- elif hms == 2:
- (res.second, res.microsecond) = self._parsems(value_repr)
-
- def _could_be_tzname(self, hour, tzname, tzoffset, token):
- return (hour is not None and
- tzname is None and
- tzoffset is None and
- len(token) <= 5 and
- (all(x in string.ascii_uppercase for x in token)
- or token in self.info.UTCZONE))
-
- def _ampm_valid(self, hour, ampm, fuzzy):
- """
- For fuzzy parsing, 'a' or 'am' (both valid English words)
- may erroneously trigger the AM/PM flag. Deal with that
- here.
- """
- val_is_ampm = True
-
- # If there's already an AM/PM flag, this one isn't one.
- if fuzzy and ampm is not None:
- val_is_ampm = False
-
- # If AM/PM is found and hour is not, raise a ValueError
- if hour is None:
- if fuzzy:
- val_is_ampm = False
- else:
- raise ValueError('No hour specified with AM or PM flag.')
- elif not 0 <= hour <= 12:
- # If AM/PM is found, it's a 12 hour clock, so raise
- # an error for invalid range
- if fuzzy:
- val_is_ampm = False
- else:
- raise ValueError('Invalid hour specified for 12-hour clock.')
-
- return val_is_ampm
-
- def _adjust_ampm(self, hour, ampm):
- if hour < 12 and ampm == 1:
- hour += 12
- elif hour == 12 and ampm == 0:
- hour = 0
- return hour
-
- def _parse_min_sec(self, value):
- # TODO: Every usage of this function sets res.second to the return
- # value. Are there any cases where second will be returned as None and
- # we *don't* want to set res.second = None?
- minute = int(value)
- second = None
-
- sec_remainder = value % 1
- if sec_remainder:
- second = int(60 * sec_remainder)
- return (minute, second)
-
- def _parse_hms(self, idx, tokens, info, hms_idx):
- # TODO: Is this going to admit a lot of false-positives for when we
- # just happen to have digits and "h", "m" or "s" characters in non-date
- # text? I guess hex hashes won't have that problem, but there's plenty
- # of random junk out there.
- if hms_idx is None:
- hms = None
- new_idx = idx
- elif hms_idx > idx:
- hms = info.hms(tokens[hms_idx])
- new_idx = hms_idx
- else:
- # Looking backwards, increment one.
- hms = info.hms(tokens[hms_idx]) + 1
- new_idx = idx
-
- return (new_idx, hms)
-
- # ------------------------------------------------------------------
- # Handling for individual tokens. These are kept as methods instead
- # of functions for the sake of customizability via subclassing.
-
- def _parsems(self, value):
-        """Parse an I[.F] seconds value into (seconds, microseconds)."""
- if "." not in value:
- return int(value), 0
- else:
- i, f = value.split(".")
- return int(i), int(f.ljust(6, "0")[:6])
-
- def _to_decimal(self, val):
- try:
- decimal_value = Decimal(val)
- # See GH 662, edge case, infinite value should not be converted
- # via `_to_decimal`
- if not decimal_value.is_finite():
- raise ValueError("Converted decimal value is infinite or NaN")
- except Exception as e:
- msg = "Could not convert %s to decimal" % val
- six.raise_from(ValueError(msg), e)
- else:
- return decimal_value
-
- # ------------------------------------------------------------------
- # Post-Parsing construction of datetime output. These are kept as
- # methods instead of functions for the sake of customizability via
- # subclassing.
-
- def _build_tzinfo(self, tzinfos, tzname, tzoffset):
- if callable(tzinfos):
- tzdata = tzinfos(tzname, tzoffset)
- else:
- tzdata = tzinfos.get(tzname)
-        # Handle the case where tzinfos is passed an option that returns None,
-        # e.g. tzinfos = {'BRST': None}
- if isinstance(tzdata, datetime.tzinfo) or tzdata is None:
- tzinfo = tzdata
- elif isinstance(tzdata, text_type):
- tzinfo = tz.tzstr(tzdata)
- elif isinstance(tzdata, integer_types):
- tzinfo = tz.tzoffset(tzname, tzdata)
- else:
- raise TypeError("Offset must be tzinfo subclass, tz string, "
- "or int offset.")
- return tzinfo
-
- def _build_tzaware(self, naive, res, tzinfos):
- if (callable(tzinfos) or (tzinfos and res.tzname in tzinfos)):
- tzinfo = self._build_tzinfo(tzinfos, res.tzname, res.tzoffset)
- aware = naive.replace(tzinfo=tzinfo)
- aware = self._assign_tzname(aware, res.tzname)
-
- elif res.tzname and res.tzname in time.tzname:
- aware = naive.replace(tzinfo=tz.tzlocal())
-
- # Handle ambiguous local datetime
- aware = self._assign_tzname(aware, res.tzname)
-
- # This is mostly relevant for winter GMT zones parsed in the UK
- if (aware.tzname() != res.tzname and
- res.tzname in self.info.UTCZONE):
- aware = aware.replace(tzinfo=tz.UTC)
-
- elif res.tzoffset == 0:
- aware = naive.replace(tzinfo=tz.UTC)
-
- elif res.tzoffset:
- aware = naive.replace(tzinfo=tz.tzoffset(res.tzname, res.tzoffset))
-
- elif not res.tzname and not res.tzoffset:
- # i.e. no timezone information was found.
- aware = naive
-
- elif res.tzname:
- # tz-like string was parsed but we don't know what to do
- # with it
- warnings.warn("tzname {tzname} identified but not understood. "
- "Pass `tzinfos` argument in order to correctly "
- "return a timezone-aware datetime. In a future "
- "version, this will raise an "
- "exception.".format(tzname=res.tzname),
- category=UnknownTimezoneWarning)
- aware = naive
-
- return aware
-
- def _build_naive(self, res, default):
- repl = {}
- for attr in ("year", "month", "day", "hour",
- "minute", "second", "microsecond"):
- value = getattr(res, attr)
- if value is not None:
- repl[attr] = value
-
- if 'day' not in repl:
- # If the default day exceeds the last day of the month, fall back
- # to the end of the month.
- cyear = default.year if res.year is None else res.year
- cmonth = default.month if res.month is None else res.month
- cday = default.day if res.day is None else res.day
-
- if cday > monthrange(cyear, cmonth)[1]:
- repl['day'] = monthrange(cyear, cmonth)[1]
-
- naive = default.replace(**repl)
-
- if res.weekday is not None and not res.day:
- naive = naive + relativedelta.relativedelta(weekday=res.weekday)
-
- return naive
-
- def _assign_tzname(self, dt, tzname):
- if dt.tzname() != tzname:
- new_dt = tz.enfold(dt, fold=1)
- if new_dt.tzname() == tzname:
- return new_dt
-
- return dt
-
- def _recombine_skipped(self, tokens, skipped_idxs):
- """
- >>> tokens = ["foo", " ", "bar", " ", "19June2000", "baz"]
- >>> skipped_idxs = [0, 1, 2, 5]
- >>> _recombine_skipped(tokens, skipped_idxs)
- ["foo bar", "baz"]
- """
- skipped_tokens = []
- for i, idx in enumerate(sorted(skipped_idxs)):
- if i > 0 and idx - 1 == skipped_idxs[i - 1]:
- skipped_tokens[-1] = skipped_tokens[-1] + tokens[idx]
- else:
- skipped_tokens.append(tokens[idx])
-
- return skipped_tokens
-
-
-DEFAULTPARSER = parser()
-
-
-def parse(timestr, parserinfo=None, **kwargs):
- """
-
- Parse a string in one of the supported formats, using the
- ``parserinfo`` parameters.
-
- :param timestr:
- A string containing a date/time stamp.
-
- :param parserinfo:
- A :class:`parserinfo` object containing parameters for the parser.
- If ``None``, the default arguments to the :class:`parserinfo`
- constructor are used.
-
- The ``**kwargs`` parameter takes the following keyword arguments:
-
- :param default:
-        The default datetime object; if this is a datetime object and not
- ``None``, elements specified in ``timestr`` replace elements in the
- default object.
-
- :param ignoretz:
- If set ``True``, time zones in parsed strings are ignored and a naive
- :class:`datetime` object is returned.
-
- :param tzinfos:
- Additional time zone names / aliases which may be present in the
- string. This argument maps time zone names (and optionally offsets
- from those time zones) to time zones. This parameter can be a
- dictionary with timezone aliases mapping time zone names to time
- zones or a function taking two parameters (``tzname`` and
- ``tzoffset``) and returning a time zone.
-
- The timezones to which the names are mapped can be an integer
- offset from UTC in seconds or a :class:`tzinfo` object.
-
- .. doctest::
- :options: +NORMALIZE_WHITESPACE
-
- >>> from dateutil.parser import parse
- >>> from dateutil.tz import gettz
- >>> tzinfos = {"BRST": -7200, "CST": gettz("America/Chicago")}
- >>> parse("2012-01-19 17:21:00 BRST", tzinfos=tzinfos)
- datetime.datetime(2012, 1, 19, 17, 21, tzinfo=tzoffset(u'BRST', -7200))
- >>> parse("2012-01-19 17:21:00 CST", tzinfos=tzinfos)
- datetime.datetime(2012, 1, 19, 17, 21,
- tzinfo=tzfile('/usr/share/zoneinfo/America/Chicago'))
-
- This parameter is ignored if ``ignoretz`` is set.
-
- :param dayfirst:
- Whether to interpret the first value in an ambiguous 3-integer date
- (e.g. 01/05/09) as the day (``True``) or month (``False``). If
- ``yearfirst`` is set to ``True``, this distinguishes between YDM and
- YMD. If set to ``None``, this value is retrieved from the current
- :class:`parserinfo` object (which itself defaults to ``False``).
-
- :param yearfirst:
- Whether to interpret the first value in an ambiguous 3-integer date
- (e.g. 01/05/09) as the year. If ``True``, the first number is taken to
- be the year, otherwise the last number is taken to be the year. If
- this is set to ``None``, the value is retrieved from the current
- :class:`parserinfo` object (which itself defaults to ``False``).
-
- :param fuzzy:
-        Whether to allow fuzzy parsing, allowing for strings like "Today is
- January 1, 2047 at 8:21:00AM".
-
- :param fuzzy_with_tokens:
- If ``True``, ``fuzzy`` is automatically set to True, and the parser
- will return a tuple where the first element is the parsed
- :class:`datetime.datetime` datetimestamp and the second element is
- a tuple containing the portions of the string which were ignored:
-
- .. doctest::
-
- >>> from dateutil.parser import parse
- >>> parse("Today is January 1, 2047 at 8:21:00AM", fuzzy_with_tokens=True)
- (datetime.datetime(2047, 1, 1, 8, 21), (u'Today is ', u' ', u'at '))
-
- :return:
- Returns a :class:`datetime.datetime` object or, if the
- ``fuzzy_with_tokens`` option is ``True``, returns a tuple, the
- first element being a :class:`datetime.datetime` object, the second
- a tuple containing the fuzzy tokens.
-
- :raises ParserError:
- Raised for invalid or unknown string formats, if the provided
- :class:`tzinfo` is not in a valid format, or if an invalid date would
- be created.
-
- :raises OverflowError:
- Raised if the parsed date exceeds the largest valid C integer on
- your system.
- """
- if parserinfo:
- return parser(parserinfo).parse(timestr, **kwargs)
- else:
- return DEFAULTPARSER.parse(timestr, **kwargs)
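-
-# Illustrative sketch (an editorial addition, not part of dateutil itself) of how
-# the ``default`` and ``dayfirst`` parameters documented above behave; the values
-# follow the parameter descriptions rather than output captured from a test run:
-#
-#     >>> from datetime import datetime
-#     >>> from dateutil.parser import parse
-#     >>> parse("10:36", default=datetime(2003, 9, 25))   # missing fields come from ``default``
-#     datetime.datetime(2003, 9, 25, 10, 36)
-#     >>> parse("01/05/09", dayfirst=True)                # ambiguous date read day-first
-#     datetime.datetime(2009, 5, 1, 0, 0)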
-
-
-class _tzparser(object):
-
- class _result(_resultbase):
-
- __slots__ = ["stdabbr", "stdoffset", "dstabbr", "dstoffset",
- "start", "end"]
-
- class _attr(_resultbase):
- __slots__ = ["month", "week", "weekday",
- "yday", "jyday", "day", "time"]
-
- def __repr__(self):
- return self._repr("")
-
- def __init__(self):
- _resultbase.__init__(self)
- self.start = self._attr()
- self.end = self._attr()
-
- def parse(self, tzstr):
- res = self._result()
- l = [x for x in re.split(r'([,:.]|[a-zA-Z]+|[0-9]+)',tzstr) if x]
- used_idxs = list()
- try:
-
- len_l = len(l)
-
- i = 0
- while i < len_l:
- # BRST+3[BRDT[+2]]
- j = i
- while j < len_l and not [x for x in l[j]
- if x in "0123456789:,-+"]:
- j += 1
- if j != i:
- if not res.stdabbr:
- offattr = "stdoffset"
- res.stdabbr = "".join(l[i:j])
- else:
- offattr = "dstoffset"
- res.dstabbr = "".join(l[i:j])
-
- for ii in range(j):
- used_idxs.append(ii)
- i = j
- if (i < len_l and (l[i] in ('+', '-') or l[i][0] in
- "0123456789")):
- if l[i] in ('+', '-'):
- # Yes, that's right. See the TZ variable
- # documentation.
- signal = (1, -1)[l[i] == '+']
- used_idxs.append(i)
- i += 1
- else:
- signal = -1
- len_li = len(l[i])
- if len_li == 4:
- # -0300
- setattr(res, offattr, (int(l[i][:2]) * 3600 +
- int(l[i][2:]) * 60) * signal)
- elif i + 1 < len_l and l[i + 1] == ':':
- # -03:00
- setattr(res, offattr,
- (int(l[i]) * 3600 +
- int(l[i + 2]) * 60) * signal)
- used_idxs.append(i)
- i += 2
- elif len_li <= 2:
- # -[0]3
- setattr(res, offattr,
- int(l[i][:2]) * 3600 * signal)
- else:
- return None
- used_idxs.append(i)
- i += 1
- if res.dstabbr:
- break
- else:
- break
-
-
- if i < len_l:
- for j in range(i, len_l):
- if l[j] == ';':
- l[j] = ','
-
- assert l[i] == ','
-
- i += 1
-
- if i >= len_l:
- pass
- elif (8 <= l.count(',') <= 9 and
- not [y for x in l[i:] if x != ','
- for y in x if y not in "0123456789+-"]):
- # GMT0BST,3,0,30,3600,10,0,26,7200[,3600]
- for x in (res.start, res.end):
- x.month = int(l[i])
- used_idxs.append(i)
- i += 2
- if l[i] == '-':
- value = int(l[i + 1]) * -1
- used_idxs.append(i)
- i += 1
- else:
- value = int(l[i])
- used_idxs.append(i)
- i += 2
- if value:
- x.week = value
- x.weekday = (int(l[i]) - 1) % 7
- else:
- x.day = int(l[i])
- used_idxs.append(i)
- i += 2
- x.time = int(l[i])
- used_idxs.append(i)
- i += 2
- if i < len_l:
- if l[i] in ('-', '+'):
- signal = (-1, 1)[l[i] == "+"]
- used_idxs.append(i)
- i += 1
- else:
- signal = 1
- used_idxs.append(i)
- res.dstoffset = (res.stdoffset + int(l[i]) * signal)
-
- # This was a made-up format that is not in normal use
-                warn(('Parsed time zone "%s" ' % tzstr) +
-                     'is in a non-standard dateutil-specific format, which ' +
- 'is now deprecated; support for parsing this format ' +
- 'will be removed in future versions. It is recommended ' +
- 'that you switch to a standard format like the GNU ' +
- 'TZ variable format.', tz.DeprecatedTzFormatWarning)
- elif (l.count(',') == 2 and l[i:].count('/') <= 2 and
- not [y for x in l[i:] if x not in (',', '/', 'J', 'M',
- '.', '-', ':')
- for y in x if y not in "0123456789"]):
- for x in (res.start, res.end):
- if l[i] == 'J':
- # non-leap year day (1 based)
- used_idxs.append(i)
- i += 1
- x.jyday = int(l[i])
- elif l[i] == 'M':
- # month[-.]week[-.]weekday
- used_idxs.append(i)
- i += 1
- x.month = int(l[i])
- used_idxs.append(i)
- i += 1
- assert l[i] in ('-', '.')
- used_idxs.append(i)
- i += 1
- x.week = int(l[i])
- if x.week == 5:
- x.week = -1
- used_idxs.append(i)
- i += 1
- assert l[i] in ('-', '.')
- used_idxs.append(i)
- i += 1
- x.weekday = (int(l[i]) - 1) % 7
- else:
- # year day (zero based)
- x.yday = int(l[i]) + 1
-
- used_idxs.append(i)
- i += 1
-
- if i < len_l and l[i] == '/':
- used_idxs.append(i)
- i += 1
- # start time
- len_li = len(l[i])
- if len_li == 4:
- # -0300
- x.time = (int(l[i][:2]) * 3600 +
- int(l[i][2:]) * 60)
- elif i + 1 < len_l and l[i + 1] == ':':
- # -03:00
- x.time = int(l[i]) * 3600 + int(l[i + 2]) * 60
- used_idxs.append(i)
- i += 2
- if i + 1 < len_l and l[i + 1] == ':':
- used_idxs.append(i)
- i += 2
- x.time += int(l[i])
- elif len_li <= 2:
- # -[0]3
- x.time = (int(l[i][:2]) * 3600)
- else:
- return None
- used_idxs.append(i)
- i += 1
-
- assert i == len_l or l[i] == ','
-
- i += 1
-
- assert i >= len_l
-
- except (IndexError, ValueError, AssertionError):
- return None
-
- unused_idxs = set(range(len_l)).difference(used_idxs)
- res.any_unused_tokens = not {l[n] for n in unused_idxs}.issubset({",",":"})
- return res
-
-
-DEFAULTTZPARSER = _tzparser()
-
-
-def _parsetz(tzstr):
- return DEFAULTTZPARSER.parse(tzstr)
-
-
-class ParserError(ValueError):
- """Exception subclass used for any failure to parse a datetime string.
-
- This is a subclass of :py:exc:`ValueError`, and should be raised any time
- earlier versions of ``dateutil`` would have raised ``ValueError``.
-
- .. versionadded:: 2.8.1
- """
- def __str__(self):
- try:
- return self.args[0] % self.args[1:]
- except (TypeError, IndexError):
- return super(ParserError, self).__str__()
-
- def __repr__(self):
- args = ", ".join("'%s'" % arg for arg in self.args)
- return "%s(%s)" % (self.__class__.__name__, args)
-
-
-class UnknownTimezoneWarning(RuntimeWarning):
- """Raised when the parser finds a timezone it cannot parse into a tzinfo.
-
- .. versionadded:: 2.7.0
- """
-# vim:ts=4:sw=4:et
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/operations/__init__.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/operations/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/resolution/resolvelib/base.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/resolution/resolvelib/base.py
deleted file mode 100644
index b206692a0a976d8336e3f5896eadf4765a33fb2c..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/resolution/resolvelib/base.py
+++ /dev/null
@@ -1,141 +0,0 @@
-from typing import FrozenSet, Iterable, Optional, Tuple, Union
-
-from pip._vendor.packaging.specifiers import SpecifierSet
-from pip._vendor.packaging.utils import NormalizedName, canonicalize_name
-from pip._vendor.packaging.version import LegacyVersion, Version
-
-from pip._internal.models.link import Link, links_equivalent
-from pip._internal.req.req_install import InstallRequirement
-from pip._internal.utils.hashes import Hashes
-
-CandidateLookup = Tuple[Optional["Candidate"], Optional[InstallRequirement]]
-CandidateVersion = Union[LegacyVersion, Version]
-
-
-def format_name(project: str, extras: FrozenSet[str]) -> str:
- if not extras:
- return project
- canonical_extras = sorted(canonicalize_name(e) for e in extras)
- return "{}[{}]".format(project, ",".join(canonical_extras))
-
-
-class Constraint:
- def __init__(
- self, specifier: SpecifierSet, hashes: Hashes, links: FrozenSet[Link]
- ) -> None:
- self.specifier = specifier
- self.hashes = hashes
- self.links = links
-
- @classmethod
- def empty(cls) -> "Constraint":
- return Constraint(SpecifierSet(), Hashes(), frozenset())
-
- @classmethod
- def from_ireq(cls, ireq: InstallRequirement) -> "Constraint":
- links = frozenset([ireq.link]) if ireq.link else frozenset()
- return Constraint(ireq.specifier, ireq.hashes(trust_internet=False), links)
-
- def __bool__(self) -> bool:
- return bool(self.specifier) or bool(self.hashes) or bool(self.links)
-
- def __and__(self, other: InstallRequirement) -> "Constraint":
- if not isinstance(other, InstallRequirement):
- return NotImplemented
- specifier = self.specifier & other.specifier
- hashes = self.hashes & other.hashes(trust_internet=False)
- links = self.links
- if other.link:
- links = links.union([other.link])
- return Constraint(specifier, hashes, links)
-
- def is_satisfied_by(self, candidate: "Candidate") -> bool:
- # Reject if there are any mismatched URL constraints on this package.
- if self.links and not all(_match_link(link, candidate) for link in self.links):
- return False
- # We can safely always allow prereleases here since PackageFinder
- # already implements the prerelease logic, and would have filtered out
- # prerelease candidates if the user does not expect them.
- return self.specifier.contains(candidate.version, prereleases=True)
-
-
-class Requirement:
- @property
- def project_name(self) -> NormalizedName:
- """The "project name" of a requirement.
-
- This is different from ``name`` if this requirement contains extras,
- in which case ``name`` would contain the ``[...]`` part, while this
- refers to the name of the project.
- """
- raise NotImplementedError("Subclass should override")
-
- @property
- def name(self) -> str:
- """The name identifying this requirement in the resolver.
-
- This is different from ``project_name`` if this requirement contains
- extras, where ``project_name`` would not contain the ``[...]`` part.
- """
- raise NotImplementedError("Subclass should override")
-
- def is_satisfied_by(self, candidate: "Candidate") -> bool:
- return False
-
- def get_candidate_lookup(self) -> CandidateLookup:
- raise NotImplementedError("Subclass should override")
-
- def format_for_error(self) -> str:
- raise NotImplementedError("Subclass should override")
-
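-# Illustrative note (an editorial addition, not part of pip): the ``name`` /
-# ``project_name`` distinction documented above mirrors ``format_name``; for a
-# hypothetical requirement "pip[test]":
-#
-#     format_name("pip", frozenset({"test"}))  ->  "pip[test]"   # the resolver ``name``
-#     # while ``project_name`` would simply be the canonical "pip"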
-
-def _match_link(link: Link, candidate: "Candidate") -> bool:
- if candidate.source_link:
- return links_equivalent(link, candidate.source_link)
- return False
-
-
-class Candidate:
- @property
- def project_name(self) -> NormalizedName:
- """The "project name" of the candidate.
-
- This is different from ``name`` if this candidate contains extras,
- in which case ``name`` would contain the ``[...]`` part, while this
- refers to the name of the project.
- """
- raise NotImplementedError("Override in subclass")
-
- @property
- def name(self) -> str:
- """The name identifying this candidate in the resolver.
-
- This is different from ``project_name`` if this candidate contains
- extras, where ``project_name`` would not contain the ``[...]`` part.
- """
- raise NotImplementedError("Override in subclass")
-
- @property
- def version(self) -> CandidateVersion:
- raise NotImplementedError("Override in subclass")
-
- @property
- def is_installed(self) -> bool:
- raise NotImplementedError("Override in subclass")
-
- @property
- def is_editable(self) -> bool:
- raise NotImplementedError("Override in subclass")
-
- @property
- def source_link(self) -> Optional[Link]:
- raise NotImplementedError("Override in subclass")
-
- def iter_dependencies(self, with_requires: bool) -> Iterable[Optional[Requirement]]:
- raise NotImplementedError("Override in subclass")
-
- def get_install_requirement(self) -> Optional[InstallRequirement]:
- raise NotImplementedError("Override in subclass")
-
- def format_for_error(self) -> str:
- raise NotImplementedError("Subclass should override")
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/platformdirs/windows.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/platformdirs/windows.py
deleted file mode 100644
index e7573c3d6ae773d852da06c107c07b253d44b496..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/platformdirs/windows.py
+++ /dev/null
@@ -1,195 +0,0 @@
-from __future__ import annotations
-
-import ctypes
-import os
-import sys
-from functools import lru_cache
-from typing import Callable
-
-from .api import PlatformDirsABC
-
-
-class Windows(PlatformDirsABC):
- """`MSDN on where to store app data files
-    """MSDN guidance on where to store app data files.
-    Makes use of the
-    `appname`,
-    `appauthor`,
-    `version`,
-    `roaming`,
-    `opinion`,
-    `ensure_exists`.
-
- @property
- def user_data_dir(self) -> str:
- """
- :return: data directory tied to the user, e.g.
- ``%USERPROFILE%\\AppData\\Local\\$appauthor\\$appname`` (not roaming) or
- ``%USERPROFILE%\\AppData\\Roaming\\$appauthor\\$appname`` (roaming)
- """
- const = "CSIDL_APPDATA" if self.roaming else "CSIDL_LOCAL_APPDATA"
- path = os.path.normpath(get_win_folder(const))
- return self._append_parts(path)
-
- def _append_parts(self, path: str, *, opinion_value: str | None = None) -> str:
- params = []
- if self.appname:
- if self.appauthor is not False:
- author = self.appauthor or self.appname
- params.append(author)
- params.append(self.appname)
- if opinion_value is not None and self.opinion:
- params.append(opinion_value)
- if self.version:
- params.append(self.version)
- path = os.path.join(path, *params)
- self._optionally_create_directory(path)
- return path
-
- @property
- def site_data_dir(self) -> str:
- """:return: data directory shared by users, e.g. ``C:\\ProgramData\\$appauthor\\$appname``"""
- path = os.path.normpath(get_win_folder("CSIDL_COMMON_APPDATA"))
- return self._append_parts(path)
-
- @property
- def user_config_dir(self) -> str:
- """:return: config directory tied to the user, same as `user_data_dir`"""
- return self.user_data_dir
-
- @property
- def site_config_dir(self) -> str:
- """:return: config directory shared by the users, same as `site_data_dir`"""
- return self.site_data_dir
-
- @property
- def user_cache_dir(self) -> str:
- """
- :return: cache directory tied to the user (if opinionated with ``Cache`` folder within ``$appname``) e.g.
- ``%USERPROFILE%\\AppData\\Local\\$appauthor\\$appname\\Cache\\$version``
- """
- path = os.path.normpath(get_win_folder("CSIDL_LOCAL_APPDATA"))
- return self._append_parts(path, opinion_value="Cache")
-
- @property
- def site_cache_dir(self) -> str:
- """:return: cache directory shared by users, e.g. ``C:\\ProgramData\\$appauthor\\$appname\\Cache\\$version``"""
- path = os.path.normpath(get_win_folder("CSIDL_COMMON_APPDATA"))
- return self._append_parts(path, opinion_value="Cache")
-
- @property
- def user_state_dir(self) -> str:
- """:return: state directory tied to the user, same as `user_data_dir`"""
- return self.user_data_dir
-
- @property
- def user_log_dir(self) -> str:
- """
- :return: log directory tied to the user, same as `user_data_dir` if not opinionated else ``Logs`` in it
- """
- path = self.user_data_dir
- if self.opinion:
- path = os.path.join(path, "Logs")
- self._optionally_create_directory(path)
- return path
-
- @property
- def user_documents_dir(self) -> str:
- """
- :return: documents directory tied to the user e.g. ``%USERPROFILE%\\Documents``
- """
- return os.path.normpath(get_win_folder("CSIDL_PERSONAL"))
-
- @property
- def user_runtime_dir(self) -> str:
- """
- :return: runtime directory tied to the user, e.g.
- ``%USERPROFILE%\\AppData\\Local\\Temp\\$appauthor\\$appname``
- """
- path = os.path.normpath(os.path.join(get_win_folder("CSIDL_LOCAL_APPDATA"), "Temp"))
- return self._append_parts(path)
-
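-# Usage sketch (an editorial addition; "MyApp" and "Acme" are made-up names, and
-# the resulting paths depend on the machine):
-#
-#     dirs = Windows(appname="MyApp", appauthor="Acme", version="1.0")
-#     dirs.user_data_dir    # e.g. C:\Users\me\AppData\Local\Acme\MyApp\1.0
-#     dirs.user_cache_dir   # e.g. C:\Users\me\AppData\Local\Acme\MyApp\Cache\1.0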
-
-def get_win_folder_from_env_vars(csidl_name: str) -> str:
- """Get folder from environment variables."""
- if csidl_name == "CSIDL_PERSONAL": # does not have an environment name
- return os.path.join(os.path.normpath(os.environ["USERPROFILE"]), "Documents")
-
- env_var_name = {
- "CSIDL_APPDATA": "APPDATA",
- "CSIDL_COMMON_APPDATA": "ALLUSERSPROFILE",
- "CSIDL_LOCAL_APPDATA": "LOCALAPPDATA",
- }.get(csidl_name)
- if env_var_name is None:
- raise ValueError(f"Unknown CSIDL name: {csidl_name}")
- result = os.environ.get(env_var_name)
- if result is None:
- raise ValueError(f"Unset environment variable: {env_var_name}")
- return result
-
-
-def get_win_folder_from_registry(csidl_name: str) -> str:
- """Get folder from the registry.
-
- This is a fallback technique at best. I'm not sure if using the
- registry for this guarantees us the correct answer for all CSIDL_*
- names.
- """
- shell_folder_name = {
- "CSIDL_APPDATA": "AppData",
- "CSIDL_COMMON_APPDATA": "Common AppData",
- "CSIDL_LOCAL_APPDATA": "Local AppData",
- "CSIDL_PERSONAL": "Personal",
- }.get(csidl_name)
- if shell_folder_name is None:
- raise ValueError(f"Unknown CSIDL name: {csidl_name}")
- if sys.platform != "win32": # only needed for mypy type checker to know that this code runs only on Windows
- raise NotImplementedError
- import winreg
-
- key = winreg.OpenKey(winreg.HKEY_CURRENT_USER, r"Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders")
- directory, _ = winreg.QueryValueEx(key, shell_folder_name)
- return str(directory)
-
-
-def get_win_folder_via_ctypes(csidl_name: str) -> str:
- """Get folder with ctypes."""
- csidl_const = {
- "CSIDL_APPDATA": 26,
- "CSIDL_COMMON_APPDATA": 35,
- "CSIDL_LOCAL_APPDATA": 28,
- "CSIDL_PERSONAL": 5,
- }.get(csidl_name)
- if csidl_const is None:
- raise ValueError(f"Unknown CSIDL name: {csidl_name}")
-
- buf = ctypes.create_unicode_buffer(1024)
- windll = getattr(ctypes, "windll") # noqa: B009 # using getattr to avoid false positive with mypy type checker
- windll.shell32.SHGetFolderPathW(None, csidl_const, None, 0, buf)
-
- # Downgrade to short path name if it has highbit chars.
- if any(ord(c) > 255 for c in buf):
- buf2 = ctypes.create_unicode_buffer(1024)
- if windll.kernel32.GetShortPathNameW(buf.value, buf2, 1024):
- buf = buf2
-
- return buf.value
-
-
-def _pick_get_win_folder() -> Callable[[str], str]:
- if hasattr(ctypes, "windll"):
- return get_win_folder_via_ctypes
- try:
- import winreg # noqa: F401
- except ImportError:
- return get_win_folder_from_env_vars
- else:
- return get_win_folder_from_registry
-
-
-get_win_folder = lru_cache(maxsize=None)(_pick_get_win_folder())
-
-__all__ = [
- "Windows",
-]
diff --git a/spaces/Billyosoro/ESRGAN/Training.md b/spaces/Billyosoro/ESRGAN/Training.md
deleted file mode 100644
index 64704e1d2e1f334984232afd12b245235b274a9e..0000000000000000000000000000000000000000
--- a/spaces/Billyosoro/ESRGAN/Training.md
+++ /dev/null
@@ -1,100 +0,0 @@
-# :computer: How to Train Real-ESRGAN
-
-The training code has been released.
-Note that the code has undergone a lot of refactoring, so there may be some bugs or performance drops. You are welcome to report issues, and I will also retrain the models.
-
-## Overview
-
-The training has been divided into two stages. These two stages have the same data synthesis process and training pipeline, except for the loss functions. Specifically,
-
-1. We first train Real-ESRNet with L1 loss from the pre-trained model ESRGAN.
-1. We then use the trained Real-ESRNet model as an initialization of the generator, and train the Real-ESRGAN with a combination of L1 loss, perceptual loss and GAN loss.
-
-## Dataset Preparation
-
-We use DF2K (DIV2K and Flickr2K) + OST datasets for our training. Only HR images are required.
-You can download them from:
-
-1. DIV2K: http://data.vision.ee.ethz.ch/cvl/DIV2K/DIV2K_train_HR.zip
-2. Flickr2K: https://cv.snu.ac.kr/research/EDSR/Flickr2K.tar
-3. OST: https://openmmlab.oss-cn-hangzhou.aliyuncs.com/datasets/OST_dataset.zip
-
-For the DF2K dataset, we use a multi-scale strategy, *i.e.*, we downsample HR images to obtain several Ground-Truth images with different scales.
-
-We then crop DF2K images into sub-images for faster IO and processing.
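-
-The snippet below is only an illustrative sketch of this multi-scale + sub-image idea; the scale factors, crop size, stride, and naming scheme are placeholders rather than the settings used for the released models:
-
-```python
-import os
-
-from PIL import Image
-
-
-def multiscale_and_crop(hr_path, out_dir, scales=(1.0, 0.75, 0.5), crop=400, step=200):
-    """Downsample one HR image to several scales, then cut fixed-size sub-images."""
-    os.makedirs(out_dir, exist_ok=True)
-    img = Image.open(hr_path).convert("RGB")
-    base = os.path.splitext(os.path.basename(hr_path))[0]
-    idx = 0
-    for s in scales:
-        w, h = int(img.width * s), int(img.height * s)
-        scaled = img.resize((w, h), Image.LANCZOS)
-        # Slide a crop window over the rescaled image.
-        for top in range(0, max(h - crop, 0) + 1, step):
-            for left in range(0, max(w - crop, 0) + 1, step):
-                idx += 1
-                sub = scaled.crop((left, top, left + crop, top + crop))
-                sub.save(os.path.join(out_dir, f"{base}_s{idx:03d}.png"))
-```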
-
-You need to prepare a txt file containing the image paths. The following are some example entries from `meta_info_DF2Kmultiscale+OST_sub.txt` (as different users may partition sub-images differently, this file will not match your data, so you need to prepare your own txt file):
-
-```txt
-DF2K_HR_sub/000001_s001.png
-DF2K_HR_sub/000001_s002.png
-DF2K_HR_sub/000001_s003.png
-...
-```
-
-## Train Real-ESRNet
-
-1. Download pre-trained model [ESRGAN](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/ESRGAN_SRx4_DF2KOST_official-ff704c30.pth) into `experiments/pretrained_models`.
- ```bash
- wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/ESRGAN_SRx4_DF2KOST_official-ff704c30.pth -P experiments/pretrained_models
- ```
-1. Modify the content in the option file `options/train_realesrnet_x4plus.yml` accordingly:
- ```yml
- train:
- name: DF2K+OST
- type: RealESRGANDataset
- dataroot_gt: datasets/DF2K # modify to the root path of your folder
-        meta_info: realesrgan/meta_info/meta_info_DF2Kmultiscale+OST_sub.txt # modify to your own generated meta info txt
- io_backend:
- type: disk
- ```
-1. If you want to perform validation during training, uncomment those lines and modify accordingly:
- ```yml
- # Uncomment these for validation
- # val:
- # name: validation
- # type: PairedImageDataset
- # dataroot_gt: path_to_gt
- # dataroot_lq: path_to_lq
- # io_backend:
- # type: disk
-
- ...
-
- # Uncomment these for validation
- # validation settings
- # val:
- # val_freq: !!float 5e3
- # save_img: True
-
- # metrics:
- # psnr: # metric name, can be arbitrary
- # type: calculate_psnr
- # crop_border: 4
- # test_y_channel: false
- ```
-1. Before the formal training, you may run in the `--debug` mode to see whether everything is OK. We use four GPUs for training:
- ```bash
- CUDA_VISIBLE_DEVICES=0,1,2,3 \
- python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/train_realesrnet_x4plus.yml --launcher pytorch --debug
- ```
-1. The formal training. We use four GPUs for training. We use the `--auto_resume` argument to automatically resume the training if necessary.
- ```bash
- CUDA_VISIBLE_DEVICES=0,1,2,3 \
- python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/train_realesrnet_x4plus.yml --launcher pytorch --auto_resume
- ```
-
-## Train Real-ESRGAN
-
-1. After the training of Real-ESRNet, you now have the file `experiments/train_RealESRNetx4plus_1000k_B12G4_fromESRGAN/model/net_g_1000000.pth`. If you need to specify the pre-trained path to other files, modify the `pretrain_network_g` value in the option file `train_realesrgan_x4plus.yml`.
-1. Modify the option file `train_realesrgan_x4plus.yml` accordingly. Most modifications are similar to those listed above.
-1. Before the formal training, you may run in the `--debug` mode to see whether everything is OK. We use four GPUs for training:
- ```bash
- CUDA_VISIBLE_DEVICES=0,1,2,3 \
- python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/train_realesrgan_x4plus.yml --launcher pytorch --debug
- ```
-1. The formal training. We use four GPUs for training. We use the `--auto_resume` argument to automatically resume the training if necessary.
- ```bash
- CUDA_VISIBLE_DEVICES=0,1,2,3 \
- python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/train_realesrgan_x4plus.yml --launcher pytorch --auto_resume
- ```
diff --git a/spaces/BraydenMoore/MARCI-NFL-Betting/README.md b/spaces/BraydenMoore/MARCI-NFL-Betting/README.md
deleted file mode 100644
index b1fb0a3abcb9899c7847c25f64bf83d17791c028..0000000000000000000000000000000000000000
--- a/spaces/BraydenMoore/MARCI-NFL-Betting/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: MARCI (NFL Betting)
-emoji: 🏈
-colorFrom: red
-colorTo: blue
-sdk: docker
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/CVPR/LIVE/main.py b/spaces/CVPR/LIVE/main.py
deleted file mode 100644
index 00ed8601b4b1d85741ab8d5c75adbbf425942d2b..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/main.py
+++ /dev/null
@@ -1,1040 +0,0 @@
-"""
-Here are some use cases:
-python main.py --config config/all.yaml --experiment experiment_8x1 --signature demo1 --target data/demo1.png
-"""
-import pydiffvg
-import torch
-import cv2
-import matplotlib.pyplot as plt
-import random
-import argparse
-import math
-import errno
-from tqdm import tqdm
-from torch.optim.lr_scheduler import CosineAnnealingLR, LambdaLR
-from torch.nn.functional import adaptive_avg_pool2d
-import warnings
-warnings.filterwarnings("ignore")
-
-import PIL
-import PIL.Image
-import os
-import os.path as osp
-import numpy as np
-import numpy.random as npr
-import shutil
-import copy
-# import skfmm
-from xing_loss import xing_loss
-
-import yaml
-from easydict import EasyDict as edict
-
-
-pydiffvg.set_print_timing(False)
-gamma = 1.0
-
-##########
-# helper #
-##########
-
-from utils import \
- get_experiment_id, \
- get_path_schedule, \
- edict_2_dict, \
- check_and_create_dir
-
-def get_bezier_circle(radius=1, segments=4, bias=None):
- points = []
- if bias is None:
- bias = (random.random(), random.random())
- avg_degree = 360 / (segments*3)
- for i in range(0, segments*3):
- point = (np.cos(np.deg2rad(i * avg_degree)),
- np.sin(np.deg2rad(i * avg_degree)))
- points.append(point)
- points = torch.tensor(points)
- points = (points)*radius + torch.tensor(bias).unsqueeze(dim=0)
- points = points.type(torch.FloatTensor)
- return points
-
-def get_sdf(phi, method='skfmm', **kwargs):
- if method == 'skfmm':
- import skfmm
- phi = (phi-0.5)*2
- if (phi.max() <= 0) or (phi.min() >= 0):
- return np.zeros(phi.shape).astype(np.float32)
- sd = skfmm.distance(phi, dx=1)
-
- flip_negative = kwargs.get('flip_negative', True)
- if flip_negative:
- sd = np.abs(sd)
-
- truncate = kwargs.get('truncate', 10)
- sd = np.clip(sd, -truncate, truncate)
- # print(f"max sd value is: {sd.max()}")
-
- zero2max = kwargs.get('zero2max', True)
- if zero2max and flip_negative:
- sd = sd.max() - sd
- elif zero2max:
- raise ValueError
-
- normalize = kwargs.get('normalize', 'sum')
- if normalize == 'sum':
- sd /= sd.sum()
- elif normalize == 'to1':
- sd /= sd.max()
- return sd
-
-def parse_args():
- parser = argparse.ArgumentParser()
- parser.add_argument('--debug', action='store_true', default=False)
- parser.add_argument("--config", type=str)
- parser.add_argument("--experiment", type=str)
- parser.add_argument("--seed", type=int)
- parser.add_argument("--target", type=str, help="target image path")
- parser.add_argument('--log_dir', metavar='DIR', default="log/debug")
- parser.add_argument('--initial', type=str, default="random", choices=['random', 'circle'])
- parser.add_argument('--signature', nargs='+', type=str)
- parser.add_argument('--seginit', nargs='+', type=str)
- parser.add_argument("--num_segments", type=int, default=4)
- # parser.add_argument("--num_paths", type=str, default="1,1,1")
- # parser.add_argument("--num_iter", type=int, default=500)
- # parser.add_argument('--free', action='store_true')
- # Please ensure that image resolution is divisible by pool_size; otherwise the performance would drop a lot.
- # parser.add_argument('--pool_size', type=int, default=40, help="the pooled image size for next path initialization")
- # parser.add_argument('--save_loss', action='store_true')
- # parser.add_argument('--save_init', action='store_true')
- # parser.add_argument('--save_image', action='store_true')
- # parser.add_argument('--save_video', action='store_true')
- # parser.add_argument('--print_weight', action='store_true')
- # parser.add_argument('--circle_init_radius', type=float)
- cfg = edict()
- args = parser.parse_args()
- cfg.debug = args.debug
- cfg.config = args.config
- cfg.experiment = args.experiment
- cfg.seed = args.seed
- cfg.target = args.target
- cfg.log_dir = args.log_dir
- cfg.initial = args.initial
- cfg.signature = args.signature
- # set cfg num_segments in command
- cfg.num_segments = args.num_segments
- if args.seginit is not None:
- cfg.seginit = edict()
- cfg.seginit.type = args.seginit[0]
- if cfg.seginit.type == 'circle':
- cfg.seginit.radius = float(args.seginit[1])
- return cfg
-
-def ycrcb_conversion(im, format='[bs x 3 x 2D]', reverse=False):
- mat = torch.FloatTensor([
- [ 65.481/255, 128.553/255, 24.966/255], # ranged_from [0, 219/255]
- [-37.797/255, -74.203/255, 112.000/255], # ranged_from [-112/255, 112/255]
- [112.000/255, -93.786/255, -18.214/255], # ranged_from [-112/255, 112/255]
- ]).to(im.device)
-
- if reverse:
- mat = mat.inverse()
-
- if format == '[bs x 3 x 2D]':
- im = im.permute(0, 2, 3, 1)
- im = torch.matmul(im, mat.T)
- im = im.permute(0, 3, 1, 2).contiguous()
- return im
- elif format == '[2D x 3]':
- im = torch.matmul(im, mat.T)
- return im
- else:
- raise ValueError
-
-class random_coord_init():
- def __init__(self, canvas_size):
- self.canvas_size = canvas_size
- def __call__(self):
- h, w = self.canvas_size
- return [npr.uniform(0, 1)*w, npr.uniform(0, 1)*h]
-
-class naive_coord_init():
- def __init__(self, pred, gt, format='[bs x c x 2D]', replace_sampling=True):
- if isinstance(pred, torch.Tensor):
- pred = pred.detach().cpu().numpy()
- if isinstance(gt, torch.Tensor):
- gt = gt.detach().cpu().numpy()
-
- if format == '[bs x c x 2D]':
- self.map = ((pred[0] - gt[0])**2).sum(0)
-        elif format == '[2D x c]':
- self.map = ((pred - gt)**2).sum(-1)
- else:
- raise ValueError
- self.replace_sampling = replace_sampling
-
- def __call__(self):
- coord = np.where(self.map == self.map.max())
- coord_h, coord_w = coord[0][0], coord[1][0]
- if self.replace_sampling:
- self.map[coord_h, coord_w] = -1
- return [coord_w, coord_h]
-
-
-class sparse_coord_init():
- def __init__(self, pred, gt, format='[bs x c x 2D]', quantile_interval=200, nodiff_thres=0.1):
- if isinstance(pred, torch.Tensor):
- pred = pred.detach().cpu().numpy()
- if isinstance(gt, torch.Tensor):
- gt = gt.detach().cpu().numpy()
- if format == '[bs x c x 2D]':
- self.map = ((pred[0] - gt[0])**2).sum(0)
- self.reference_gt = copy.deepcopy(
- np.transpose(gt[0], (1, 2, 0)))
-        elif format == '[2D x c]':
- self.map = (np.abs(pred - gt)).sum(-1)
- self.reference_gt = copy.deepcopy(gt[0])
- else:
- raise ValueError
-        # Option A: zero out errors that are too small, to avoid a dead loop on negligible errors
- self.map[self.map < nodiff_thres] = 0
- quantile_interval = np.linspace(0., 1., quantile_interval)
- quantized_interval = np.quantile(self.map, quantile_interval)
- # remove redundant
- quantized_interval = np.unique(quantized_interval)
- quantized_interval = sorted(quantized_interval[1:-1])
- self.map = np.digitize(self.map, quantized_interval, right=False)
- self.map = np.clip(self.map, 0, 255).astype(np.uint8)
- self.idcnt = {}
- for idi in sorted(np.unique(self.map)):
- self.idcnt[idi] = (self.map==idi).sum()
-        # Drop the smallest-error bin, i.e. the region that is already well reconstructed.
-        self.idcnt.pop(min(self.idcnt.keys()))
-
- def __call__(self):
- if len(self.idcnt) == 0:
- h, w = self.map.shape
- return [npr.uniform(0, 1)*w, npr.uniform(0, 1)*h]
- target_id = max(self.idcnt, key=self.idcnt.get)
- _, component, cstats, ccenter = cv2.connectedComponentsWithStats(
- (self.map==target_id).astype(np.uint8), connectivity=4)
- # remove cid = 0, it is the invalid area
- csize = [ci[-1] for ci in cstats[1:]]
- target_cid = csize.index(max(csize))+1
- center = ccenter[target_cid][::-1]
- coord = np.stack(np.where(component == target_cid)).T
- dist = np.linalg.norm(coord-center, axis=1)
- target_coord_id = np.argmin(dist)
- coord_h, coord_w = coord[target_coord_id]
- # replace_sampling
- self.idcnt[target_id] -= max(csize)
- if self.idcnt[target_id] == 0:
- self.idcnt.pop(target_id)
- self.map[component == target_cid] = 0
- return [coord_w, coord_h]
-
-
-def init_shapes(num_paths,
- num_segments,
- canvas_size,
- seginit_cfg,
- shape_cnt,
- pos_init_method=None,
- trainable_stroke=False,
- gt=None,
- **kwargs):
- shapes = []
- shape_groups = []
- h, w = canvas_size
-
- # change path init location
- if pos_init_method is None:
- pos_init_method = random_coord_init(canvas_size=canvas_size)
-
- for i in range(num_paths):
- num_control_points = [2] * num_segments
-
- if seginit_cfg.type=="random":
- points = []
- p0 = pos_init_method()
- color_ref = copy.deepcopy(p0)
- points.append(p0)
- for j in range(num_segments):
- radius = seginit_cfg.radius
- p1 = (p0[0] + radius * npr.uniform(-0.5, 0.5),
- p0[1] + radius * npr.uniform(-0.5, 0.5))
- p2 = (p1[0] + radius * npr.uniform(-0.5, 0.5),
- p1[1] + radius * npr.uniform(-0.5, 0.5))
- p3 = (p2[0] + radius * npr.uniform(-0.5, 0.5),
- p2[1] + radius * npr.uniform(-0.5, 0.5))
- points.append(p1)
- points.append(p2)
- if j < num_segments - 1:
- points.append(p3)
- p0 = p3
- points = torch.FloatTensor(points)
-
- # circle points initialization
- elif seginit_cfg.type=="circle":
- radius = seginit_cfg.radius
- if radius is None:
- radius = npr.uniform(0.5, 1)
- center = pos_init_method()
- color_ref = copy.deepcopy(center)
- points = get_bezier_circle(
- radius=radius, segments=num_segments,
- bias=center)
-
- path = pydiffvg.Path(num_control_points = torch.LongTensor(num_control_points),
- points = points,
- stroke_width = torch.tensor(0.0),
- is_closed = True)
- shapes.append(path)
-        # NOTE: problem here: the shape group shape_ids is wrong.
-
- if gt is not None:
- wref, href = color_ref
- wref = max(0, min(int(wref), w-1))
- href = max(0, min(int(href), h-1))
- fill_color_init = list(gt[0, :, href, wref]) + [1.]
- fill_color_init = torch.FloatTensor(fill_color_init)
- stroke_color_init = torch.FloatTensor(npr.uniform(size=[4]))
- else:
- fill_color_init = torch.FloatTensor(npr.uniform(size=[4]))
- stroke_color_init = torch.FloatTensor(npr.uniform(size=[4]))
-
- path_group = pydiffvg.ShapeGroup(
- shape_ids = torch.LongTensor([shape_cnt+i]),
- fill_color = fill_color_init,
- stroke_color = stroke_color_init,
- )
- shape_groups.append(path_group)
-
- point_var = []
- color_var = []
-
- for path in shapes:
- path.points.requires_grad = True
- point_var.append(path.points)
- for group in shape_groups:
- group.fill_color.requires_grad = True
- color_var.append(group.fill_color)
-
- if trainable_stroke:
- stroke_width_var = []
- stroke_color_var = []
- for path in shapes:
- path.stroke_width.requires_grad = True
- stroke_width_var.append(path.stroke_width)
- for group in shape_groups:
- group.stroke_color.requires_grad = True
- stroke_color_var.append(group.stroke_color)
- return shapes, shape_groups, point_var, color_var, stroke_width_var, stroke_color_var
- else:
- return shapes, shape_groups, point_var, color_var
-
-class linear_decay_lrlambda_f(object):
- def __init__(self, decay_every, decay_ratio):
- self.decay_every = decay_every
- self.decay_ratio = decay_ratio
-
- def __call__(self, n):
- decay_time = n//self.decay_every
- decay_step = n %self.decay_every
- lr_s = self.decay_ratio**decay_time
- lr_e = self.decay_ratio**(decay_time+1)
- r = decay_step/self.decay_every
- lr = lr_s * (1-r) + lr_e * r
- return lr
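-
-# Behaviour of the lambda above (editorial note): with decay_every=500 and
-# decay_ratio=0.4 the returned multiplier interpolates linearly inside each
-# window, e.g. n=0 -> 1.0, n=250 -> 0.7, n=500 -> 0.4, n=1000 -> 0.16.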
-
-def main_func(target, experiment, num_iter, cfg_arg):
- with open(cfg_arg.config, 'r') as f:
- cfg = yaml.load(f, Loader=yaml.FullLoader)
- cfg_default = edict(cfg['default'])
- cfg = edict(cfg[cfg_arg.experiment])
- cfg.update(cfg_default)
- cfg.update(cfg_arg)
- cfg.exid = get_experiment_id(cfg.debug)
-
- cfg.experiment_dir = \
- osp.join(cfg.log_dir, '{}_{}'.format(cfg.exid, '_'.join(cfg.signature)))
- cfg.target = target
- cfg.experiment = experiment
- cfg.num_iter = num_iter
-
- configfile = osp.join(cfg.experiment_dir, 'config.yaml')
- check_and_create_dir(configfile)
- with open(osp.join(configfile), 'w') as f:
- yaml.dump(edict_2_dict(cfg), f)
-
- # Use GPU if available
- pydiffvg.set_use_gpu(torch.cuda.is_available())
- device = pydiffvg.get_device()
-
- # gt = np.array(PIL.Image.open(cfg.target))
- gt = np.array(cfg.target)
- print(f"Input image shape is: {gt.shape}")
- if len(gt.shape) == 2:
- print("Converting the gray-scale image to RGB.")
-        gt = np.expand_dims(gt, axis=-1).repeat(3, axis=-1)  # numpy arrays have no unsqueeze
- if gt.shape[2] == 4:
-        print("Input image includes alpha channel, simply dropping the alpha channel.")
- gt = gt[:, :, :3]
- gt = (gt/255).astype(np.float32)
- gt = torch.FloatTensor(gt).permute(2, 0, 1)[None].to(device)
- if cfg.use_ycrcb:
- gt = ycrcb_conversion(gt)
- h, w = gt.shape[2:]
-
- path_schedule = get_path_schedule(**cfg.path_schedule)
-
- if cfg.seed is not None:
- random.seed(cfg.seed)
- npr.seed(cfg.seed)
- torch.manual_seed(cfg.seed)
- render = pydiffvg.RenderFunction.apply
-
- shapes_record, shape_groups_record = [], []
-
- region_loss = None
- loss_matrix = []
-
- para_point, para_color = {}, {}
- if cfg.trainable.stroke:
- para_stroke_width, para_stroke_color = {}, {}
-
- pathn_record = []
- # Background
- if cfg.trainable.bg:
- # meancolor = gt.mean([2, 3])[0]
- para_bg = torch.tensor([1., 1., 1.], requires_grad=True, device=device)
- else:
- if cfg.use_ycrcb:
- para_bg = torch.tensor([219/255, 0, 0], requires_grad=False, device=device)
- else:
- para_bg = torch.tensor([1., 1., 1.], requires_grad=False, device=device)
-
- ##################
- # start_training #
- ##################
-
- loss_weight = None
- loss_weight_keep = 0
- if cfg.coord_init.type == 'naive':
- pos_init_method = naive_coord_init(
- para_bg.view(1, -1, 1, 1).repeat(1, 1, h, w), gt)
- elif cfg.coord_init.type == 'sparse':
- pos_init_method = sparse_coord_init(
- para_bg.view(1, -1, 1, 1).repeat(1, 1, h, w), gt)
- elif cfg.coord_init.type == 'random':
- pos_init_method = random_coord_init([h, w])
- else:
- raise ValueError
-
- lrlambda_f = linear_decay_lrlambda_f(cfg.num_iter, 0.4)
- optim_schedular_dict = {}
-
- for path_idx, pathn in enumerate(path_schedule):
- loss_list = []
- print("=> Adding [{}] paths, [{}] ...".format(pathn, cfg.seginit.type))
- pathn_record.append(pathn)
- pathn_record_str = '-'.join([str(i) for i in pathn_record])
-
- # initialize new shapes related stuffs.
- if cfg.trainable.stroke:
- shapes, shape_groups, point_var, color_var, stroke_width_var, stroke_color_var = init_shapes(
- pathn, cfg.num_segments, (h, w),
- cfg.seginit, len(shapes_record),
- pos_init_method,
- trainable_stroke=True,
- gt=gt, )
- para_stroke_width[path_idx] = stroke_width_var
- para_stroke_color[path_idx] = stroke_color_var
- else:
- shapes, shape_groups, point_var, color_var = init_shapes(
- pathn, cfg.num_segments, (h, w),
- cfg.seginit, len(shapes_record),
- pos_init_method,
- trainable_stroke=False,
- gt=gt, )
-
- shapes_record += shapes
- shape_groups_record += shape_groups
-
- if cfg.save.init:
- filename = os.path.join(
- cfg.experiment_dir, "svg-init",
- "{}-init.svg".format(pathn_record_str))
- check_and_create_dir(filename)
- pydiffvg.save_svg(
- filename, w, h,
- shapes_record, shape_groups_record)
-
- para = {}
- if (cfg.trainable.bg) and (path_idx == 0):
- para['bg'] = [para_bg]
- para['point'] = point_var
- para['color'] = color_var
- if cfg.trainable.stroke:
- para['stroke_width'] = stroke_width_var
- para['stroke_color'] = stroke_color_var
-
- pg = [{'params' : para[ki], 'lr' : cfg.lr_base[ki]} for ki in sorted(para.keys())]
- optim = torch.optim.Adam(pg)
-
- if cfg.trainable.record:
- scheduler = LambdaLR(
- optim, lr_lambda=lrlambda_f, last_epoch=-1)
- else:
- scheduler = LambdaLR(
- optim, lr_lambda=lrlambda_f, last_epoch=cfg.num_iter)
- optim_schedular_dict[path_idx] = (optim, scheduler)
-
- # Inner loop training
- t_range = tqdm(range(cfg.num_iter))
- for t in t_range:
-
- for _, (optim, _) in optim_schedular_dict.items():
- optim.zero_grad()
-
- # Forward pass: render the image.
- scene_args = pydiffvg.RenderFunction.serialize_scene(
- w, h, shapes_record, shape_groups_record)
- img = render(w, h, 2, 2, t, None, *scene_args)
-
- # Compose img with white background
- img = img[:, :, 3:4] * img[:, :, :3] + \
- para_bg * (1 - img[:, :, 3:4])
-
-
-
-
-
- if cfg.save.video:
- filename = os.path.join(
- cfg.experiment_dir, "video-png",
- "{}-iter{}.png".format(pathn_record_str, t))
- check_and_create_dir(filename)
- if cfg.use_ycrcb:
- imshow = ycrcb_conversion(
- img, format='[2D x 3]', reverse=True).detach().cpu()
- else:
- imshow = img.detach().cpu()
- pydiffvg.imwrite(imshow, filename, gamma=gamma)
-
- # ### added for app
- # if t%30==0 and t !=0 :
- # # print(f"debug: {t}, {filename} {img.size()}")
- # return img.detach().cpu().numpy(), t
-
- x = img.unsqueeze(0).permute(0, 3, 1, 2) # HWC -> NCHW
-
- if cfg.use_ycrcb:
- color_reweight = torch.FloatTensor([255/219, 255/224, 255/255]).to(device)
- loss = ((x-gt)*(color_reweight.view(1, -1, 1, 1)))**2
- else:
- loss = ((x-gt)**2)
-
- if cfg.loss.use_l1_loss:
- loss = abs(x-gt)
-
- if cfg.loss.use_distance_weighted_loss:
- if cfg.use_ycrcb:
- raise ValueError
- shapes_forsdf = copy.deepcopy(shapes)
- shape_groups_forsdf = copy.deepcopy(shape_groups)
- for si in shapes_forsdf:
- si.stroke_width = torch.FloatTensor([0]).to(device)
- for sg_idx, sgi in enumerate(shape_groups_forsdf):
- sgi.fill_color = torch.FloatTensor([1, 1, 1, 1]).to(device)
- sgi.shape_ids = torch.LongTensor([sg_idx]).to(device)
-
- sargs_forsdf = pydiffvg.RenderFunction.serialize_scene(
- w, h, shapes_forsdf, shape_groups_forsdf)
- with torch.no_grad():
- im_forsdf = render(w, h, 2, 2, 0, None, *sargs_forsdf)
- # use alpha channel is a trick to get 0-1 image
- im_forsdf = (im_forsdf[:, :, 3]).detach().cpu().numpy()
- loss_weight = get_sdf(im_forsdf, normalize='to1')
- loss_weight += loss_weight_keep
- loss_weight = np.clip(loss_weight, 0, 1)
- loss_weight = torch.FloatTensor(loss_weight).to(device)
-
- if cfg.save.loss:
- save_loss = loss.squeeze(dim=0).mean(dim=0,keepdim=False).cpu().detach().numpy()
- save_weight = loss_weight.cpu().detach().numpy()
- save_weighted_loss = save_loss*save_weight
- # normalize to [0,1]
- save_loss = (save_loss - np.min(save_loss))/np.ptp(save_loss)
- save_weight = (save_weight - np.min(save_weight))/np.ptp(save_weight)
- save_weighted_loss = (save_weighted_loss - np.min(save_weighted_loss))/np.ptp(save_weighted_loss)
-
- # save
- plt.imshow(save_loss, cmap='Reds')
- plt.axis('off')
- # plt.colorbar()
- filename = os.path.join(cfg.experiment_dir, "loss", "{}-iter{}-mseloss.png".format(pathn_record_str, t))
- check_and_create_dir(filename)
- plt.savefig(filename, dpi=800)
- plt.close()
-
- plt.imshow(save_weight, cmap='Greys')
- plt.axis('off')
- # plt.colorbar()
- filename = os.path.join(cfg.experiment_dir, "loss", "{}-iter{}-sdfweight.png".format(pathn_record_str, t))
- plt.savefig(filename, dpi=800)
- plt.close()
-
- plt.imshow(save_weighted_loss, cmap='Reds')
- plt.axis('off')
- # plt.colorbar()
- filename = os.path.join(cfg.experiment_dir, "loss", "{}-iter{}-weightedloss.png".format(pathn_record_str, t))
- plt.savefig(filename, dpi=800)
- plt.close()
-
-
-
-
-
- if loss_weight is None:
- loss = loss.sum(1).mean()
- else:
- loss = (loss.sum(1)*loss_weight).mean()
-
- # if (cfg.loss.bis_loss_weight is not None) and (cfg.loss.bis_loss_weight > 0):
- # loss_bis = bezier_intersection_loss(point_var[0]) * cfg.loss.bis_loss_weight
- # loss = loss + loss_bis
- if (cfg.loss.xing_loss_weight is not None) \
- and (cfg.loss.xing_loss_weight > 0):
- loss_xing = xing_loss(point_var) * cfg.loss.xing_loss_weight
- loss = loss + loss_xing
-
-
- loss_list.append(loss.item())
- t_range.set_postfix({'loss': loss.item()})
- loss.backward()
-
- # step
- for _, (optim, scheduler) in optim_schedular_dict.items():
- optim.step()
- scheduler.step()
-
- for group in shape_groups_record:
- group.fill_color.data.clamp_(0.0, 1.0)
-
- if cfg.loss.use_distance_weighted_loss:
- loss_weight_keep = loss_weight.detach().cpu().numpy() * 1
-
- if not cfg.trainable.record:
-            # pg is a list of param-group dicts; freeze the parameters trained so far
-            for pgi in pg:
-                for ppi in pgi['params']:
-                    ppi.requires_grad = False
- optim_schedular_dict = {}
-
- if cfg.save.image:
- filename = os.path.join(
- cfg.experiment_dir, "demo-png", "{}.png".format(pathn_record_str))
- check_and_create_dir(filename)
- if cfg.use_ycrcb:
- imshow = ycrcb_conversion(
- img, format='[2D x 3]', reverse=True).detach().cpu()
- else:
- imshow = img.detach().cpu()
- pydiffvg.imwrite(imshow, filename, gamma=gamma)
-
- svg_app_file_name = ""
- if cfg.save.output:
- filename = os.path.join(
- cfg.experiment_dir, "output-svg", "{}.svg".format(pathn_record_str))
- check_and_create_dir(filename)
- pydiffvg.save_svg(filename, w, h, shapes_record, shape_groups_record)
- svg_app_file_name = filename
-
- loss_matrix.append(loss_list)
-
- # calculate the pixel loss
- # pixel_loss = ((x-gt)**2).sum(dim=1, keepdim=True).sqrt_() # [N,1,H, W]
- # region_loss = adaptive_avg_pool2d(pixel_loss, cfg.region_loss_pool_size)
- # loss_weight = torch.softmax(region_loss.reshape(1, 1, -1), dim=-1)\
- # .reshape_as(region_loss)
-
- pos_init_method = naive_coord_init(x, gt)
-
- if cfg.coord_init.type == 'naive':
- pos_init_method = naive_coord_init(x, gt)
- elif cfg.coord_init.type == 'sparse':
- pos_init_method = sparse_coord_init(x, gt)
- elif cfg.coord_init.type == 'random':
- pos_init_method = random_coord_init([h, w])
- else:
- raise ValueError
-
- if cfg.save.video:
- print("saving iteration video...")
- img_array = []
- for ii in range(0, cfg.num_iter):
- filename = os.path.join(
- cfg.experiment_dir, "video-png",
- "{}-iter{}.png".format(pathn_record_str, ii))
- img = cv2.imread(filename)
- # cv2.putText(
- # img, "Path:{} \nIteration:{}".format(pathn_record_str, ii),
- # (10, 10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 0, 0), 1)
- img_array.append(img)
-
- videoname = os.path.join(
- cfg.experiment_dir, "video-avi",
- "{}.avi".format(pathn_record_str))
- check_and_create_dir(videoname)
- out = cv2.VideoWriter(
- videoname,
- # cv2.VideoWriter_fourcc(*'mp4v'),
- cv2.VideoWriter_fourcc(*'FFV1'),
- 20.0, (w, h))
- for iii in range(len(img_array)):
- out.write(img_array[iii])
- out.release()
- # shutil.rmtree(os.path.join(cfg.experiment_dir, "video-png"))
-
- print("The last loss is: {}".format(loss.item()))
- return img.detach().cpu().numpy(), svg_app_file_name
-
-
-if __name__ == "__main__":
-
- ###############
- # make config #
- ###############
-
- cfg_arg = parse_args()
- with open(cfg_arg.config, 'r') as f:
- cfg = yaml.load(f, Loader=yaml.FullLoader)
- cfg_default = edict(cfg['default'])
- cfg = edict(cfg[cfg_arg.experiment])
- cfg.update(cfg_default)
- cfg.update(cfg_arg)
- cfg.exid = get_experiment_id(cfg.debug)
-
- cfg.experiment_dir = \
- osp.join(cfg.log_dir, '{}_{}'.format(cfg.exid, '_'.join(cfg.signature)))
- configfile = osp.join(cfg.experiment_dir, 'config.yaml')
- check_and_create_dir(configfile)
- with open(osp.join(configfile), 'w') as f:
- yaml.dump(edict_2_dict(cfg), f)
-
- # Use GPU if available
- pydiffvg.set_use_gpu(torch.cuda.is_available())
- device = pydiffvg.get_device()
-
- gt = np.array(PIL.Image.open(cfg.target))
- print(f"Input image shape is: {gt.shape}")
- if len(gt.shape) == 2:
- print("Converting the gray-scale image to RGB.")
-        gt = np.stack([gt, gt, gt], axis=-1)  # gt is still a numpy array here, so use numpy ops
- if gt.shape[2] == 4:
- print("Input image includes alpha channel, simply dropout alpha channel.")
- gt = gt[:, :, :3]
- gt = (gt/255).astype(np.float32)
- gt = torch.FloatTensor(gt).permute(2, 0, 1)[None].to(device)
- if cfg.use_ycrcb:
- gt = ycrcb_conversion(gt)
- h, w = gt.shape[2:]
-
- path_schedule = get_path_schedule(**cfg.path_schedule)
-
- if cfg.seed is not None:
- random.seed(cfg.seed)
- npr.seed(cfg.seed)
- torch.manual_seed(cfg.seed)
- render = pydiffvg.RenderFunction.apply
-
- shapes_record, shape_groups_record = [], []
-
- region_loss = None
- loss_matrix = []
-
- para_point, para_color = {}, {}
- if cfg.trainable.stroke:
- para_stroke_width, para_stroke_color = {}, {}
-
- pathn_record = []
- # Background
- if cfg.trainable.bg:
- # meancolor = gt.mean([2, 3])[0]
- para_bg = torch.tensor([1., 1., 1.], requires_grad=True, device=device)
- else:
- if cfg.use_ycrcb:
- para_bg = torch.tensor([219/255, 0, 0], requires_grad=False, device=device)
- else:
- para_bg = torch.tensor([1., 1., 1.], requires_grad=False, device=device)
-
- ##################
- # start_training #
- ##################
-
- loss_weight = None
- loss_weight_keep = 0
- if cfg.coord_init.type == 'naive':
- pos_init_method = naive_coord_init(
- para_bg.view(1, -1, 1, 1).repeat(1, 1, h, w), gt)
- elif cfg.coord_init.type == 'sparse':
- pos_init_method = sparse_coord_init(
- para_bg.view(1, -1, 1, 1).repeat(1, 1, h, w), gt)
- elif cfg.coord_init.type == 'random':
- pos_init_method = random_coord_init([h, w])
- else:
- raise ValueError
-
- lrlambda_f = linear_decay_lrlambda_f(cfg.num_iter, 0.4)
- optim_schedular_dict = {}
-
- for path_idx, pathn in enumerate(path_schedule):
- loss_list = []
- print("=> Adding [{}] paths, [{}] ...".format(pathn, cfg.seginit.type))
- pathn_record.append(pathn)
- pathn_record_str = '-'.join([str(i) for i in pathn_record])
-
-        # Initialize the newly added shapes and their related parameters.
- if cfg.trainable.stroke:
- shapes, shape_groups, point_var, color_var, stroke_width_var, stroke_color_var = init_shapes(
- pathn, cfg.num_segments, (h, w),
- cfg.seginit, len(shapes_record),
- pos_init_method,
- trainable_stroke=True,
- gt=gt, )
- para_stroke_width[path_idx] = stroke_width_var
- para_stroke_color[path_idx] = stroke_color_var
- else:
- shapes, shape_groups, point_var, color_var = init_shapes(
- pathn, cfg.num_segments, (h, w),
- cfg.seginit, len(shapes_record),
- pos_init_method,
- trainable_stroke=False,
- gt=gt, )
-
- shapes_record += shapes
- shape_groups_record += shape_groups
-
- if cfg.save.init:
- filename = os.path.join(
- cfg.experiment_dir, "svg-init",
- "{}-init.svg".format(pathn_record_str))
- check_and_create_dir(filename)
- pydiffvg.save_svg(
- filename, w, h,
- shapes_record, shape_groups_record)
-
- para = {}
- if (cfg.trainable.bg) and (path_idx == 0):
- para['bg'] = [para_bg]
- para['point'] = point_var
- para['color'] = color_var
- if cfg.trainable.stroke:
- para['stroke_width'] = stroke_width_var
- para['stroke_color'] = stroke_color_var
-
- pg = [{'params' : para[ki], 'lr' : cfg.lr_base[ki]} for ki in sorted(para.keys())]
- optim = torch.optim.Adam(pg)
-
- if cfg.trainable.record:
- scheduler = LambdaLR(
- optim, lr_lambda=lrlambda_f, last_epoch=-1)
- else:
- scheduler = LambdaLR(
- optim, lr_lambda=lrlambda_f, last_epoch=cfg.num_iter)
- optim_schedular_dict[path_idx] = (optim, scheduler)
-
- # Inner loop training
- t_range = tqdm(range(cfg.num_iter))
- for t in t_range:
-
- for _, (optim, _) in optim_schedular_dict.items():
- optim.zero_grad()
-
- # Forward pass: render the image.
- scene_args = pydiffvg.RenderFunction.serialize_scene(
- w, h, shapes_record, shape_groups_record)
- img = render(w, h, 2, 2, t, None, *scene_args)
-
- # Compose img with white background
- img = img[:, :, 3:4] * img[:, :, :3] + \
- para_bg * (1 - img[:, :, 3:4])
-
- if cfg.save.video:
- filename = os.path.join(
- cfg.experiment_dir, "video-png",
- "{}-iter{}.png".format(pathn_record_str, t))
- check_and_create_dir(filename)
- if cfg.use_ycrcb:
- imshow = ycrcb_conversion(
- img, format='[2D x 3]', reverse=True).detach().cpu()
- else:
- imshow = img.detach().cpu()
- pydiffvg.imwrite(imshow, filename, gamma=gamma)
-
- x = img.unsqueeze(0).permute(0, 3, 1, 2) # HWC -> NCHW
-
- if cfg.use_ycrcb:
- color_reweight = torch.FloatTensor([255/219, 255/224, 255/255]).to(device)
- loss = ((x-gt)*(color_reweight.view(1, -1, 1, 1)))**2
- else:
- loss = ((x-gt)**2)
-
- if cfg.loss.use_l1_loss:
- loss = abs(x-gt)
-
- if cfg.loss.use_distance_weighted_loss:
- if cfg.use_ycrcb:
- raise ValueError
- shapes_forsdf = copy.deepcopy(shapes)
- shape_groups_forsdf = copy.deepcopy(shape_groups)
- for si in shapes_forsdf:
- si.stroke_width = torch.FloatTensor([0]).to(device)
- for sg_idx, sgi in enumerate(shape_groups_forsdf):
- sgi.fill_color = torch.FloatTensor([1, 1, 1, 1]).to(device)
- sgi.shape_ids = torch.LongTensor([sg_idx]).to(device)
-
- sargs_forsdf = pydiffvg.RenderFunction.serialize_scene(
- w, h, shapes_forsdf, shape_groups_forsdf)
- with torch.no_grad():
- im_forsdf = render(w, h, 2, 2, 0, None, *sargs_forsdf)
-            # using the alpha channel is a trick to get a 0-1 image
- im_forsdf = (im_forsdf[:, :, 3]).detach().cpu().numpy()
- loss_weight = get_sdf(im_forsdf, normalize='to1')
- loss_weight += loss_weight_keep
- loss_weight = np.clip(loss_weight, 0, 1)
- loss_weight = torch.FloatTensor(loss_weight).to(device)
-
- if cfg.save.loss:
- save_loss = loss.squeeze(dim=0).mean(dim=0,keepdim=False).cpu().detach().numpy()
- save_weight = loss_weight.cpu().detach().numpy()
- save_weighted_loss = save_loss*save_weight
- # normalize to [0,1]
- save_loss = (save_loss - np.min(save_loss))/np.ptp(save_loss)
- save_weight = (save_weight - np.min(save_weight))/np.ptp(save_weight)
- save_weighted_loss = (save_weighted_loss - np.min(save_weighted_loss))/np.ptp(save_weighted_loss)
-
- # save
- plt.imshow(save_loss, cmap='Reds')
- plt.axis('off')
- # plt.colorbar()
- filename = os.path.join(cfg.experiment_dir, "loss", "{}-iter{}-mseloss.png".format(pathn_record_str, t))
- check_and_create_dir(filename)
- plt.savefig(filename, dpi=800)
- plt.close()
-
- plt.imshow(save_weight, cmap='Greys')
- plt.axis('off')
- # plt.colorbar()
- filename = os.path.join(cfg.experiment_dir, "loss", "{}-iter{}-sdfweight.png".format(pathn_record_str, t))
- plt.savefig(filename, dpi=800)
- plt.close()
-
- plt.imshow(save_weighted_loss, cmap='Reds')
- plt.axis('off')
- # plt.colorbar()
- filename = os.path.join(cfg.experiment_dir, "loss", "{}-iter{}-weightedloss.png".format(pathn_record_str, t))
- plt.savefig(filename, dpi=800)
- plt.close()
-
-
-
-
-
- if loss_weight is None:
- loss = loss.sum(1).mean()
- else:
- loss = (loss.sum(1)*loss_weight).mean()
-
- # if (cfg.loss.bis_loss_weight is not None) and (cfg.loss.bis_loss_weight > 0):
- # loss_bis = bezier_intersection_loss(point_var[0]) * cfg.loss.bis_loss_weight
- # loss = loss + loss_bis
- if (cfg.loss.xing_loss_weight is not None) \
- and (cfg.loss.xing_loss_weight > 0):
- loss_xing = xing_loss(point_var) * cfg.loss.xing_loss_weight
- loss = loss + loss_xing
-
-
- loss_list.append(loss.item())
- t_range.set_postfix({'loss': loss.item()})
- loss.backward()
-
- # step
- for _, (optim, scheduler) in optim_schedular_dict.items():
- optim.step()
- scheduler.step()
-
- for group in shape_groups_record:
- group.fill_color.data.clamp_(0.0, 1.0)
-
- if cfg.loss.use_distance_weighted_loss:
- loss_weight_keep = loss_weight.detach().cpu().numpy() * 1
-
- if not cfg.trainable.record:
-                # pg is a list of param-group dicts; freeze the parameters trained so far
-                for pgi in pg:
-                    for ppi in pgi['params']:
-                        ppi.requires_grad = False
- optim_schedular_dict = {}
-
- if cfg.save.image:
- filename = os.path.join(
- cfg.experiment_dir, "demo-png", "{}.png".format(pathn_record_str))
- check_and_create_dir(filename)
- if cfg.use_ycrcb:
- imshow = ycrcb_conversion(
- img, format='[2D x 3]', reverse=True).detach().cpu()
- else:
- imshow = img.detach().cpu()
- pydiffvg.imwrite(imshow, filename, gamma=gamma)
-
- if cfg.save.output:
- filename = os.path.join(
- cfg.experiment_dir, "output-svg", "{}.svg".format(pathn_record_str))
- check_and_create_dir(filename)
- pydiffvg.save_svg(filename, w, h, shapes_record, shape_groups_record)
-
- loss_matrix.append(loss_list)
-
- # calculate the pixel loss
- # pixel_loss = ((x-gt)**2).sum(dim=1, keepdim=True).sqrt_() # [N,1,H, W]
- # region_loss = adaptive_avg_pool2d(pixel_loss, cfg.region_loss_pool_size)
- # loss_weight = torch.softmax(region_loss.reshape(1, 1, -1), dim=-1)\
- # .reshape_as(region_loss)
-
- pos_init_method = naive_coord_init(x, gt)
-
- if cfg.coord_init.type == 'naive':
- pos_init_method = naive_coord_init(x, gt)
- elif cfg.coord_init.type == 'sparse':
- pos_init_method = sparse_coord_init(x, gt)
- elif cfg.coord_init.type == 'random':
- pos_init_method = random_coord_init([h, w])
- else:
- raise ValueError
-
- if cfg.save.video:
- print("saving iteration video...")
- img_array = []
- for ii in range(0, cfg.num_iter):
- filename = os.path.join(
- cfg.experiment_dir, "video-png",
- "{}-iter{}.png".format(pathn_record_str, ii))
- img = cv2.imread(filename)
- # cv2.putText(
- # img, "Path:{} \nIteration:{}".format(pathn_record_str, ii),
- # (10, 10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 0, 0), 1)
- img_array.append(img)
-
- videoname = os.path.join(
- cfg.experiment_dir, "video-avi",
- "{}.avi".format(pathn_record_str))
- check_and_create_dir(videoname)
- out = cv2.VideoWriter(
- videoname,
- # cv2.VideoWriter_fourcc(*'mp4v'),
- cv2.VideoWriter_fourcc(*'FFV1'),
- 20.0, (w, h))
- for iii in range(len(img_array)):
- out.write(img_array[iii])
- out.release()
- # shutil.rmtree(os.path.join(cfg.experiment_dir, "video-png"))
-
- print("The last loss is: {}".format(loss.item()))
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/adjacent_difference.h b/spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/adjacent_difference.h
deleted file mode 100644
index d22b4aac348c13fdafa9f03662c820d8fc3b377b..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/adjacent_difference.h
+++ /dev/null
@@ -1,50 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <thrust/system/tbb/detail/execution_policy.h>
-#include <thrust/system/detail/generic/adjacent_difference.h>
-
-namespace thrust
-{
-namespace system
-{
-namespace tbb
-{
-namespace detail
-{
-
-template<typename DerivedPolicy, typename InputIterator, typename OutputIterator, typename BinaryFunction>
-  OutputIterator adjacent_difference(execution_policy<DerivedPolicy> &exec,
- InputIterator first,
- InputIterator last,
- OutputIterator result,
- BinaryFunction binary_op)
-{
- // tbb prefers generic::adjacent_difference to cpp::adjacent_difference
- return thrust::system::detail::generic::adjacent_difference(exec, first, last, result, binary_op);
-} // end adjacent_difference()
-
-} // end detail
-} // end tbb
-} // end system
-} // end thrust
-
diff --git a/spaces/Caoyunkang/Segment-Any-Anomaly/GroundingDINO/demo/gradio_app.py b/spaces/Caoyunkang/Segment-Any-Anomaly/GroundingDINO/demo/gradio_app.py
deleted file mode 100644
index 15e08323f485291df8b53eefd4691c087d7863f7..0000000000000000000000000000000000000000
--- a/spaces/Caoyunkang/Segment-Any-Anomaly/GroundingDINO/demo/gradio_app.py
+++ /dev/null
@@ -1,125 +0,0 @@
-import argparse
-from functools import partial
-import cv2
-import requests
-import os
-from io import BytesIO
-from PIL import Image
-import numpy as np
-from pathlib import Path
-
-
-import warnings
-
-import torch
-
-# prepare the environment
-os.system("python setup.py build develop --user")
-os.system("pip install packaging==21.3")
-os.system("pip install gradio")
-
-
-warnings.filterwarnings("ignore")
-
-import gradio as gr
-
-from groundingdino.models import build_model
-from groundingdino.util.slconfig import SLConfig
-from groundingdino.util.utils import clean_state_dict
-from groundingdino.util.inference import annotate, load_image, predict
-import groundingdino.datasets.transforms as T
-
-from huggingface_hub import hf_hub_download
-
-
-
-# Use this command for evaluate the GLIP-T model
-config_file = "groundingdino/config/GroundingDINO_SwinT_OGC.py"
-ckpt_repo_id = "ShilongLiu/GroundingDINO"
-ckpt_filename = "groundingdino_swint_ogc.pth"
-
-
-def load_model_hf(model_config_path, repo_id, filename, device='cpu'):
- args = SLConfig.fromfile(model_config_path)
- model = build_model(args)
- args.device = device
-
- cache_file = hf_hub_download(repo_id=repo_id, filename=filename)
- checkpoint = torch.load(cache_file, map_location='cpu')
- log = model.load_state_dict(clean_state_dict(checkpoint['model']), strict=False)
- print("Model loaded from {} \n => {}".format(cache_file, log))
- _ = model.eval()
- return model
-
-def image_transform_grounding(init_image):
- transform = T.Compose([
- T.RandomResize([800], max_size=1333),
- T.ToTensor(),
- T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
- ])
- image, _ = transform(init_image, None) # 3, h, w
- return init_image, image
-
-def image_transform_grounding_for_vis(init_image):
- transform = T.Compose([
- T.RandomResize([800], max_size=1333),
- ])
- image, _ = transform(init_image, None) # 3, h, w
- return image
-
-model = load_model_hf(config_file, ckpt_repo_id, ckpt_filename)
-
-def run_grounding(input_image, grounding_caption, box_threshold, text_threshold):
- init_image = input_image.convert("RGB")
- original_size = init_image.size
-
- _, image_tensor = image_transform_grounding(init_image)
- image_pil: Image = image_transform_grounding_for_vis(init_image)
-
-    # run grounding
- boxes, logits, phrases = predict(model, image_tensor, grounding_caption, box_threshold, text_threshold, device='cpu')
- annotated_frame = annotate(image_source=np.asarray(image_pil), boxes=boxes, logits=logits, phrases=phrases)
- image_with_box = Image.fromarray(cv2.cvtColor(annotated_frame, cv2.COLOR_BGR2RGB))
-
-
- return image_with_box
-
-if __name__ == "__main__":
-
- parser = argparse.ArgumentParser("Grounding DINO demo", add_help=True)
- parser.add_argument("--debug", action="store_true", help="using debug mode")
- parser.add_argument("--share", action="store_true", help="share the app")
- args = parser.parse_args()
-
- block = gr.Blocks().queue()
- with block:
- gr.Markdown("# [Grounding DINO](https://github.com/IDEA-Research/GroundingDINO)")
- gr.Markdown("### Open-World Detection with Grounding DINO")
-
- with gr.Row():
- with gr.Column():
- input_image = gr.Image(source='upload', type="pil")
- grounding_caption = gr.Textbox(label="Detection Prompt")
- run_button = gr.Button(label="Run")
- with gr.Accordion("Advanced options", open=False):
- box_threshold = gr.Slider(
- label="Box Threshold", minimum=0.0, maximum=1.0, value=0.25, step=0.001
- )
- text_threshold = gr.Slider(
- label="Text Threshold", minimum=0.0, maximum=1.0, value=0.25, step=0.001
- )
-
- with gr.Column():
- gallery = gr.outputs.Image(
- type="pil",
- # label="grounding results"
- ).style(full_width=True, full_height=True)
- # gallery = gr.Gallery(label="Generated images", show_label=False).style(
- # grid=[1], height="auto", container=True, full_width=True, full_height=True)
-
- run_button.click(fn=run_grounding, inputs=[
- input_image, grounding_caption, box_threshold, text_threshold], outputs=[gallery])
-
-
- block.launch(server_name='0.0.0.0', server_port=7579, debug=args.debug, share=args.share)
-
diff --git a/spaces/Dao3/DreamlikeArt-PhotoReal-2.0/style.css b/spaces/Dao3/DreamlikeArt-PhotoReal-2.0/style.css
deleted file mode 100644
index fdbef9e64cc6b9f8003698ffa38997ee22a640ac..0000000000000000000000000000000000000000
--- a/spaces/Dao3/DreamlikeArt-PhotoReal-2.0/style.css
+++ /dev/null
@@ -1,84 +0,0 @@
-#col-container {
- max-width: 800px;
- margin-left: auto;
- margin-right: auto;
-}
-a {
- color: inherit;
- text-decoration: underline;
-}
-.gradio-container {
- font-family: 'IBM Plex Sans', sans-serif;
-}
-.gr-button {
- color: white;
- border-color: #9d66e5;
- background: #9d66e5;
-}
-input[type='range'] {
- accent-color: #9d66e5;
-}
-.dark input[type='range'] {
- accent-color: #dfdfdf;
-}
-.container {
- max-width: 800px;
- margin: auto;
- padding-top: 1.5rem;
-}
-#gallery {
- min-height: 22rem;
- margin-bottom: 15px;
- margin-left: auto;
- margin-right: auto;
- border-bottom-right-radius: .5rem !important;
- border-bottom-left-radius: .5rem !important;
-}
-#gallery>div>.h-full {
- min-height: 20rem;
-}
-.details:hover {
- text-decoration: underline;
-}
-.gr-button {
- white-space: nowrap;
-}
-.gr-button:focus {
- border-color: rgb(147 197 253 / var(--tw-border-opacity));
- outline: none;
- box-shadow: var(--tw-ring-offset-shadow), var(--tw-ring-shadow), var(--tw-shadow, 0 0 #0000);
- --tw-border-opacity: 1;
- --tw-ring-offset-shadow: var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color);
- --tw-ring-shadow: var(--tw-ring-inset) 0 0 0 calc(3px var(--tw-ring-offset-width)) var(--tw-ring-color);
- --tw-ring-color: rgb(191 219 254 / var(--tw-ring-opacity));
- --tw-ring-opacity: .5;
-}
-#advanced-options {
- margin-bottom: 20px;
-}
-.footer {
- margin-bottom: 45px;
- margin-top: 35px;
- text-align: center;
- border-bottom: 1px solid #e5e5e5;
-}
-.footer>p {
- font-size: .8rem;
- display: inline-block;
- padding: 0 10px;
- transform: translateY(10px);
- background: white;
-}
-.dark .logo{ filter: invert(1); }
-.dark .footer {
- border-color: #303030;
-}
-.dark .footer>p {
- background: #0b0f19;
-}
-.acknowledgments h4{
- margin: 1.25em 0 .25em 0;
- font-weight: bold;
- font-size: 115%;
-}
-
diff --git a/spaces/Dinoking/Guccio-AI-Designer/models/biggan/pytorch_biggan/pytorch_pretrained_biggan/config.py b/spaces/Dinoking/Guccio-AI-Designer/models/biggan/pytorch_biggan/pytorch_pretrained_biggan/config.py
deleted file mode 100644
index 454236a4bfa0d11fda0d52e0ce9b2926f8c32d30..0000000000000000000000000000000000000000
--- a/spaces/Dinoking/Guccio-AI-Designer/models/biggan/pytorch_biggan/pytorch_pretrained_biggan/config.py
+++ /dev/null
@@ -1,70 +0,0 @@
-# coding: utf-8
-"""
-BigGAN config.
-"""
-from __future__ import (absolute_import, division, print_function, unicode_literals)
-
-import copy
-import json
-
-class BigGANConfig(object):
- """ Configuration class to store the configuration of a `BigGAN`.
- Defaults are for the 128x128 model.
-        Each entry in the layers list is a tuple: (up-sample in this layer?, input channels, output channels).
- """
- def __init__(self,
- output_dim=128,
- z_dim=128,
- class_embed_dim=128,
- channel_width=128,
- num_classes=1000,
- layers=[(False, 16, 16),
- (True, 16, 16),
- (False, 16, 16),
- (True, 16, 8),
- (False, 8, 8),
- (True, 8, 4),
- (False, 4, 4),
- (True, 4, 2),
- (False, 2, 2),
- (True, 2, 1)],
- attention_layer_position=8,
- eps=1e-4,
- n_stats=51):
- """Constructs BigGANConfig. """
- self.output_dim = output_dim
- self.z_dim = z_dim
- self.class_embed_dim = class_embed_dim
- self.channel_width = channel_width
- self.num_classes = num_classes
- self.layers = layers
- self.attention_layer_position = attention_layer_position
- self.eps = eps
- self.n_stats = n_stats
-
- @classmethod
- def from_dict(cls, json_object):
- """Constructs a `BigGANConfig` from a Python dictionary of parameters."""
- config = BigGANConfig()
- for key, value in json_object.items():
- config.__dict__[key] = value
- return config
-
- @classmethod
- def from_json_file(cls, json_file):
- """Constructs a `BigGANConfig` from a json file of parameters."""
- with open(json_file, "r", encoding='utf-8') as reader:
- text = reader.read()
- return cls.from_dict(json.loads(text))
-
- def __repr__(self):
- return str(self.to_json_string())
-
- def to_dict(self):
- """Serializes this instance to a Python dictionary."""
- output = copy.deepcopy(self.__dict__)
- return output
-
- def to_json_string(self):
- """Serializes this instance to a JSON string."""
- return json.dumps(self.to_dict(), indent=2, sort_keys=True) + "\n"
diff --git a/spaces/DragGan/DragGan-Inversion/training/dataset.py b/spaces/DragGan/DragGan-Inversion/training/dataset.py
deleted file mode 100644
index f04842155f754b0aac49b91b1de1de6db017a776..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan-Inversion/training/dataset.py
+++ /dev/null
@@ -1,252 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-"""Streaming images and labels from datasets created with dataset_tool.py."""
-
-import os
-import numpy as np
-import zipfile
-import PIL.Image
-import json
-import torch
-import dnnlib
-
-try:
- import pyspng
-except ImportError:
- pyspng = None
-
-# ----------------------------------------------------------------------------
-
-
-class Dataset(torch.utils.data.Dataset):
- def __init__(self,
- name, # Name of the dataset.
- raw_shape, # Shape of the raw image data (NCHW).
- # Artificially limit the size of the dataset. None = no limit. Applied before xflip.
- max_size=None,
- # Enable conditioning labels? False = label dimension is zero.
- use_labels=False,
- # Artificially double the size of the dataset via x-flips. Applied after max_size.
- xflip=False,
- # Random seed to use when applying max_size.
- random_seed=0,
- ):
- self._name = name
- self._raw_shape = list(raw_shape)
- self._use_labels = use_labels
- self._raw_labels = None
- self._label_shape = None
-
- # Apply max_size.
- self._raw_idx = np.arange(self._raw_shape[0], dtype=np.int64)
- if (max_size is not None) and (self._raw_idx.size > max_size):
- np.random.RandomState(random_seed).shuffle(self._raw_idx)
- self._raw_idx = np.sort(self._raw_idx[:max_size])
-
- # Apply xflip.
- self._xflip = np.zeros(self._raw_idx.size, dtype=np.uint8)
- if xflip:
- self._raw_idx = np.tile(self._raw_idx, 2)
- self._xflip = np.concatenate(
- [self._xflip, np.ones_like(self._xflip)])
-
- def _get_raw_labels(self):
- if self._raw_labels is None:
- self._raw_labels = self._load_raw_labels() if self._use_labels else None
- if self._raw_labels is None:
- self._raw_labels = np.zeros(
- [self._raw_shape[0], 0], dtype=np.float32)
- assert isinstance(self._raw_labels, np.ndarray)
- assert self._raw_labels.shape[0] == self._raw_shape[0]
- assert self._raw_labels.dtype in [np.float32, np.int64]
- if self._raw_labels.dtype == np.int64:
- assert self._raw_labels.ndim == 1
- assert np.all(self._raw_labels >= 0)
- return self._raw_labels
-
- def close(self): # to be overridden by subclass
- pass
-
- def _load_raw_image(self, raw_idx): # to be overridden by subclass
- raise NotImplementedError
-
- def _load_raw_labels(self): # to be overridden by subclass
- raise NotImplementedError
-
- def __getstate__(self):
- return dict(self.__dict__, _raw_labels=None)
-
- def __del__(self):
- try:
- self.close()
- except:
- pass
-
- def __len__(self):
- return self._raw_idx.size
-
- def __getitem__(self, idx):
- image = self._load_raw_image(self._raw_idx[idx])
- assert isinstance(image, np.ndarray)
- assert list(image.shape) == self.image_shape
- assert image.dtype == np.uint8
- if self._xflip[idx]:
- assert image.ndim == 3 # CHW
- image = image[:, :, ::-1]
- return image.copy(), self.get_label(idx)
-
- def get_label(self, idx):
- label = self._get_raw_labels()[self._raw_idx[idx]]
- if label.dtype == np.int64:
- onehot = np.zeros(self.label_shape, dtype=np.float32)
- onehot[label] = 1
- label = onehot
- return label.copy()
-
- def get_details(self, idx):
- d = dnnlib.EasyDict()
- d.raw_idx = int(self._raw_idx[idx])
- d.xflip = (int(self._xflip[idx]) != 0)
- d.raw_label = self._get_raw_labels()[d.raw_idx].copy()
- return d
-
- @property
- def name(self):
- return self._name
-
- @property
- def image_shape(self):
- return list(self._raw_shape[1:])
-
- @property
- def num_channels(self):
- assert len(self.image_shape) == 3 # CHW
- return self.image_shape[0]
-
- @property
- def resolution(self):
- assert len(self.image_shape) == 3 # CHW
- assert self.image_shape[1] == self.image_shape[2]
- return self.image_shape[1]
-
- @property
- def label_shape(self):
- if self._label_shape is None:
- raw_labels = self._get_raw_labels()
- if raw_labels.dtype == np.int64:
- self._label_shape = [int(np.max(raw_labels)) + 1]
- else:
- self._label_shape = raw_labels.shape[1:]
- return list(self._label_shape)
-
- @property
- def label_dim(self):
- assert len(self.label_shape) == 1
- return self.label_shape[0]
-
- @property
- def has_labels(self):
- return any(x != 0 for x in self.label_shape)
-
- @property
- def has_onehot_labels(self):
- return self._get_raw_labels().dtype == np.int64
-
-# ----------------------------------------------------------------------------
-
-
-class ImageFolderDataset(Dataset):
- def __init__(self,
- path, # Path to directory or zip.
- # Ensure specific resolution, None = highest available.
- resolution=None,
- # Additional arguments for the Dataset base class.
- **super_kwargs,
- ):
- self._path = path
- self._zipfile = None
-
- if os.path.isdir(self._path):
- self._type = 'dir'
- self._all_fnames = {os.path.relpath(os.path.join(
- root, fname), start=self._path) for root, _dirs, files in os.walk(self._path) for fname in files}
- elif self._file_ext(self._path) == '.zip':
- self._type = 'zip'
- self._all_fnames = set(self._get_zipfile().namelist())
- else:
- raise IOError('Path must point to a directory or zip')
-
- PIL.Image.init()
- self._image_fnames = sorted(
- fname for fname in self._all_fnames if self._file_ext(fname) in PIL.Image.EXTENSION)
- if len(self._image_fnames) == 0:
- raise IOError('No image files found in the specified path')
-
- name = os.path.splitext(os.path.basename(self._path))[0]
- raw_shape = [len(self._image_fnames)] + \
- list(self._load_raw_image(0).shape)
- if resolution is not None and (raw_shape[2] != resolution or raw_shape[3] != resolution):
- raise IOError('Image files do not match the specified resolution')
- super().__init__(name=name, raw_shape=raw_shape, **super_kwargs)
-
- @staticmethod
- def _file_ext(fname):
- return os.path.splitext(fname)[1].lower()
-
- def _get_zipfile(self):
- assert self._type == 'zip'
- if self._zipfile is None:
- self._zipfile = zipfile.ZipFile(self._path)
- return self._zipfile
-
- def _open_file(self, fname):
- if self._type == 'dir':
- return open(os.path.join(self._path, fname), 'rb')
- if self._type == 'zip':
- return self._get_zipfile().open(fname, 'r')
- return None
-
- def close(self):
- try:
- if self._zipfile is not None:
- self._zipfile.close()
- finally:
- self._zipfile = None
-
- def __getstate__(self):
- return dict(super().__getstate__(), _zipfile=None)
-
- def _load_raw_image(self, raw_idx):
- fname = self._image_fnames[raw_idx]
- with self._open_file(fname) as f:
- if pyspng is not None and self._file_ext(fname) == '.png':
- image = pyspng.load(f.read())
- else:
- image = np.array(PIL.Image.open(f))
- if image.ndim == 2:
- image = image[:, :, np.newaxis] # HW => HWC
- image = image.transpose(2, 0, 1) # HWC => CHW
- return image
-
- def _load_raw_labels(self):
- fname = 'dataset.json'
- if fname not in self._all_fnames:
- return None
- with self._open_file(fname) as f:
- labels = json.load(f)['labels']
- if labels is None:
- return None
- labels = dict(labels)
- labels = [labels[fname.replace('\\', '/')]
- for fname in self._image_fnames]
- labels = np.array(labels)
- labels = labels.astype({1: np.int64, 2: np.float32}[labels.ndim])
- return labels
-
-# ----------------------------------------------------------------------------
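A short sketch of how the `ImageFolderDataset` above is typically consumed; the archive path, resolution and loader settings are placeholders chosen for illustration rather than values from the original training configs.

```python
import torch

# Hypothetical usage of the dataset class defined above (path and options are illustrative).
dataset = ImageFolderDataset(
    path="datasets/example-images.zip",  # placeholder: a zip archive or folder of images
    resolution=256,                      # require a fixed resolution, None = accept as-is
    use_labels=False,                    # no dataset.json labels assumed in this sketch
    max_size=10_000,                     # subsample the dataset (applied before xflip)
    xflip=True,                          # double the effective size via horizontal flips
)

loader = torch.utils.data.DataLoader(dataset, batch_size=8, shuffle=True, num_workers=2)
images, labels = next(iter(loader))
print(images.shape, images.dtype)        # e.g. torch.Size([8, 3, 256, 256]) torch.uint8
print(dataset.label_shape, dataset.has_labels)
```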
diff --git a/spaces/ElainaFanBoy/MusicGen/Makefile b/spaces/ElainaFanBoy/MusicGen/Makefile
deleted file mode 100644
index 5bfd89dd833d7448b21073eb6ee7cfac1d5157dd..0000000000000000000000000000000000000000
--- a/spaces/ElainaFanBoy/MusicGen/Makefile
+++ /dev/null
@@ -1,21 +0,0 @@
-default: linter tests
-
-install:
- pip install -U pip
- pip install -U -e '.[dev]'
-
-linter:
- flake8 audiocraft && mypy audiocraft
- flake8 tests && mypy tests
-
-tests:
- coverage run -m pytest tests
- coverage report --include 'audiocraft/*'
-
-docs:
- pdoc3 --html -o docs -f audiocraft
-
-dist:
- python setup.py sdist
-
-.PHONY: linter tests docs dist
diff --git a/spaces/EyanAn/vits-uma-genshin-honkai/text/symbols.py b/spaces/EyanAn/vits-uma-genshin-honkai/text/symbols.py
deleted file mode 100644
index edfbd24247be8c757275ce80b9ec27a0ffa808f3..0000000000000000000000000000000000000000
--- a/spaces/EyanAn/vits-uma-genshin-honkai/text/symbols.py
+++ /dev/null
@@ -1,39 +0,0 @@
-'''
-Defines the set of symbols used in text input to the model.
-'''
-
-'''# japanese_cleaners
-_pad = '_'
-_punctuation = ',.!?-'
-_letters = 'AEINOQUabdefghijkmnoprstuvwyzʃʧ↓↑ '
-'''
-
-'''# japanese_cleaners2
-_pad = '_'
-_punctuation = ',.!?-~…'
-_letters = 'AEINOQUabdefghijkmnoprstuvwyzʃʧʦ↓↑ '
-'''
-
-'''# korean_cleaners
-_pad = '_'
-_punctuation = ',.!?…~'
-_letters = 'ㄱㄴㄷㄹㅁㅂㅅㅇㅈㅊㅋㅌㅍㅎㄲㄸㅃㅆㅉㅏㅓㅗㅜㅡㅣㅐㅔ '
-'''
-
-'''# chinese_cleaners
-_pad = '_'
-_punctuation = ',。!?—…'
-_letters = 'ㄅㄆㄇㄈㄉㄊㄋㄌㄍㄎㄏㄐㄑㄒㄓㄔㄕㄖㄗㄘㄙㄚㄛㄜㄝㄞㄟㄠㄡㄢㄣㄤㄥㄦㄧㄨㄩˉˊˇˋ˙ '
-'''
-
-# zh_ja_mixture_cleaners
-_pad = '_'
-_punctuation = ',.!?-~…'
-_letters = 'AEINOQUabdefghijklmnoprstuvwyzʃʧʦɯɹəɥ⁼ʰ`→↓↑ '
-
-
-# Export all symbols:
-symbols = [_pad] + list(_punctuation) + list(_letters)
-
-# Special symbol ids
-SPACE_ID = symbols.index(" ")
\ No newline at end of file
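For context, here is a brief sketch of how a symbol table like the one above is usually consumed by a TTS text front end; the `text_to_sequence` helper below is a hypothetical illustration, not a function from this repository.

```python
# Hypothetical helper: map already-cleaned text onto the symbol IDs defined above.
_symbol_to_id = {s: i for i, s in enumerate(symbols)}

def text_to_sequence(cleaned_text):
    """Convert cleaned text into a list of symbol IDs, skipping characters not in the table."""
    return [_symbol_to_id[ch] for ch in cleaned_text if ch in _symbol_to_id]

print(text_to_sequence("watashi wa genki desu."))  # arbitrary romanized example
print(SPACE_ID)                                    # index of the space symbol
```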
diff --git a/spaces/FSDL-Fashion/fashion_img_search/fis/feature_extraction/pipeline/factory.py b/spaces/FSDL-Fashion/fashion_img_search/fis/feature_extraction/pipeline/factory.py
deleted file mode 100644
index 9049ad20c314063e30c84fb60f9b2ff4edb06c17..0000000000000000000000000000000000000000
--- a/spaces/FSDL-Fashion/fashion_img_search/fis/feature_extraction/pipeline/factory.py
+++ /dev/null
@@ -1,51 +0,0 @@
-from fis.feature_extraction.detection.base import BaseDetector
-from fis.feature_extraction.embedding.base import BaseEncoder
-from fis.feature_extraction.pipeline.base import EncodingPipeline
-
-
-class PipelineFactory:
- """Factory method for encoding pipelines.
-
- Example use:
- >>> from fis.feature_extraction.pipeline.factory import PipelineFactory
- >>> factory = PipelineFactory()
- >>> factory.register_pipeline(
- ... name="example_pipeline",
- ... detection_model=BaseDetector(),
- ... embedding_model=BaseEncoder()
- ... )
- >>> pipeline = factory.get('example_pipeline')
- """
-
- def __init__(self):
- """Instantiate factory object."""
- self._pipelines = {}
-
- def register_pipeline(self, name: str, detection_model: BaseDetector, embedding_model: BaseEncoder) -> None:
- """Register a new pipeline to the factory.
-
- Args:
- name: Name of the pipeline to create.
- detection_model: Instance of a BaseDetector object.
- embedding_model: Instance of a BaseEncoder object.
- """
- pipeline = EncodingPipeline(name=name, detection_model=detection_model, embedding_model=embedding_model)
- self._pipelines[name] = pipeline
-
- def get(self, name: str) -> EncodingPipeline:
- """Get a pipeline from its name.
-
- Args:
- name: Name of the pipeline to get.
-
- Raises:
- ValueError: If no pipeline has been registered with the given name.
-
- Returns:
- Encoding pipeline.
- """
- pipeline = self._pipelines.get(name)
- if not pipeline:
- raise ValueError(name)
-
- return pipeline
diff --git a/spaces/Fernando22/freegpt-webui/Dockerfile b/spaces/Fernando22/freegpt-webui/Dockerfile
deleted file mode 100644
index 7ac29c145f7d05ea9b1344e50e634629c9d88984..0000000000000000000000000000000000000000
--- a/spaces/Fernando22/freegpt-webui/Dockerfile
+++ /dev/null
@@ -1,18 +0,0 @@
-FROM python:3.10-slim-buster
-
-WORKDIR /app
-
-COPY requirements.txt requirements.txt
-
-RUN python -m venv venv
-ENV PATH="/app/venv/bin:$PATH"
-
-RUN apt-get update && \
- apt-get install -y --no-install-recommends build-essential libffi-dev cmake libcurl4-openssl-dev && \
- pip3 install --no-cache-dir -r requirements.txt
-
-COPY . .
-
-RUN chmod -R 777 translations
-
-CMD ["python3", "./run.py"]
diff --git a/spaces/FridaZuley/RVC_HFKawaii/rvc_for_realtime.py b/spaces/FridaZuley/RVC_HFKawaii/rvc_for_realtime.py
deleted file mode 100644
index 55070f668c385ba0a9ba50989b282448cd75e59b..0000000000000000000000000000000000000000
--- a/spaces/FridaZuley/RVC_HFKawaii/rvc_for_realtime.py
+++ /dev/null
@@ -1,297 +0,0 @@
-import faiss, torch, traceback, parselmouth, numpy as np, torchcrepe, torch.nn as nn, pyworld
-from fairseq import checkpoint_utils
-from lib.infer_pack.models import (
- SynthesizerTrnMs256NSFsid,
- SynthesizerTrnMs256NSFsid_nono,
- SynthesizerTrnMs768NSFsid,
- SynthesizerTrnMs768NSFsid_nono,
-)
-import os, sys
-from time import time as ttime
-import torch.nn.functional as F
-import scipy.signal as signal
-
-now_dir = os.getcwd()
-sys.path.append(now_dir)
-from configs.config import Config
-from multiprocessing import Manager as M
-
-mm = M()
-config = Config()
-
-
-class RVC:
- def __init__(
- self, key, pth_path, index_path, index_rate, n_cpu, inp_q, opt_q, device
- ) -> None:
- """
-        Initialize: load the HuBERT feature extractor, the optional index and the voice model.
- """
- try:
- global config
- self.inp_q = inp_q
- self.opt_q = opt_q
- self.device = device
- self.f0_up_key = key
- self.time_step = 160 / 16000 * 1000
- self.f0_min = 50
- self.f0_max = 1100
- self.f0_mel_min = 1127 * np.log(1 + self.f0_min / 700)
- self.f0_mel_max = 1127 * np.log(1 + self.f0_max / 700)
- self.sr = 16000
- self.window = 160
- self.n_cpu = n_cpu
- if index_rate != 0:
- self.index = faiss.read_index(index_path)
- self.big_npy = self.index.reconstruct_n(0, self.index.ntotal)
- print("index search enabled")
- self.index_rate = index_rate
- models, _, _ = checkpoint_utils.load_model_ensemble_and_task(
- ["hubert_base.pt"],
- suffix="",
- )
- hubert_model = models[0]
- hubert_model = hubert_model.to(config.device)
- if config.is_half:
- hubert_model = hubert_model.half()
- else:
- hubert_model = hubert_model.float()
- hubert_model.eval()
- self.model = hubert_model
- cpt = torch.load(pth_path, map_location="cpu")
- self.tgt_sr = cpt["config"][-1]
- cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0]
- self.if_f0 = cpt.get("f0", 1)
- self.version = cpt.get("version", "v1")
- if self.version == "v1":
- if self.if_f0 == 1:
- self.net_g = SynthesizerTrnMs256NSFsid(
- *cpt["config"], is_half=config.is_half
- )
- else:
- self.net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"])
- elif self.version == "v2":
- if self.if_f0 == 1:
- self.net_g = SynthesizerTrnMs768NSFsid(
- *cpt["config"], is_half=config.is_half
- )
- else:
- self.net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"])
- del self.net_g.enc_q
- print(self.net_g.load_state_dict(cpt["weight"], strict=False))
- self.net_g.eval().to(device)
- if config.is_half:
- self.net_g = self.net_g.half()
- else:
- self.net_g = self.net_g.float()
- self.is_half = config.is_half
- except:
- print(traceback.format_exc())
-
- def get_f0_post(self, f0):
- f0_min = self.f0_min
- f0_max = self.f0_max
- f0_mel_min = 1127 * np.log(1 + f0_min / 700)
- f0_mel_max = 1127 * np.log(1 + f0_max / 700)
- f0bak = f0.copy()
- f0_mel = 1127 * np.log(1 + f0 / 700)
- f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (
- f0_mel_max - f0_mel_min
- ) + 1
- f0_mel[f0_mel <= 1] = 1
- f0_mel[f0_mel > 255] = 255
- f0_coarse = np.rint(f0_mel).astype(np.int_)
- return f0_coarse, f0bak
-
- def get_f0(self, x, f0_up_key, n_cpu, method="harvest"):
- n_cpu = int(n_cpu)
- if method == "crepe":
- return self.get_f0_crepe(x, f0_up_key)
- if method == "rmvpe":
- return self.get_f0_rmvpe(x, f0_up_key)
- if method == "pm":
- p_len = x.shape[0] // 160
- f0 = (
- parselmouth.Sound(x, 16000)
- .to_pitch_ac(
- time_step=0.01,
- voicing_threshold=0.6,
- pitch_floor=50,
- pitch_ceiling=1100,
- )
- .selected_array["frequency"]
- )
-
- pad_size = (p_len - len(f0) + 1) // 2
- if pad_size > 0 or p_len - len(f0) - pad_size > 0:
- print(pad_size, p_len - len(f0) - pad_size)
- f0 = np.pad(
- f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant"
- )
-
- f0 *= pow(2, f0_up_key / 12)
- return self.get_f0_post(f0)
- if n_cpu == 1:
- f0, t = pyworld.harvest(
- x.astype(np.double),
- fs=16000,
- f0_ceil=1100,
- f0_floor=50,
- frame_period=10,
- )
- f0 = signal.medfilt(f0, 3)
- f0 *= pow(2, f0_up_key / 12)
- return self.get_f0_post(f0)
- f0bak = np.zeros(x.shape[0] // 160, dtype=np.float64)
- length = len(x)
- part_length = int(length / n_cpu / 160) * 160
- ts = ttime()
- res_f0 = mm.dict()
- for idx in range(n_cpu):
- tail = part_length * (idx + 1) + 320
- if idx == 0:
- self.inp_q.put((idx, x[:tail], res_f0, n_cpu, ts))
- else:
- self.inp_q.put(
- (idx, x[part_length * idx - 320 : tail], res_f0, n_cpu, ts)
- )
- while 1:
- res_ts = self.opt_q.get()
- if res_ts == ts:
- break
- f0s = [i[1] for i in sorted(res_f0.items(), key=lambda x: x[0])]
- for idx, f0 in enumerate(f0s):
- if idx == 0:
- f0 = f0[:-3]
- elif idx != n_cpu - 1:
- f0 = f0[2:-3]
- else:
- f0 = f0[2:-1]
- f0bak[
- part_length * idx // 160 : part_length * idx // 160 + f0.shape[0]
- ] = f0
- f0bak = signal.medfilt(f0bak, 3)
- f0bak *= pow(2, f0_up_key / 12)
- return self.get_f0_post(f0bak)
-
- def get_f0_crepe(self, x, f0_up_key):
- audio = torch.tensor(np.copy(x))[None].float()
- f0, pd = torchcrepe.predict(
- audio,
- self.sr,
- 160,
- self.f0_min,
- self.f0_max,
- "full",
- batch_size=512,
- device=self.device,
- return_periodicity=True,
- )
- pd = torchcrepe.filter.median(pd, 3)
- f0 = torchcrepe.filter.mean(f0, 3)
- f0[pd < 0.1] = 0
- f0 = f0[0].cpu().numpy()
- f0 *= pow(2, f0_up_key / 12)
- return self.get_f0_post(f0)
-
- def get_f0_rmvpe(self, x, f0_up_key):
- if hasattr(self, "model_rmvpe") == False:
- from infer.lib.rmvpe import RMVPE
-
- print("loading rmvpe model")
- self.model_rmvpe = RMVPE(
- "rmvpe.pt", is_half=self.is_half, device=self.device
- )
- # self.model_rmvpe = RMVPE("aug2_58000_half.pt", is_half=self.is_half, device=self.device)
- f0 = self.model_rmvpe.infer_from_audio(x, thred=0.03)
- f0 *= pow(2, f0_up_key / 12)
- return self.get_f0_post(f0)
-
- def infer(
- self,
- feats: torch.Tensor,
- indata: np.ndarray,
- rate1,
- rate2,
- cache_pitch,
- cache_pitchf,
- f0method,
- ) -> np.ndarray:
- feats = feats.view(1, -1)
- if config.is_half:
- feats = feats.half()
- else:
- feats = feats.float()
- feats = feats.to(self.device)
- t1 = ttime()
- with torch.no_grad():
- padding_mask = torch.BoolTensor(feats.shape).to(self.device).fill_(False)
- inputs = {
- "source": feats,
- "padding_mask": padding_mask,
- "output_layer": 9 if self.version == "v1" else 12,
- }
- logits = self.model.extract_features(**inputs)
- feats = (
- self.model.final_proj(logits[0]) if self.version == "v1" else logits[0]
- )
- t2 = ttime()
- try:
- if hasattr(self, "index") and self.index_rate != 0:
- leng_replace_head = int(rate1 * feats[0].shape[0])
- npy = feats[0][-leng_replace_head:].cpu().numpy().astype("float32")
- score, ix = self.index.search(npy, k=8)
- weight = np.square(1 / score)
- weight /= weight.sum(axis=1, keepdims=True)
- npy = np.sum(self.big_npy[ix] * np.expand_dims(weight, axis=2), axis=1)
- if config.is_half:
- npy = npy.astype("float16")
- feats[0][-leng_replace_head:] = (
- torch.from_numpy(npy).unsqueeze(0).to(self.device) * self.index_rate
- + (1 - self.index_rate) * feats[0][-leng_replace_head:]
- )
- else:
- print("index search FAIL or disabled")
- except:
- traceback.print_exc()
- print("index search FAIL")
- feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1)
- t3 = ttime()
- if self.if_f0 == 1:
- pitch, pitchf = self.get_f0(indata, self.f0_up_key, self.n_cpu, f0method)
- cache_pitch[:] = np.append(cache_pitch[pitch[:-1].shape[0] :], pitch[:-1])
- cache_pitchf[:] = np.append(
- cache_pitchf[pitchf[:-1].shape[0] :], pitchf[:-1]
- )
- p_len = min(feats.shape[1], 13000, cache_pitch.shape[0])
- else:
- cache_pitch, cache_pitchf = None, None
- p_len = min(feats.shape[1], 13000)
- t4 = ttime()
- feats = feats[:, :p_len, :]
- if self.if_f0 == 1:
- cache_pitch = cache_pitch[:p_len]
- cache_pitchf = cache_pitchf[:p_len]
- cache_pitch = torch.LongTensor(cache_pitch).unsqueeze(0).to(self.device)
- cache_pitchf = torch.FloatTensor(cache_pitchf).unsqueeze(0).to(self.device)
- p_len = torch.LongTensor([p_len]).to(self.device)
- ii = 0 # sid
- sid = torch.LongTensor([ii]).to(self.device)
- with torch.no_grad():
- if self.if_f0 == 1:
- infered_audio = (
- self.net_g.infer(
- feats, p_len, cache_pitch, cache_pitchf, sid, rate2
- )[0][0, 0]
- .data.cpu()
- .float()
- )
- else:
- infered_audio = (
- self.net_g.infer(feats, p_len, sid, rate2)[0][0, 0]
- .data.cpu()
- .float()
- )
- t5 = ttime()
- print("time->fea-index-f0-model:", t2 - t1, t3 - t2, t4 - t3, t5 - t4)
- return infered_audio
diff --git a/spaces/GIZ/SDSN-demo/appStore/coherence.py b/spaces/GIZ/SDSN-demo/appStore/coherence.py
deleted file mode 100644
index 6f53bb1e5575a30e92d4698f61b3e65f6249af97..0000000000000000000000000000000000000000
--- a/spaces/GIZ/SDSN-demo/appStore/coherence.py
+++ /dev/null
@@ -1,156 +0,0 @@
-# set path
-import glob, os, sys;
-sys.path.append('../utils')
-
-import streamlit as st
-import ast
-import logging
-from utils.ndc_explorer import countrySpecificCCA, countrySpecificCCM
-from utils.checkconfig import getconfig
-from utils.semantic_search import runSemanticPreprocessingPipeline,process_semantic_output
-from utils.semantic_search import semanticSearchPipeline, runSemanticPipeline
-from st_aggrid import AgGrid
-from st_aggrid.shared import ColumnsAutoSizeMode
-
-# Reading data and Declaring necessary variables
-with open('docStore/ndcs/countryList.txt') as dfile:
- countryList = dfile.read()
-countryList = ast.literal_eval(countryList)
-countrynames = list(countryList.keys())
-
-with open('docStore/ndcs/cca.txt', encoding='utf-8', errors='ignore') as dfile:
- cca_sent = dfile.read()
-cca_sent = ast.literal_eval(cca_sent)
-
-with open('docStore/ndcs/ccm.txt', encoding='utf-8', errors='ignore') as dfile:
- ccm_sent = dfile.read()
-ccm_sent = ast.literal_eval(ccm_sent)
-
-config = getconfig('paramconfig.cfg')
-split_by = config.get('coherence','SPLIT_BY')
-split_length = int(config.get('coherence','SPLIT_LENGTH'))
-split_overlap = int(config.get('coherence','SPLIT_OVERLAP'))
-split_respect_sentence_boundary = bool(int(config.get('coherence',
- 'RESPECT_SENTENCE_BOUNDARY')))
-remove_punc = bool(int(config.get('coherence','REMOVE_PUNC')))
-embedding_model = config.get('coherence','RETRIEVER')
-embedding_model_format = config.get('coherence','RETRIEVER_FORMAT')
-embedding_layer = int(config.get('coherence','RETRIEVER_EMB_LAYER'))
-embedding_dim = int(config.get('coherence','EMBEDDING_DIM'))
-max_seq_len = int(config.get('coherence','MAX_SEQ_LENGTH'))
-retriever_top_k = int(config.get('coherence','RETRIEVER_TOP_K'))
-
-
-
-def app():
-
- #### APP INFO #####
- with st.container():
- st.markdown(" NDC Comparison
",
- unsafe_allow_html=True)
- st.write(' ')
- st.write(' ')
- with st.expander("ℹ️ - About this app", expanded=False):
-
- st.write(
- """
- The *NDC Comparison* application provides easy evaluation of
- coherence between a given policy document and a country’s (Intended)\
- Nationally Determined Contribution (INDCs/NDCs) using open-source \
- data from the German Institute of Development and Sustainability’s \
- (IDOS) [NDC Explorer](https://klimalog.idos-research.de/ndc/#NDCExplorer/worldMap?NewAndUpdatedNDC??income???catIncome).\
- """)
- st.write("")
- st.write(""" User can select a country context via the drop-down menu \
- on the left-hand side of the application. Subsequently, the user is \
- given the opportunity to manually upload another policy document \
- from the same national context or to select a pre-loaded example \
- document. Thereafter, the user can choose between two categories \
- to compare coherence between the documents: climate change adaptation \
- and climate change mitigation. Based on the selected information, \
- the application identifies relevant paragraphs in the uploaded \
- document and assigns them to the respective indicator from the NDC \
- Explorer. Currently, the NDC Explorer has 20 indicators under \
- climate change mitigation (e.g., fossil fuel production, REDD+) and \
- 22 indicators under climate change adaptation (e.g., sea level rise,\
-            investment needs). The assignment of a paragraph to a corresponding\
-            indicator is based on vector similarities, of which the top 3 results,
-            if found, are shown to the user. """)
- st.write("")
- st.write("")
- st.markdown("Some runtime metrics tested with cpu: Intel(R) Xeon(R) CPU @ 2.20GHz, memory: 13GB")
- col1,col2= st.columns(2)
- with col1:
- st.caption("OCR File processing")
- # st.markdown('50 sec
', unsafe_allow_html=True)
- st.write("50 sec")
-
- with col2:
- st.caption("NDC comparison on 200 paragraphs(~ 35 pages)")
-            # st.markdown('12 sec', unsafe_allow_html=True)
- st.write("140 sec")
-
- with st.sidebar:
-
- option = st.selectbox('Select Country', (countrynames))
- countryCode = countryList[option]
- st.markdown("---")
-
- genre = st.radio( "Select Category",('Climate Change Adaptation',
- 'Climate Change Mitigation'))
- st.markdown("---")
-
- with st.container():
- if st.button("Compare with NDC"):
- sent_cca = countrySpecificCCA(cca_sent,1,countryCode)
- sent_ccm = countrySpecificCCM(ccm_sent,1,countryCode)
-
- if 'filepath' in st.session_state:
- allDocuments = runSemanticPreprocessingPipeline(
- file_path= st.session_state['filepath'],
- file_name = st.session_state['filename'],
- split_by=split_by,
- split_length= split_length,
- split_overlap=split_overlap,
- remove_punc= remove_punc,
- split_respect_sentence_boundary=split_respect_sentence_boundary)
- # genre = st.radio( "Select Category",('Climate Change Adaptation', 'Climate Change Mitigation'))
- if genre == 'Climate Change Adaptation':
- sent_dict = sent_cca
- else:
- sent_dict = sent_ccm
- sent_labels = []
- for key,sent in sent_dict.items():
- sent_labels.append(sent)
- if len(allDocuments['documents']) > 100:
- warning_msg = ": This might take sometime, please sit back and relax."
- else:
- warning_msg = ""
- logging.info("starting Coherence analysis, \
- country selected {}".format(option))
- with st.spinner("Performing Coherence Analysis for {} \
- under {} category{}".format(option,genre,warning_msg)):
- semanticsearch_pipeline, doc_store = semanticSearchPipeline(documents = allDocuments['documents'],
- embedding_model= embedding_model,
- embedding_layer= embedding_layer,
- embedding_model_format= embedding_model_format,
- retriever_top_k= retriever_top_k,
- embedding_dim=embedding_dim,
- max_seq_len=max_seq_len, useQueryCheck=False)
- raw_output = runSemanticPipeline(pipeline=semanticsearch_pipeline,queries=sent_labels)
- results_df = process_semantic_output(raw_output)
- results_df = results_df.drop(['answer','answer_offset',
- 'context_offset','context','reader_score','id'],
- axis = 1)
-
- for i,key in enumerate(list(sent_dict.keys())):
- st.subheader("Relevant paragraphs for topic: {}".format(key))
- df = results_df[results_df['query']==sent_dict[key]].reset_index(drop=True)
- for j in range(3):
- st.write('Result {}.'.format(j+1))
- st.write(df.loc[j]['content']+'\n')
-
- else:
- st.info("🤔 No document found, please try to upload it at the sidebar!")
- logging.warning("Terminated as no document provided")
\ No newline at end of file
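The app description above explains that uploaded paragraphs are matched to NDC indicators by vector similarity, with the top 3 hits per indicator shown to the user. The snippet below is a stripped-down sketch of that idea using sentence-transformers directly; the model name, indicator texts and paragraphs are assumptions, and the application itself routes this through its Haystack-based `semanticSearchPipeline` instead.

```python
from sentence_transformers import SentenceTransformer, util

# Minimal sketch of "top-3 paragraphs per indicator" by cosine similarity.
# Model name and texts are illustrative; the real app uses its configured retriever.
model = SentenceTransformer("all-MiniLM-L6-v2")

indicator_queries = {
    "Sea level rise": "sea level rise adaptation measures",
    "REDD+": "reducing emissions from deforestation and forest degradation",
}
paragraphs = [
    "Coastal protection infrastructure will be upgraded against rising seas.",
    "The national forest programme aims to halt deforestation by 2030.",
    "Renewable energy capacity will be tripled within the decade.",
]

paragraph_embeddings = model.encode(paragraphs, convert_to_tensor=True)
for indicator, query in indicator_queries.items():
    query_embedding = model.encode(query, convert_to_tensor=True)
    hits = util.semantic_search(query_embedding, paragraph_embeddings, top_k=3)[0]
    print(f"\nRelevant paragraphs for topic: {indicator}")
    for rank, hit in enumerate(hits, start=1):
        print(f"Result {rank}. {paragraphs[hit['corpus_id']]} (score={hit['score']:.2f})")
```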
diff --git a/spaces/GT4SD/polymer_blocks/model_cards/description.md b/spaces/GT4SD/polymer_blocks/model_cards/description.md
deleted file mode 100644
index 47145d41cebeac58b11c472db0f0d791c89fbdf1..0000000000000000000000000000000000000000
--- a/spaces/GT4SD/polymer_blocks/model_cards/description.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-*PolymerBlocks* is a sequence-based molecular generator tuned to generate blocks of polymers (e.g., catalysts and monomers). The model relies on a Variational Autoencoder architecture as described in [Born et al. (2021; *iScience*)](https://www.sciencedirect.com/science/article/pii/S2589004221002376)
-
-For **examples** and **documentation** of the model parameters, please see below.
-Moreover, we provide a **model card** ([Mitchell et al. (2019)](https://dl.acm.org/doi/abs/10.1145/3287560.3287596?casa_token=XD4eHiE2cRUAAAAA:NL11gMa1hGPOUKTAbtXnbVQBDBbjxwcjGECF_i-WC_3g1aBgU1Hbz_f2b4kI_m1in-w__1ztGeHnwHs)) at the bottom of this page.
diff --git a/spaces/Gradio-Blocks/CBNetV2/images/README.md b/spaces/Gradio-Blocks/CBNetV2/images/README.md
deleted file mode 100644
index 7bf8a7c50604f01a06c4de6720a350e9666e487d..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/CBNetV2/images/README.md
+++ /dev/null
@@ -1,9 +0,0 @@
-These images are freely-usable ones from https://www.pexels.com/.
-
-- https://www.pexels.com/photo/assorted-color-kittens-45170/
-- https://www.pexels.com/photo/white-wooden-kitchen-cabinet-1599791/
-- https://www.pexels.com/photo/assorted-books-on-book-shelves-1370295/
-- https://www.pexels.com/photo/pile-of-assorted-varieties-of-vegetables-2255935/
-- https://www.pexels.com/photo/sliced-fruits-on-tray-1132047/
-- https://www.pexels.com/photo/group-of-people-carrying-surfboards-1549196/
-- https://www.pexels.com/photo/aerial-photo-of-vehicles-in-the-city-1031698/
diff --git a/spaces/Gradio-Blocks/ViTPose/images/README.md b/spaces/Gradio-Blocks/ViTPose/images/README.md
deleted file mode 100644
index 906dae5219bd6f75fc0ef74e52aa75c8dc1dbc81..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/ViTPose/images/README.md
+++ /dev/null
@@ -1,9 +0,0 @@
-These images are from the following public domain:
-
-- https://www.pexels.com/photo/women-in-active-wear-balancing-their-body-while-leaning-by-the-doorway-5770445/
-- https://www.pexels.com/photo/woman-balancing-her-body-on-a-handstand-using-one-hand-5770708/
-- https://www.pexels.com/photo/persons-in-black-shirt-and-pants-690598/
-- https://www.pexels.com/photo/photo-of-woman-doing-a-ballet-dance-1164975/
-- https://www.pexels.com/photo/beautiful-woman-in-a-red-dress-wearing-red-lipstick-7909580/
-- https://www.pexels.com/photo/girl-in-red-jacket-riding-bicycle-5792907/
-- https://www.pexels.com/photo/woman-wearing-a-white-gown-walking-on-grass-field-8574605/
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco.py
deleted file mode 100644
index 50883ffeb16369ea6210f2ece8fc2d7e084b0134..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco.py
+++ /dev/null
@@ -1,11 +0,0 @@
-_base_ = '../cascade_rcnn/cascade_mask_rcnn_x101_32x4d_fpn_1x_coco.py'
-model = dict(
- backbone=dict(
- norm_cfg=dict(type='SyncBN', requires_grad=True),
- norm_eval=False,
- plugins=[
- dict(
- cfg=dict(type='ContextBlock', ratio=1. / 16),
- stages=(False, True, True, True),
- position='after_conv3')
- ]))
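
The deleted config above only overrides the backbone of its `_base_` file. As a rough, hedged sketch (not part of this repository; the path is a placeholder and `_base_` resolution needs the full configs tree), this is how such an MMDetection-style config is typically loaded and inspected with mmcv:

```python
# Hedged sketch: load the config and look at the ContextBlock plugin it adds.
from mmcv import Config

cfg = Config.fromfile(
    'configs/gcnet/'
    'cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco.py'
)

plugin = cfg.model['backbone']['plugins'][0]
# ContextBlock with ratio 1/16, inserted after conv3 of stages 2-4 only
# (stages=(False, True, True, True) skips the first backbone stage).
print(plugin['cfg'], plugin['stages'], plugin['position'])
```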
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/bbox/coder/base_bbox_coder.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/bbox/coder/base_bbox_coder.py
deleted file mode 100644
index cf0b34c7cc2fe561718b0c884990beb40a993643..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/bbox/coder/base_bbox_coder.py
+++ /dev/null
@@ -1,17 +0,0 @@
-from abc import ABCMeta, abstractmethod
-
-
-class BaseBBoxCoder(metaclass=ABCMeta):
- """Base bounding box coder."""
-
- def __init__(self, **kwargs):
- pass
-
- @abstractmethod
- def encode(self, bboxes, gt_bboxes):
- """Encode deltas between bboxes and ground truth boxes."""
-
- @abstractmethod
- def decode(self, bboxes, bboxes_pred):
- """Decode the predicted bboxes according to prediction and base
- boxes."""
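
The deleted `base_bbox_coder.py` defines only the abstract interface, so a concrete coder just has to make `encode` and `decode` inverses of each other. A minimal, purely illustrative sketch (the subclass below is hypothetical and not part of the repository; the import path is assumed from the file location shown in the diff):

```python
# Illustrative only: a toy coder implementing the BaseBBoxCoder interface.
from mmdet.core.bbox.coder.base_bbox_coder import BaseBBoxCoder


class NaiveDeltaBBoxCoder(BaseBBoxCoder):
    """Toy coder: deltas are raw coordinate differences, no normalization."""

    def encode(self, bboxes, gt_bboxes):
        # Both tensors have shape (N, 4) in (x1, y1, x2, y2) format.
        return gt_bboxes - bboxes

    def decode(self, bboxes, bboxes_pred):
        # Invert encode(): add the predicted deltas back onto the base boxes.
        return bboxes + bboxes_pred


# coder = NaiveDeltaBBoxCoder()
# deltas = coder.encode(proposals, gts)
# restored = coder.decode(proposals, deltas)  # equals gts for this toy coder
```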
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/models/decode_heads/enc_head.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/models/decode_heads/enc_head.py
deleted file mode 100644
index 0c11994cf6272bd52ed3576486f4b8d7366af940..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/models/decode_heads/enc_head.py
+++ /dev/null
@@ -1,187 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from mmcv.cnn import ConvModule, build_norm_layer
-
-from mmseg.ops import Encoding, resize
-from ..builder import HEADS, build_loss
-from .decode_head import BaseDecodeHead
-
-
-class EncModule(nn.Module):
- """Encoding Module used in EncNet.
-
- Args:
- in_channels (int): Input channels.
- num_codes (int): Number of code words.
- conv_cfg (dict|None): Config of conv layers.
- norm_cfg (dict|None): Config of norm layers.
- act_cfg (dict): Config of activation layers.
- """
-
- def __init__(self, in_channels, num_codes, conv_cfg, norm_cfg, act_cfg):
- super(EncModule, self).__init__()
- self.encoding_project = ConvModule(
- in_channels,
- in_channels,
- 1,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg)
- # TODO: resolve this hack
- # change to 1d
- if norm_cfg is not None:
- encoding_norm_cfg = norm_cfg.copy()
- if encoding_norm_cfg['type'] in ['BN', 'IN']:
- encoding_norm_cfg['type'] += '1d'
- else:
- encoding_norm_cfg['type'] = encoding_norm_cfg['type'].replace(
- '2d', '1d')
- else:
- # fallback to BN1d
- encoding_norm_cfg = dict(type='BN1d')
- self.encoding = nn.Sequential(
- Encoding(channels=in_channels, num_codes=num_codes),
- build_norm_layer(encoding_norm_cfg, num_codes)[1],
- nn.ReLU(inplace=True))
- self.fc = nn.Sequential(
- nn.Linear(in_channels, in_channels), nn.Sigmoid())
-
- def forward(self, x):
- """Forward function."""
- encoding_projection = self.encoding_project(x)
- encoding_feat = self.encoding(encoding_projection).mean(dim=1)
- batch_size, channels, _, _ = x.size()
- gamma = self.fc(encoding_feat)
- y = gamma.view(batch_size, channels, 1, 1)
- output = F.relu_(x + x * y)
- return encoding_feat, output
-
-
-@HEADS.register_module()
-class EncHead(BaseDecodeHead):
- """Context Encoding for Semantic Segmentation.
-
-    This head is the implementation of `EncNet
-    <https://arxiv.org/abs/1803.08904>`_.
-
- Args:
- num_codes (int): Number of code words. Default: 32.
- use_se_loss (bool): Whether use Semantic Encoding Loss (SE-loss) to
- regularize the training. Default: True.
- add_lateral (bool): Whether use lateral connection to fuse features.
- Default: False.
- loss_se_decode (dict): Config of decode loss.
- Default: dict(type='CrossEntropyLoss', use_sigmoid=True).
- """
-
- def __init__(self,
- num_codes=32,
- use_se_loss=True,
- add_lateral=False,
- loss_se_decode=dict(
- type='CrossEntropyLoss',
- use_sigmoid=True,
- loss_weight=0.2),
- **kwargs):
- super(EncHead, self).__init__(
- input_transform='multiple_select', **kwargs)
- self.use_se_loss = use_se_loss
- self.add_lateral = add_lateral
- self.num_codes = num_codes
- self.bottleneck = ConvModule(
- self.in_channels[-1],
- self.channels,
- 3,
- padding=1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg)
- if add_lateral:
- self.lateral_convs = nn.ModuleList()
- for in_channels in self.in_channels[:-1]: # skip the last one
- self.lateral_convs.append(
- ConvModule(
- in_channels,
- self.channels,
- 1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg))
- self.fusion = ConvModule(
- len(self.in_channels) * self.channels,
- self.channels,
- 3,
- padding=1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg)
- self.enc_module = EncModule(
- self.channels,
- num_codes=num_codes,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg)
- if self.use_se_loss:
- self.loss_se_decode = build_loss(loss_se_decode)
- self.se_layer = nn.Linear(self.channels, self.num_classes)
-
- def forward(self, inputs):
- """Forward function."""
- inputs = self._transform_inputs(inputs)
- feat = self.bottleneck(inputs[-1])
- if self.add_lateral:
- laterals = [
- resize(
- lateral_conv(inputs[i]),
- size=feat.shape[2:],
- mode='bilinear',
- align_corners=self.align_corners)
- for i, lateral_conv in enumerate(self.lateral_convs)
- ]
- feat = self.fusion(torch.cat([feat, *laterals], 1))
- encode_feat, output = self.enc_module(feat)
- output = self.cls_seg(output)
- if self.use_se_loss:
- se_output = self.se_layer(encode_feat)
- return output, se_output
- else:
- return output
-
- def forward_test(self, inputs, img_metas, test_cfg):
- """Forward function for testing, ignore se_loss."""
- if self.use_se_loss:
- return self.forward(inputs)[0]
- else:
- return self.forward(inputs)
-
- @staticmethod
- def _convert_to_onehot_labels(seg_label, num_classes):
- """Convert segmentation label to onehot.
-
- Args:
- seg_label (Tensor): Segmentation label of shape (N, H, W).
- num_classes (int): Number of classes.
-
- Returns:
- Tensor: Onehot labels of shape (N, num_classes).
- """
-
- batch_size = seg_label.size(0)
- onehot_labels = seg_label.new_zeros((batch_size, num_classes))
- for i in range(batch_size):
- hist = seg_label[i].float().histc(
- bins=num_classes, min=0, max=num_classes - 1)
- onehot_labels[i] = hist > 0
- return onehot_labels
-
- def losses(self, seg_logit, seg_label):
- """Compute segmentation and semantic encoding loss."""
- seg_logit, se_seg_logit = seg_logit
- loss = dict()
- loss.update(super(EncHead, self).losses(seg_logit, seg_label))
- se_loss = self.loss_se_decode(
- se_seg_logit,
- self._convert_to_onehot_labels(seg_label, self.num_classes))
- loss['loss_se'] = se_loss
- return loss
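
The SE-loss branch above supervises an image-level class-presence vector rather than per-pixel labels. A small sketch (not taken from the repository) of what `_convert_to_onehot_labels` produces for a toy label map:

```python
# Illustrative only: class indices that appear anywhere in the image become 1.
import torch

seg_label = torch.tensor([[[0, 2],
                           [2, 2]]])   # shape (N=1, H=2, W=2)
num_classes = 4

hist = seg_label[0].float().histc(bins=num_classes, min=0, max=num_classes - 1)
onehot = (hist > 0).float()            # tensor([1., 0., 1., 0.])
# Classes 0 and 2 are present, classes 1 and 3 are not, so the image-level
# SE-loss target marks only the present classes.
print(onehot)
```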
diff --git a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/s_multimae/configs/experiment_config.py b/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/s_multimae/configs/experiment_config.py
deleted file mode 100644
index 62455b4e998b9c4dd6c2b46bcdcfdcd4308bbfb7..0000000000000000000000000000000000000000
--- a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/s_multimae/configs/experiment_config.py
+++ /dev/null
@@ -1,1554 +0,0 @@
-from functools import partial
-from typing import Dict, List, Optional
-
-from ..run_type import run_type
-from .base_config import base_cfg
-
-class cfg_debug_ubuntu(base_cfg):
- def __init__(self, epoch: Optional[int] = None):
- super().__init__(
- epoch,
- experiment_name = 'debug-training-on-ubuntu',
- datasets_set=1,
- run_type=run_type.rt
- )
-
- self.description = 'debug-training-on-ubuntu'
-
- '''MultiMAE'''
- self.dim_tokens = 1280
- self.encoder_depth = 32
- self.num_heads = 16
- self.pretrained_backbone = 'huge-mae'
- self.input_patch_size = 14
- self.output_patch_size = 14
-
- '''Learning rate'''
- self.lr = 1e-5
- self.end_lr = 1e-11
- self.lr_scale = 100
-
- # self.warmup_epoch_batch_size = 0
- # self.warmup_min_batch_size = 0
-
- self.batch_size = 1 # <---------------
- self.val_batch_size = 10 # <---------------
- self.nepochs = 1000
-
- self.decoder_main_tasks = [['rgb', 'depth']]
-
- self.data_augmentation_version = 5
- self.save_checkpoints_after_each_n_epochs = 1
- self.num_workers = 8
- self.train_function_version = 2
-
-class cfg_debug_colab(base_cfg):
- def __init__(self, epoch: Optional[int] = None):
- super().__init__(
- epoch,
- experiment_name = 'debug-training-on-colab',
- datasets_set=1,
- run_type=run_type.rt
- )
-
- self.description = 'debug-training-on-colab'
-
- '''Learning rate'''
- self.lr = 1e-5
- self.end_lr = 1e-11
- self.lr_scale = 100
-
- self.batch_size = 20 # <---------------
- self.val_batch_size = 100 # <---------------
- self.nepochs = 1000
-
-
- self.decoder_main_tasks = [['rgb', 'depth']]
-
- self.data_augmentation_version = 2
- self.save_checkpoints_after_each_n_epochs = 1
-
- self.train_function_version = 2
-
-class cfg_debug_kaggle(base_cfg):
- def __init__(self, epoch: Optional[int] = None):
- super().__init__(
- epoch,
- experiment_name = 'debug-training-on-kaggle',
- datasets_set=1,
- run_type=run_type.rt
- )
-
- self.description = 'debug-training-on-kaggle'
-
- '''Learning rate'''
- self.lr = 1e-5
- self.end_lr = 1e-11
- self.lr_scale = 100
-
- self.batch_size = 30 # <---------------
- self.val_batch_size = 200 # <---------------
- self.nepochs = 1000
-
-
- self.decoder_main_tasks = [['rgb', 'depth']]
-
- self.data_augmentation_version = 5
- self.save_checkpoints_after_each_n_epochs = 1
-
- self.train_function_version = 2
-
-class cfg_set_1(base_cfg):
- def __init__(self, epoch: Optional[int] = None):
- super().__init__(
- epoch,
- experiment_name = 'cfg-set-1',
- datasets_set=1,
- run_type=run_type.rt
- )
-
-class cfg_set_2(base_cfg):
- def __init__(self, epoch: Optional[int] = None):
- super().__init__(
- epoch,
- experiment_name = 'cfg-set-2',
- datasets_set=2,
- run_type=run_type.rt
- )
-
-class cfg_set_3(base_cfg):
- def __init__(self, epoch: Optional[int] = None):
- super().__init__(
- epoch,
- experiment_name = 'cfg-set-3',
- datasets_set=3,
- run_type=run_type.rt
- )
-
-class cfg_set_4(base_cfg):
- def __init__(self, epoch: Optional[int] = None):
- super().__init__(
- epoch,
- experiment_name = 'cfg-set-4',
- datasets_set=4,
- run_type=run_type.rt
- )
-
-
-# Old class name: cfgv4_1_16
-class cfgv4_0_18(base_cfg):
- def __init__(self, epoch: Optional[int] = None):
- super().__init__(
- epoch,
- experiment_name = 'exp_v4.0.18',
- datasets_set=1,
- run_type=run_type.rt
- )
-
- self.description = 'DAv4-Base'
- self.accum_iter = 2 # <---------------
-
- '''Learning rate'''
- self.lr = 1e-5
- self.end_lr = 1e-11
- self.lr_scale = 100
-
- self.batch_size = 32 # <---------------
- self.val_batch_size = 300 # <---------------
- self.nepochs = 200
-
-
- self.decoder_main_tasks = [['rgb', 'depth']]
-
- self.data_augmentation_version = 4
- self.save_checkpoints_after_each_n_epochs = 5
-
-# Old class name: cfgv4_1_17
-class cfgv4_0_19(base_cfg):
- def __init__(self, epoch: Optional[int] = None):
- super().__init__(
- epoch,
- experiment_name = 'exp_v4.0.19',
- datasets_set=1,
- run_type=run_type.rt
- )
-
- self.description = 'DAv2-Base'
- self.accum_iter = 2 # <---------------
-
- '''Learning rate'''
- self.lr = 1e-5
- self.end_lr = 1e-11
- self.lr_scale = 100
-
- self.batch_size = 32 # 25 # 32 # <---------------
- self.val_batch_size = 300 # 150 # 300 # <---------------
- self.nepochs = 300
-
- self.decoder_main_tasks = [['rgb', 'depth']]
-
- self.data_augmentation_version = 2
- self.save_checkpoints_after_each_n_epochs = 5
-
-# Old class name: cfgv4_1_18
-class cfgv4_0_21(base_cfg):
- def __init__(self, epoch: Optional[int] = None):
- super().__init__(
- epoch,
- experiment_name = 'exp_v4.0.21',
- datasets_set=1,
- run_type=run_type.rt
- )
-
- self.description = 'DAv2-RGB->SOD+Depth'
- self.accum_iter = 2 # <---------------
-
- '''Learning rate'''
- self.lr = 1e-5
- self.end_lr = 1e-11
- self.lr_scale = 100
-
- self.batch_size = 32 # <---------------
- self.val_batch_size = 300 # <---------------
- self.nepochs = 200
-
-
-
- self.inputs = ['rgb']
- self.outputs = ['semseg', 'depth']
- self.decoder_main_tasks = [['rgb'], ['rgb']]
-
- self.data_augmentation_version = 2
- self.save_checkpoints_after_each_n_epochs = 5
-
-# Old class name: cfgv4_1_19
-class cfgv4_0_22(base_cfg):
- def __init__(self, epoch: Optional[int] = None):
- super().__init__(
- epoch,
- experiment_name = 'exp_v4.0.22',
- datasets_set=1,
- run_type=run_type.rt
- )
-
- self.description = 'DAv2-RGB->SOD'
- self.accum_iter = 2 # <---------------
-
- '''Learning rate'''
- self.lr = 1e-5
- self.end_lr = 1e-11
- self.lr_scale = 100
-
- self.batch_size = 32 # <---------------
- self.val_batch_size = 300 # <---------------
- self.nepochs = 200
-
-
-
- self.inputs = ['rgb']
- self.outputs = ['semseg']
- self.decoder_main_tasks: List[List[str]] = [['rgb']]
-
- self.data_augmentation_version = 2
- self.save_checkpoints_after_each_n_epochs = 5
-
-# Old class name: cfgv4_1_20
-class cfgv4_0_23(base_cfg):
- def __init__(self, epoch: Optional[int] = None):
- super().__init__(
- epoch,
- experiment_name = 'exp_v4.0.23',
- datasets_set=1,
- run_type=run_type.rt
- )
-
- self.description = 'DAv2-Depth->SOD'
- self.accum_iter = 2 # <---------------
-
- '''Learning rate'''
- self.lr = 1e-5
- self.end_lr = 1e-11
- self.lr_scale = 100
-
- self.batch_size = 32 # <---------------
- self.val_batch_size = 300 # <---------------
- self.nepochs = 200
-
-
-
- self.inputs = ['depth']
- self.outputs = ['semseg']
- self.decoder_main_tasks: List[List[str]] = [['depth']]
-
- self.data_augmentation_version = 2
- self.save_checkpoints_after_each_n_epochs = 5
-
-# Old class name: cfgv4_1_21
-class cfgv4_0_25(base_cfg):
- def __init__(self, epoch: Optional[int] = None):
- super().__init__(
- epoch,
- experiment_name = 'exp_v4.0.25',
- datasets_set=2,
- run_type=run_type.rt
- )
-
- self.description = 'DAv2-Base-Set2'
- self.accum_iter = 2 # <---------------
-
- '''Learning rate'''
- self.lr = 1e-5
- self.end_lr = 1e-11
- self.lr_scale = 100
-
- self.batch_size = 30 # <---------------
- self.val_batch_size = 300 # <---------------
- self.nepochs = 200
-
-
- self.decoder_main_tasks = [['rgb', 'depth']]
-
- self.data_augmentation_version = 2
- self.save_checkpoints_after_each_n_epochs = 5
-
-# Old class name: cfgv4_1_22
-class cfgv4_0_26(base_cfg):
- def __init__(self, epoch: Optional[int] = None):
- super().__init__(
- epoch,
- experiment_name = 'exp_v4.0.26',
- datasets_set=1,
- run_type=run_type.rt
- )
-
- self.description = 'DAv2-Large'
- self.accum_iter = 2 # <---------------
-
- self.image_size = 448
- self.test_image_size = 448
- self.embed_dim = 6144 * 4
- self.input_patch_size = 32
- self.output_patch_size: int = 64
-
- '''Learning rate'''
- self.lr = 1e-5
- self.end_lr = 1e-11
- self.lr_scale = 100
-
- self.batch_size = 13 # 13 # 14 # 15 # <---------------
- self.val_batch_size = 70 # 70 # 80 # 100 # <---------------
- self.nepochs = 400
-
-
- self.decoder_main_tasks = [['rgb', 'depth']]
-
- self.data_augmentation_version = 2
- self.save_checkpoints_after_each_n_epochs = 5
-
-# Old class name: cfgv4_1_24
-class cfgv4_0_28(base_cfg):
- def __init__(self, epoch: Optional[int] = None):
- super().__init__(
- epoch,
- experiment_name = 'exp_v4.0.28',
- datasets_set=1,
- run_type=run_type.rt
- )
-
- self.description = 'DAv2-Base-Decoder[RGB]'
- self.accum_iter = 2 # <---------------
-
- '''Learning rate'''
- self.lr = 1e-5
- self.end_lr = 1e-11
- self.lr_scale = 100
-
- self.batch_size = 28 # 25 # 32 # <---------------
- self.val_batch_size = 200 # 150 # 300 # <---------------
- self.nepochs = 200
-
-
- self.decoder_main_tasks = [['rgb']]
-
- self.data_augmentation_version = 2
- self.save_checkpoints_after_each_n_epochs = 1
-
- self.train_function_version = 2
-
-# Old class name: cfgv4_1_25
-class cfgv4_0_29(base_cfg):
- def __init__(self, epoch: Optional[int] = None):
- super().__init__(
- epoch,
- experiment_name = 'exp_v4.0.29',
- datasets_set=1,
- run_type=run_type.rt
- )
-
- self.description = 'DAv2-Base-Decoder[Depth]'
- self.accum_iter = 2 # <---------------
-
- '''Learning rate'''
- self.lr = 1e-5
- self.end_lr = 1e-11
- self.lr_scale = 100
-
- self.batch_size = 28 # <---------------
- self.val_batch_size = 200 # <---------------
- self.nepochs = 200
-
-
- self.decoder_main_tasks = [['depth']]
-
- self.data_augmentation_version = 2
- self.save_checkpoints_after_each_n_epochs = 1
-
- self.train_function_version = 2
-
-# Old class name: cfgv4_1_26
-class cfgv4_0_30(base_cfg):
- def __init__(self, epoch: Optional[int] = None):
- super().__init__(
- epoch,
- experiment_name = 'exp_v4.0.30',
- datasets_set=1,
- run_type=run_type.rt
- )
-
- self.description = 'DAv2-Base-GradAccum[4]'
- self.accum_iter = 4 # <---------------
-
- '''Learning rate'''
- self.lr = 1e-5
- self.end_lr = 1e-11
- self.lr_scale = 100
-
- self.batch_size = 32 # 25 # 32 # <---------------
- self.val_batch_size = 300 # 150 # 300 # <---------------
- self.nepochs = 1000
-
-
- self.decoder_main_tasks = [['rgb', 'depth']]
-
- self.data_augmentation_version = 2
- self.save_checkpoints_after_each_n_epochs = 5
-
-# Old class name: cfgv4_1_27
-class cfgv4_0_31(base_cfg):
- def __init__(self, epoch: Optional[int] = None):
- super().__init__(
- epoch,
- experiment_name = 'exp_v4.0.31',
- datasets_set=1,
- run_type=run_type.rt
- )
-
- self.description = 'DAv2-Base-NoPretrainedBB'
-
- '''Learning rate'''
- self.lr = 1e-5
- self.end_lr = 1e-11
- self.lr_scale = 100
-
- '''Pretrained Backbone'''
- self.pretrained_backbone = None
-
- self.batch_size = 28 # 25 # 32 # <---------------
- self.val_batch_size = 200 # 150 # 300 # <---------------
- self.nepochs = 1000
-
-
- self.decoder_main_tasks = [['rgb', 'depth']]
-
- self.data_augmentation_version = 2
- self.save_checkpoints_after_each_n_epochs = 2
-
- self.train_function_version = 2
-
-# Old class name: cfgv4_1_28
-class cfgv4_0_32(base_cfg):
- def __init__(self, epoch: Optional[int] = None):
- super().__init__(
- epoch,
- experiment_name = 'exp_v4.0.32',
- datasets_set=1,
- run_type=run_type.rt
- )
-
- self.description = 'DAv2-Base-LargerBatchSize'
-
- '''Learning rate'''
- self.lr = 1e-5
- self.end_lr = 1e-11
- self.lr_scale = 100
-
- self.batch_size = 90 # 25 # 32 # <---------------
- self.val_batch_size = 400 # 150 # 300 # <---------------
- self.nepochs = 1000
-
-
- self.decoder_main_tasks = [['rgb', 'depth']]
-
- self.data_augmentation_version = 2
- self.save_checkpoints_after_each_n_epochs = 1
-
-# Old class name: cfgv4_1_29
-class cfgv4_0_33(base_cfg):
- def __init__(self, epoch: Optional[int] = None):
- super().__init__(
- epoch,
- experiment_name = 'exp_v4.0.33',
- datasets_set=3,
- run_type=run_type.rt
- )
-
- self.description = 'DAv2-Base-Kaggle2GPUs-Set3'
-
- '''Learning rate'''
- self.lr = 1e-5
- self.end_lr = 1e-11
- self.lr_scale = 100
-
- self.batch_size = 28 # <---------------
- self.val_batch_size = 200 # <---------------
- self.nepochs = 1000
-
-
- self.decoder_main_tasks = [['rgb', 'depth']]
-
- self.data_augmentation_version = 2
- self.save_checkpoints_after_each_n_epochs = 1
-
- self.train_function_version = 2
-
-class cfgv4_0_34(base_cfg):
- def __init__(self, epoch: Optional[int] = None):
- super().__init__(
- epoch,
- experiment_name = 'exp_v4.0.34',
- datasets_set=1,
- run_type=run_type.rt
- )
-
- self.description = 'DAv2-Base-ViTBB'
-
- '''Learning rate'''
- self.lr = 1e-5
- self.end_lr = 1e-11
- self.lr_scale = 100
-
- '''Pretrained Backbone'''
- self.pretrained_backbone = 'vit'
-
- self.batch_size = 28 # 25 # 32 # <---------------
- self.val_batch_size = 200 # 150 # 300 # <---------------
- self.nepochs = 1000
-
-
- self.decoder_main_tasks = [['rgb', 'depth']]
-
- self.data_augmentation_version = 2
- self.save_checkpoints_after_each_n_epochs = 2
-
- self.train_function_version = 2
-
-class cfgv4_0_35(base_cfg):
- def __init__(self, epoch: Optional[int] = None):
- super().__init__(
- epoch,
- experiment_name = 'exp_v4.0.35',
- datasets_set=1,
- run_type=run_type.rt
- )
-
- self.description = 'DAv2-Base-MAEBB'
-
- '''Learning rate'''
- self.lr = 1e-5
- self.end_lr = 1e-11
- self.lr_scale = 100
-
- '''Pretrained Backbone'''
- self.pretrained_backbone = 'mae'
-
- self.batch_size = 28 # 25 # 32 # <---------------
- self.val_batch_size = 200 # 150 # 300 # <---------------
- self.nepochs = 1000
-
-
- self.decoder_main_tasks = [['rgb', 'depth']]
-
- self.data_augmentation_version = 2
- self.save_checkpoints_after_each_n_epochs = 2
-
- self.train_function_version = 2
-
-class cfgv4_0_36(base_cfg):
- def __init__(self, epoch: Optional[int] = None):
- super().__init__(
- epoch,
- experiment_name = 'exp_v4.0.36',
- datasets_set=1,
- run_type=run_type.rt
- )
-
- self.description = 'DAv2-Base'
-
- '''Learning rate'''
- self.lr = 1e-5
- self.end_lr = 1e-11
- self.lr_scale = 100
-
- self.batch_size = 28 # 25 # 32 # <---------------
- self.val_batch_size = 300 # 150 # 300 # <---------------
- self.nepochs = 300
-
-
- self.decoder_main_tasks = [['rgb', 'depth']]
-
- self.data_augmentation_version = 2
- self.save_checkpoints_after_each_n_epochs = 1
- self.train_function_version = 2
-
-class cfgv4_0_37(base_cfg):
- def __init__(self, epoch: Optional[int] = None):
- super().__init__(
- epoch,
- experiment_name = 'exp_v4.0.37',
- datasets_set=4,
- run_type=run_type.rt
- )
-
- self.description = 'DAv2-Base-Set4'
-
- '''Learning rate'''
- self.lr = 1e-5
- self.end_lr = 1e-11
- self.lr_scale = 100
-
- self.batch_size = 30 # 25 # 32 # <---------------
- self.val_batch_size = 300 # 150 # 300 # <---------------
- self.nepochs = 300
-
-
- self.decoder_main_tasks = [['rgb', 'depth']]
-
- self.data_augmentation_version = 2
- self.save_checkpoints_after_each_n_epochs = 1
- self.train_function_version = 2
-
-class cfgv4_0_38(base_cfg):
- def __init__(self, epoch: Optional[int] = None):
- super().__init__(
- epoch,
- experiment_name = 'exp_v4.0.38',
- datasets_set=1,
- run_type=run_type.rt
- )
-
- self.description = 'DAv4-Base'
-
- '''Learning rate'''
- self.lr = 1e-5
- self.end_lr = 1e-11
- self.lr_scale = 100
-
- self.batch_size = 30 # 25 # 32 # <---------------
- self.val_batch_size = 300 # 150 # 300 # <---------------
- self.nepochs = 300
-
-
- self.decoder_main_tasks = [['rgb', 'depth']]
-
- self.data_augmentation_version = 4
- self.save_checkpoints_after_each_n_epochs = 1
- self.train_function_version = 2
-
-class cfgv4_0_39(base_cfg):
- def __init__(self, epoch: Optional[int] = None):
- super().__init__(
- epoch,
- experiment_name = 'exp_v4.0.39',
- datasets_set=2,
- run_type=run_type.rt
- )
-
- self.description = 'DAv2-Base-Set2'
-
- '''Learning rate'''
- self.lr = 1e-5
- self.end_lr = 1e-11
- self.lr_scale = 100
-
- self.batch_size = 30 # 25 # 32 # <---------------
- self.val_batch_size = 300 # 150 # 300 # <---------------
- self.nepochs = 200
-
-
- self.decoder_main_tasks = [['rgb', 'depth']]
-
- self.data_augmentation_version = 2
- self.save_checkpoints_after_each_n_epochs = 1
- self.train_function_version = 2
-
-# Deprecated, suspended
-class cfgv4_0_40(base_cfg):
- def __init__(self, epoch: Optional[int] = None):
- super().__init__(
- epoch,
- experiment_name = 'exp_v4.0.40',
- datasets_set=1,
- run_type=run_type.rt
- )
-
- self.description = 'DAv2-Base-LabelSmoothing'
-
- '''Learning rate'''
- self.lr = 1e-5
- self.end_lr = 1e-11
- self.lr_scale = 100
-
- self.batch_size = 30 # 25 # 32 # <---------------
- self.val_batch_size = 300 # 150 # 300 # <---------------
- self.nepochs = 100
- self.label_smoothing = 0.1
-
-
- self.decoder_main_tasks = [['rgb', 'depth']]
-
- self.data_augmentation_version = 2
- self.save_checkpoints_after_each_n_epochs = 1
- self.train_function_version = 2
-
-# Deprecated, suspended
-class cfgv4_0_41(base_cfg):
- def __init__(self, epoch: Optional[int] = None):
- super().__init__(
- epoch,
- experiment_name = 'exp_v4.0.41',
- datasets_set=1,
- run_type=run_type.rt
- )
-
- self.description = 'DAv2-Base-LabelSmoothing'
-
- '''Learning rate'''
- self.lr = 1e-5
- self.end_lr = 1e-11
- self.lr_scale = 100
-
- self.batch_size = 30 # 25 # 32 # <---------------
- self.val_batch_size = 300 # 150 # 300 # <---------------
- self.nepochs = 100
- self.label_smoothing = 0.1
-
-
- self.decoder_main_tasks = [['rgb', 'depth']]
-
- self.data_augmentation_version = 2
- self.save_checkpoints_after_each_n_epochs = 1
- self.train_function_version = 2
-
-class cfgv4_0_42(base_cfg):
- def __init__(self, epoch: Optional[int] = None):
- super().__init__(
- epoch,
- experiment_name = 'exp_v4.0.42',
- datasets_set=1,
- run_type=run_type.rt
- )
-
- self.description = 'DAv2-Base-LabelSmoothing-WarmUpBatchSize'
-
- '''Learning rate'''
- self.lr = 1e-5
- self.end_lr = 1e-11
- self.lr_scale = 100
-
- self.warmup_min_batch_size = 5
- self.warmup_epoch_batch_size = 6
-
- self.batch_size = 30 # 25 # 32 # <---------------
- self.val_batch_size = 300 # 150 # 300 # <---------------
- self.nepochs = 100
- self.label_smoothing = 0.1
-
-
- self.decoder_main_tasks = [['rgb', 'depth']]
-
- self.data_augmentation_version = 2
- self.save_checkpoints_after_each_n_epochs = 1
- self.train_function_version = 2
-
-class cfgv4_0_43(base_cfg):
- def __init__(self, epoch: Optional[int] = None):
- super().__init__(
- epoch,
- experiment_name = 'exp_v4.0.43',
- datasets_set=1,
- run_type=run_type.rt
- )
-
- self.description = 'DAv2-Base-v2'
-
- '''Learning rate'''
- self.lr = 1e-5
- self.end_lr = 1e-11
- self.lr_scale = 100
-
- self.batch_size = 28 # 25 # 32 # <---------------
- self.val_batch_size = 300 # 150 # 300 # <---------------
- self.nepochs = 100
-
-
- self.decoder_main_tasks = [['rgb', 'depth']]
-
- self.data_augmentation_version = 2
- self.save_checkpoints_after_each_n_epochs = 1
- self.train_function_version = 2
-
-class cfgv4_0_44(base_cfg):
- def __init__(self, epoch: Optional[int] = None):
- super().__init__(
- epoch,
- experiment_name = 'exp_v4.0.44',
- datasets_set=1,
- run_type=run_type.rt
- )
-
- self.description = 'DAv2-Base-v2-Ubuntu'
- self.accum_iter = 3 # <---------------
-
- '''Learning rate'''
- self.lr = 1e-5
- self.end_lr = 1e-11
- self.lr_scale = 100
-
- self.num_workers = 8
- self.batch_size = 10 # 25 # 32 # <---------------
- self.val_batch_size = 100 # 150 # 300 # <---------------
- self.nepochs = 100
-
-
- self.decoder_main_tasks = [['rgb', 'depth']]
-
- self.data_augmentation_version = 2
- self.save_checkpoints_after_each_n_epochs = 1
- self.train_function_version = 2
-
-class cfgv4_0_45(base_cfg):
- def __init__(self, epoch: Optional[int] = None):
- super().__init__(
- epoch,
- experiment_name = 'exp_v4.0.45',
- datasets_set=1,
- run_type=run_type.rt
- )
-
- self.description = 'DAv2-Base-NoPretrainedBB-v2'
-
- '''Learning rate'''
- self.lr = 1e-4
- self.end_lr = 1e-11
- self.lr_scale = 100
-
- '''Pretrained Backbone'''
- self.pretrained_backbone = None
-
- self.batch_size = 30 # <---------------
- self.val_batch_size = 300 # <---------------
- self.nepochs = 100
-
-
- self.decoder_main_tasks = [['rgb', 'depth']]
-
- self.data_augmentation_version = 2
- self.save_checkpoints_after_each_n_epochs = 1
- self.train_function_version = 2
-
-class cfgv4_0_46(base_cfg):
- def __init__(self, epoch: Optional[int] = None):
- super().__init__(
- epoch,
- experiment_name = 'exp_v4.0.46',
- datasets_set=1,
- run_type=run_type.rt
- )
-
- self.description = 'DAv2-Base-MAEBB-v2'
-
- '''Learning rate'''
- self.lr = 1e-5
- self.end_lr = 1e-11
- self.lr_scale = 100
-
- '''Pretrained Backbone'''
- self.pretrained_backbone = 'mae'
-
- self.batch_size = 30 # <---------------
- self.val_batch_size = 300 # <---------------
- self.nepochs = 100
-
-
- self.decoder_main_tasks = [['rgb', 'depth']]
-
- self.data_augmentation_version = 2
- self.save_checkpoints_after_each_n_epochs = 1
- self.train_function_version = 2
-
-class cfgv4_0_47(base_cfg):
- def __init__(self, epoch: Optional[int] = None):
- super().__init__(
- epoch,
- experiment_name = 'exp_v4.0.47',
- datasets_set=1,
- run_type=run_type.rt
- )
-
- self.description = 'DAv2-Base-ViTBB-v2'
-
- '''Learning rate'''
- self.lr = 1e-5
- self.end_lr = 1e-11
- self.lr_scale = 100
-
- '''Pretrained Backbone'''
- self.pretrained_backbone = 'vit'
-
- self.batch_size = 30 # <---------------
- self.val_batch_size = 300 # <---------------
- self.nepochs = 100
-
-
- self.decoder_main_tasks = [['rgb', 'depth']]
-
- self.data_augmentation_version = 2
- self.save_checkpoints_after_each_n_epochs = 1
- self.train_function_version = 2
-
-class cfgv4_0_48(base_cfg):
- def __init__(self, epoch: Optional[int] = None):
- super().__init__(
- epoch,
- experiment_name = 'exp_v4.0.48',
- datasets_set=2,
- run_type=run_type.rt
- )
-
- self.description = 'DAv2-Base-Set2'
-
- '''Learning rate'''
- self.lr = 1e-5
- self.end_lr = 1e-11
- self.lr_scale = 100
-
- self.batch_size = 28 # 25 # 32 # <---------------
- self.val_batch_size = 300 # 150 # 300 # <---------------
- self.nepochs = 100
-
-
- self.decoder_main_tasks = [['rgb', 'depth']]
-
- self.data_augmentation_version = 2
- self.save_checkpoints_after_each_n_epochs = 1
- self.train_function_version = 2
-
-class cfgv4_0_49(base_cfg):
- def __init__(self, epoch: Optional[int] = None):
- super().__init__(
- epoch,
- experiment_name = 'exp_v4.0.49',
- datasets_set=4,
- run_type=run_type.rt
- )
-
- self.description = 'DAv2-Base-Set4'
-
- '''Learning rate'''
- self.lr = 1e-5
- self.end_lr = 1e-11
- self.lr_scale = 100
-
- self.batch_size = 28 # 25 # 32 # <---------------
- self.val_batch_size = 300 # 150 # 300 # <---------------
- self.nepochs = 100
-
-
- self.decoder_main_tasks = [['rgb', 'depth']]
-
- self.data_augmentation_version = 2
- self.save_checkpoints_after_each_n_epochs = 1
- self.train_function_version = 2
-
-class cfgv4_0_50(base_cfg):
- def __init__(self, epoch: Optional[int] = None):
- super().__init__(
- epoch,
- experiment_name = 'exp_v4.0.50',
- datasets_set=1,
- run_type=run_type.rt
- )
-
- self.description = 'DAv2-Base-EarlyStopping'
-
- '''Learning rate'''
- self.lr = 1e-5
- self.end_lr = 1e-11
- self.lr_scale = 100
-
- self.batch_size = 30 # 25 # 32 # <---------------
- self.val_batch_size = 300 # 150 # 300 # <---------------
- self.nepochs = 25
-
-
- self.decoder_main_tasks = [['rgb', 'depth']]
-
- self.data_augmentation_version = 2
- self.save_checkpoints_after_each_n_epochs = 1
- self.train_function_version = 2
-
-class cfgv4_0_51(base_cfg):
- def __init__(self, epoch: Optional[int] = None):
- super().__init__(
- epoch,
- experiment_name = 'exp_v4.0.51',
- datasets_set=1,
- run_type=run_type.rt
- )
-
- self.description = 'DAv5-Base'
-
- '''Learning rate'''
- self.lr = 1e-5
- self.end_lr = 1e-11
- self.lr_scale = 100
-
- self.batch_size = 28 # 25 # 32 # <---------------
- self.val_batch_size = 300 # 150 # 300 # <---------------
- self.nepochs = 100
-
- self.data_augmentation_version = 5
- self.save_checkpoints_after_each_n_epochs = 1
- self.train_function_version = 2
-
-class cfgv4_0_52(base_cfg):
- def __init__(self, epoch: Optional[int] = None):
- super().__init__(
- epoch,
- experiment_name = 'exp_v4.0.52',
- datasets_set=5,
- run_type=run_type.rt
- )
-
- self.description = 'DAv5-Base-Set5'
-
- '''Learning rate'''
- self.lr = 1e-5
- self.end_lr = 1e-11
- self.lr_scale = 100
-
- self.batch_size = 28 # 25 # 32 # <---------------
- self.val_batch_size = 300 # 150 # 300 # <---------------
- self.nepochs = 100
-
- self.data_augmentation_version = 5
- self.save_checkpoints_after_each_n_epochs = 1
- self.train_function_version = 2
-
-class cfgv4_0_53(base_cfg):
- def __init__(self, epoch: Optional[int] = None):
- super().__init__(
- epoch,
- experiment_name = 'exp_v4.0.53',
- datasets_set=1,
- run_type=run_type.rt
- )
-
- self.description = 'DAv5-Base-DecoderDepth6'
-
- '''Learning rate'''
- self.lr = 1e-5
- self.end_lr = 1e-11
- self.lr_scale = 100
-
- self.batch_size = 28 # 25 # 32 # <---------------
- self.val_batch_size = 300 # 150 # 300 # <---------------
- self.nepochs = 50
- self.decoder_depth = 6
-
- self.data_augmentation_version = 5
- self.save_checkpoints_after_each_n_epochs = 1
- self.train_function_version = 2
-
-class cfgv4_0_54(base_cfg):
- def __init__(self, epoch: Optional[int] = None):
- super().__init__(
- epoch,
- experiment_name = 'exp_v4.0.54',
- datasets_set=1,
- run_type=run_type.rt
- )
-
- self.description = 'DAv5-Base'
-
- '''Learning rate'''
- self.lr = 1e-5
- self.end_lr = 1e-11
- self.lr_scale = 100
-
- self.batch_size = 28 # 25 # 32 # <---------------
- self.val_batch_size = 300 # 150 # 300 # <---------------
- self.nepochs = 100
- self.decoder_main_tasks = [['rgb', 'depth']]
-
- self.data_augmentation_version = 5
- self.save_checkpoints_after_each_n_epochs = 1
- self.train_function_version = 2
-
-class cfgv4_0_55(base_cfg):
- def __init__(self, epoch: Optional[int] = None):
- super().__init__(
- epoch,
- experiment_name = 'exp_v4.0.55',
- datasets_set=1,
- run_type=run_type.rt
- )
-
- self.description = 'DAv5-Base-MAE'
-
- '''Learning rate'''
- self.lr = 1e-5
- self.end_lr = 1e-11
- self.lr_scale = 100
-
- self.batch_size = 30 # 25 # 32 # <---------------
- self.val_batch_size = 300 # 150 # 300 # <---------------
- self.nepochs = 50
- self.pretrained_backbone = 'mae'
- self.decoder_main_tasks = [['rgb', 'depth']]
-
- self.data_augmentation_version = 5
- self.save_checkpoints_after_each_n_epochs = 1
- self.train_function_version = 2
-
-class cfgv4_0_56(base_cfg):
- def __init__(self, epoch: Optional[int] = None):
- super().__init__(
- epoch,
- experiment_name = 'exp_v4.0.56',
- datasets_set=6,
- run_type=run_type.rt
- )
-
- self.description = 'DAv5-Base-Set6'
-
- '''Learning rate'''
- self.lr = 1e-5
- self.end_lr = 1e-11
- self.lr_scale = 100
-
- self.batch_size = 28 # 25 # 32 # <---------------
- self.val_batch_size = 300 # 150 # 300 # <---------------
- self.nepochs = 200
-
- self.data_augmentation_version = 5
- self.save_checkpoints_after_each_n_epochs = 1
- self.train_function_version = 2
-
-class cfgv4_0_57(base_cfg):
- def __init__(self, epoch: Optional[int] = None):
- super().__init__(
- epoch,
- experiment_name = 'exp_v4.0.57',
- datasets_set=1,
- run_type=run_type.rt
- )
-
- self.description = 'DAv5-Large-Set1'
-
- '''MultiMAE'''
- self.dim_tokens = 1024
- self.encoder_depth = 24
- self.num_heads = 16
- self.pretrained_backbone = 'large-mae'
-
- '''Learning rate'''
- self.lr = 1e-5
- self.end_lr = 1e-11
- self.lr_scale = 100
-
- self.batch_size = 8 # <---------------
- self.val_batch_size = 80 # <---------------
- self.nepochs = 50
-
- self.data_augmentation_version = 5
- self.save_checkpoints_after_each_n_epochs = 1
- self.train_function_version = 2
-
-class cfgv4_0_58(base_cfg):
- def __init__(self, epoch: Optional[int] = None):
- super().__init__(
- epoch,
- experiment_name = 'exp_v4.0.58',
- datasets_set=1,
- run_type=run_type.rt
- )
-
- self.description = 'DAv5-Set1-DecDepth8'
-
- '''MultiMAE'''
- self.decoder_depth = 8
-
- '''Learning rate'''
- self.lr = 1e-5
- self.end_lr = 1e-11
- self.lr_scale = 100
-
- self.batch_size = 22 # <---------------
- self.val_batch_size = 220 # <---------------
- self.nepochs = 50
-
- self.data_augmentation_version = 5
- self.save_checkpoints_after_each_n_epochs = 1
- self.train_function_version = 2
-
-class cfgv4_0_59(base_cfg):
- def __init__(self, epoch: Optional[int] = None):
- super().__init__(
- epoch,
- experiment_name = 'exp_v4.0.59',
- datasets_set=1,
- run_type=run_type.rt
- )
-
- self.description = 'DAv2-Large-Set1'
-
- '''MultiMAE'''
- self.dim_tokens = 1024
- self.encoder_depth = 24
- self.num_heads = 16
- self.pretrained_backbone = 'large-mae'
-
- '''Learning rate'''
- self.lr = 1e-5
- self.end_lr = 1e-11
- self.lr_scale = 100
-
- self.batch_size = 8 # <---------------
- self.val_batch_size = 80 # <---------------
- self.nepochs = 50
-
- self.data_augmentation_version = 2
- self.save_checkpoints_after_each_n_epochs = 1
- self.train_function_version = 2
-
-class cfgv4_0_61(base_cfg):
- def __init__(self, epoch: Optional[int] = None):
- super().__init__(
- epoch,
- experiment_name = 'exp_v4.0.61',
- datasets_set=1,
- run_type=run_type.rt
- )
-
- self.description = 'DAv2-Huge-Set1'
-
- '''MultiMAE'''
- self.dim_tokens = 1280
- self.encoder_depth = 32
- self.num_heads = 16
- self.pretrained_backbone = 'huge-mae'
- self.input_patch_size = 14
- self.output_patch_size = 16
- self.embed_dim = 6144
- self.freeze_encoder = True
-
- '''Learning rate'''
- self.lr = 1e-5
- self.end_lr = 1e-11
- self.lr_scale = 100
-
- self.batch_size = 7 # <---------------
- self.val_batch_size = 15 # <---------------
- self.nepochs = 50
-
- self.data_augmentation_version = 2
- self.save_checkpoints_after_each_n_epochs = 1
- self.train_function_version = 2
-
-class cfgv4_0_62(base_cfg):
- def __init__(self, epoch: Optional[int] = None):
- super().__init__(
- epoch,
- experiment_name = 'exp_v4.0.62',
- datasets_set=1,
- run_type=run_type.rt
- )
-
- self.description = 'DAv2-Huge-Set1-GradClip'
-
- '''MultiMAE'''
- self.dim_tokens = 1280
- self.encoder_depth = 32
- self.num_heads = 16
- self.pretrained_backbone = 'huge-mae'
- self.input_patch_size = 14
- self.output_patch_size = 16
- self.embed_dim = 6144
- self.freeze_encoder = True
-
- self.clip_grad = 1.0
-
- '''Learning rate'''
- self.lr = 1e-5
- self.end_lr = 1e-11
- self.lr_scale = 100
-
- self.batch_size = 7 # <---------------
- self.val_batch_size = 15 # <---------------
- self.nepochs = 50
-
- self.data_augmentation_version = 2
- self.save_checkpoints_after_each_n_epochs = 1
- self.train_function_version = 2
-
-class cfgv4_0_64(base_cfg):
- def __init__(self, epoch: Optional[int] = None):
- super().__init__(
- epoch,
- experiment_name = 'exp_v4.0.64',
- datasets_set=1,
- run_type=run_type.rt
- )
-
- self.description = 'DAv2-Large-Set1-RGB->SOD'
-
- '''MultiMAE'''
- self.dim_tokens = 1024
- self.encoder_depth = 24
- self.num_heads = 16
- self.pretrained_backbone = 'large-mae'
-
- '''Learning rate'''
- self.lr = 1e-5
- self.end_lr = 1e-11
- self.lr_scale = 100
-
- self.batch_size = 15 # <---------------
- self.val_batch_size = 45 # <---------------
- self.nepochs = 50
-
- self.inputs = ['rgb']
- self.outputs = ['semseg']
- self.decoder_main_tasks: List[List[str]] = [['rgb']]
-
- self.data_augmentation_version = 2
- self.save_checkpoints_after_each_n_epochs = 1
- self.train_function_version = 2
-
-class cfgv4_0_65(base_cfg):
- def __init__(self, epoch: Optional[int] = None):
- super().__init__(
- epoch,
- experiment_name = 'exp_v4.0.65',
- datasets_set=1,
- run_type=run_type.rt
- )
-
- self.description = 'DAv2-Large-Set1-GradClip'
-
- '''MultiMAE'''
- self.dim_tokens = 1024
- self.encoder_depth = 24
- self.num_heads = 16
- self.pretrained_backbone = 'large-mae'
-
- self.clip_grad = 1.0
-
- '''Learning rate'''
- self.lr = 1e-5
- self.end_lr = 1e-11
- self.lr_scale = 100
-
- self.batch_size = 8 # <---------------
- self.val_batch_size = 80 # <---------------
- self.nepochs = 50
-
- self.data_augmentation_version = 2
- self.save_checkpoints_after_each_n_epochs = 1
- self.train_function_version = 2
-
-class cfgv4_0_66(base_cfg):
- def __init__(self, epoch: Optional[int] = None):
- super().__init__(
- epoch,
- experiment_name = 'exp_v4.0.66',
- datasets_set=1,
- run_type=run_type.rt
- )
-
- self.description = 'DAv2-Large-Set6-GradClip'
-
- '''MultiMAE'''
- self.dim_tokens = 1024
- self.encoder_depth = 24
- self.num_heads = 16
- self.pretrained_backbone = 'large-mae'
-
- self.clip_grad = 1.0
-
- '''Learning rate'''
- self.lr = 1e-5
- self.end_lr = 1e-11
- self.lr_scale = 100
-
- self.batch_size = 8 # <---------------
- self.val_batch_size = 80 # <---------------
- self.nepochs = 50
-
- self.data_augmentation_version = 2
- self.save_checkpoints_after_each_n_epochs = 1
- self.train_function_version = 2
-
-class cfgv4_0_67(base_cfg):
- def __init__(self, epoch: Optional[int] = None):
- super().__init__(
- epoch,
- experiment_name = 'exp_v4.0.67',
- datasets_set=7,
- run_type=run_type.rt
- )
-
- self.description = 'DAv2-Large-Set7-GradClip'
-
- '''MultiMAE'''
- self.dim_tokens = 1024
- self.encoder_depth = 24
- self.num_heads = 16
- self.pretrained_backbone = 'large-mae'
-
- self.clip_grad = 1.0
-
- '''Learning rate'''
- self.lr = 1e-5
- self.end_lr = 1e-11
- self.lr_scale = 100
-
- self.batch_size = 8 # <---------------
- self.val_batch_size = 80 # <---------------
- self.nepochs = 50
-
- self.data_augmentation_version = 2
- self.save_checkpoints_after_each_n_epochs = 1
- self.train_function_version = 2
-
-class cfgv4_0_68(base_cfg):
- def __init__(self, epoch: Optional[int] = None):
- super().__init__(
- epoch,
- experiment_name = 'exp_v4.0.68',
- datasets_set=1, # 7
- run_type=run_type.rt
- )
-
- self.description = 'DAv5-Large-Set7-GradClip'
-
- '''MultiMAE'''
- self.dim_tokens = 1024
- self.encoder_depth = 24
- self.num_heads = 16
- self.pretrained_backbone = 'large-mae'
-
- self.clip_grad = 1.0
-
- '''Learning rate'''
- self.lr = 1e-5
- self.end_lr = 1e-11
- self.lr_scale = 100
-
- self.batch_size = 8 # <---------------
- self.val_batch_size = 80 # <---------------
- self.nepochs = 50
-
- self.data_augmentation_version = 5
- self.save_checkpoints_after_each_n_epochs = 1
- self.train_function_version = 2
-
-
-arg_cfg: Dict[str, base_cfg] = dict(
- cfgv4_0_18=cfgv4_0_18,
- cfgv4_0_19=cfgv4_0_19,
- cfgv4_0_21=cfgv4_0_21,
- cfgv4_0_22=cfgv4_0_22,
- cfgv4_0_23=cfgv4_0_23,
- cfgv4_0_25=cfgv4_0_25,
- cfgv4_0_26=cfgv4_0_26,
- cfgv4_0_28=cfgv4_0_28,
- cfgv4_0_29=cfgv4_0_29,
- cfgv4_0_30=cfgv4_0_30,
- cfgv4_0_31=cfgv4_0_31,
- cfgv4_0_32=cfgv4_0_32,
- cfgv4_0_33=cfgv4_0_33,
- cfgv4_0_34=cfgv4_0_34,
- cfgv4_0_35=cfgv4_0_35,
- cfgv4_0_36=cfgv4_0_36,
- cfgv4_0_37=cfgv4_0_37,
- cfgv4_0_38=cfgv4_0_38,
- cfgv4_0_39=cfgv4_0_39,
- # cfgv4_0_40=cfgv4_0_40,
- # cfgv4_0_41=cfgv4_0_41,
- cfgv4_0_42=cfgv4_0_42,
- cfgv4_0_43=cfgv4_0_43,
- cfgv4_0_44=cfgv4_0_44,
- cfgv4_0_45=cfgv4_0_45,
- cfgv4_0_46=cfgv4_0_46,
- cfgv4_0_47=cfgv4_0_47,
- cfgv4_0_48=cfgv4_0_48,
- cfgv4_0_49=cfgv4_0_49,
- cfgv4_0_50=cfgv4_0_50,
-
- cfgv4_0_51=cfgv4_0_51, # suspended
- cfgv4_0_52=cfgv4_0_52, # suspended
- cfgv4_0_53=cfgv4_0_53, # suspended
-
- cfgv4_0_54=cfgv4_0_54,
- cfgv4_0_55=cfgv4_0_55,
- cfgv4_0_56=cfgv4_0_56,
- cfgv4_0_57=cfgv4_0_57,
- cfgv4_0_58=cfgv4_0_58,
- cfgv4_0_59=cfgv4_0_59,
- cfgv4_0_61=cfgv4_0_61,
- cfgv4_0_62=cfgv4_0_62,
- cfgv4_0_64=cfgv4_0_64,
- cfgv4_0_65=cfgv4_0_65,
- cfgv4_0_66=cfgv4_0_66,
- cfgv4_0_67=cfgv4_0_67,
- cfgv4_0_68=cfgv4_0_68,
-
- cfg_set_1=cfg_set_1,
- cfg_set_2=cfg_set_2,
- cfg_set_3=cfg_set_3,
- cfg_set_4=cfg_set_4,
-
- cfg_debug_ubuntu=cfg_debug_ubuntu,
- cfg_debug_colab=cfg_debug_colab,
- cfg_debug_kaggle=cfg_debug_kaggle,
-)
-
-configs_dict = dict(
- cfgv4_0_35_epoch136=partial(cfgv4_0_35, epoch=136),
- cfgv4_0_19_epoch175=partial(cfgv4_0_19, epoch=175),
- cfgv4_0_19_epoch285=partial(cfgv4_0_19, epoch=285),
- cfgv4_0_18_epoch180=partial(cfgv4_0_18, epoch=180),
- cfgv4_0_59_epoch49=partial(cfgv4_0_59, epoch=46),
- cfgv4_0_64_epoch36=partial(cfgv4_0_64, epoch=36),
- cfgv4_0_68_epoch10=partial(cfgv4_0_68, epoch=10),
-)
-
-def get_config_by_set_version(set_version: int) -> base_cfg:
- if set_version not in [1,2,3,4]:
- raise Exception(f'Unsupported set version {set_version}')
- return arg_cfg[f'cfg_set_{set_version}']()
-
-def get_config(cfg_name: str, epoch: Optional[int] = None) -> base_cfg:
- return arg_cfg[cfg_name](epoch)
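
A hedged usage sketch of the registry defined above (not part of the deleted file; the import path is an assumption based on the package layout shown in the diff header):

```python
from s_multimae.configs.experiment_config import (
    get_config, get_config_by_set_version, configs_dict
)

# Look up an experiment class by registry key and bind a checkpoint epoch.
cfg = get_config('cfgv4_0_19', epoch=175)
print(cfg.experiment_name, cfg.batch_size, cfg.data_augmentation_version)

# Dataset-set shortcuts only cover sets 1-4; anything else raises.
set_cfg = get_config_by_set_version(2)

# Pre-bound (config, epoch) pairs for specific checkpoints.
released = configs_dict['cfgv4_0_19_epoch175']()
```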
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/rxf/rxf_src/__init__.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/rxf/rxf_src/__init__.py
deleted file mode 100644
index 306e232d6f386b26153864601114e162080dcee4..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/rxf/rxf_src/__init__.py
+++ /dev/null
@@ -1,6 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from . import label_smoothed_cross_entropy_r3f, sentence_prediction_r3f # noqa
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/huggingface/__init__.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/huggingface/__init__.py
deleted file mode 100644
index f7911c2c8edf516855023a285b18935e5389ec02..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/huggingface/__init__.py
+++ /dev/null
@@ -1,20 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import importlib
-import os
-
-
-# automatically import any Python files in the models/huggingface/ directory
-models_dir = os.path.dirname(__file__)
-for file in os.listdir(models_dir):
- path = os.path.join(models_dir, file)
- if (
- not file.startswith("_")
- and not file.startswith(".")
- and (file.endswith(".py") or os.path.isdir(path))
- ):
- model_name = file[: file.find(".py")] if file.endswith(".py") else file
- module = importlib.import_module("fairseq.models.huggingface." + model_name)
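
The loop above registers every sibling module at package import time. A minimal, self-contained check of the filename-to-module mapping it performs (the example file names are illustrative, not read from this repository):

```python
# Sketch of the mapping used in the auto-import loop above.
def module_name_for(entry: str) -> str:
    return entry[: entry.find(".py")] if entry.endswith(".py") else entry

assert module_name_for("hf_gpt2.py") == "hf_gpt2"                 # a module file
assert module_name_for("some_subpackage") == "some_subpackage"    # a directory
# Files starting with "_" or "." (e.g. "__init__.py") never reach this
# mapping; the loop filters them out first.
```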
diff --git a/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/src/glow_tts/hifi/__init__.py b/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/src/glow_tts/hifi/__init__.py
deleted file mode 100644
index 0323b35a0fc2ef21ac417857d9336cc7c8a3b717..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/src/glow_tts/hifi/__init__.py
+++ /dev/null
@@ -1,5 +0,0 @@
-from .env import AttrDict
-from .models import Generator
-
-if __name__ == "__main__":
- pass
diff --git a/spaces/Hasani/Specific_Object_Recognition_in_the_Wild/SuperGluePretrainedNetwork/models/utils.py b/spaces/Hasani/Specific_Object_Recognition_in_the_Wild/SuperGluePretrainedNetwork/models/utils.py
deleted file mode 100644
index 1206244aa2a004d9f653782de798bfef9e5e726b..0000000000000000000000000000000000000000
--- a/spaces/Hasani/Specific_Object_Recognition_in_the_Wild/SuperGluePretrainedNetwork/models/utils.py
+++ /dev/null
@@ -1,555 +0,0 @@
-# %BANNER_BEGIN%
-# ---------------------------------------------------------------------
-# %COPYRIGHT_BEGIN%
-#
-# Magic Leap, Inc. ("COMPANY") CONFIDENTIAL
-#
-# Unpublished Copyright (c) 2020
-# Magic Leap, Inc., All Rights Reserved.
-#
-# NOTICE: All information contained herein is, and remains the property
-# of COMPANY. The intellectual and technical concepts contained herein
-# are proprietary to COMPANY and may be covered by U.S. and Foreign
-# Patents, patents in process, and are protected by trade secret or
-# copyright law. Dissemination of this information or reproduction of
-# this material is strictly forbidden unless prior written permission is
-# obtained from COMPANY. Access to the source code contained herein is
-# hereby forbidden to anyone except current COMPANY employees, managers
-# or contractors who have executed Confidentiality and Non-disclosure
-# agreements explicitly covering such access.
-#
-# The copyright notice above does not evidence any actual or intended
-# publication or disclosure of this source code, which includes
-# information that is confidential and/or proprietary, and is a trade
-# secret, of COMPANY. ANY REPRODUCTION, MODIFICATION, DISTRIBUTION,
-# PUBLIC PERFORMANCE, OR PUBLIC DISPLAY OF OR THROUGH USE OF THIS
-# SOURCE CODE WITHOUT THE EXPRESS WRITTEN CONSENT OF COMPANY IS
-# STRICTLY PROHIBITED, AND IN VIOLATION OF APPLICABLE LAWS AND
-# INTERNATIONAL TREATIES. THE RECEIPT OR POSSESSION OF THIS SOURCE
-# CODE AND/OR RELATED INFORMATION DOES NOT CONVEY OR IMPLY ANY RIGHTS
-# TO REPRODUCE, DISCLOSE OR DISTRIBUTE ITS CONTENTS, OR TO MANUFACTURE,
-# USE, OR SELL ANYTHING THAT IT MAY DESCRIBE, IN WHOLE OR IN PART.
-#
-# %COPYRIGHT_END%
-# ----------------------------------------------------------------------
-# %AUTHORS_BEGIN%
-#
-# Originating Authors: Paul-Edouard Sarlin
-# Daniel DeTone
-# Tomasz Malisiewicz
-#
-# %AUTHORS_END%
-# --------------------------------------------------------------------*/
-# %BANNER_END%
-
-from pathlib import Path
-import time
-from collections import OrderedDict
-from threading import Thread
-import numpy as np
-import cv2
-import torch
-import matplotlib.pyplot as plt
-import matplotlib
-matplotlib.use('Agg')
-
-
-class AverageTimer:
- """ Class to help manage printing simple timing of code execution. """
-
- def __init__(self, smoothing=0.3, newline=False):
- self.smoothing = smoothing
- self.newline = newline
- self.times = OrderedDict()
- self.will_print = OrderedDict()
- self.reset()
-
- def reset(self):
- now = time.time()
- self.start = now
- self.last_time = now
- for name in self.will_print:
- self.will_print[name] = False
-
- def update(self, name='default'):
- now = time.time()
- dt = now - self.last_time
- if name in self.times:
- dt = self.smoothing * dt + (1 - self.smoothing) * self.times[name]
- self.times[name] = dt
- self.will_print[name] = True
- self.last_time = now
-
- def print(self, text='Timer'):
- total = 0.
- print('[{}]'.format(text), end=' ')
- for key in self.times:
- val = self.times[key]
- if self.will_print[key]:
- print('%s=%.3f' % (key, val), end=' ')
- total += val
- print('total=%.3f sec {%.1f FPS}' % (total, 1./total), end=' ')
- if self.newline:
- print(flush=True)
- else:
- print(end='\r', flush=True)
- self.reset()
-
-
-class VideoStreamer:
-    """ Class to help process image streams. Four types of possible inputs:
- 1.) USB Webcam.
- 2.) An IP camera
- 3.) A directory of images (files in directory matching 'image_glob').
- 4.) A video file, such as an .mp4 or .avi file.
- """
- def __init__(self, basedir, resize, skip, image_glob, max_length=1000000):
- self._ip_grabbed = False
- self._ip_running = False
- self._ip_camera = False
- self._ip_image = None
- self._ip_index = 0
- self.cap = []
- self.camera = True
- self.video_file = False
- self.listing = []
- self.resize = resize
- self.interp = cv2.INTER_AREA
- self.i = 0
- self.skip = skip
- self.max_length = max_length
- if isinstance(basedir, int) or basedir.isdigit():
- print('==> Processing USB webcam input: {}'.format(basedir))
- self.cap = cv2.VideoCapture(int(basedir))
- self.listing = range(0, self.max_length)
- elif basedir.startswith(('http', 'rtsp')):
- print('==> Processing IP camera input: {}'.format(basedir))
- self.cap = cv2.VideoCapture(basedir)
- self.start_ip_camera_thread()
- self._ip_camera = True
- self.listing = range(0, self.max_length)
- elif Path(basedir).is_dir():
- print('==> Processing image directory input: {}'.format(basedir))
- self.listing = list(Path(basedir).glob(image_glob[0]))
- for j in range(1, len(image_glob)):
- image_path = list(Path(basedir).glob(image_glob[j]))
- self.listing = self.listing + image_path
- self.listing.sort()
- self.listing = self.listing[::self.skip]
- self.max_length = np.min([self.max_length, len(self.listing)])
- if self.max_length == 0:
- raise IOError('No images found (maybe bad \'image_glob\' ?)')
- self.listing = self.listing[:self.max_length]
- self.camera = False
- elif Path(basedir).exists():
- print('==> Processing video input: {}'.format(basedir))
- self.cap = cv2.VideoCapture(basedir)
- self.cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)
- num_frames = int(self.cap.get(cv2.CAP_PROP_FRAME_COUNT))
- self.listing = range(0, num_frames)
- self.listing = self.listing[::self.skip]
- self.video_file = True
- self.max_length = np.min([self.max_length, len(self.listing)])
- self.listing = self.listing[:self.max_length]
- else:
- raise ValueError('VideoStreamer input \"{}\" not recognized.'.format(basedir))
- if self.camera and not self.cap.isOpened():
- raise IOError('Could not read camera')
-
- def load_image(self, impath):
- """ Read image as grayscale and resize to img_size.
- Inputs
- impath: Path to input image.
- Returns
- grayim: uint8 numpy array sized H x W.
- """
- grayim = cv2.imread(impath, 0)
- if grayim is None:
- raise Exception('Error reading image %s' % impath)
- w, h = grayim.shape[1], grayim.shape[0]
- w_new, h_new = process_resize(w, h, self.resize)
- grayim = cv2.resize(
- grayim, (w_new, h_new), interpolation=self.interp)
- return grayim
-
- def next_frame(self):
- """ Return the next frame, and increment internal counter.
- Returns
- image: Next H x W image.
- status: True or False depending whether image was loaded.
- """
-
- if self.i == self.max_length:
- return (None, False)
- if self.camera:
-
- if self._ip_camera:
- #Wait for first image, making sure we haven't exited
- while self._ip_grabbed is False and self._ip_exited is False:
- time.sleep(.001)
-
- ret, image = self._ip_grabbed, self._ip_image.copy()
- if ret is False:
- self._ip_running = False
- else:
- ret, image = self.cap.read()
- if ret is False:
- print('VideoStreamer: Cannot get image from camera')
- return (None, False)
- w, h = image.shape[1], image.shape[0]
- if self.video_file:
- self.cap.set(cv2.CAP_PROP_POS_FRAMES, self.listing[self.i])
-
- w_new, h_new = process_resize(w, h, self.resize)
- image = cv2.resize(image, (w_new, h_new),
- interpolation=self.interp)
- image = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
- else:
- image_file = str(self.listing[self.i])
- image = self.load_image(image_file)
- self.i = self.i + 1
- return (image, True)
-
- def start_ip_camera_thread(self):
- self._ip_thread = Thread(target=self.update_ip_camera, args=())
- self._ip_running = True
- self._ip_thread.start()
- self._ip_exited = False
- return self
-
- def update_ip_camera(self):
- while self._ip_running:
- ret, img = self.cap.read()
- if ret is False:
- self._ip_running = False
- self._ip_exited = True
- self._ip_grabbed = False
- return
-
- self._ip_image = img
- self._ip_grabbed = ret
- self._ip_index += 1
- #print('IPCAMERA THREAD got frame {}'.format(self._ip_index))
-
-
- def cleanup(self):
- self._ip_running = False
-
-# --- PREPROCESSING ---
-
-def process_resize(w, h, resize):
- assert(len(resize) > 0 and len(resize) <= 2)
- if len(resize) == 1 and resize[0] > -1:
- scale = resize[0] / max(h, w)
- w_new, h_new = int(round(w*scale)), int(round(h*scale))
- elif len(resize) == 1 and resize[0] == -1:
- w_new, h_new = w, h
- else: # len(resize) == 2:
- w_new, h_new = resize[0], resize[1]
-
- # Issue warning if resolution is too small or too large.
- if max(w_new, h_new) < 160:
- print('Warning: input resolution is very small, results may vary')
- elif max(w_new, h_new) > 2000:
- print('Warning: input resolution is very large, results may vary')
-
- return w_new, h_new
-
-
-def frame2tensor(frame, device):
- return torch.from_numpy(frame/255.).float()[None, None].to(device)
-
-
-def read_image(path, device, resize, rotation, resize_float):
- image = cv2.imread(str(path), cv2.IMREAD_GRAYSCALE)
- if image is None:
- return None, None, None
- w, h = image.shape[1], image.shape[0]
- w_new, h_new = process_resize(w, h, resize)
- scales = (float(w) / float(w_new), float(h) / float(h_new))
-
- if resize_float:
- image = cv2.resize(image.astype('float32'), (w_new, h_new))
- else:
- image = cv2.resize(image, (w_new, h_new)).astype('float32')
-
- if rotation != 0:
- image = np.rot90(image, k=rotation)
- if rotation % 2:
- scales = scales[::-1]
-
- inp = frame2tensor(image, device)
- return image, inp, scales
-
-
-# --- GEOMETRY ---
-
-
-def estimate_pose(kpts0, kpts1, K0, K1, thresh, conf=0.99999):
- if len(kpts0) < 5:
- return None
-
- f_mean = np.mean([K0[0, 0], K1[1, 1], K0[0, 0], K1[1, 1]])
- norm_thresh = thresh / f_mean
-
- kpts0 = (kpts0 - K0[[0, 1], [2, 2]][None]) / K0[[0, 1], [0, 1]][None]
- kpts1 = (kpts1 - K1[[0, 1], [2, 2]][None]) / K1[[0, 1], [0, 1]][None]
-
- E, mask = cv2.findEssentialMat(
- kpts0, kpts1, np.eye(3), threshold=norm_thresh, prob=conf,
- method=cv2.RANSAC)
-
- assert E is not None
-
- best_num_inliers = 0
- ret = None
- for _E in np.split(E, len(E) / 3):
- n, R, t, _ = cv2.recoverPose(
- _E, kpts0, kpts1, np.eye(3), 1e9, mask=mask)
- if n > best_num_inliers:
- best_num_inliers = n
- ret = (R, t[:, 0], mask.ravel() > 0)
- return ret
-
-
-def rotate_intrinsics(K, image_shape, rot):
- """image_shape is the shape of the image after rotation"""
- assert rot <= 3
- h, w = image_shape[:2][::-1 if (rot % 2) else 1]
- fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
- rot = rot % 4
- if rot == 1:
- return np.array([[fy, 0., cy],
- [0., fx, w-1-cx],
- [0., 0., 1.]], dtype=K.dtype)
- elif rot == 2:
- return np.array([[fx, 0., w-1-cx],
- [0., fy, h-1-cy],
- [0., 0., 1.]], dtype=K.dtype)
- else: # if rot == 3:
- return np.array([[fy, 0., h-1-cy],
- [0., fx, cx],
- [0., 0., 1.]], dtype=K.dtype)
-
-
-def rotate_pose_inplane(i_T_w, rot):
- rotation_matrices = [
- np.array([[np.cos(r), -np.sin(r), 0., 0.],
- [np.sin(r), np.cos(r), 0., 0.],
- [0., 0., 1., 0.],
- [0., 0., 0., 1.]], dtype=np.float32)
- for r in [np.deg2rad(d) for d in (0, 270, 180, 90)]
- ]
- return np.dot(rotation_matrices[rot], i_T_w)
-
-
-def scale_intrinsics(K, scales):
- scales = np.diag([1./scales[0], 1./scales[1], 1.])
- return np.dot(scales, K)
-
-
-def to_homogeneous(points):
- return np.concatenate([points, np.ones_like(points[:, :1])], axis=-1)
-
-
-def compute_epipolar_error(kpts0, kpts1, T_0to1, K0, K1):
- kpts0 = (kpts0 - K0[[0, 1], [2, 2]][None]) / K0[[0, 1], [0, 1]][None]
- kpts1 = (kpts1 - K1[[0, 1], [2, 2]][None]) / K1[[0, 1], [0, 1]][None]
- kpts0 = to_homogeneous(kpts0)
- kpts1 = to_homogeneous(kpts1)
-
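-    # Essential matrix from the ground-truth relative pose: E = [t]_x R.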
- t0, t1, t2 = T_0to1[:3, 3]
- t_skew = np.array([
- [0, -t2, t1],
- [t2, 0, -t0],
- [-t1, t0, 0]
- ])
- E = t_skew @ T_0to1[:3, :3]
-
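-    # Symmetric squared epipolar distance for each correspondence.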
- Ep0 = kpts0 @ E.T # N x 3
- p1Ep0 = np.sum(kpts1 * Ep0, -1) # N
- Etp1 = kpts1 @ E # N x 3
- d = p1Ep0**2 * (1.0 / (Ep0[:, 0]**2 + Ep0[:, 1]**2)
- + 1.0 / (Etp1[:, 0]**2 + Etp1[:, 1]**2))
- return d
-
-
-def angle_error_mat(R1, R2):
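-    # Rotation angle between R1 and R2, using trace(R1^T R2) = 1 + 2*cos(theta).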
- cos = (np.trace(np.dot(R1.T, R2)) - 1) / 2
-    cos = np.clip(cos, -1., 1.)  # numerical errors can make it out of bounds
- return np.rad2deg(np.abs(np.arccos(cos)))
-
-
-def angle_error_vec(v1, v2):
- n = np.linalg.norm(v1) * np.linalg.norm(v2)
- return np.rad2deg(np.arccos(np.clip(np.dot(v1, v2) / n, -1.0, 1.0)))
-
-
-def compute_pose_error(T_0to1, R, t):
- R_gt = T_0to1[:3, :3]
- t_gt = T_0to1[:3, 3]
- error_t = angle_error_vec(t, t_gt)
- error_t = np.minimum(error_t, 180 - error_t) # ambiguity of E estimation
- error_R = angle_error_mat(R, R_gt)
- return error_t, error_R
-
-
-def pose_auc(errors, thresholds):
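-    # Area under the recall-vs-error curve, integrated up to each threshold
-    # and normalized by that threshold.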
- sort_idx = np.argsort(errors)
- errors = np.array(errors.copy())[sort_idx]
- recall = (np.arange(len(errors)) + 1) / len(errors)
- errors = np.r_[0., errors]
- recall = np.r_[0., recall]
- aucs = []
- for t in thresholds:
- last_index = np.searchsorted(errors, t)
- r = np.r_[recall[:last_index], recall[last_index-1]]
- e = np.r_[errors[:last_index], t]
- aucs.append(np.trapz(r, x=e)/t)
- return aucs
-
-
-# --- VISUALIZATION ---
-
-
-def plot_image_pair(imgs, dpi=100, size=6, pad=.5):
- n = len(imgs)
- assert n == 2, 'number of images must be two'
- figsize = (size*n, size*3/4) if size is not None else None
- _, ax = plt.subplots(1, n, figsize=figsize, dpi=dpi)
- for i in range(n):
- ax[i].imshow(imgs[i], cmap=plt.get_cmap('gray'), vmin=0, vmax=255)
- ax[i].get_yaxis().set_ticks([])
- ax[i].get_xaxis().set_ticks([])
- for spine in ax[i].spines.values(): # remove frame
- spine.set_visible(False)
- plt.tight_layout(pad=pad)
-
-
-def plot_keypoints(kpts0, kpts1, color='w', ps=2):
- ax = plt.gcf().axes
- ax[0].scatter(kpts0[:, 0], kpts0[:, 1], c=color, s=ps)
- ax[1].scatter(kpts1[:, 0], kpts1[:, 1], c=color, s=ps)
-
-
-def plot_matches(kpts0, kpts1, color, lw=1.5, ps=4):
- fig = plt.gcf()
- ax = fig.axes
- fig.canvas.draw()
-
- transFigure = fig.transFigure.inverted()
- fkpts0 = transFigure.transform(ax[0].transData.transform(kpts0))
- fkpts1 = transFigure.transform(ax[1].transData.transform(kpts1))
-
- fig.lines = [matplotlib.lines.Line2D(
- (fkpts0[i, 0], fkpts1[i, 0]), (fkpts0[i, 1], fkpts1[i, 1]), zorder=1,
- transform=fig.transFigure, c=color[i], linewidth=lw)
- for i in range(len(kpts0))]
- ax[0].scatter(kpts0[:, 0], kpts0[:, 1], c=color, s=ps)
- ax[1].scatter(kpts1[:, 0], kpts1[:, 1], c=color, s=ps)
-
-
-def make_matching_plot(image0, image1, kpts0, kpts1, mkpts0, mkpts1,
- color, text, path, show_keypoints=False,
- fast_viz=False, opencv_display=False,
- opencv_title='matches', small_text=[]):
-
- if fast_viz:
- make_matching_plot_fast(image0, image1, kpts0, kpts1, mkpts0, mkpts1,
- color, text, path, show_keypoints, 10,
- opencv_display, opencv_title, small_text)
- return
-
- plot_image_pair([image0, image1])
- if show_keypoints:
- plot_keypoints(kpts0, kpts1, color='k', ps=4)
- plot_keypoints(kpts0, kpts1, color='w', ps=2)
- plot_matches(mkpts0, mkpts1, color)
-
- fig = plt.gcf()
- txt_color = 'k' if image0[:100, :150].mean() > 200 else 'w'
- fig.text(
- 0.01, 0.99, '\n'.join(text), transform=fig.axes[0].transAxes,
- fontsize=15, va='top', ha='left', color=txt_color)
-
- txt_color = 'k' if image0[-100:, :150].mean() > 200 else 'w'
- fig.text(
- 0.01, 0.01, '\n'.join(small_text), transform=fig.axes[0].transAxes,
- fontsize=5, va='bottom', ha='left', color=txt_color)
-
- plt.savefig(str(path), bbox_inches='tight', pad_inches=0)
- plt.close()
-
-
-def make_matching_plot_fast(image0, image1, kpts0, kpts1, mkpts0,
- mkpts1, color, text, path=None,
- show_keypoints=False, margin=10,
- opencv_display=False, opencv_title='',
- small_text=[]):
- H0, W0 = image0.shape
- H1, W1 = image1.shape
- H, W = max(H0, H1), W0 + W1 + margin
-
- out = 255*np.ones((H, W), np.uint8)
- out[:H0, :W0] = image0
- out[:H1, W0+margin:] = image1
- out = np.stack([out]*3, -1)
-
- if show_keypoints:
- kpts0, kpts1 = np.round(kpts0).astype(int), np.round(kpts1).astype(int)
- white = (255, 255, 255)
- black = (0, 0, 0)
- for x, y in kpts0:
- cv2.circle(out, (x, y), 2, black, -1, lineType=cv2.LINE_AA)
- cv2.circle(out, (x, y), 1, white, -1, lineType=cv2.LINE_AA)
- for x, y in kpts1:
- cv2.circle(out, (x + margin + W0, y), 2, black, -1,
- lineType=cv2.LINE_AA)
- cv2.circle(out, (x + margin + W0, y), 1, white, -1,
- lineType=cv2.LINE_AA)
-
- mkpts0, mkpts1 = np.round(mkpts0).astype(int), np.round(mkpts1).astype(int)
- color = (np.array(color[:, :3])*255).astype(int)[:, ::-1]
- for (x0, y0), (x1, y1), c in zip(mkpts0, mkpts1, color):
- c = c.tolist()
- cv2.line(out, (x0, y0), (x1 + margin + W0, y1),
- color=c, thickness=1, lineType=cv2.LINE_AA)
- # display line end-points as circles
- cv2.circle(out, (x0, y0), 2, c, -1, lineType=cv2.LINE_AA)
- cv2.circle(out, (x1 + margin + W0, y1), 2, c, -1,
- lineType=cv2.LINE_AA)
-
- # Scale factor for consistent visualization across scales.
- sc = min(H / 640., 2.0)
-
- # Big text.
- Ht = int(30 * sc) # text height
- txt_color_fg = (255, 255, 255)
- txt_color_bg = (0, 0, 0)
- for i, t in enumerate(text):
- cv2.putText(out, t, (int(8*sc), Ht*(i+1)), cv2.FONT_HERSHEY_DUPLEX,
- 1.0*sc, txt_color_bg, 2, cv2.LINE_AA)
- cv2.putText(out, t, (int(8*sc), Ht*(i+1)), cv2.FONT_HERSHEY_DUPLEX,
- 1.0*sc, txt_color_fg, 1, cv2.LINE_AA)
-
- # Small text.
- Ht = int(18 * sc) # text height
- for i, t in enumerate(reversed(small_text)):
- cv2.putText(out, t, (int(8*sc), int(H-Ht*(i+.6))), cv2.FONT_HERSHEY_DUPLEX,
- 0.5*sc, txt_color_bg, 2, cv2.LINE_AA)
- cv2.putText(out, t, (int(8*sc), int(H-Ht*(i+.6))), cv2.FONT_HERSHEY_DUPLEX,
- 0.5*sc, txt_color_fg, 1, cv2.LINE_AA)
-
- if path is not None:
- cv2.imwrite(str(path), out)
-
- if opencv_display:
- cv2.imshow(opencv_title, out)
- cv2.waitKey(1)
-
- return out
-
-
-def error_colormap(x):
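-    # Map a score in [0, 1] to an RGBA ramp from red (0) through yellow to green (1).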
- return np.clip(
- np.stack([2-x*2, x*2, np.zeros_like(x), np.ones_like(x)], -1), 0, 1)
diff --git a/spaces/Hila/RobustViT/imagenet_eval_robustness_per_class.py b/spaces/Hila/RobustViT/imagenet_eval_robustness_per_class.py
deleted file mode 100644
index dabee92ddd89d29cd4a508c9fc4fa7ad6ab7cc8d..0000000000000000000000000000000000000000
--- a/spaces/Hila/RobustViT/imagenet_eval_robustness_per_class.py
+++ /dev/null
@@ -1,343 +0,0 @@
-import argparse
-import os
-import random
-import shutil
-import time
-import warnings
-
-import torch
-import torch.nn as nn
-import torch.nn.parallel
-import torch.backends.cudnn as cudnn
-import torch.distributed as dist
-import torch.optim
-import torch.multiprocessing as mp
-import torch.utils.data
-import torch.utils.data.distributed
-import torchvision.transforms as transforms
-import torchvision.datasets as datasets
-import torchvision.models as models
-
-# Uncomment the expected model below
-
-# ViT
-from ViT.ViT import vit_base_patch16_224 as vit
-# from ViT.ViT import vit_large_patch16_224 as vit
-
-# ViT-AugReg
-# from ViT.ViT_new import vit_small_patch16_224 as vit
-# from ViT.ViT_new import vit_base_patch16_224 as vit
-# from ViT.ViT_new import vit_large_patch16_224 as vit
-
-# DeiT
-# from ViT.ViT import deit_base_patch16_224 as vit
-# from ViT.ViT import deit_small_patch16_224 as vit
-
-from robustness_dataset_per_class import RobustnessDataset
-from objectnet_dataset import ObjectNetDataset
-model_names = sorted(name for name in models.__dict__
- if name.islower() and not name.startswith("__")
- and callable(models.__dict__[name]))
-model_names.append("vit")
-
-parser = argparse.ArgumentParser(description='PyTorch ImageNet Training')
-parser.add_argument('--data', metavar='DIR',
- help='path to dataset')
-parser.add_argument('-j', '--workers', default=4, type=int, metavar='N',
- help='number of data loading workers (default: 4)')
-parser.add_argument('--epochs', default=150, type=int, metavar='N',
- help='number of total epochs to run')
-parser.add_argument('--start-epoch', default=0, type=int, metavar='N',
- help='manual epoch number (useful on restarts)')
-parser.add_argument('-b', '--batch-size', default=256, type=int,
- metavar='N',
- help='mini-batch size (default: 256), this is the total '
- 'batch size of all GPUs on the current node when '
- 'using Data Parallel or Distributed Data Parallel')
-parser.add_argument('--lr', '--learning-rate', default=5e-4, type=float,
- metavar='LR', help='initial learning rate', dest='lr')
-parser.add_argument('--momentum', default=0.9, type=float, metavar='M',
- help='momentum')
-parser.add_argument('--wd', '--weight-decay', default=0.05, type=float,
-                    metavar='W', help='weight decay (default: 0.05)',
- dest='weight_decay')
-parser.add_argument('-p', '--print-freq', default=10, type=int,
- metavar='N', help='print frequency (default: 10)')
-parser.add_argument('--checkpoint', default='', type=str, metavar='PATH',
- help='path to latest checkpoint (default: none)')
-parser.add_argument('--resume', default='', type=str, metavar='PATH',
- help='path to resume checkpoint (default: none)')
-parser.add_argument('-e', '--evaluate', dest='evaluate', action='store_true',
- help='evaluate model on validation set')
-parser.add_argument('--pretrained', dest='pretrained', action='store_true',
- help='use pre-trained model')
-parser.add_argument('--world-size', default=-1, type=int,
- help='number of nodes for distributed training')
-parser.add_argument('--rank', default=-1, type=int,
- help='node rank for distributed training')
-parser.add_argument('--dist-url', default='tcp://224.66.41.62:23456', type=str,
- help='url used to set up distributed training')
-parser.add_argument('--dist-backend', default='nccl', type=str,
- help='distributed backend')
-parser.add_argument('--seed', default=None, type=int,
- help='seed for initializing training. ')
-parser.add_argument('--gpu', default=None, type=int,
- help='GPU id to use.')
-parser.add_argument('--multiprocessing-distributed', action='store_true',
- help='Use multi-processing distributed training to launch '
- 'N processes per node, which has N GPUs. This is the '
- 'fastest way to use PyTorch for either single node or '
- 'multi node data parallel training')
-parser.add_argument("--isV2", default=False, action='store_true',
- help='is dataset imagenet V2.')
-parser.add_argument("--isSI", default=False, action='store_true',
- help='is dataset SI-score.')
-parser.add_argument("--isObjectNet", default=False, action='store_true',
-                    help='is dataset ObjectNet.')
-
-
-def main():
- args = parser.parse_args()
-
- if args.seed is not None:
- random.seed(args.seed)
- torch.manual_seed(args.seed)
- cudnn.deterministic = True
- warnings.warn('You have chosen to seed training. '
- 'This will turn on the CUDNN deterministic setting, '
- 'which can slow down your training considerably! '
- 'You may see unexpected behavior when restarting '
- 'from checkpoints.')
-
- if args.gpu is not None:
- warnings.warn('You have chosen a specific GPU. This will completely '
- 'disable data parallelism.')
-
- if args.dist_url == "env://" and args.world_size == -1:
- args.world_size = int(os.environ["WORLD_SIZE"])
-
- args.distributed = args.world_size > 1 or args.multiprocessing_distributed
-
- ngpus_per_node = torch.cuda.device_count()
- if args.multiprocessing_distributed:
- # Since we have ngpus_per_node processes per node, the total world_size
- # needs to be adjusted accordingly
- args.world_size = ngpus_per_node * args.world_size
- # Use torch.multiprocessing.spawn to launch distributed processes: the
- # main_worker process function
- mp.spawn(main_worker, nprocs=ngpus_per_node, args=(ngpus_per_node, args))
- else:
- # Simply call main_worker function
- main_worker(args.gpu, ngpus_per_node, args)
-
-
-def main_worker(gpu, ngpus_per_node, args):
- global best_acc1
- args.gpu = gpu
-
- if args.gpu is not None:
- print("Use GPU: {} for training".format(args.gpu))
-
- if args.distributed:
- if args.dist_url == "env://" and args.rank == -1:
- args.rank = int(os.environ["RANK"])
- if args.multiprocessing_distributed:
- # For multiprocessing distributed training, rank needs to be the
- # global rank among all the processes
- args.rank = args.rank * ngpus_per_node + gpu
- dist.init_process_group(backend=args.dist_backend, init_method=args.dist_url,
- world_size=args.world_size, rank=args.rank)
- # create model
- print("=> creating model")
- if args.checkpoint:
- model = vit().cuda()
- checkpoint = torch.load(args.checkpoint)
- model.load_state_dict(checkpoint['state_dict'])
- else:
- model = vit(pretrained=True).cuda()
- print("done")
-
- if not torch.cuda.is_available():
- print('using CPU, this will be slow')
- elif args.distributed:
- # For multiprocessing distributed, DistributedDataParallel constructor
- # should always set the single device scope, otherwise,
- # DistributedDataParallel will use all available devices.
- if args.gpu is not None:
- torch.cuda.set_device(args.gpu)
- model.cuda(args.gpu)
- # When using a single GPU per process and per
- # DistributedDataParallel, we need to divide the batch size
- # ourselves based on the total number of GPUs we have
- args.batch_size = int(args.batch_size / ngpus_per_node)
- args.workers = int((args.workers + ngpus_per_node - 1) / ngpus_per_node)
- model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.gpu])
- else:
- model.cuda()
- # DistributedDataParallel will divide and allocate batch_size to all
- # available GPUs if device_ids are not set
- model = torch.nn.parallel.DistributedDataParallel(model)
- elif args.gpu is not None:
- torch.cuda.set_device(args.gpu)
- model = model.cuda(args.gpu)
- else:
- # DataParallel will divide and allocate batch_size to all available GPUs
- print("start")
- model = torch.nn.DataParallel(model).cuda()
-
- # optionally resume from a checkpoint
- if args.resume:
- if os.path.isfile(args.resume):
- print("=> loading checkpoint '{}'".format(args.resume))
- if args.gpu is None:
- checkpoint = torch.load(args.resume)
- else:
- # Map model to be loaded to specified single gpu.
- loc = 'cuda:{}'.format(args.gpu)
- checkpoint = torch.load(args.resume, map_location=loc)
- args.start_epoch = checkpoint['epoch']
- best_acc1 = checkpoint['best_acc1']
- if args.gpu is not None:
- # best_acc1 may be from a checkpoint from a different GPU
- best_acc1 = best_acc1.to(args.gpu)
- model.load_state_dict(checkpoint['state_dict'])
- print("=> loaded checkpoint '{}' (epoch {})"
- .format(args.resume, checkpoint['epoch']))
- else:
- print("=> no checkpoint found at '{}'".format(args.resume))
-
- cudnn.benchmark = True
-
- # Data loading code
-
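-    # Evaluate each class folder separately and collect per-class top-1/top-5 accuracy.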
- top1_per_class = {}
- top5_per_class = {}
- for folder in os.listdir(args.data):
- val_dataset = RobustnessDataset(args.data, folder=folder, isV2=args.isV2, isSI=args.isSI)
- print("len: ", len(val_dataset))
- val_loader = torch.utils.data.DataLoader(
- val_dataset, batch_size=args.batch_size, shuffle=False,
- num_workers=args.workers, pin_memory=True)
- class_name = val_dataset.get_classname()
- top1, top5 = validate(val_loader, model, args)
- top1_per_class[class_name] = top1.item()
- top5_per_class[class_name] = top5.item()
-
- print("overall top1 per class: ", top1_per_class)
- print("overall top5 per class: ", top5_per_class)
-
-def validate(val_loader, model, args):
- batch_time = AverageMeter('Time', ':6.3f')
- losses = AverageMeter('Loss', ':.4e')
- top1 = AverageMeter('Acc@1', ':6.2f')
- top5 = AverageMeter('Acc@5', ':6.2f')
- progress = ProgressMeter(
- len(val_loader),
- [batch_time, losses, top1, top5],
- prefix='Test: ')
-
- # switch to evaluate mode
- model.eval()
-
- with torch.no_grad():
- end = time.time()
- for i, (images, target) in enumerate(val_loader):
- if args.gpu is not None:
- images = images.cuda(args.gpu, non_blocking=True)
- if torch.cuda.is_available():
- target = target.cuda(args.gpu, non_blocking=True)
-
- # compute output
- output = model(images)
-
- # measure accuracy and record loss
- acc1, acc5 = accuracy(output, target, topk=(1, 5))
- top1.update(acc1[0], images.size(0))
- top5.update(acc5[0], images.size(0))
-
- # measure elapsed time
- batch_time.update(time.time() - end)
- end = time.time()
-
- if i % args.print_freq == 0:
- progress.display(i)
-
- # TODO: this should also be done with the ProgressMeter
- print(' * Acc@1 {top1.avg:.3f} Acc@5 {top5.avg:.3f}'
- .format(top1=top1, top5=top5))
-
- return top1.avg, top5.avg
-
-
-def save_checkpoint(state, is_best, filename='checkpoint.pth.tar'):
- torch.save(state, filename)
- if is_best:
- shutil.copyfile(filename, 'model_best.pth.tar')
-
-
-class AverageMeter(object):
- """Computes and stores the average and current value"""
- def __init__(self, name, fmt=':f'):
- self.name = name
- self.fmt = fmt
- self.reset()
-
- def reset(self):
- self.val = 0
- self.avg = 0
- self.sum = 0
- self.count = 0
-
- def update(self, val, n=1):
- self.val = val
- self.sum += val * n
- self.count += n
- self.avg = self.sum / self.count
-
- def __str__(self):
- fmtstr = '{name} {val' + self.fmt + '} ({avg' + self.fmt + '})'
- return fmtstr.format(**self.__dict__)
-
-
-class ProgressMeter(object):
- def __init__(self, num_batches, meters, prefix=""):
- self.batch_fmtstr = self._get_batch_fmtstr(num_batches)
- self.meters = meters
- self.prefix = prefix
-
- def display(self, batch):
- entries = [self.prefix + self.batch_fmtstr.format(batch)]
- entries += [str(meter) for meter in self.meters]
- print('\t'.join(entries))
-
- def _get_batch_fmtstr(self, num_batches):
- num_digits = len(str(num_batches // 1))
- fmt = '{:' + str(num_digits) + 'd}'
- return '[' + fmt + '/' + fmt.format(num_batches) + ']'
-
-def adjust_learning_rate(optimizer, epoch, args):
- """Sets the learning rate to the initial LR decayed by 10 every 30 epochs"""
- lr = args.lr * (0.85 ** (epoch // 2))
- for param_group in optimizer.param_groups:
- param_group['lr'] = lr
-
-
-def accuracy(output, target, topk=(1,)):
- """Computes the accuracy over the k top predictions for the specified values of k"""
- with torch.no_grad():
- maxk = max(topk)
- batch_size = target.size(0)
-
- _, pred = output.topk(maxk, 1, True, True)
- pred = pred.t()
- correct = pred.eq(target.view(1, -1).expand_as(pred))
-
- res = []
- for k in topk:
- correct_k = correct[:k].reshape(-1).float().sum(0, keepdim=True)
- res.append(correct_k.mul_(100.0 / batch_size))
- return res
-
-
-if __name__ == '__main__':
- main()
diff --git a/spaces/HuggingFaceH4/human_eval_llm_leaderboard/src/utils_display.py b/spaces/HuggingFaceH4/human_eval_llm_leaderboard/src/utils_display.py
deleted file mode 100644
index de69c4af6030d4ccf76fd5ea0f8b389f999830a0..0000000000000000000000000000000000000000
--- a/spaces/HuggingFaceH4/human_eval_llm_leaderboard/src/utils_display.py
+++ /dev/null
@@ -1,72 +0,0 @@
-from dataclasses import dataclass
-
-# These classes are for user facing column names, to avoid having to change them
-# all around the code when a modif is needed
-@dataclass
-class ColumnContent:
- name: str
- type: str
- displayed_by_default: bool
- hidden: bool = False
-
-def fields(raw_class):
- return [v for k, v in raw_class.__dict__.items() if k[:2] != "__" and k[-2:] != "__"]
-
-@dataclass(frozen=True)
-class EloEvalColumn: # Elo evals column
- model = ColumnContent("Model", "markdown", True)
- gpt4 = ColumnContent("GPT-4 (all)", "number", True)
- human_all = ColumnContent("Human (all)", "number", True)
- human_instruct = ColumnContent("Human (instruct)", "number", True)
- human_code_instruct = ColumnContent("Human (code-instruct)", "number", True)
-
-LLAMAS = ["huggingface/llama-7b", "huggingface/llama-13b", "huggingface/llama-30b", "huggingface/llama-65b"]
-
-
-KOALA_LINK = "https://huggingface.co/TheBloke/koala-13B-HF"
-VICUNA_LINK = "https://huggingface.co/lmsys/vicuna-13b-delta-v1.1"
-OASST_LINK = "https://huggingface.co/OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5"
-DOLLY_LINK = "https://huggingface.co/databricks/dolly-v2-12b"
-MODEL_PAGE = "https://huggingface.co/models"
-LLAMA_LINK = "https://ai.facebook.com/blog/large-language-model-llama-meta-ai/"
-VICUNA_LINK = "https://huggingface.co/CarperAI/stable-vicuna-13b-delta"
-ALPACA_LINK = "https://crfm.stanford.edu/2023/03/13/alpaca.html"
-
-
-def model_hyperlink(link, model_name):
-    return f'<a target="_blank" href="{link}">{model_name}</a>'
-
-
-def make_clickable_model(model_name):
- link = f"https://huggingface.co/{model_name}"
-
- if model_name in LLAMAS:
- link = LLAMA_LINK
- model_name = model_name.split("/")[1]
- elif model_name == "HuggingFaceH4/stable-vicuna-13b-2904":
- link = VICUNA_LINK
- model_name = "stable-vicuna-13b"
- elif model_name == "HuggingFaceH4/llama-7b-ift-alpaca":
- link = ALPACA_LINK
- model_name = "alpaca-13b"
- if model_name == "dolly-12b":
- link = DOLLY_LINK
- elif model_name == "vicuna-13b":
- link = VICUNA_LINK
- elif model_name == "koala-13b":
- link = KOALA_LINK
- elif model_name == "oasst-12b":
- link = OASST_LINK
- #else:
- # link = MODEL_PAGE
-
- return model_hyperlink(link, model_name)
-
-def styled_error(error):
- return f"{error}
"
-
-def styled_warning(warn):
- return f"{warn}
"
-
-def styled_message(message):
- return f"{message}
"
\ No newline at end of file
diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/optim/fp16_optimizer.py b/spaces/ICML2022/OFA/fairseq/fairseq/optim/fp16_optimizer.py
deleted file mode 100644
index c59b21cf6b36650a4dd899e62b83a01715d2e2a1..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/fairseq/optim/fp16_optimizer.py
+++ /dev/null
@@ -1,548 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from collections import defaultdict
-from itertools import chain
-
-import torch
-from fairseq import optim
-from omegaconf import DictConfig
-
-from .dynamic_loss_scaler import DynamicLossScaler
-
-
-class _FP16OptimizerMixin(object):
- def __init__(self, *args, **kwargs):
-        # forward __init__ call to the next class in MRO (method resolution order)
- super().__init__(*args, **kwargs)
- self._multiply_factor = 1.0
-
- @property
- def has_flat_params(self):
- return torch.is_tensor(self.fp32_params) or (
- isinstance(self.fp32_params, dict)
- and all(torch.is_tensor(t) for t in self.fp32_params.values())
- )
-
- @classmethod
- def build_fp32_params(cls, args, params, flatten=True):
- # create FP32 copy of parameters and grads
- if flatten:
- is_pipeline_parallel = getattr(
- args, "pipeline_model_parallel", False
- ) and getattr(args, "distributed_no_spawn", False)
- total_param_size = sum(p.data.numel() for p in params)
- devices = [torch.cuda.current_device()]
- if is_pipeline_parallel:
- devices = list(set(args.pipeline_devices))
- fp32_params = {}
- for device in devices:
- if is_pipeline_parallel:
- device_param_size = sum(
- p.data.numel() for p in params if p.device.index == device
- )
- device_params = [p for p in params if p.device.index == device]
- else:
- device_param_size = total_param_size
- device_params = params
- fp32_params[device] = (
- device_params[0].new(0).float().new(device_param_size)
- )
- offset = 0
- for p in device_params:
- numel = p.data.numel()
- fp32_params[device][offset : offset + numel].copy_(p.data.view(-1))
- offset += numel
- fp32_params[device] = torch.nn.Parameter(fp32_params[device])
- fp32_params[device].grad = fp32_params[device].data.new(
- device_param_size
- )
- return fp32_params
- else:
- fp32_params = []
- for p in params:
- p32 = torch.nn.Parameter(p.data.float())
- if hasattr(p, 'expert'):
- p32.expert = True
- elif hasattr(p, 'base_expert'):
- p32.base_expert = True
- p32.grad = torch.zeros_like(p32.data)
- if hasattr(p, "param_group"):
- p32.param_group = p.param_group
- fp32_params.append(p32)
- return fp32_params
-
- def state_dict(self):
- """Return the optimizer's state dict."""
- state_dict = self.fp32_optimizer.state_dict()
- if self.scaler is not None:
- state_dict["loss_scale"] = self.scaler.loss_scale
- return state_dict
-
- def load_state_dict(self, state_dict, optimizer_overrides=None):
- """Load an optimizer state dict.
-
- In general we should prefer the configuration of the existing optimizer
- instance (e.g., learning rate) over that found in the state_dict. This
- allows us to resume training from a checkpoint using a new set of
- optimizer args.
- """
- if "loss_scale" in state_dict and self.scaler is not None:
- self.scaler.loss_scale = state_dict["loss_scale"]
- self.fp32_optimizer.load_state_dict(state_dict, optimizer_overrides)
-
- def backward(self, loss):
- """Computes the sum of gradients of the given tensor w.r.t. graph leaves.
-
- Compared to :func:`fairseq.optim.FairseqOptimizer.backward`, this
- function additionally dynamically scales the loss to avoid gradient
- underflow.
- """
- if self.scaler is not None:
- loss = self.scaler.scale(loss)
- loss.backward()
- self._needs_sync = True
-
- def _sync_fp16_grads_to_fp32(self):
- if self._needs_sync:
- # copy FP16 grads to FP32
- if self.has_flat_params:
- devices = list(self.fp32_params.keys())
- device_params_dict = defaultdict(list)
- for p in self.fp16_params:
- if p.requires_grad:
- device_params_dict[p.device.index].append(p)
- for device in devices:
- device_params = device_params_dict[device]
- offset = 0
- for p in device_params:
- grad_data = (
- p.grad.data
- if p.grad is not None
- else p.data.new_zeros(p.data.shape)
- )
- numel = grad_data.numel()
- self.fp32_params[device].grad.data[
- offset : offset + numel
- ].copy_(grad_data.view(-1))
- offset += numel
- else:
- for p, p32 in zip(self.fp16_params, self.fp32_params):
- if not p.requires_grad:
- continue
- if p.grad is not None:
- if p32.grad is None:
- p32.grad = p.grad.data.float()
- else:
- p32.grad.data.copy_(p.grad.data)
- else:
- p32.grad = torch.zeros_like(p.data, dtype=torch.float)
-
- self._needs_sync = False
-
- def _sync_fp32_params_to_fp16(self):
- # copy FP32 params back into FP16 model
- if self.has_flat_params:
- devices = list(self.fp32_params.keys())
- device_params_dict = defaultdict(list)
- for p in self.fp16_params:
- device_params_dict[p.device.index].append(p)
- for device in devices:
- device_params = device_params_dict[device]
- offset = 0
- for p in device_params:
- numel = p.data.numel()
- p.data.copy_(
- self.fp32_params[device]
- .data[offset : offset + numel]
- .view_as(p.data)
- )
- offset += numel
- else:
- for p, p32 in zip(self.fp16_params, self.fp32_params):
- if not p.requires_grad:
- continue
- p.data.copy_(p32.data)
-
- def _unscale_grads(self):
- self._sync_fp16_grads_to_fp32()
- if (
- # Skip the multiplication if it's a no-op (i.e., if _multiply_factor
- # is 1.0). At the same time, we want to avoid the device-to-host
- # transfer by comparing it to 1.0. Since _multiply_factor starts as
- # a Python float, we roughly assume that if it's a tensor then it's
- # probably not =1.0 anymore and we do the multiplication. Otherwise
- # we can safely check the value without a D2H transfer.
- torch.is_tensor(self._multiply_factor)
- or self._multiply_factor != 1.0
- ):
- self.fp32_optimizer.multiply_grads(self._multiply_factor)
- self._multiply_factor = 1.0
-
- def multiply_grads(self, c):
- """Multiplies grads by a constant ``c``."""
- self._multiply_factor *= c
-
- def clip_grad_norm(self, max_norm, aggregate_norm_fn=None):
- """Clips gradient norm and updates dynamic loss scaler."""
- self._sync_fp16_grads_to_fp32()
-
- grad_norm = self._multiply_factor * self.fp32_optimizer.clip_grad_norm(
- 0, aggregate_norm_fn
- )
-
- if self.scaler is not None:
- if grad_norm > max_norm > 0.0:
- self._multiply_factor *= max_norm / grad_norm
-
- self.scaler.check_overflow(grad_norm)
- elif max_norm > 0.0:
- clip_coef = (max_norm / (grad_norm + 1e-6)).clamp_(max=1)
- self._multiply_factor *= clip_coef
-
- return grad_norm
-
- def step(self, closure=None, groups=None):
- """Performs a single optimization step."""
- self._sync_fp16_grads_to_fp32()
-
- if getattr(self, "supports_step_with_scale", False):
- self.fp32_optimizer.step(closure, scale=(1.0 / self._multiply_factor), groups=groups)
- else:
- self._unscale_grads()
- self.fp32_optimizer.step(closure, groups=groups)
-
- if self.scaler is not None:
- self.scaler.update()
-
- self._sync_fp32_params_to_fp16()
-
- def zero_grad(self):
- """Clears the gradients of all optimized parameters."""
- for p in self.fp16_params:
- p.grad = None
- if self.has_flat_params:
- if torch.is_tensor(self.fp32_params):
- self.fp32_params.grad.zero_()
- elif isinstance(self.fp32_params, dict):
- for fp32_params in self.fp32_params.values():
- fp32_params.grad.zero_()
- else:
- raise RuntimeError("self.fp32_params must be a tensor or dict")
- else:
- for p32 in self.fp32_params:
- if p32.grad is not None:
- p32.grad.zero_()
- self._needs_sync = False
-
- if self.scaler is not None:
- self._multiply_factor = 1.0 / float(self.scaler.loss_scale)
-
-
-class FP16Optimizer(_FP16OptimizerMixin, optim.FairseqOptimizer):
- """
- Wrap an *optimizer* to support FP16 (mixed precision) training.
- """
-
- def __init__(self, cfg: DictConfig, params, fp32_optimizer, fp32_params, **kwargs):
- super().__init__(cfg.optimizer)
- self.fp16_params = params
- self.fp32_optimizer = fp32_optimizer
- self.fp32_params = fp32_params
-
- if getattr(cfg.common, "fp16_scale_window", None) is None:
- if len(cfg.optimization.update_freq) > 1:
- raise ValueError(
- "--fp16-scale-window must be given explicitly when using a "
- "custom --update-freq schedule"
- )
- data_parallel_size = int(
- cfg.distributed_training.distributed_world_size
- / cfg.common.model_parallel_size
- )
- scale_window = int(
- 2 ** 14 / data_parallel_size / cfg.optimization.update_freq[0]
- )
- else:
- scale_window = cfg.common.fp16_scale_window
-
- if not getattr(cfg.common, "bf16", False):
- self.scaler = DynamicLossScaler(
- init_scale=cfg.common.fp16_init_scale,
- scale_window=scale_window,
- tolerance=cfg.common.fp16_scale_tolerance,
- threshold=cfg.common.threshold_loss_scale,
- min_loss_scale=cfg.common.min_loss_scale,
- )
- else:
- # disable loss scaling for bfloat16
- self.scaler = None
-
- @classmethod
- def build_optimizer(cls, cfg: DictConfig, params, **kwargs):
- """
- Args:
- cfg (omegaconf.DictConfig): fairseq args
- params (iterable): iterable of parameters to optimize
- """
- flatten = not getattr(cfg.common, "fp16_no_flatten_grads", False)
- if getattr(cfg.common, "bf16", False):
- flatten = False # mixed precision is faster on TPUs without flat grads
- fp32_params = cls.build_fp32_params(cfg.optimizer, params, flatten=flatten)
- if flatten:
- fp32_optimizer = optim.build_optimizer(cfg.optimizer, [fp32_params])
- else:
- fp32_optimizer = optim.build_optimizer(cfg.optimizer, fp32_params)
- if flatten and not fp32_optimizer.supports_flat_params:
- raise RuntimeError(
- f"chosen optimizer {fp32_optimizer.__class__.__name__} does not support flat params, please set --fp16-no-flatten-grads"
- )
- return cls(cfg, params, fp32_optimizer, fp32_params, **kwargs)
-
- @property
- def optimizer(self):
- return self.fp32_optimizer.optimizer
-
- @optimizer.setter
- def optimizer(self, optimizer):
- self.fp32_optimizer.optimizer = optimizer
-
- @property
- def lr_scheduler(self):
- return getattr(self.fp32_optimizer, "lr_scheduler", None)
-
- @property
- def optimizer_config(self):
- return self.fp32_optimizer.optimizer_config
-
- def get_lr(self):
- return self.fp32_optimizer.get_lr()
-
- def set_lr(self, lr):
- self.fp32_optimizer.set_lr(lr)
-
- def all_reduce_grads(self, module):
- self.fp32_optimizer.all_reduce_grads(module)
-
- @property
- def supports_flat_params(self):
- return self.fp32_optimizer.supports_flat_params
-
-
-class _MemoryEfficientFP16OptimizerMixin(object):
- def __init__(self, *args, **kwargs):
- # forward __init__ call to the next class in MRO (method resolution order)
- super().__init__(*args, **kwargs)
- self._multiply_factor = 1.0
-
- @property
- def has_flat_params(self):
- return False
-
- def state_dict(self):
- """Return the optimizer's state dict."""
- state_dict = self.wrapped_optimizer.state_dict()
- if self.scaler is not None:
- state_dict["loss_scale"] = self.scaler.loss_scale
- return state_dict
-
- def load_state_dict(self, state_dict, optimizer_overrides=None):
- """Load an optimizer state dict.
-
- In general we should prefer the configuration of the existing optimizer
- instance (e.g., learning rate) over that found in the state_dict. This
- allows us to resume training from a checkpoint using a new set of
- optimizer args.
- """
- if "loss_scale" in state_dict and self.scaler is not None:
- self.scaler.loss_scale = state_dict["loss_scale"]
-
- self.wrapped_optimizer.load_state_dict(state_dict, optimizer_overrides)
-
- # Hack: PyTorch automatically casts the optimizer state to match the
- # type of the current parameters. But with --memory-efficient-fp16 the
- # params are FP16 while the optimizer state is FP32 and we don't want
- # to cast. A workaround is to manually copy back the original state
- # after the optimizer has been loaded.
- if not getattr(self.optimizer, "disable_mem_eff_fp16_loading_hack", False):
- groups = self.optimizer.param_groups
- saved_groups = state_dict["param_groups"]
- id_map = {
- old_id: p
- for old_id, p in zip(
- chain(*(g["params"] for g in saved_groups)),
- chain(*(g["params"] for g in groups)),
- )
- }
- for k, v in state_dict["state"].items():
- if k in id_map:
- param = id_map[k]
- self.optimizer.state[param] = v
-
- def backward(self, loss):
- """Computes the sum of gradients of the given tensor w.r.t. graph leaves.
-
- Compared to :func:`fairseq.optim.FairseqOptimizer.backward`, this
- function additionally dynamically scales the loss to avoid gradient
- underflow.
- """
- if self.scaler is not None:
- loss = self.scaler.scale(loss)
- loss.backward()
-
- def _unscale_grads(self):
- if (
- # Skip the multiplication if it's a no-op (i.e., if _multiply_factor
- # is 1.0). At the same time, we want to avoid the device-to-host
- # transfer by comparing it to 1.0. Since _multiply_factor starts as
- # a Python float, we roughly assume that if it's a tensor then it's
- # probably not =1.0 anymore and we do the multiplication. Otherwise
- # we can safely check the value without a D2H transfer.
- torch.is_tensor(self._multiply_factor)
- or self._multiply_factor != 1.0
- ):
- self.wrapped_optimizer.multiply_grads(self._multiply_factor)
- self._multiply_factor = 1.0
-
- def multiply_grads(self, c):
- """Multiplies grads by a constant *c*."""
- self._multiply_factor *= c
-
- def clip_grad_norm(self, max_norm, aggregate_norm_fn=None):
- """Clips gradient norm and updates dynamic loss scaler."""
- max_norm = float(max_norm)
- grad_norm = self._multiply_factor * self.wrapped_optimizer.clip_grad_norm(
- 0, aggregate_norm_fn
- )
-
- if self.scaler is not None:
- grad_norm_cpu = float(grad_norm)
- if grad_norm_cpu > max_norm > 0.0:
- self._multiply_factor *= max_norm / grad_norm_cpu
-
- # detect overflow and adjust loss scale
- self.scaler.check_overflow(grad_norm_cpu)
- elif max_norm > 0.0:
- clip_coef = (max_norm / (grad_norm + 1e-6)).clamp_(max=1)
- self._multiply_factor *= clip_coef
-
- return grad_norm
-
- def step(self, closure=None, groups=None):
- """Performs a single optimization step."""
- if getattr(self, "supports_step_with_scale", False):
- # NOTE(msb) optimizer divides by scale factor
- self.wrapped_optimizer.step(closure, scale=(1.0 / self._multiply_factor), groups=groups)
- else:
- self._unscale_grads()
- self.wrapped_optimizer.step(closure, groups=groups)
-
- if self.scaler is not None:
- self.scaler.update()
-
- def zero_grad(self):
- """Clears the gradients of all optimized parameters."""
- self.wrapped_optimizer.zero_grad()
- if self.scaler is not None:
- self._multiply_factor = 1.0 / float(self.scaler.loss_scale)
- else:
- self._multiply_factor = 1.0
-
- @property
- def supports_flat_params(self):
- return self.wrapped_optimizer.supports_flat_params
-
-
-class MemoryEfficientFP16Optimizer(
- _MemoryEfficientFP16OptimizerMixin, optim.FairseqOptimizer
-):
- """
- Wrap an *optimizer* to support FP16 (mixed precision) training.
-
- Compared to :class:`fairseq.optim.FP16Optimizer`, this version does not
- maintain an FP32 copy of the model. We instead expect the optimizer to
- convert the gradients to FP32 internally and sync the results back to the
- FP16 model params. This significantly reduces memory usage but slightly
- increases the time spent in the optimizer.
-
- Since this wrapper depends on specific functionality in the wrapped
- optimizer (i.e., on-the-fly conversion of grads to FP32), only certain
- optimizers can be wrapped. This is determined by the
- *supports_memory_efficient_fp16* property.
- """
-
- def __init__(
- self, cfg: DictConfig, params, optimizer, allow_unsupported=False, **kwargs
- ):
- if not allow_unsupported and not optimizer.supports_memory_efficient_fp16:
- raise ValueError(
- "Unsupported optimizer: {}".format(optimizer.__class__.__name__)
- )
-
- super().__init__(getattr(cfg, "optimizer", None))
- self.wrapped_optimizer = optimizer
-
- if getattr(cfg.common, "fp16_scale_window", None) is None:
- if len(cfg.optimization.update_freq) > 1:
- raise ValueError(
- "--fp16-scale-window must be given explicitly when using a "
- "custom --update-freq schedule"
- )
- data_parallel_size = int(
- cfg.distributed_training.distributed_world_size
- / cfg.common.model_parallel_size
- )
- scale_window = int(
- 2 ** 14 / data_parallel_size / cfg.optimization.update_freq[0]
- )
- else:
- scale_window = cfg.common.fp16_scale_window
-
- if not getattr(cfg.common, "bf16", False):
- self.scaler = DynamicLossScaler(
- init_scale=cfg.common.fp16_init_scale,
- scale_window=scale_window,
- tolerance=cfg.common.fp16_scale_tolerance,
- threshold=cfg.common.threshold_loss_scale,
- min_loss_scale=cfg.common.min_loss_scale,
- )
- else:
- # disable loss scaling for bfloat16
- self.scaler = None
-
- @classmethod
- def build_optimizer(cls, cfg: DictConfig, params, **kwargs):
- """
- Args:
-            cfg (omegaconf.DictConfig): fairseq args
- params (iterable): iterable of parameters to optimize
- """
- fp16_optimizer = optim.build_optimizer(cfg.optimizer, params)
- return cls(cfg, params, fp16_optimizer, **kwargs)
-
- @property
- def optimizer(self):
- return self.wrapped_optimizer.optimizer
-
- @optimizer.setter
- def optimizer(self, optimizer):
- self.wrapped_optimizer.optimizer = optimizer
-
- @property
- def optimizer_config(self):
- return self.wrapped_optimizer.optimizer_config
-
- @property
- def lr_scheduler(self):
- return getattr(self.wrapped_optimizer, "lr_scheduler", None)
-
- def get_lr(self):
- return self.wrapped_optimizer.get_lr()
-
- def set_lr(self, lr):
- self.wrapped_optimizer.set_lr(lr)
-
- def all_reduce_grads(self, module):
- self.wrapped_optimizer.all_reduce_grads(module)
diff --git a/spaces/Iceclear/StableSR/StableSR/basicsr/models/video_base_model.py b/spaces/Iceclear/StableSR/StableSR/basicsr/models/video_base_model.py
deleted file mode 100644
index 9f7993a15e585526135d1ede094f4dcff47f64db..0000000000000000000000000000000000000000
--- a/spaces/Iceclear/StableSR/StableSR/basicsr/models/video_base_model.py
+++ /dev/null
@@ -1,160 +0,0 @@
-import torch
-from collections import Counter
-from os import path as osp
-from torch import distributed as dist
-from tqdm import tqdm
-
-from basicsr.metrics import calculate_metric
-from basicsr.utils import get_root_logger, imwrite, tensor2img
-from basicsr.utils.dist_util import get_dist_info
-from basicsr.utils.registry import MODEL_REGISTRY
-from .sr_model import SRModel
-
-
-@MODEL_REGISTRY.register()
-class VideoBaseModel(SRModel):
- """Base video SR model."""
-
- def dist_validation(self, dataloader, current_iter, tb_logger, save_img):
- dataset = dataloader.dataset
- dataset_name = dataset.opt['name']
- with_metrics = self.opt['val']['metrics'] is not None
- # initialize self.metric_results
- # It is a dict: {
- # 'folder1': tensor (num_frame x len(metrics)),
- # 'folder2': tensor (num_frame x len(metrics))
- # }
- if with_metrics:
- if not hasattr(self, 'metric_results'): # only execute in the first run
- self.metric_results = {}
- num_frame_each_folder = Counter(dataset.data_info['folder'])
- for folder, num_frame in num_frame_each_folder.items():
- self.metric_results[folder] = torch.zeros(
- num_frame, len(self.opt['val']['metrics']), dtype=torch.float32, device='cuda')
- # initialize the best metric results
- self._initialize_best_metric_results(dataset_name)
- # zero self.metric_results
- rank, world_size = get_dist_info()
- if with_metrics:
- for _, tensor in self.metric_results.items():
- tensor.zero_()
-
- metric_data = dict()
- # record all frames (border and center frames)
- if rank == 0:
- pbar = tqdm(total=len(dataset), unit='frame')
- for idx in range(rank, len(dataset), world_size):
- val_data = dataset[idx]
- val_data['lq'].unsqueeze_(0)
- val_data['gt'].unsqueeze_(0)
- folder = val_data['folder']
- frame_idx, max_idx = val_data['idx'].split('/')
- lq_path = val_data['lq_path']
-
- self.feed_data(val_data)
- self.test()
- visuals = self.get_current_visuals()
- result_img = tensor2img([visuals['result']])
- metric_data['img'] = result_img
- if 'gt' in visuals:
- gt_img = tensor2img([visuals['gt']])
- metric_data['img2'] = gt_img
- del self.gt
-
- # tentative for out of GPU memory
- del self.lq
- del self.output
- torch.cuda.empty_cache()
-
- if save_img:
- if self.opt['is_train']:
- raise NotImplementedError('saving image is not supported during training.')
- else:
- if 'vimeo' in dataset_name.lower(): # vimeo90k dataset
- split_result = lq_path.split('/')
- img_name = f'{split_result[-3]}_{split_result[-2]}_{split_result[-1].split(".")[0]}'
- else: # other datasets, e.g., REDS, Vid4
- img_name = osp.splitext(osp.basename(lq_path))[0]
-
- if self.opt['val']['suffix']:
- save_img_path = osp.join(self.opt['path']['visualization'], dataset_name, folder,
- f'{img_name}_{self.opt["val"]["suffix"]}.png')
- else:
- save_img_path = osp.join(self.opt['path']['visualization'], dataset_name, folder,
- f'{img_name}_{self.opt["name"]}.png')
- imwrite(result_img, save_img_path)
-
- if with_metrics:
- # calculate metrics
- for metric_idx, opt_ in enumerate(self.opt['val']['metrics'].values()):
- result = calculate_metric(metric_data, opt_)
- self.metric_results[folder][int(frame_idx), metric_idx] += result
-
- # progress bar
- if rank == 0:
- for _ in range(world_size):
- pbar.update(1)
- pbar.set_description(f'Test {folder}: {int(frame_idx) + world_size}/{max_idx}')
- if rank == 0:
- pbar.close()
-
- if with_metrics:
- if self.opt['dist']:
- # collect data among GPUs
- for _, tensor in self.metric_results.items():
- dist.reduce(tensor, 0)
- dist.barrier()
- else:
- pass # assume use one gpu in non-dist testing
-
- if rank == 0:
- self._log_validation_metric_values(current_iter, dataset_name, tb_logger)
-
- def nondist_validation(self, dataloader, current_iter, tb_logger, save_img):
- logger = get_root_logger()
- logger.warning('nondist_validation is not implemented. Run dist_validation.')
- self.dist_validation(dataloader, current_iter, tb_logger, save_img)
-
- def _log_validation_metric_values(self, current_iter, dataset_name, tb_logger):
- # ----------------- calculate the average values for each folder, and for each metric ----------------- #
- # average all frames for each sub-folder
- # metric_results_avg is a dict:{
- # 'folder1': tensor (len(metrics)),
- # 'folder2': tensor (len(metrics))
- # }
- metric_results_avg = {
- folder: torch.mean(tensor, dim=0).cpu()
- for (folder, tensor) in self.metric_results.items()
- }
- # total_avg_results is a dict: {
- # 'metric1': float,
- # 'metric2': float
- # }
- total_avg_results = {metric: 0 for metric in self.opt['val']['metrics'].keys()}
- for folder, tensor in metric_results_avg.items():
- for idx, metric in enumerate(total_avg_results.keys()):
- total_avg_results[metric] += metric_results_avg[folder][idx].item()
- # average among folders
- for metric in total_avg_results.keys():
- total_avg_results[metric] /= len(metric_results_avg)
- # update the best metric result
- self._update_best_metric_result(dataset_name, metric, total_avg_results[metric], current_iter)
-
- # ------------------------------------------ log the metric ------------------------------------------ #
- log_str = f'Validation {dataset_name}\n'
- for metric_idx, (metric, value) in enumerate(total_avg_results.items()):
- log_str += f'\t # {metric}: {value:.4f}'
- for folder, tensor in metric_results_avg.items():
- log_str += f'\t # {folder}: {tensor[metric_idx].item():.4f}'
- if hasattr(self, 'best_metric_results'):
- log_str += (f'\n\t Best: {self.best_metric_results[dataset_name][metric]["val"]:.4f} @ '
- f'{self.best_metric_results[dataset_name][metric]["iter"]} iter')
- log_str += '\n'
-
- logger = get_root_logger()
- logger.info(log_str)
- if tb_logger:
- for metric_idx, (metric, value) in enumerate(total_avg_results.items()):
- tb_logger.add_scalar(f'metrics/{metric}', value, current_iter)
- for folder, tensor in metric_results_avg.items():
- tb_logger.add_scalar(f'metrics/{metric}/{folder}', tensor[metric_idx].item(), current_iter)
diff --git a/spaces/Ikaros521/so-vits-svc-4.0-ikaros2/spec_gen.py b/spaces/Ikaros521/so-vits-svc-4.0-ikaros2/spec_gen.py
deleted file mode 100644
index 9476395adab6fa841fde10c05fbb92902310ebd4..0000000000000000000000000000000000000000
--- a/spaces/Ikaros521/so-vits-svc-4.0-ikaros2/spec_gen.py
+++ /dev/null
@@ -1,22 +0,0 @@
-from data_utils import TextAudioSpeakerLoader
-import json
-from tqdm import tqdm
-
-from utils import HParams
-
-config_path = 'configs/config.json'
-with open(config_path, "r") as f:
- data = f.read()
-config = json.loads(data)
-hps = HParams(**config)
-
-train_dataset = TextAudioSpeakerLoader("filelists/train.txt", hps)
-test_dataset = TextAudioSpeakerLoader("filelists/test.txt", hps)
-eval_dataset = TextAudioSpeakerLoader("filelists/val.txt", hps)
-
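-# Iterate each split once so the loaders compute (and typically cache) their
-# per-utterance features, e.g. spectrograms, before training starts.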
-for _ in tqdm(train_dataset):
- pass
-for _ in tqdm(eval_dataset):
- pass
-for _ in tqdm(test_dataset):
- pass
\ No newline at end of file
diff --git a/spaces/Illumotion/Koboldcpp/.devops/main-rocm.Dockerfile b/spaces/Illumotion/Koboldcpp/.devops/main-rocm.Dockerfile
deleted file mode 100644
index 789deff6dc8c1d44a308d6631af6e7d845641372..0000000000000000000000000000000000000000
--- a/spaces/Illumotion/Koboldcpp/.devops/main-rocm.Dockerfile
+++ /dev/null
@@ -1,44 +0,0 @@
-ARG UBUNTU_VERSION=22.04
-
-# This needs to generally match the container host's environment.
-ARG ROCM_VERSION=5.6
-
-# Target the ROCm dev container image
-ARG BASE_ROCM_DEV_CONTAINER=rocm/dev-ubuntu-${UBUNTU_VERSION}:${ROCM_VERSION}-complete
-
-FROM ${BASE_ROCM_DEV_CONTAINER} as build
-
-# Unless otherwise specified, we make a fat build.
-# List from https://github.com/ggerganov/llama.cpp/pull/1087#issuecomment-1682807878
-# This is mostly tied to rocBLAS supported archs.
-ARG ROCM_DOCKER_ARCH=\
- gfx803 \
- gfx900 \
- gfx906 \
- gfx908 \
- gfx90a \
- gfx1010 \
- gfx1030 \
- gfx1100 \
- gfx1101 \
- gfx1102
-
-COPY requirements.txt requirements.txt
-
-RUN pip install --upgrade pip setuptools wheel \
- && pip install -r requirements.txt
-
-WORKDIR /app
-
-COPY . .
-
-# Set the GPU architecture targets for ROCm
-ENV GPU_TARGETS=${ROCM_DOCKER_ARCH}
-# Enable ROCm
-ENV LLAMA_HIPBLAS=1
-ENV CC=/opt/rocm/llvm/bin/clang
-ENV CXX=/opt/rocm/llvm/bin/clang++
-
-RUN make
-
-ENTRYPOINT [ "/app/main" ]
diff --git a/spaces/Illumotion/Koboldcpp/examples/llama-bench/README.md b/spaces/Illumotion/Koboldcpp/examples/llama-bench/README.md
deleted file mode 100644
index d02824bfa8d2fceeb032364cb2de3b1725150e41..0000000000000000000000000000000000000000
--- a/spaces/Illumotion/Koboldcpp/examples/llama-bench/README.md
+++ /dev/null
@@ -1,271 +0,0 @@
-# llama.cpp/example/llama-bench
-
-Performance testing tool for llama.cpp.
-
-## Table of contents
-
-1. [Syntax](#syntax)
-2. [Examples](#examples)
- 1. [Text generation with different models](#text-generation-with-different-models)
- 2. [Prompt processing with different batch sizes](#prompt-processing-with-different-batch-sizes)
- 3. [Different numbers of threads](#different-numbers-of-threads)
- 4. [Different numbers of layers offloaded to the GPU](#different-numbers-of-layers-offloaded-to-the-gpu)
-3. [Output formats](#output-formats)
- 1. [Markdown](#markdown)
- 2. [CSV](#csv)
- 3. [JSON](#json)
- 4. [SQL](#sql)
-
-## Syntax
-
-```
-usage: ./llama-bench [options]
-
-options:
- -h, --help
- -m, --model <filename> (default: models/7B/ggml-model-q4_0.gguf)
- -p, --n-prompt <n> (default: 512)
- -n, --n-gen <n> (default: 128)
- -b, --batch-size <n> (default: 512)
- --memory-f32 <0|1> (default: 0)
- -t, --threads <n> (default: 16)
- -ngl N, --n-gpu-layers (default: 99)
- -mg i, --main-gpu (default: 0)
- -mmq, --mul-mat-q <0|1> (default: 1)
- -ts, --tensor_split <ts0/ts1/..>
- -r, --repetitions <n> (default: 5)
- -o, --output <csv|json|md|sql> (default: md)
- -v, --verbose (default: 0)
-
-Multiple values can be given for each parameter by separating them with ',' or by specifying the parameter multiple times.
-```
-
-llama-bench can perform two types of tests:
-
-- Prompt processing (pp): processing a prompt in batches (`-p`)
-- Text generation (tg): generating a sequence of tokens (`-n`)
-
-With the exception of `-r`, `-o` and `-v`, all options can be specified multiple times to run multiple tests. Each pp and tg test is run with all combinations of the specified options. To specify multiple values for an option, the values can be separated by commas (e.g. `-n 16,32`), or the option can be specified multiple times (e.g. `-n 16 -n 32`).
-
-Each test is repeated the number of times given by `-r`, and the results are averaged. The results are given in average tokens per second (t/s) and standard deviation. Some output formats (e.g. json) also include the individual results of each repetition.
-
-For a description of the other options, see the [main example](../main/README.md).
-
-## Examples
-
-### Text generation with different models
-
-```sh
-$ ./llama-bench -m models/7B/ggml-model-q4_0.gguf -m models/13B/ggml-model-q4_0.gguf -p 0 -n 128,256,512
-```
-
-| model | size | params | backend | ngl | test | t/s |
-| ------------------------------ | ---------: | ---------: | ---------- | --: | ---------- | ---------------: |
-| llama 7B mostly Q4_0 | 3.56 GiB | 6.74 B | CUDA | 99 | tg 128 | 132.19 ± 0.55 |
-| llama 7B mostly Q4_0 | 3.56 GiB | 6.74 B | CUDA | 99 | tg 256 | 129.37 ± 0.54 |
-| llama 7B mostly Q4_0 | 3.56 GiB | 6.74 B | CUDA | 99 | tg 512 | 123.83 ± 0.25 |
-| llama 13B mostly Q4_0 | 6.86 GiB | 13.02 B | CUDA | 99 | tg 128 | 82.17 ± 0.31 |
-| llama 13B mostly Q4_0 | 6.86 GiB | 13.02 B | CUDA | 99 | tg 256 | 80.74 ± 0.23 |
-| llama 13B mostly Q4_0 | 6.86 GiB | 13.02 B | CUDA | 99 | tg 512 | 78.08 ± 0.07 |
-
-### Prompt processing with different batch sizes
-
-```sh
-$ ./llama-bench -n 0 -p 1024 -b 128,256,512,1024
-```
-
-| model | size | params | backend | ngl | n_batch | test | t/s |
-| ------------------------------ | ---------: | ---------: | ---------- | --: | ---------: | ---------- | ---------------: |
-| llama 7B mostly Q4_0 | 3.56 GiB | 6.74 B | CUDA | 99 | 128 | pp 1024 | 1436.51 ± 3.66 |
-| llama 7B mostly Q4_0 | 3.56 GiB | 6.74 B | CUDA | 99 | 256 | pp 1024 | 1932.43 ± 23.48 |
-| llama 7B mostly Q4_0 | 3.56 GiB | 6.74 B | CUDA | 99 | 512 | pp 1024 | 2254.45 ± 15.59 |
-| llama 7B mostly Q4_0 | 3.56 GiB | 6.74 B | CUDA | 99 | 1024 | pp 1024 | 2498.61 ± 13.58 |
-
-### Different numbers of threads
-
-```sh
-$ ./llama-bench -n 0 -n 16 -p 64 -t 1,2,4,8,16,32
-```
-
-| model | size | params | backend | threads | test | t/s |
-| ------------------------------ | ---------: | ---------: | ---------- | ---------: | ---------- | ---------------: |
-| llama 7B mostly Q4_0 | 3.56 GiB | 6.74 B | CPU | 1 | pp 64 | 6.17 ± 0.07 |
-| llama 7B mostly Q4_0 | 3.56 GiB | 6.74 B | CPU | 1 | tg 16 | 4.05 ± 0.02 |
-| llama 7B mostly Q4_0 | 3.56 GiB | 6.74 B | CPU | 2 | pp 64 | 12.31 ± 0.13 |
-| llama 7B mostly Q4_0 | 3.56 GiB | 6.74 B | CPU | 2 | tg 16 | 7.80 ± 0.07 |
-| llama 7B mostly Q4_0 | 3.56 GiB | 6.74 B | CPU | 4 | pp 64 | 23.18 ± 0.06 |
-| llama 7B mostly Q4_0 | 3.56 GiB | 6.74 B | CPU | 4 | tg 16 | 12.22 ± 0.07 |
-| llama 7B mostly Q4_0 | 3.56 GiB | 6.74 B | CPU | 8 | pp 64 | 32.29 ± 1.21 |
-| llama 7B mostly Q4_0 | 3.56 GiB | 6.74 B | CPU | 8 | tg 16 | 16.71 ± 0.66 |
-| llama 7B mostly Q4_0 | 3.56 GiB | 6.74 B | CPU | 16 | pp 64 | 33.52 ± 0.03 |
-| llama 7B mostly Q4_0 | 3.56 GiB | 6.74 B | CPU | 16 | tg 16 | 15.32 ± 0.05 |
-| llama 7B mostly Q4_0 | 3.56 GiB | 6.74 B | CPU | 32 | pp 64 | 59.00 ± 1.11 |
-| llama 7B mostly Q4_0 | 3.56 GiB | 6.74 B | CPU | 32 | tg 16 | 16.41 ± 0.79 |
-
-### Different numbers of layers offloaded to the GPU
-
-```sh
-$ ./llama-bench -ngl 10,20,30,31,32,33,34,35
-```
-
-| model | size | params | backend | ngl | test | t/s |
-| ------------------------------ | ---------: | ---------: | ---------- | --: | ---------- | ---------------: |
-| llama 7B mostly Q4_0 | 3.56 GiB | 6.74 B | CUDA | 10 | pp 512 | 373.36 ± 2.25 |
-| llama 7B mostly Q4_0 | 3.56 GiB | 6.74 B | CUDA | 10 | tg 128 | 13.45 ± 0.93 |
-| llama 7B mostly Q4_0 | 3.56 GiB | 6.74 B | CUDA | 20 | pp 512 | 472.65 ± 1.25 |
-| llama 7B mostly Q4_0 | 3.56 GiB | 6.74 B | CUDA | 20 | tg 128 | 21.36 ± 1.94 |
-| llama 7B mostly Q4_0 | 3.56 GiB | 6.74 B | CUDA | 30 | pp 512 | 631.87 ± 11.25 |
-| llama 7B mostly Q4_0 | 3.56 GiB | 6.74 B | CUDA | 30 | tg 128 | 40.04 ± 1.82 |
-| llama 7B mostly Q4_0 | 3.56 GiB | 6.74 B | CUDA | 31 | pp 512 | 657.89 ± 5.08 |
-| llama 7B mostly Q4_0 | 3.56 GiB | 6.74 B | CUDA | 31 | tg 128 | 48.19 ± 0.81 |
-| llama 7B mostly Q4_0 | 3.56 GiB | 6.74 B | CUDA | 32 | pp 512 | 688.26 ± 3.29 |
-| llama 7B mostly Q4_0 | 3.56 GiB | 6.74 B | CUDA | 32 | tg 128 | 54.78 ± 0.65 |
-| llama 7B mostly Q4_0 | 3.56 GiB | 6.74 B | CUDA | 33 | pp 512 | 704.27 ± 2.24 |
-| llama 7B mostly Q4_0 | 3.56 GiB | 6.74 B | CUDA | 33 | tg 128 | 60.62 ± 1.76 |
-| llama 7B mostly Q4_0 | 3.56 GiB | 6.74 B | CUDA | 34 | pp 512 | 881.34 ± 5.40 |
-| llama 7B mostly Q4_0 | 3.56 GiB | 6.74 B | CUDA | 34 | tg 128 | 71.76 ± 0.23 |
-| llama 7B mostly Q4_0 | 3.56 GiB | 6.74 B | CUDA | 35 | pp 512 | 2400.01 ± 7.72 |
-| llama 7B mostly Q4_0 | 3.56 GiB | 6.74 B | CUDA | 35 | tg 128 | 131.66 ± 0.49 |
-
-## Output formats
-
-By default, llama-bench outputs the results in markdown format. The results can be output in other formats by using the `-o` option.
-
-### Markdown
-
-```sh
-$ ./llama-bench -o md
-```
-
-| model | size | params | backend | ngl | test | t/s |
-| ------------------------------ | ---------: | ---------: | ---------- | --: | ---------- | ---------------: |
-| llama 7B mostly Q4_0 | 3.56 GiB | 6.74 B | CUDA | 99 | pp 512 | 2368.80 ± 93.24 |
-| llama 7B mostly Q4_0 | 3.56 GiB | 6.74 B | CUDA | 99 | tg 128 | 131.42 ± 0.59 |
-
-### CSV
-
-```sh
-$ ./llama-bench -o csv
-```
-
-```csv
-build_commit,build_number,cuda,opencl,metal,gpu_blas,blas,cpu_info,gpu_info,model_filename,model_type,model_size,model_n_params,n_batch,n_threads,f16_kv,n_gpu_layers,main_gpu,mul_mat_q,tensor_split,n_prompt,n_gen,test_time,avg_ns,stddev_ns,avg_ts,stddev_ts
-"3469684","1275","1","0","0","1","1","13th Gen Intel(R) Core(TM) i9-13900K","NVIDIA GeForce RTX 3090 Ti","models/7B/ggml-model-q4_0.gguf","llama 7B mostly Q4_0","3825065984","6738415616","512","16","1","99","0","1","0.00","512","0","2023-09-23T12:09:01Z","212155977","732372","2413.341687","8.305961"
-"3469684","1275","1","0","0","1","1","13th Gen Intel(R) Core(TM) i9-13900K","NVIDIA GeForce RTX 3090 Ti","models/7B/ggml-model-q4_0.gguf","llama 7B mostly Q4_0","3825065984","6738415616","512","16","1","99","0","1","0.00","0","128","2023-09-23T12:09:02Z","969320879","2728399","132.052051","0.371342"
-```
-
-### JSON
-
-```sh
-$ ./llama-bench -o json
-```
-
-```json
-[
- {
- "build_commit": "3469684",
- "build_number": 1275,
- "cuda": true,
- "opencl": false,
- "metal": false,
- "gpu_blas": true,
- "blas": true,
- "cpu_info": "13th Gen Intel(R) Core(TM) i9-13900K",
- "gpu_info": "NVIDIA GeForce RTX 3090 Ti",
- "model_filename": "models/7B/ggml-model-q4_0.gguf",
- "model_type": "llama 7B mostly Q4_0",
- "model_size": 3825065984,
- "model_n_params": 6738415616,
- "n_batch": 512,
- "n_threads": 16,
- "f16_kv": true,
- "n_gpu_layers": 99,
- "main_gpu": 0,
- "mul_mat_q": true,
- "tensor_split": "0.00",
- "n_prompt": 512,
- "n_gen": 0,
- "test_time": "2023-09-23T12:09:57Z",
- "avg_ns": 212365953,
- "stddev_ns": 985423,
- "avg_ts": 2410.974041,
- "stddev_ts": 11.163766,
- "samples_ns": [ 213837238, 211635853, 212328053, 211329715, 212698907 ],
- "samples_ts": [ 2394.34, 2419.25, 2411.36, 2422.75, 2407.16 ]
- },
- {
- "build_commit": "3469684",
- "build_number": 1275,
- "cuda": true,
- "opencl": false,
- "metal": false,
- "gpu_blas": true,
- "blas": true,
- "cpu_info": "13th Gen Intel(R) Core(TM) i9-13900K",
- "gpu_info": "NVIDIA GeForce RTX 3090 Ti",
- "model_filename": "models/7B/ggml-model-q4_0.gguf",
- "model_type": "llama 7B mostly Q4_0",
- "model_size": 3825065984,
- "model_n_params": 6738415616,
- "n_batch": 512,
- "n_threads": 16,
- "f16_kv": true,
- "n_gpu_layers": 99,
- "main_gpu": 0,
- "mul_mat_q": true,
- "tensor_split": "0.00",
- "n_prompt": 0,
- "n_gen": 128,
- "test_time": "2023-09-23T12:09:59Z",
- "avg_ns": 977425219,
- "stddev_ns": 9268593,
- "avg_ts": 130.965708,
- "stddev_ts": 1.238924,
- "samples_ns": [ 984472709, 974901233, 989474741, 970729355, 967548060 ],
- "samples_ts": [ 130.019, 131.295, 129.362, 131.86, 132.293 ]
- }
-]
-```
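-
-The JSON output is convenient for scripting. As a minimal sketch (not from the original README), assuming `jq` is installed, the fields of interest can be pulled out of each test record like this:
-
-```sh
-# Print prompt length, generation length, and average tokens/sec per test
-$ ./llama-bench -o json | jq '.[] | {n_prompt, n_gen, avg_ts}'
-```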
-
-### SQL
-
-SQL output is suitable for importing into a SQLite database. The output can be piped into the `sqlite3` command line tool to add the results to a database.
-
-```sh
-$ ./llama-bench -o sql
-```
-
-```sql
-CREATE TABLE IF NOT EXISTS test (
- build_commit TEXT,
- build_number INTEGER,
- cuda INTEGER,
- opencl INTEGER,
- metal INTEGER,
- gpu_blas INTEGER,
- blas INTEGER,
- cpu_info TEXT,
- gpu_info TEXT,
- model_filename TEXT,
- model_type TEXT,
- model_size INTEGER,
- model_n_params INTEGER,
- n_batch INTEGER,
- n_threads INTEGER,
- f16_kv INTEGER,
- n_gpu_layers INTEGER,
- main_gpu INTEGER,
- mul_mat_q INTEGER,
- tensor_split TEXT,
- n_prompt INTEGER,
- n_gen INTEGER,
- test_time TEXT,
- avg_ns INTEGER,
- stddev_ns INTEGER,
- avg_ts REAL,
- stddev_ts REAL
-);
-
-INSERT INTO test (build_commit, build_number, cuda, opencl, metal, gpu_blas, blas, cpu_info, gpu_info, model_filename, model_type, model_size, model_n_params, n_batch, n_threads, f16_kv, n_gpu_layers, main_gpu, mul_mat_q, tensor_split, n_prompt, n_gen, test_time, avg_ns, stddev_ns, avg_ts, stddev_ts) VALUES ('3469684', '1275', '1', '0', '0', '1', '1', '13th Gen Intel(R) Core(TM) i9-13900K', 'NVIDIA GeForce RTX 3090 Ti', 'models/7B/ggml-model-q4_0.gguf', 'llama 7B mostly Q4_0', '3825065984', '6738415616', '512', '16', '1', '99', '0', '1', '0.00', '512', '0', '2023-09-23T12:10:30Z', '212693772', '743623', '2407.240204', '8.409634');
-INSERT INTO test (build_commit, build_number, cuda, opencl, metal, gpu_blas, blas, cpu_info, gpu_info, model_filename, model_type, model_size, model_n_params, n_batch, n_threads, f16_kv, n_gpu_layers, main_gpu, mul_mat_q, tensor_split, n_prompt, n_gen, test_time, avg_ns, stddev_ns, avg_ts, stddev_ts) VALUES ('3469684', '1275', '1', '0', '0', '1', '1', '13th Gen Intel(R) Core(TM) i9-13900K', 'NVIDIA GeForce RTX 3090 Ti', 'models/7B/ggml-model-q4_0.gguf', 'llama 7B mostly Q4_0', '3825065984', '6738415616', '512', '16', '1', '99', '0', '1', '0.00', '0', '128', '2023-09-23T12:10:31Z', '977925003', '4037361', '130.891159', '0.537692');
-```
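-
-As a minimal sketch of the piping workflow described above (the database file name `llama-bench.sqlite` is arbitrary), the results can be stored and then queried, assuming the `sqlite3` command line tool is installed:
-
-```sh
-# Create/append to a SQLite database from the SQL output, then query it
-$ ./llama-bench -o sql | sqlite3 llama-bench.sqlite
-$ sqlite3 llama-bench.sqlite 'SELECT test_time, n_prompt, n_gen, avg_ts FROM test;'
-```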
diff --git a/spaces/Illumotion/Koboldcpp/examples/quantize-stats/quantize-stats.cpp b/spaces/Illumotion/Koboldcpp/examples/quantize-stats/quantize-stats.cpp
deleted file mode 100644
index dd76b1ceef134d2cdafe01c9c458a6bd32ee2abb..0000000000000000000000000000000000000000
--- a/spaces/Illumotion/Koboldcpp/examples/quantize-stats/quantize-stats.cpp
+++ /dev/null
@@ -1,424 +0,0 @@
-#define LLAMA_API_INTERNAL
-#include "build-info.h"
-#include "common.h"
-#include "ggml.h"
-#include "llama.h"
-
-#include <algorithm>
-#include <cassert>
-#include <cinttypes>
-#include <cmath>
-#include <cstdio>
-#include <cstring>
-#include <map>