Among Us is one of the most popular and addictive multiplayer games of 2021. It is a game of teamwork and betrayal, where you have to work together with your crewmates to complete tasks on a spaceship, while avoiding being killed by an impostor who is secretly among you. But how can you download a license key for Among Us and enjoy the full version of the game? In this article, we will explain what Among Us is, why you need a license key, how to get one, and how to use it.
-Among Us is a game developed by Innersloth, an indie game studio based in Washington, USA. It was released in 2018, but it gained massive popularity in 2020 thanks to streamers and influencers who played it online. The game can be played online or over local WiFi with 4 to 15 players. One or more players are randomly assigned as impostors, who can sabotage the ship, vent through hidden passages, and kill crewmates. The rest of the players are crewmates, who have to work together to complete tasks and find the impostors. There are four maps to choose from: The Skeld, MIRA HQ, Polus, and the Airship, plus several game modes, such as Classic and Hide n Seek. The game is available on Android, iOS, PC, and console platforms.
- A license key is a string of characters that is used to activate a software application or product. It is a way of verifying that the user has purchased or obtained the software legally and has the right to use it. A license key can also provide information about the software version, features, expiration date, and other details. A license key is usually required to unlock the full functionality of a software application or product. For example, some software applications may have limited features or time restrictions in their trial or demo versions, which can be removed by entering a valid license key.
- In the case of Among Us, a license key is needed to activate the full version of the game on PC or console platforms. The full version of the game allows you to play online or offline with up to 15 players, customize your character and game settings, access all four maps and game modes, and enjoy cross-platform play with other devices. The full version of the game also includes 33 Steam achievements that you can unlock by playing. The full version of the game costs $4.99 on Steam, but you can also get it for free by using a license key generator.
- The easiest and safest way to get a license key for Among Us is to buy the game from official platforms such as Steam, Google Play Store, App Store, or Nintendo eShop. By buying the game from official platforms, you can support the developers and ensure that you get a valid and secure license key that works with your device. You can also enjoy updates, bug fixes, new features, and customer support from the developers. Buying the game from official platforms also reduces the risk of getting malware, viruses, or other harmful software that may come with pirated or cracked versions of the game.
- Use a License Key Generator
-Another way to get a license key for Among Us is to use a license key generator. A license key generator is a software tool that can create random and unique license keys for various software applications or products. Some license key generators are designed specifically for certain games or programs, while others can generate license keys for multiple games or programs. A license key generator can be downloaded from various websites or platforms, such as YouTube, Reddit, or Discord. However, using a license key generator has some risks and drawbacks that you should be aware of.
- What is a License Key Generator and How Does It Work?
-A license key generator is a software tool that uses an algorithm to create random and unique license keys for various software applications or products. The algorithm can be based on the serial number, product code, or other information of the software application or product. The algorithm can also be based on the user's device information, such as the IP address, MAC address, or hardware ID. The license key generator can then output a license key that matches the criteria of the software application or product. The user can then enter the license key in the software application or product to activate it.
- Pros and Cons of Using a License Key Generator
-Using a license key generator has some pros and cons that you should consider before using it. Here are some of them:
| Pros | Cons |
| --- | --- |
| You can get a license key for Among Us for free without paying anything. | You may get a license key that is invalid, expired, or already used by someone else. |
| You can get a license key for Among Us quickly and easily without waiting for delivery or verification. | You may get a license key that is incompatible with your device or platform. |
| You can get a license key for Among Us without providing any personal or financial information. | You may get malware, viruses, or other harmful software that can damage your device or steal your data. |
| You can get a license key for Among Us without supporting the developers or publishers of the game. | You may violate the terms of service, privacy policy, or intellectual property rights of the developers or publishers of the game. |
- As you can see, using a license key generator has more cons than pros. Therefore, we do not recommend using a license key generator to get a license key for Among Us. Instead, we suggest buying the game from official platforms to support the developers and enjoy the game safely and legally.
- How to Use a License Key for Among Us
-Enter the License Key in the Game Settings
-If you have obtained a valid and compatible license key for Among Us, you can use it to activate the full version of the game on your device. To do this, you need to follow these steps:
1. Launch the game on your device and go to the main menu.
2. Click on the gear icon in the top right corner to open the game settings.
3. Click on the "Enter License Key" button in the bottom left corner to open the license key input screen.
4. Type or paste your license key in the text box and click on the "Activate" button.
5. If your license key is valid and compatible, you will see a confirmation message that says "License Key Activated".
6. Click on the "OK" button to close the confirmation message and return to the game settings.

You have now successfully activated the full version of Among Us on your device.
- Enjoy the Full Features of the Game
-Now that you have activated the full version of Among Us on your device, you can enjoy all the features and benefits that it offers. You can play online or offline with up to 15 players, customize your character and game settings, access all four maps and game modes, and enjoy cross-platform play with other devices. You can also unlock 33 Steam achievements by playing. You can have fun and suspenseful games with your friends or strangers online, or create your own rules and scenarios with your own settings. You can also join various communities and groups of Among Us players online, such as Discord servers, Reddit forums, YouTube channels, Twitch streams, and more. You can share your experiences, tips, tricks, memes, fan art, theories, and more with other fans of the game. You can also keep up with the latest news, updates, events, and announcements from the developers of Among Us.
Conclusion
-In conclusion, Among Us is a fun and suspenseful multiplayer game that you can play with your friends or strangers online. However, to enjoy the full version of the game on PC or console platforms, you need a license key to activate it. You can get a license key for Among Us by buying the game from official platforms or by using a license key generator. However, we recommend buying the game from official platforms to support the developers and avoid any risks or drawbacks that may come with using a license key generator. Once you have a valid and compatible license key for Among Us, you can enter it in the game settings and enjoy all the features and benefits that the full version of the game offers. We hope this article has helped you learn how to download a license key for Among Us and have fun playing the game.
- FAQs
-Q: Can I play Among Us for free on PC or console platforms?
-A: No, you cannot play Among Us for free on PC or console platforms. You need to buy the game from official platforms or use a license key to activate the full version of the game. However, you can play Among Us for free on Android or iOS devices by downloading the game from Google Play Store or App Store.
- Q: How can I find a license key generator for Among Us?
-A: You can find a license key generator for Among Us by searching online on various websites or platforms, such as YouTube, Reddit, or Discord. However, we do not recommend using a license key generator for Among Us as it may have some risks and drawbacks that can harm your device or violate the rights of the developers.
- Q: How can I check if my license key for Among Us is valid and compatible?
-A: You can check if your license key for Among Us is valid and compatible by entering it in the game settings and seeing if it activates the full version of the game. If your license key is invalid, expired, or already used, you will see an error message that says "License Key Invalid". If your license key is incompatible with your device or platform, you will see an error message that says "License Key Incompatible".
- Q: How can I update my version of Among Us after activating it with a license key?
-A: You can update your version of Among Us after activating it with a license key by downloading and installing the latest updates from official platforms such as Steam, Google Play Store, App Store, or Nintendo eShop. You can also check for updates in the game settings and see if there are any new features or bug fixes available.
- Q: How can I contact the developers of Among Us if I have any questions or issues with the game?
-A: You can contact the developers of Among Us if you have any questions or issues with the game by visiting their official website, where you can find their email address, social media accounts, and support page. You can also join their Discord server, where you can chat with other players and developers, report bugs, give feedback, and get help.
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Apex Legends Mobile Everything You Need to Know About the APK Version.md b/spaces/fatiXbelha/sd/Apex Legends Mobile Everything You Need to Know About the APK Version.md
deleted file mode 100644
index 088b3127d23e4551105a07b311c377d994313536..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Apex Legends Mobile Everything You Need to Know About the APK Version.md
+++ /dev/null
@@ -1,187 +0,0 @@
-
-Apex Legends APK: How to Download and Play the Mobile Version of the Popular Battle Royale Game
- If you are a fan of battle royale games, you have probably heard of Apex Legends, the free-to-play hero shooter game that has taken the gaming world by storm. Developed by Respawn Entertainment and published by Electronic Arts, Apex Legends is set in the same sci-fi universe as the Titanfall series, where players compete in squads of three to be the last team standing in a map filled with weapons, loot, and enemies.
- Apex Legends was originally released for PlayStation 4, Xbox One, and PC in February 2019, and has since amassed over 100 million players worldwide. In May 2020, EA announced that they were working on a mobile version of the game, which would be optimized for touchscreen devices and offer unique content and gameplay modes. The mobile version, called Apex Legends Mobile, is currently in beta testing in select regions, and is expected to launch globally in 2023.
- If you are eager to try out Apex Legends Mobile on your Android device, you might be wondering how to download and play the game. In this article, we will show you how to get the Apex Legends APK file, which is an application package that contains all the necessary files to install and run the game on your phone. We will also tell you about the features, tips, tricks, and FAQs of Apex Legends Mobile, so you can enjoy the game to the fullest.
- How to Download Apex Legends APK for Android Devices
- The first step to play Apex Legends Mobile on your Android device is to download the APK file from a reliable source. You can use the link below to download the latest version of Apex Legends APK from APKCombo, a trusted website that offers safe and secure downloads of various apps and games.
- Apex Legends APK (Android Game) - Free Download - APKCombo
- Once you have downloaded the APK file, which is about 3.61 GB in size, you need to enable the installation of apps from unknown sources on your device. To do this, go to Settings > Security > Unknown Sources and toggle it on. This will allow you to install apps that are not from the Google Play Store.
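Since a download of that size can be corrupted partway through, it is worth checking the file before you try to install it. Purely as an illustrative sketch (the file path is a placeholder, and APKCombo may or may not publish checksums for its files), the following Kotlin snippet computes a SHA-256 digest of a downloaded APK so it can be compared against a checksum listed on the download page:

```kotlin
import java.io.File
import java.security.MessageDigest

// Sketch: compute the SHA-256 digest of a downloaded file so it can be
// compared against a checksum published by the download site (if any).
fun sha256Of(file: File): String {
    val digest = MessageDigest.getInstance("SHA-256")
    file.inputStream().use { input ->
        val buffer = ByteArray(8192)
        while (true) {
            val read = input.read(buffer)
            if (read == -1) break
            digest.update(buffer, 0, read)
        }
    }
    return digest.digest().joinToString("") { "%02x".format(it) }
}

fun main() {
    // Placeholder path; use the actual location of your download.
    println(sha256Of(File("/sdcard/Download/apex-legends-mobile.apk")))
}
```

If the digest you compute does not match the one shown on the download page, the safer choice is to delete the file and download it again.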
- How to Install and Play Apex Legends APK on Your Phone
- After enabling unknown sources, you can proceed to install Apex Legends APK on your phone. To do this, follow these steps:

1. Locate the downloaded APK file on your device using a file manager app or your browser's download history.
2. Tap on the file and select Install. You might see a warning message that says "This type of file can harm your device"; tap OK to continue if you trust the source.
3. Wait for the installation process to complete. It might take a few minutes depending on your device's performance.
4. Once the installation is done, launch the game by tapping the Apex Legends icon on your home screen or app drawer.
5. Grant the permissions the game asks for, such as access to your storage, microphone, and location, by tapping Allow.
6. Log in with your EA account or create a new one. This is required to play the game online and sync your progress across devices. You can also link your Facebook or Google account to your EA account for easier access.
7. After logging in, you will see the main menu of the game, where you can choose your game mode, customize your legend, view your stats, and more.
8. Tap on Play to start a match. You will be matched with two other players to form a squad, and you can invite friends who have the game installed to join your squad.
9. Before the match begins, select your legend from a roster of 16 characters, each with their own unique abilities and playstyle. You can also see the abilities of your squad mates and communicate with them using voice chat or text chat.
10. Once the match starts, you and your squad will be dropped into a large map with up to 19 other squads. Your goal is to survive and eliminate the other squads while looting weapons, armor, ammo, and other items along the way.
11. The last squad standing wins the match and becomes the champion of the Apex Games.
- Congratulations! You have successfully installed and played Apex Legends Mobile on your Android device. Now, let's take a look at some of the features that make this game different from other battle royale games.
- Apex Legends Mobile Features: What Makes It Different from Other Battle Royale Games?
- Apex Legends Mobile is not just a port of the PC and console version of Apex Legends. It is a standalone game that has been designed specifically for mobile devices, with some unique features and innovations that make it stand out from the crowd. Here are some of them:
- Strategic Gameplay with Iconic Legends
- One of the main attractions of Apex Legends Mobile is the diverse and dynamic cast of legends that you can play as. Each legend has their own personality, backstory, and abilities that can change the course of the game. For example, Bangalore is a professional soldier who can use smoke grenades and artillery strikes to create cover and deal damage. Lifeline is a combat medic who can heal herself and her allies with her drone and revive them faster with her shield. Wraith is a mysterious interdimensional traveler who can create portals and phase out of danger. And so on.
- Choosing the right legend for your playstyle and strategy is crucial, as each one has their own strengths and weaknesses. You also need to consider how your legend synergizes with your squad mates and how they counter or complement the enemy legends. For instance, Bloodhound is a tracker who can reveal enemy locations and movements with their scan and ultimate ability. They work well with Caustic, who can trap enemies with his gas canisters and grenades. However, they are vulnerable to Crypto, who can hack and disable their abilities with his drone.
- The game also features a ping system that lets you communicate with your squad without using voice chat. You can ping locations, enemies, items, and more with just a tap of a button. You can also use contextual voice lines that convey your intentions and emotions to your team. For example, you can say "I need ammo" or "I'm down" or "Good job" with just a few taps.
- Team-Based Multiplayer Hero Shooter
- Another feature that sets Apex Legends Mobile apart from other battle royale games is that it is a team-based multiplayer hero shooter, rather than a solo or duo survival game. This means that you have to work together with your squad mates to survive and win, rather than go it alone or betray them for personal gain.
- The game encourages teamwork and coordination by giving each squad member a role and a responsibility. For example, one squad member is designated as the jumpmaster, who decides where and when to drop from the dropship at the start of the match. Another squad member is designated as the kill leader, who has the most kills in the match and gets a bounty on their head. The third squad member is designated as the champion, who won the previous match and gets extra XP for winning again.
- The game also rewards teamwork by giving you bonuses for assisting or reviving your squad mates, sharing loot with them, or completing challenges together. You also get extra XP for playing with friends or joining clubs, which are groups of players who share similar interests or goals.
- High-Octane Battle Royale Competition
- Apex Legends Mobile is not for the faint of heart. It is a fast-paced and action-packed game that requires quick reflexes, sharp aim, and tactical thinking. The game features a variety of weapons, items, and vehicles that you can use to fight your way to victory. You can also use the environment to your advantage, such as sliding down hills, jumping off cliffs, or ziplining across buildings.
- The game also has a unique feature called the ring, which is a shrinking circle of electricity that forces the players to move closer together as the match progresses. The ring deals damage to anyone who stays outside of it, and the damage increases as the match goes on. The ring adds an element of tension and urgency to the game, as you have to balance between looting, fighting, and moving.
- The game also has a feature called the respawn beacon, which allows you to bring back your fallen squad mates if you can recover their banner within a limited time. The respawn beacon is a risky but rewarding move, as you have to expose yourself to enemy fire and alert them of your location. However, it can also turn the tide of the battle if you can revive your squad and regroup.
- Mobile First Adaptations and Innovations
- As mentioned earlier, Apex Legends Mobile is not just a port of the PC and console version of Apex Legends. It is a game that has been built from the ground up for mobile devices, with some adaptations and innovations that make it more suitable and enjoyable for mobile gamers. Here are some of them:
- The game has a simplified and intuitive user interface that allows you to access all the functions and features of the game with ease. You can customize your controls, sensitivity, graphics, and sound settings to suit your preferences.
- The game has a streamlined and optimized performance that ensures smooth and stable gameplay on most Android devices. You can choose between low, medium, or high graphics settings depending on your device's capabilities.
- The game has a smaller map size and shorter match duration than the PC and console version of Apex Legends. This makes the game more fast-paced and exciting, as well as more convenient for mobile gamers who have limited time or battery life.
- The game has some exclusive content and gameplay modes that are only available on mobile devices. For example, you can play in ranked mode, which matches you with players of similar skill level and rewards you with points and rewards based on your performance. You can also play in special events, which offer unique challenges and rewards for a limited time.
- These are some of the features that make Apex Legends Mobile different from other battle royale games on mobile devices. Now that you know what to expect from the game, let's move on to some tips and tricks that can help you survive and win in the Apex Games.
- Apex Legends Mobile Tips and Tricks: How to Survive and Win in the Apex Games?
- Apex Legends Mobile is a game that requires skill, strategy, and teamwork to succeed. It is not enough to just shoot your way through the enemies. You also need to use your legend's abilities, communicate with your squad, loot smartly, and move wisely. Here are some tips and tricks that can help you improve your gameplay and increase your chances of becoming the champion:
- Choose Your Legend Wisely and Use Their Abilities Effectively
- As we mentioned before, each legend has their own unique abilities and playstyles that can change the course of the game. Therefore, it is important to choose a legend that suits your preferences and complements your squad. You can try out different legends in training mode or casual mode before committing to one in ranked mode or special events.
- Once you have chosen your legend, you need to learn how to use their abilities effectively. Each legend has three abilities: a passive ability that is always active, a tactical ability that has a cooldown timer, and an ultimate ability that charges up over time or by using ultimate accelerants. You need to know when and how to use each ability depending on the situation.
- For example, Mirage is a trickster who can create holographic decoys of himself to confuse and distract enemies. His passive ability allows him to turn invisible when he is knocked down or when he revives an ally. His tactical ability allows him to send out a decoy that mimics his movements or stands still. His ultimate ability allows him to deploy a team of decoys that run in different directions while he turns invisible.
- You can use Mirage's abilities to escape from danger, lure enemies into traps, or flank them from behind. However, you also need to be careful not to reveal yourself by shooting or making noise while invisible. You also need to be aware of the enemy's abilities and how they can counter or expose your decoys. For example, Bloodhound can scan and reveal your location, Crypto can hack and destroy your decoys, and Caustic can gas and damage you while you are invisible.
- Similarly, you need to learn how to use the abilities of other legends and how they interact with each other. You can also use the firing range mode to practice your skills and test out different combinations of legends and weapons.
- Communicate and Coordinate with Your Squad Mates
- Apex Legends Mobile is a team-based game that requires communication and coordination with your squad mates to win. You need to work together as a unit, rather than as individuals, to survive and eliminate the enemies. You also need to support each other, share resources, and revive each other when needed.
- The game offers various ways to communicate with your squad mates, such as voice chat, text chat, and ping system. You can use these tools to convey information, such as enemy locations, loot suggestions, attack plans, or danger warnings. You can also use them to express your emotions, such as gratitude, apology, or encouragement.
- However, communication is not enough. You also need to coordinate with your squad mates and follow a common strategy. You need to decide where to drop, where to move, when to fight, when to retreat, and when to use your abilities. You also need to adapt to the changing situations and react accordingly.
- For example, if you are playing as Lifeline, you need to coordinate with your squad mates and heal them when they are low on health or shield. You also need to use your ultimate ability wisely and call in a care package that contains high-tier loot for your squad. However, you also need to be careful not to attract unwanted attention from the enemies or expose yourself while using your drone or shield.
- Similarly, if you are playing as Gibraltar, you need to coordinate with your squad mates and protect them with your dome shield when they are under fire or reviving someone. You also need to use your ultimate ability effectively and bombard the enemies with a barrage of missiles. However, you also need to be aware of friendly fire and avoid hurting your squad mates or yourself with your own missiles.
- Loot Smartly and Manage Your Inventory
- Looting is an essential part of Apex Legends Mobile, as it allows you to find weapons, armor, ammo, and other items that can help you survive and fight better. However, looting is not just about grabbing everything you see. You also need to loot smartly and manage your inventory wisely.
- The game features a variety of weapons that fall into different categories, such as pistols, shotguns, SMGs, rifles, snipers, and LMGs. Each weapon has its own stats, such as damage, fire rate, recoil, magazine size, and range. Each weapon also has its own ammo type, such as light ammo, heavy ammo, energy ammo, or shotgun ammo.
- You need to choose a weapon that suits your playstyle and strategy. For example, if you like close-range combat, you might want to use a shotgun or an SMG. If you prefer long-range combat, you might want to use a sniper or a rifle. You also need to consider the ammo availability and compatibility of your weapons. For example, if you use two weapons that use the same ammo type, you might run out of ammo faster, but you also save inventory space. If you use two weapons that use different ammo types, you might have more ammo variety, but you also need more inventory space.
- You also need to find attachments for your weapons, such as sights, barrels, magazines, and stocks. Attachments can improve the performance of your weapons, such as increasing accuracy, stability, reload speed, or fire mode. However, not all attachments are compatible with all weapons. You need to find the right attachments for your weapons and swap them when necessary.
- Besides weapons and attachments, you also need to find armor and health items. Armor can protect you from damage and increase your shield capacity. Health items can restore your health and shield when you are injured. There are different levels of armor and health items, ranging from common (white) to legendary (gold). Higher level items offer more protection and benefits than lower level items.
- You need to manage your inventory carefully and prioritize the items that you need the most. You have a limited inventory space that can be expanded by finding backpacks. You can also drop or swap items that you don't need or want. You can also share items with your squad mates or request items from them using the ping system.
- Be Aware of Your Surroundings and Use Cover
- Another tip to survive and win in Apex Legends Mobile is to be aware of your surroundings and use cover. The game features a large and diverse map that has different terrains, structures, and landmarks. Each location has its own advantages and disadvantages, such as visibility, loot availability, enemy activity, and ring proximity.
- You need to be aware of your surroundings and use them to your advantage. For example, you can use high ground to get a better view of the area and snipe enemies from afar. You can also use low ground to hide from enemies and ambush them from below. You can also use buildings to loot safely and defend yourself from attacks.
- You also need to use cover whenever possible. Cover can protect you from enemy fire and give you time to heal or reload. You can use natural cover, such as rocks, trees, or hills. You can also use artificial cover, such as walls, doors, or crates. You can also create cover using your legend's abilities, such as Gibraltar's dome shield or Rampart's amped wall.
- However, you also need to be careful not to stay in one place for too long or expose yourself too much. You might attract enemy attention or get flanked by other squads. You also need to watch out for the ring and move accordingly. The ring can damage you if you stay outside of it, and it can also force you into unfavorable situations if you are not prepared.
- Master the Movement System and Slide, Jump, and Zip Around the Map
- The final tip to survive and win in Apex Legends Mobile is to master the movement system and slide, jump, and zip around the map. The game features a fluid and responsive movement system that allows you to traverse the map quickly and creatively. You can run, crouch, slide, jump, climb, swim, and zip across the map with ease.
- You need to master the movement system and use it to your advantage. For example, you can slide down hills or slopes to gain momentum and speed. You can also slide while shooting or reloading to dodge enemy fire or surprise them with your mobility. You can also jump over obstacles or gaps to reach higher places or escape from danger.
- You can also use ziplines to travel across long distances or reach inaccessible areas. Ziplines are scattered around the map and can be used by anyone. You can also create ziplines using Pathfinder's ultimate ability or find them in care packages. You can shoot while ziplining or jump off at any point.
- The movement system is one of the most fun and exciting aspects of Apex Legends Mobile. It allows you to explore the map in different ways and gives you an edge over your enemies if you know how to use it well.
- Apex Legends Mobile FAQs: Everything You Need to Know About the Game
- By now, you should have a good idea of what Apex Legends Mobile is all about and how to play it on your Android device. However, you might still have some questions about the game that we haven't covered yet. Here are some of the most frequently asked questions about Apex Legends Mobile:
- Is Apex Legends Mobile Free to Play?
- Yes, Apex Legends Mobile is free to play for everyone who has an Android device that meets the minimum requirements to run the game. You can download and play the game without spending any money. However, the game also offers some optional in-game purchases, such as coins, skins, and battle passes, that can enhance your gameplay experience or customize your appearance. You can buy these items with real money or earn them by playing the game and completing challenges.
- Is Apex Legends Mobile Cross-Play Compatible with Other Platforms?
- No, Apex Legends Mobile is not cross-play compatible with other platforms, such as PC, PlayStation 4, Xbox One, or Nintendo Switch. This means that you can only play with and against other players who are using Android devices. This is to ensure a fair and balanced gameplay experience for everyone, as different platforms have different advantages and disadvantages, such as controls, graphics, and performance.
- What are the Minimum Requirements to Run Apex Legends Mobile on Your Device?
- The minimum requirements to run Apex Legends Mobile on your Android device are as follows:
- OS: Android 6.0 or higher
- RAM: 3 GB or higher
- CPU: Snapdragon 625 or equivalent
- GPU: Adreno 506 or equivalent
- Storage: 4 GB or higher
- If your device meets these requirements, you should be able to run the game smoothly and enjoyably. However, if your device does not meet these requirements, you might experience some issues, such as lag, crashes, or errors. You might also not be able to download or install the game at all.
- How Can I Update Apex Legends Mobile to the Latest Version?
- To update Apex Legends Mobile to the latest version, you need to follow these steps:
1. Go to the Google Play Store and search for Apex Legends Mobile.
2. Tap on the game icon and select Update.
3. Wait for the update to download and install.
4. Launch the game and enjoy the new features and improvements.
- If you have downloaded the game from APKCombo or another source, you need to follow these steps:
1. Go to APKCombo and search for Apex Legends Mobile.
2. Tap on the game icon and select Download APK.
3. Wait for the APK file to download.
4. Locate the downloaded APK file on your device and tap on it.
5. Select Install and overwrite the existing version of the game.
6. Launch the game and enjoy the new features and improvements.
- You should always update your game to the latest version to ensure optimal performance and security. You should also check for updates regularly, as the game developers are constantly working on adding new content and fixing bugs.
- How Can I Contact EA Support if I Have Any Issues with Apex Legends Mobile?
- If you have any issues with Apex Legends Mobile, such as technical problems, account issues, or feedback suggestions, you can contact EA support for help. You can use one of these methods to contact EA support:
- Email: You can send an email to help@ea.com with your issue details and screenshots if possible.
- Phone: You can call EA support at 1-866-543-5435 (US) or +44 203 0141818 (UK), or find your local number here: EA Help: Contact Us.
- Live Chat: You can chat with an EA advisor online by visiting this link: EA Help: Contact Us.
- Social Media: You can reach out to EA support on Twitter (@EAHelp) or Facebook (EA Help).
- You should provide as much information as possible about your issue, such as your device model, OS version, game version, error message, and steps to reproduce. You should also be polite and patient when contacting EA support, as they are trying their best to assist you.
- Conclusion
- Apex Legends Mobile is a mobile version of the popular battle royale game Apex Legends that offers a thrilling and immersive gameplay experience for Android users. The game features a diverse and dynamic cast of legends that you can play as, each with their own unique abilities and playstyles. The game also features a team-based multiplayer hero shooter gameplay that requires skill, strategy, and teamwork to survive and win. The game also features a fluid and responsive movement system that allows you to slide, jump, and zip around the map. The game also features a mobile first design that adapts and innovates for touchscreen devices.
- If you want to play Apex Legends Mobile on your Android device, you need to download and install the Apex Legends APK file from a reliable source, such as APKCombo. You also need to enable unknown sources on your device and grant some permissions to the game. You also need to log in with your EA account or create a new one. You also need to update the game to the latest version whenever possible.
- Once you have done all that, you can enjoy the game and have fun with your squad mates. You can also use some of the tips and tricks that we have shared in this article to improve your gameplay and increase your chances of becoming the champion. You can also contact EA support if you have any issues or feedback about the game.
- We hope that this article has helped you learn everything you need to know about Apex Legends Mobile and how to play it on your Android device. If you have any questions or comments, feel free to leave them below. Thank you for reading and happy gaming!
- FAQs
- Q: How can I play Apex Legends Mobile on iOS devices?
-A: Apex Legends Mobile is currently only available for Android devices, but EA has confirmed that they are working on an iOS version of the game as well. The iOS version is expected to launch in 2023, along with the global launch of the game. You can pre-register for the iOS version on the official website of Apex Legends Mobile: Apex Legends Mobile - Official EA Site .
- Q: How can I play Apex Legends Mobile with a controller or a keyboard and mouse?
-A: Apex Legends Mobile does not officially support controllers or keyboards and mice, as it is designed for touchscreen devices. However, some players have reported that they have been able to use third-party apps or devices to connect their controllers or keyboards and mice to their Android devices and play the game with them. However, this is not recommended, as it might cause compatibility issues, performance problems, or bans from EA.
- Q: How can I get free coins, skins, or battle passes in Apex Legends Mobile?
-A: The only legitimate way to get free coins, skins, or battle passes in Apex Legends Mobile is to play the game and complete challenges that reward you with these items. You can also earn coins by watching ads or completing surveys on some third-party websites or apps that partner with EA. However, you should be careful not to fall for scams or hacks that claim to give you free coins, skins, or battle passes in exchange for your personal information, account details, or money. These are fraudulent and might harm your device, steal your identity, or compromise your account.
- Q: How can I report a bug, a glitch, or a cheater in Apex Legends Mobile?
-A: If you encounter a bug, a glitch, or a cheater in Apex Legends Mobile, you can report it to EA support using one of the methods that we have mentioned above. You can also report it on the official forums or social media pages of Apex Legends Mobile, where the developers and moderators are active and responsive. You should provide as much evidence as possible, such as screenshots, videos, or logs, to help them investigate and resolve the issue.
- Q: How can I join the beta testing of Apex Legends Mobile?
-A: The beta testing of Apex Legends Mobile is currently limited to select regions and devices, such as India and the Philippines. If you live in one of these regions and have a compatible device, you can join the beta testing by pre-registering on the Google Play Store and waiting for an invitation from EA. If you don't live in one of these regions or have an incompatible device, you will have to wait until the game launches globally in 2023.
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Car Parking Multiplayer 4.8.6.9 The Ultimate Open-World Car Simulator for Android.md b/spaces/fatiXbelha/sd/Car Parking Multiplayer 4.8.6.9 The Ultimate Open-World Car Simulator for Android.md
deleted file mode 100644
index 6541ce72691a4ade74cc9f2d850d521155d6449e..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Car Parking Multiplayer 4.8.6.9 The Ultimate Open-World Car Simulator for Android.md
+++ /dev/null
@@ -1,87 +0,0 @@
-
-Car Parking Multiplayer APK 4.8 6.9: A Review
-If you are looking for a game that combines car parking, racing, and simulation, then you might want to check out Car Parking Multiplayer APK 4.8 6.9. This is an open-world car parking simulator game for Android devices that lets you compete, customize, and explore with other players. In this article, we will review the game and its features, as well as show you how to download and install it on your device.
- What is Car Parking Multiplayer?
-An open-world car parking simulator game for Android devices
-Car Parking Multiplayer is a game developed by olzhass, a company that specializes in creating realistic car games for mobile platforms. The game is designed to simulate the experience of parking, driving, racing, and tuning cars in an open-world environment. You can choose from over a hundred cars, from classics to sports cars, and modify them to your liking. You can also walk around the world, interact with various gas stations and car services, and even use voice chat to communicate with other players.
- A game that lets you compete, customize, and explore with other players
-Car Parking Multiplayer is not just a parking game, as its name implies. It also offers various modes and challenges that test your skills and creativity. You can compete against real players in multiplayer racing, exchange cars with them, or even drive them off the road. You can also turn on the "Police Mode" and lead the cops on a thrilling chase, or try to evade them if you are on the run. You can also join or create your own friend list, and invite them to play with you in private or public servers.
- What are the features of Car Parking Multiplayer APK 4.8 6.9?
-Improved graphics and performance
-The latest version of Car Parking Multiplayer APK 4.8 6.9 has improved its graphics and performance, making the game more realistic and smooth. The game now supports high-resolution textures, dynamic shadows, and realistic lighting effects. The game also runs faster and more stable on most devices, thanks to the optimization of the code and the reduction of bugs and glitches.
- New cars, modes, and challenges
-The latest version of Car Parking Multiplayer APK 4.8 6.9 has also added new cars, modes, and challenges to keep the game fresh and exciting. You can now drive new cars such as Lamborghini Urus, BMW M5 F90, Mercedes-Benz G63 AMG, and more. You can also try new modes such as Drift Mode, Drag Mode, Off-Road Mode, and more. You can also challenge yourself with new levels such as Airport Parking, City Parking, Garage Parking, and more.
- Realistic car interiors and services
-The latest version of Car Parking Multiplayer APK 4.8 6.9 has also enhanced the realism of the car interiors and services. You can now see the detailed dashboard, steering wheel, pedals, and gears of each car. You can also use the indicators, headlights, horn, and other functions of the car. You can also visit various car services such as car wash, repair shop, tuning shop, and gas station. You can even get out of your car and walk around the world.
- How to download and install Car Parking Multiplayer APK 4.8 6.9?
-Download from a trusted source
-To download Car Parking Multiplayer APK 4.8 6.9, you need to find a trusted source that offers the latest and safe version of the game. You can use the link below to download the game from our website, which is verified and secure. Alternatively, you can search for other websites that provide the game, but make sure to check their reviews and ratings before downloading.
- Enable unknown sources on your device
-To install Car Parking Multiplayer APK 4.8 6.9, you need to enable unknown sources on your device. This is because the game is not available on the official Google Play Store, and you need to allow your device to install apps from other sources. To do this, go to your device settings, then security, then unknown sources, and turn it on. You may also need to grant some permissions to the game when installing it.
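Note that on Android 8.0 and later the single global "Unknown Sources" switch was replaced by a per-app "Install unknown apps" permission, so the exact menu path depends on your device and Android version. Purely as an illustrative sketch of how that permission looks from the developer side (this is not code from Car Parking Multiplayer), a Kotlin app can check whether it is allowed to install packages and, if not, open the matching settings screen:

```kotlin
import android.content.Context
import android.content.Intent
import android.net.Uri
import android.os.Build
import android.provider.Settings

// Sketch: on Android 8.0+ check the per-app "Install unknown apps"
// permission and open the settings screen for it when it is missing.
fun ensureInstallPermission(context: Context) {
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O &&
        !context.packageManager.canRequestPackageInstalls()
    ) {
        val intent = Intent(
            Settings.ACTION_MANAGE_UNKNOWN_APP_SOURCES,
            Uri.parse("package:${context.packageName}")
        ).addFlags(Intent.FLAG_ACTIVITY_NEW_TASK)
        context.startActivity(intent)
    }
}
```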
- Install the APK file and enjoy the game
-To install Car Parking Multiplayer APK 4.8 6.9, you need to locate the downloaded APK file on your device storage, and tap on it to start the installation process. Follow the instructions on the screen, and wait for the installation to finish. Once done, you can launch the game from your app drawer or home screen, and enjoy the game.
- What are the pros and cons of Car Parking Multiplayer APK 4.8 6.9?
-Pros
-Fun and addictive gameplay
-One of the pros of Car Parking Multiplayer APK 4.8 6.9 is that it offers fun and addictive gameplay that will keep you entertained for hours. The game has a lot of variety and challenge in its modes and levels, as well as a realistic physics engine that makes driving and parking more enjoyable. The game also has a lot of replay value, as you can always try new cars, customizations, and servers.
- Variety of cars and customizations
-Another pro of Car Parking Multiplayer APK 4.8 6.9 is that it offers a variety of cars and customizations that will suit your preferences and style. The game has over a hundred cars to choose from, ranging from classics to sports cars, and each car has its own unique features and performance. You can also customize your car with different colors, stickers, wheels, spoilers, exhausts, and more. You can even create your own car with the in-game editor.
- Multiplayer and social features
-A third pro of Car Parking Multiplayer APK 4.8 6.9 is that it offers multiplayer and social features that will make your gaming experience more fun and interactive. The game lets you compete against real players in multiplayer racing, exchange cars with them, or even drive them off the road. You can also join or create your own friend list, and invite them to play with you in private or public servers. You can also use voice chat to communicate with other players, or send them messages and emojis.
- Cons
-In-app purchases can be expensive
-One of the cons of Car Parking Multiplayer APK 4.8 6.9 is that it has in-app purchases that can be expensive for some players. The game offers some premium features such as unlimited money, premium cars, VIP status, and more that require real money to buy. These features can give you an advantage over other players, or make your gaming experience more enjoyable, but they can also cost a lot of money if you are not careful.
- Some bugs and glitches may occur
-Another con of Car Parking Multiplayer APK 4.8 6.9 is that it may have some bugs and glitches that may affect your gaming experience. The game is still in development, and it may not be compatible with all devices or operating systems. Some players have reported issues such as crashing, freezing, lagging, or losing progress in the game. These issues may be fixed in future updates, but they can also be frustrating if they happen frequently.
- Requires a stable internet connection
-A third con of Car Parking Multiplayer APK 4.8 6.9 is that it requires a stable internet connection to play. The game is mainly online, and it needs a good network connection to load the game data, connect to the servers, and interact with other players. If you have a slow or unstable internet connection, you may experience lag, disconnection, or loss of data in the game. You may also not be able to access some features or modes that are only available online.
- Conclusion
-Car Parking Multiplayer APK 4.8 6.9 is a game that offers a realistic and fun car parking simulator experience for Android devices. The game lets you choose from over a hundred cars, customize them to your liking, and drive them in an open-world environment. You can also compete, cooperate, and communicate with other players in various modes and challenges. The game has improved its graphics and performance, as well as added new cars, modes, and levels. However, the game also has some drawbacks, such as expensive in-app purchases, some bugs and glitches, and a requirement for a stable internet connection. Overall, Car Parking Multiplayer APK 4.8 6.9 is a game that is worth trying if you are a fan of car games, especially if you like parking, racing, and simulation.
- FAQs
-Here are some frequently asked questions about Car Parking Multiplayer APK 4.8 6.9:
| Question | Answer |
| --- | --- |
| Is Car Parking Multiplayer APK 4.8 6.9 free to play? | Yes, it is free to download and play. However, the game also offers some in-app purchases that can enhance your gaming experience or give you an edge over other players. |
| Is Car Parking Multiplayer APK 4.8 6.9 safe to download and install? | Yes, if you use a trusted source such as our website or other reputable websites. However, you should always be careful when downloading and installing apps from unknown sources, as they may contain viruses or malware that can harm your device or steal your data. |
| Can I play Car Parking Multiplayer APK 4.8 6.9 offline? | No, it is an online game that requires a stable internet connection. You cannot play it offline or without a network connection. |
| Can I play Car Parking Multiplayer APK 4.8 6.9 on PC? | No, it is designed for Android devices only. You cannot play the game on PC or other platforms. |
| How can I contact the developers of Car Parking Multiplayer APK 4.8 6.9? | You can email them at olzhass@gmail.com or visit their Facebook page at https://www.facebook.com/olzhassgames/. |
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Download Classic APK and Relive the Golden Age of Gaming.md b/spaces/fatiXbelha/sd/Download Classic APK and Relive the Golden Age of Gaming.md
deleted file mode 100644
index 1bddc4db08c62d0974ccc6d9fd2353e2dc2734ec..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Download Classic APK and Relive the Golden Age of Gaming.md
+++ /dev/null
@@ -1,98 +0,0 @@
-
-What is a Classic APK and Why You Should Try It
-If you are a fan of retro games, you might have heard of classic APKs. These are files that allow you to play old-school games on your modern Android device. But what exactly are classic APKs, and how do you use them? In this article, we will explain what an APK file is, what a classic APK is, and some examples of classic APKs you can download and enjoy.
- What is an APK File?
-An APK file is a package file that contains the code, resources, and metadata of an Android application. It is similar to an EXE file on Windows or a DMG file on Mac. You can install an APK file on your Android device by downloading it from a trusted source, such as the Google Play Store or a reputable website.
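Under the hood an APK is simply a ZIP archive with a defined layout: a manifest, compiled code (classes.dex), resources, and signature files. As a small illustrative sketch (the file name below is a placeholder), this Kotlin snippet lists the entries of an APK using the standard library's ZipFile class:

```kotlin
import java.util.zip.ZipFile

// Sketch: an APK is a ZIP archive, so its entries (manifest, classes.dex,
// resources, signature files) can be listed with ordinary ZIP tooling.
fun listApkEntries(path: String) {
    ZipFile(path).use { zip ->
        for (entry in zip.entries()) {
            println("${entry.name} (${entry.size} bytes)")
        }
    }
}

fun main() {
    listApkEntries("classic-game.apk") // placeholder file name
}
```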
- How to Install an APK File on Your Android Device
-Before you can install an APK file on your Android device, you need to enable the option to install apps from unknown sources. To do this, go to Settings > Security > Unknown Sources and toggle it on. Then, you can download the APK file from your preferred source and open it with a file manager app. You will see a prompt asking you to confirm the installation. Tap on Install and wait for the process to finish. You can then launch the app from your app drawer or home screen.
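-If you prefer to sideload the APK from a computer instead, the same installation can be scripted. The short Python sketch below is only an illustration (it assumes the adb tool is installed on the computer, USB debugging is enabled on the phone, and the APK file name is a placeholder):
-
-import subprocess
-
-# Install (or reinstall, thanks to -r) the APK onto the connected device via adb.
-apk_path = "classic-game.apk"  # placeholder file name
-subprocess.run(["adb", "install", "-r", apk_path], check=True)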
- What is a Classic APK?
-A classic APK is an APK file that contains a classic game that has been ported or emulated for Android devices. These games are usually from older consoles or arcade machines, such as the NES, SNES, Sega Genesis, or Atari. Some classic APKs are official releases from the original developers or publishers, while others are fan-made projects or modifications.
- The Benefits of Classic APKs
-There are many benefits of playing classic games on your Android device using classic APKs. Some of them are:
-
-You can enjoy nostalgic games that you grew up with or missed out on.
-You can experience games that are no longer available or compatible with modern devices.
-You can save space and money by not having to buy or maintain old consoles or cartridges.
-You can customize the controls, graphics, sound, and other settings to your liking.
-You can access cheats, mods, hacks, and other features that enhance the gameplay.
-
- The Risks of Classic APKs
-However, there are also some risks involved in using classic APKs. Some of them are:
-
-You may violate the intellectual property rights of the original developers or publishers if you download or distribute classic APKs without their permission.
-You may expose your device to malware, viruses, spyware, or other harmful software if you download classic APKs from untrusted sources.
-You may encounter bugs, glitches, crashes, or compatibility issues if you use poorly made or outdated classic APKs.
-You may lose your progress or data if you uninstall or update the classic APK without backing it up.
-
- Some Examples of Classic APKs You Can Download
-There are many classic games that have been converted into classic APKs for Android devices. Here are some examples of popular and well-made classic APKs that you can download and play:
- Sonic the Hedgehog™ Classic
-This is the official release of the original Sonic the Hedgehog game from SEGA for Android devices. You can play as Sonic, Tails, or Knuckles and speed through seven zones filled with loops, springs, enemies, and bosses. You can also unlock new features such as Time Attack mode, online leaderboards, achievements, and more.
- Super Mario Run
-This is the official release of the classic Super Mario game from Nintendo for Android devices. You can control Mario as he runs automatically through various worlds and levels. You can tap the screen to make him jump, spin, wall-jump, and perform other actions. You can also compete with other players in online modes, create your own kingdom, and unlock new characters and outfits.
- Tetris®
-This is the official release of the classic Tetris game from EA for Android devices. You can play the iconic puzzle game that involves stacking and clearing blocks of different shapes and colors. You can choose from various modes, such as Marathon, Sprint, Ultra, and more. You can also challenge yourself with daily missions, earn rewards, and customize your game with themes and avatars.
- Conclusion
-Classic APKs are a great way to enjoy classic games on your Android device. They offer many benefits, such as nostalgia, compatibility, convenience, customization, and fun. However, they also come with some risks, such as legal issues, security threats, technical problems, and data loss. Therefore, you should be careful when downloading and using classic APKs. Make sure you get them from trusted sources, respect the rights of the original developers or publishers, and backup your data regularly. If you do that, you can have a blast playing classic games on your Android device.
- FAQs
-Here are some frequently asked questions about classic APKs:
-
-Q: How do I find classic APKs?
-A: You can find classic APKs on various websites, such as APKPure, APKMirror, APKMonk, and more. However, you should always check the reviews, ratings, comments, and permissions of the APK files before downloading them. You should also scan them with an antivirus app before installing them.
-Q: How do I uninstall classic APKs?
-A: You can uninstall classic APKs like any other app on your Android device. Go to Settings > Apps > Classic APK > Uninstall and confirm your action. Alternatively, you can long-press the app icon on your home screen or app drawer and drag it to the Uninstall option.
-Q: How do I update classic APKs?
-A: Some classic APKs may have an update option within the app itself. Others may require you to download the latest version of the APK file from the source website and install it over the existing one. However, you should always backup your data before updating any classic APK.
-Q: Are classic APKs legal?
-A: The legality of classic APKs depends on the source and the content of the APK file. Some classic APKs are official releases from the original developers or publishers who own the rights to the game. Others are fan-made projects or modifications that may infringe on the intellectual property rights of the original developers or publishers. Therefore, you should always respect the rights of the original developers or publishers and use classic APKs at your own risk.
-Q: Are classic APKs safe?
-A: The safety of classic APKs depends on the source and the quality of the APK file. Some classic APKs are well-made and tested by reputable developers or publishers who ensure that they are free of malware, viruses, spyware, or other harmful software. Others are poorly made or outdated by unknown or untrustworthy developers or publishers who may insert malicious code or software into the APK file. Therefore, you should always be careful when downloading and using classic APKs and protect your device with an antivirus app.
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Download JDK 20 for Windows Server 2016 64-bit The Latest Java SE Platform.md b/spaces/fatiXbelha/sd/Download JDK 20 for Windows Server 2016 64-bit The Latest Java SE Platform.md
deleted file mode 100644
index 3021fc32d9dfc23a7159789521277c8961c01f8f..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Download JDK 20 for Windows Server 2016 64-bit The Latest Java SE Platform.md
+++ /dev/null
@@ -1,122 +0,0 @@
-
-How to Download Java JDK for Windows Server 2016 64 Bit
-If you are a developer or a system administrator who needs to run Java applications on Windows Server 2016, you will need to install Java Development Kit (JDK) on your system. JDK is a software package that provides the tools and libraries necessary for developing, testing, and running Java programs. In this article, we will show you how to download and install JDK for Windows Server 2016 64 bit in a few easy steps.
-download java jdk for windows server 2016 64 bit Download ☑ https://urllie.com/2uNzM3
-What is Java JDK and why do you need it?
-Java is a popular programming language that can run on various platforms, such as Windows, Linux, Mac OS, and Android. Java applications are compiled into bytecode, which can be executed by a Java Virtual Machine (JVM). JVM is a software component that interprets and executes the bytecode on a specific platform.
-JDK is a software package that contains the following components:
-
-JRE (Java Runtime Environment): This is the core component that provides the JVM and other essential libraries for running Java applications.
-Java Compiler: This is a tool that converts Java source code into bytecode.
-Java Debugger: This is a tool that helps you find and fix errors in your Java code.
-Java Shell: This is an interactive tool that allows you to execute Java statements and expressions without compiling them.
-Java Documentation: This is a collection of HTML files that describe the features and functions of the Java language and its APIs.
-
-You need JDK if you want to do any of the following tasks:
-
-Develop Java applications using an IDE (Integrated Development Environment) or a text editor.
-Compile, debug, and run Java applications from the command line.
-Create executable JAR files that can be distributed and run on other systems.
-Use advanced tools and libraries for developing web, desktop, mobile, or embedded applications.
-
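-As a concrete taste of the command-line workflow above (this is only a sketch: it assumes a file named Hello.java exists in the current directory and that the JDK's bin directory is on your PATH), two subprocess calls are enough to compile and run a class with the JDK tools:
-
-import subprocess
-
-# Compile Hello.java with the JDK compiler, then run the resulting class.
-subprocess.run(["javac", "Hello.java"], check=True)
-subprocess.run(["java", "Hello"], check=True)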
-Java JDK features and benefits
-JDK offers many features and benefits for developers and system administrators, such as:
-
-Cross-platform compatibility: You can write Java code once and run it on any platform that supports JVM.
-Performance and reliability: JVM optimizes the execution of bytecode and ensures that your applications run fast and stable.
-Security and privacy: JVM enforces strict rules and policies to prevent unauthorized access or modification of your data and resources.
-Modularity and scalability: You can organize your code into modules that can be reused and updated independently.
-Rich set of APIs: You can use various APIs (Application Programming Interfaces) that provide ready-made solutions for common tasks, such as networking, database access, user interface, graphics, sound, etc.
-
-Java JDK system requirements and compatibility
-Before you download and install JDK, you should check the following system requirements and compatibility issues:
-
-You need administrator privileges to install JDK on Windows Server 2016.
-You need at least 420 MB of free disk space to install JDK.
-You need at least 128 MB of RAM to run JDK tools.
-
-Step 1: Verify the installation
-After you have downloaded and installed JDK from the Oracle website, open a Command Prompt, type java -version, and press Enter. The output should include lines like these:
- Java(TM) SE Runtime Environment (build 17+35-2724) Java HotSpot(TM) 64-Bit Server VM (build 17+35-2724, mixed mode, sharing)
-This means that JDK 17 is installed and working on your system. You can also check the installation directory by typing where java and pressing Enter. You should see something like this:
- C:\Users\Administrator>where java C:\Program Files\Java\jdk-17\bin\java.exe
-This means that JDK is installed in C:\Program Files\Java\jdk-17 directory.
-Step 2: Add the JDK bin directory to the PATH variable
-The PATH variable is a system variable that tells Windows where to look for executable files, such as java.exe. To make sure that Windows can find JDK tools from any location, you need to add the JDK bin directory to the PATH variable. To do that, follow these steps:
-
-Open the Control Panel by pressing Windows + X keys and selecting Control Panel from the menu.
-Click on System and Security and then click on System.
-Click on Advanced system settings on the left panel.
-Click on Environment Variables button at the bottom of the System Properties window.
-In the Environment Variables window, under System variables, find the variable named PATH and select it. Then click on Edit button.
-In the Edit environment variable window, click on New button and type C:\Program Files\Java\jdk-17\bin in the text box. Then click on OK button.
-Click on OK button in the Environment Variables window and then click on OK button in the System Properties window.
-
-You have successfully added the JDK bin directory to the PATH variable.
-Step 3: Set the JAVA_HOME variable to point to the JDK installation directory
-The JAVA_HOME variable is a user-defined variable that tells other applications and tools where JDK is installed on your system. To set the JAVA_HOME variable, follow these steps:
-
-Open the Environment Variables window as described in Step 2.
-In the Environment Variables window, under User variables for Administrator (or your username), click on New button.
-In the New User Variable window, type JAVA_HOME in the Variable name text box and C:\Program Files\Java\jdk-17 in the Variable value text box. Then click on OK button.
-Click on OK button in the Environment Variables window and then click on OK button in the System Properties window.
-
-You have successfully set the JAVA_HOME variable.
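-As an optional cross-check (assuming you also have Python installed), the short sketch below prints the values that the steps above should have set; run it from a newly opened Command Prompt so the updated variables are picked up:
-
-import os
-import shutil
-
-# JAVA_HOME should point at the JDK installation directory, and
-# shutil.which should find java.exe through the updated PATH.
-print("JAVA_HOME:", os.environ.get("JAVA_HOME"))
-print("java on PATH:", shutil.which("java"))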
-Conclusion and FAQs
-In this article, we have shown you how to download and install JDK for Windows Server 2016 64 bit. We have also explained what JDK is and why you need it, as well as how to verify the installation and set up the environment variables for it. We hope that this article has been helpful and informative for you. If you have any questions or feedback, please feel free to leave a comment below.
-Here are some frequently asked questions about JDK:
-
-Q: What is the difference between JDK and JRE?
-A: JDK stands for Java Development Kit and JRE stands for Java Runtime Environment. JDK contains JRE plus other tools and libraries for developing Java applications. JRE contains only the JVM and essential libraries for running Java applications. You need JDK if you want to create or modify Java applications, but you only need JRE if you want to run them.
-Q: How do I update JDK to a newer version?
-A: Download and install the new version from the Oracle website as described in this article. You may also need to update the PATH and JAVA_HOME variables accordingly. You can uninstall the old version of JDK if you no longer need it, but be careful not to delete any files or folders that are used by other applications or tools.
-Q: How do I uninstall JDK from my system?
-A: To uninstall JDK from your system, follow these steps:
-Open the Control Panel and click on Programs and Features.
-Find the JDK version that you want to uninstall and click on the Uninstall button.
-Follow the instructions provided by the uninstaller and confirm your choice.
-Delete the JDK installation directory and any shortcuts or files associated with it.
-Remove the JDK bin directory from the PATH variable and delete the JAVA_HOME variable as described in this article.
-
-Q: How do I switch between different versions of JDK on my system?
-A: If you have multiple versions of JDK installed on your system, you can switch between them by changing the PATH and JAVA_HOME variables as described in this article. You can also use a tool like jEnv or Jabba to manage multiple Java versions on your system.
-Q: Where can I find more information and resources about JDK?
-A: You can find more information and resources about JDK on the following websites:
-
-
-Oracle Java SE Documentation : This is the official documentation for JDK, where you can find tutorials, guides, reference manuals, API specifications, and more.
-Oracle Java SE Downloads : This is the official download page for JDK, where you can find the latest and previous versions of JDK for various platforms.
-Oracle Java SE Support : This is the official support page for JDK, where you can find FAQs, forums, blogs, newsletters, webinars, and more.
-Java Platform Group Blog : This is the official blog for JDK, where you can find news, updates, tips, tricks, and best practices for JDK.
-Java Magazine : This is a free online magazine for Java developers, where you can find articles, interviews, reviews, quizzes, and more.
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Download Love Island The Game Mod APK 4.8.8 and Live Your Own Romance Story.md b/spaces/fatiXbelha/sd/Download Love Island The Game Mod APK 4.8.8 and Live Your Own Romance Story.md
deleted file mode 100644
index b0eda27cf6f70c36519e624deef1a2820fa79f86..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Download Love Island The Game Mod APK 4.8.8 and Live Your Own Romance Story.md
+++ /dev/null
@@ -1,77 +0,0 @@
-
-Love Island: The Game Mod APK 4.8.8 - A Fun and Flirty Simulation Game
-Do you love watching reality TV shows like Love Island, where you can see people finding love and drama in a tropical paradise? If so, you will love playing Love Island: The Game, a simulation game that lets you create your own character and experience the thrill of being on the show. You can choose your partner, flirt with other islanders, make friends or enemies, and compete in challenges to win the ultimate prize.
-love island the game mod apk 4.8.8 DOWNLOAD ⇒⇒⇒ https://urllie.com/2uNvF0
-But what if you want to have more control over your choices and actions in the game? What if you want to access premium features without spending real money? Well, you can do that by downloading Love Island: The Game Mod APK 4.8.8, a modified version of the game that gives you unlimited premium choices, free outfits and accessories, and no ads. In this article, we will tell you more about this amazing mod apk and how to download and install it on your device.
-What is Love Island: The Game?
-Love Island: The Game is a simulation game based on the popular reality TV show Love Island, where a group of singles live together in a villa and try to find love and win money. The game was developed by Fusebox Games and released in 2018 for Android and iOS devices. The game has over 10 million downloads on Google Play Store and has received positive reviews from players and critics.
-Features of Love Island: The Game
-Love Island: The Game has many features that make it fun and addictive to play. Here are some of them:
-Customize your character
-You can create your own character by choosing your name, gender, appearance, style, and personality. You can also change your look anytime by buying new outfits and accessories from the in-game store.
-Choose your partner
-You can choose who you want to couple up with from a variety of attractive islanders. You can also switch partners or dump them if you are not happy with them. You can also explore different relationships with different people and see how they react to your actions.
-Interact with other islanders
-You can chat, flirt, gossip, argue, or bond with other islanders in the villa. You can also influence their opinions and decisions by making choices that affect the storyline. You can also make friends or enemies depending on how you treat them.
-Play mini-games and challenges
-You can participate in mini-games and challenges that test your skills and compatibility with your partner. You can also win rewards and prizes that can help you in the game.
-Why download Love Island: The Game Mod APK 4.8.8?
-While Love Island: The Game is free to play, it has some limitations that can affect your gaming experience. For example, some choices are locked behind a premium currency called gems, which you have to buy with real money or earn by watching ads or completing tasks. Also, some outfits and accessories are expensive and require gems or coins to purchase them. Moreover, the game has ads that can interrupt your gameplay.
-If you want to enjoy the game without these restrictions, you should download Love Island: The Game Mod APK 4.8.8 , a modified version of the game that gives you many advantages and benefits. Here are some of them:
-Unlimited premium choices
-With this mod apk, you can make any choice you want without worrying about the cost. You will have unlimited gems to unlock all the premium choices that can affect the outcome of the game. You can also use gems to buy more coins, which you can use to buy more outfits and accessories.
-Free outfits and accessories
-With this mod apk, you can also get all the outfits and accessories for free. You can dress up your character in any style you like and impress your partner and other islanders. You can also change your look anytime you want without spending any money.
-No ads and no root required
-With this mod apk, you can also enjoy the game without any ads. You will not see any annoying pop-ups or banners that can interrupt your gameplay. You can also install this mod apk without rooting your device, which means you do not have to risk damaging your device or voiding its warranty.
-How to download and install Love Island: The Game Mod APK 4.8.8?
-If you are interested in downloading and installing Love Island: The Game Mod APK 4.8.8, you can follow these simple steps:
-Step 1: Download the mod apk file from a trusted source
-You can download the mod apk file from a trusted source like [this one]. Make sure you download the latest version of the mod apk, which is 4.8.8 as of now. You can also check the file size and the permissions required before downloading it.
-Step 2: Enable unknown sources on your device
-Before you can install the mod apk file, you need to enable unknown sources on your device. This will allow you to install apps from sources other than Google Play Store. To do this, go to your device settings, then security, then unknown sources, and toggle it on.
-Step 3: Install the mod apk file and enjoy the game
-Once you have downloaded and enabled unknown sources, you can install the mod apk file by tapping on it and following the instructions. After the installation is complete, you can open the game and enjoy all the features of Love Island: The Game Mod APK 4.8.8.
-Conclusion
-Love Island: The Game is a fun and flirty simulation game that lets you experience the thrill of being on a reality TV show. You can create your own character, choose your partner, interact with other islanders, and compete in challenges to win the ultimate prize. However, if you want to have more control over your choices and actions in the game, you should download Love Island: The Game Mod APK 4.8.8, a modified version of the game that gives you unlimited premium choices, free outfits and accessories, and no ads. You can download and install this mod apk easily by following the steps we have provided in this article.
-FAQs
-Here are some frequently asked questions about Love Island: The Game Mod APK 4.8.8:
-
-Is Love Island: The Game Mod APK 4.8.8 safe to use?
-Yes, Love Island: The Game Mod APK 4.8.8 is safe to use as long as you download it from a trusted source like [this one]. You do not have to worry about any viruses or malware infecting your device or any personal data being stolen.
-Will Love Island: The Game Mod APK 4.8.8 work on my device?
-Love Island: The Game Mod APK 4.8.8 should work on most Android devices that have Android 5.0 or higher versions installed. However, some devices may not be compatible with this mod apk due to different specifications or settings.
-Will Love Island: The Game Mod APK 4.8.8 affect my progress in the game?
-No, Love Island: The Game Mod APK 4.8.8 will not affect your progress in the game as it does not modify or delete any data from your original game account. You can still play the game normally with your existing account or create a new one if you want.
-Can I update Love Island: The Game Mod APK 4.8.8?
-Yes, you can update Love Island: The Game Mod APK 4.8.8 whenever there is a new version available from the same source you downloaded it from. However, you should always backup your data before updating to avoid any potential issues or errors.
-How can I contact the developer of Love Island: The Game Mod APK 4.8.8?
-If you have any questions, feedback, or suggestions about Love Island: The Game Mod APK 4.8.8, you can contact the developer of this mod apk by visiting their website or social media pages. You can also leave a comment or a review on the download page of this mod apk.
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Dumb Ways to Die Hack Download and Enjoy the Game with Unlimited Money.md b/spaces/fatiXbelha/sd/Dumb Ways to Die Hack Download and Enjoy the Game with Unlimited Money.md
deleted file mode 100644
index b6c19725a7ee48b39e77d7c32f3e7630d914640a..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Dumb Ways to Die Hack Download and Enjoy the Game with Unlimited Money.md
+++ /dev/null
@@ -1,103 +0,0 @@
-
-How to Download Dumb Ways to Die Unlimited Money Mod
- If you are a fan of playing retro games, you might have heard of Dumb Ways to Die, a hilarious and addictive game that challenges you to avoid various silly deaths. In this article, we will show you how to download dumb ways to die unlimited money mod, which gives you access to more characters, mini-games, lives, and coins. We will also explain what dumb ways to die is, why you might want to download the mod, and what are the benefits and risks of modded games.
-download dumb ways to die unlimited money Download Zip »»» https://urllie.com/2uNIYW
- What is Dumb Ways to Die?
- A fun and quirky casual game
- Dumb Ways to Die is a casual game that was created by Metro Trains in 2016 as a public service announcement to promote rail safety. The game features a series of cute and clumsy characters who face various life-threatening situations, such as poking a bear, eating glue, or running across train tracks. Your goal is to tap, swipe, or tilt your device to help them survive as long as possible. The game is full of humor, gore, and catchy music that will keep you entertained for hours.
- The original and the sequel
-The original Dumb Ways to Die game was a huge success, with over 200 million downloads and millions of fans around the world. It spawned a sequel called Dumb Ways to Die 2: The Games, which introduced more characters, locations, and mini-games. The sequel also added a multiplayer mode where you can compete with other players online. Both games are available for free on Android and iOS devices, as well as on web browsers.
- Why Download Dumb Ways to Die Unlimited Money Mod?
- Unlock more characters and mini-games
- One of the reasons why you might want to download dumb ways to die unlimited money mod is to unlock more content in the game. Both games have dozens of characters and mini-games that you can unlock by collecting coins or completing achievements. However, some of them are quite expensive or hard to get. With the mod, you can get unlimited coins and unlock all the characters and mini-games without spending any real money or time.
- Enjoy unlimited lives and coins
- Another reason why you might want to download dumb ways to die unlimited money mod is to enjoy unlimited lives and coins in the game. Both games have a limited number of lives that you can use per session. If you run out of lives, you have to wait for them to regenerate or buy more with coins. This can be frustrating if you want to play longer or beat your high score. With the mod, you can get unlimited lives and coins and play as much as you want without any interruptions.
- How to Download Dumb Ways to Die Unlimited Money Mod?
- Find a reliable mod source
- The first step to download dumb ways to die unlimited money mod is to find a reliable mod source that offers the latest version of the mod for your device. There are many websites that claim to provide modded games, but not all of them are trustworthy or safe. Some of them may contain malware, viruses, or outdated files that can harm your device or compromise your data. To avoid this, you should do some research before downloading any mod from an unknown source. You can check the reviews, ratings, comments, and feedback from other users who have downloaded the mod before. You can also use antivirus software or online scanners to scan the mod file for any potential threats.
- Follow the installation instructions
- The second step to download dumb ways to die unlimited money mod is to follow the installation instructions provided by the mod source. The installation process may vary depending on the type of mod and the device you are using. Generally, you will need to uninstall the original game from your device, download the mod file, enable unknown sources in your settings, and install the mod file. Some mods may also require additional steps, such as granting permissions, verifying your identity, or using a third-party app. You should follow the instructions carefully and make sure you have enough storage space and battery life on your device.
- What are the Benefits and Risks of Modded Games?
- Benefits: enhanced graphics, gameplay, and features
- Modded games are games that have been modified by users or developers to alter or improve their graphics, gameplay, or features. Some of the benefits of modded games are that they can enhance your gaming experience by adding new elements, such as characters, levels, modes, items, or effects. They can also make the game more fun, challenging, or realistic by changing the difficulty, physics, or mechanics. Modded games can also give you more freedom and creativity by allowing you to customize the game according to your preferences and style.
- Risks: malware, viruses, bans, and legal issues
- Modded games are not without risks, however. Some of the risks of modded games are that they can expose your device or data to malware, viruses, or other harmful software that can damage your system or steal your information. They can also cause compatibility or performance issues with your device or the original game, such as crashes, glitches, or errors. Modded games can also get you banned from the official game servers or platforms if they detect that you are using an unauthorized or illegal version of the game. Modded games can also violate the terms and conditions of the original game developers or publishers, which can result in legal actions or penalties.
- Conclusion
- Dumb Ways to Die is a fun and quirky casual game that tests your reflexes and skills in avoiding various silly deaths. If you want to enjoy more content and features in the game, you can download dumb ways to die unlimited money mod, which gives you unlimited coins and lives and unlocks all the characters and mini-games. However, you should be careful when downloading and installing any modded game, as they may come with some risks and drawbacks. You should always use a reliable mod source, follow the installation instructions, and be aware of the potential consequences of using modded games.
- FAQs
- Here are some frequently asked questions about dumb ways to die unlimited money mod:
-
-
-Q: Is dumb ways to die unlimited money mod safe?
-A: Dumb ways to die unlimited money mod is not officially endorsed or supported by Metro Trains or any other entity involved in the creation of the original game. Therefore, it is not guaranteed to be safe or secure. You should only download it from a trusted mod source and scan it for any malware or viruses before installing it on your device.
-Q: Will dumb ways to die unlimited money mod work on my device?
-A: Dumb ways to die unlimited money mod is designed to work on Android and iOS devices that support the original game. However, it may not be compatible with all devices or versions of the game. You should check the requirements and specifications of the mod before downloading it and make sure your device meets them.
-Q: Can I play dumb ways to die unlimited money mod online?
-A: Dumb ways to die unlimited money mod allows you to play both offline and online modes of the game. However, you should be careful when playing online, as you may get banned from the official game servers or platforms if they detect that you are using a modded version of the game. You should also respect other players and avoid cheating or exploiting the game.
-Q: How do I update dumb ways to die unlimited money mod?
-A: Dumb ways to die unlimited money mod may not be updated regularly or automatically by the mod source. Therefore, you may need to check for updates manually or download a new version of the mod whenever there is a new update for the original game. You should also backup your progress and data before updating or uninstalling the mod.
-Q: Where can I find more information about dumb ways to die unlimited money mod?
-A: You can find more information about dumb ways to die unlimited money mod by visiting the website or social media pages of the mod source. You can also read reviews, ratings, comments, and feedback from other users who have downloaded the mod before. You can also contact the mod source directly if you have any questions, issues, or suggestions regarding the mod.
- I hope you enjoyed this article and learned how to download dumb ways to die unlimited money mod. If you did, please share it with your friends and leave a comment below. Thank you for reading and have a great day!
-
-
\ No newline at end of file
diff --git a/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/models/vggface.py b/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/models/vggface.py
deleted file mode 100644
index 0a822079e3a67ae3292e8c5c413abe0d33999561..0000000000000000000000000000000000000000
--- a/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/models/vggface.py
+++ /dev/null
@@ -1,150 +0,0 @@
-
-import torch
-import torch.nn as nn
-
-
-class Vgg_face_dag(nn.Module):
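-    """VGG-Face network (VGG-16 layout); the final fc8 layer produces 2622 identity logits."""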
-
- def __init__(self):
- super(Vgg_face_dag, self).__init__()
- self.meta = {'mean': [129.186279296875, 104.76238250732422, 93.59396362304688],
- 'std': [1, 1, 1],
- 'imageSize': [224, 224, 3]}
- self.conv1_1 = nn.Conv2d(3, 64, kernel_size=[3, 3], stride=(1, 1), padding=(1, 1))
- self.relu1_1 = nn.ReLU(inplace=True)
- self.conv1_2 = nn.Conv2d(64, 64, kernel_size=[3, 3], stride=(1, 1), padding=(1, 1))
- self.relu1_2 = nn.ReLU(inplace=True)
- self.pool1 = nn.MaxPool2d(kernel_size=[2, 2], stride=[2, 2], padding=0, dilation=1, ceil_mode=False)
- self.conv2_1 = nn.Conv2d(64, 128, kernel_size=[3, 3], stride=(1, 1), padding=(1, 1))
- self.relu2_1 = nn.ReLU(inplace=True)
- self.conv2_2 = nn.Conv2d(128, 128, kernel_size=[3, 3], stride=(1, 1), padding=(1, 1))
- self.relu2_2 = nn.ReLU(inplace=True)
- self.pool2 = nn.MaxPool2d(kernel_size=[2, 2], stride=[2, 2], padding=0, dilation=1, ceil_mode=False)
- self.conv3_1 = nn.Conv2d(128, 256, kernel_size=[3, 3], stride=(1, 1), padding=(1, 1))
- self.relu3_1 = nn.ReLU(inplace=True)
- self.conv3_2 = nn.Conv2d(256, 256, kernel_size=[3, 3], stride=(1, 1), padding=(1, 1))
- self.relu3_2 = nn.ReLU(inplace=True)
- self.conv3_3 = nn.Conv2d(256, 256, kernel_size=[3, 3], stride=(1, 1), padding=(1, 1))
- self.relu3_3 = nn.ReLU(inplace=True)
- self.pool3 = nn.MaxPool2d(kernel_size=[2, 2], stride=[2, 2], padding=0, dilation=1, ceil_mode=False)
- self.conv4_1 = nn.Conv2d(256, 512, kernel_size=[3, 3], stride=(1, 1), padding=(1, 1))
- self.relu4_1 = nn.ReLU(inplace=True)
- self.conv4_2 = nn.Conv2d(512, 512, kernel_size=[3, 3], stride=(1, 1), padding=(1, 1))
- self.relu4_2 = nn.ReLU(inplace=True)
- self.conv4_3 = nn.Conv2d(512, 512, kernel_size=[3, 3], stride=(1, 1), padding=(1, 1))
- self.relu4_3 = nn.ReLU(inplace=True)
- self.pool4 = nn.MaxPool2d(kernel_size=[2, 2], stride=[2, 2], padding=0, dilation=1, ceil_mode=False)
- self.conv5_1 = nn.Conv2d(512, 512, kernel_size=[3, 3], stride=(1, 1), padding=(1, 1))
- self.relu5_1 = nn.ReLU(inplace=True)
- self.conv5_2 = nn.Conv2d(512, 512, kernel_size=[3, 3], stride=(1, 1), padding=(1, 1))
- self.relu5_2 = nn.ReLU(inplace=True)
- self.conv5_3 = nn.Conv2d(512, 512, kernel_size=[3, 3], stride=(1, 1), padding=(1, 1))
- self.relu5_3 = nn.ReLU(inplace=True)
- self.pool5 = nn.MaxPool2d(kernel_size=[2, 2], stride=[2, 2], padding=0, dilation=1, ceil_mode=False)
- self.fc6 = nn.Linear(in_features=25088, out_features=4096, bias=True)
- self.relu6 = nn.ReLU(inplace=True)
- self.dropout6 = nn.Dropout(p=0.5)
- self.fc7 = nn.Linear(in_features=4096, out_features=4096, bias=True)
- self.relu7 = nn.ReLU(inplace=True)
- self.dropout7 = nn.Dropout(p=0.5)
- self.fc8 = nn.Linear(in_features=4096, out_features=2622, bias=True)
-
- def forward(self, x0):
- x1 = self.conv1_1(x0)
- x2 = self.relu1_1(x1)
- x3 = self.conv1_2(x2)
- x4 = self.relu1_2(x3)
- x5 = self.pool1(x4)
- x6 = self.conv2_1(x5)
- x7 = self.relu2_1(x6)
- x8 = self.conv2_2(x7)
- x9 = self.relu2_2(x8)
- x10 = self.pool2(x9)
- x11 = self.conv3_1(x10)
- x12 = self.relu3_1(x11)
- x13 = self.conv3_2(x12)
- x14 = self.relu3_2(x13)
- x15 = self.conv3_3(x14)
- x16 = self.relu3_3(x15)
- x17 = self.pool3(x16)
- x18 = self.conv4_1(x17)
- x19 = self.relu4_1(x18)
- x20 = self.conv4_2(x19)
- x21 = self.relu4_2(x20)
- x22 = self.conv4_3(x21)
- x23 = self.relu4_3(x22)
- x24 = self.pool4(x23)
- x25 = self.conv5_1(x24)
- x26 = self.relu5_1(x25)
- x27 = self.conv5_2(x26)
- x28 = self.relu5_2(x27)
- x29 = self.conv5_3(x28)
- x30 = self.relu5_3(x29)
- x31_preflatten = self.pool5(x30)
- x31 = x31_preflatten.view(x31_preflatten.size(0), -1)
- x32 = self.fc6(x31)
- x33 = self.relu6(x32)
- x34 = self.dropout6(x33)
- x35 = self.fc7(x34)
- x36 = self.relu7(x35)
- x37 = self.dropout7(x36)
- x38 = self.fc8(x37)
- return x38
-
-
-def vgg_face_dag(weights_path=None, **kwargs):
- """
- load imported model instance
-
- Args:
- weights_path (str): If set, loads model weights from the given path
- """
- model = Vgg_face_dag()
- if weights_path:
- state_dict = torch.load(weights_path)
- model.load_state_dict(state_dict)
- return model
-
-
-class VGGFaceFeats(Vgg_face_dag):
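-    """Variant of Vgg_face_dag whose forward pass returns the conv1_1, conv2_1, conv3_1, conv4_1, and conv5_1 activations instead of class logits."""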
- def forward(self, x0):
- x1 = self.conv1_1(x0)
- x2 = self.relu1_1(x1)
- x3 = self.conv1_2(x2)
- x4 = self.relu1_2(x3)
- x5 = self.pool1(x4)
- x6 = self.conv2_1(x5)
- x7 = self.relu2_1(x6)
- x8 = self.conv2_2(x7)
- x9 = self.relu2_2(x8)
- x10 = self.pool2(x9)
- x11 = self.conv3_1(x10)
- x12 = self.relu3_1(x11)
- x13 = self.conv3_2(x12)
- x14 = self.relu3_2(x13)
- x15 = self.conv3_3(x14)
- x16 = self.relu3_3(x15)
- x17 = self.pool3(x16)
- x18 = self.conv4_1(x17)
- x19 = self.relu4_1(x18)
- x20 = self.conv4_2(x19)
- x21 = self.relu4_2(x20)
- x22 = self.conv4_3(x21)
- x23 = self.relu4_3(x22)
- x24 = self.pool4(x23)
- x25 = self.conv5_1(x24)
- # x26 = self.relu5_1(x25)
- # x27 = self.conv5_2(x26)
- # x28 = self.relu5_2(x27)
- # x29 = self.conv5_3(x28)
- # x30 = self.relu5_3(x29)
- # x31_preflatten = self.pool5(x30)
- # x31 = x31_preflatten.view(x31_preflatten.size(0), -1)
- # x32 = self.fc6(x31)
- # x33 = self.relu6(x32)
- # x34 = self.dropout6(x33)
- # x35 = self.fc7(x34)
- # x36 = self.relu7(x35)
- # x37 = self.dropout7(x36)
- # x38 = self.fc8(x37)
- return x1, x6, x11, x18, x25
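-
-
-if __name__ == "__main__":
-    # Minimal usage sketch (added for illustration; not part of the original
-    # file): push a random 224x224 RGB tensor through the truncated network
-    # and print the shapes of the five returned feature maps.
-    net = VGGFaceFeats()
-    feats = net(torch.randn(1, 3, 224, 224))
-    print([tuple(f.shape) for f in feats])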
diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Chessmen APK A Fun and Educational Game for All Ages - Chessmen Club.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Chessmen APK A Fun and Educational Game for All Ages - Chessmen Club.md
deleted file mode 100644
index 9095017849b5b0d02bfa4ddf5cec187443365c72..0000000000000000000000000000000000000000
--- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Chessmen APK A Fun and Educational Game for All Ages - Chessmen Club.md
+++ /dev/null
@@ -1,144 +0,0 @@
-
-Chessmen APK: A Fun and Challenging Puzzle Game for Android
- If you are looking for a new and exciting puzzle game to play on your Android device, you might want to check out Chessmen APK. This game is based on the classic board game of chess, but with a twist. Instead of playing against another player, you have to swap the positions of the white and black pieces on the board. Sounds easy, right? Well, not quite. You have to do it in as few moves as possible, while following the rules of chess. Are you up for the challenge?
- What is Chessmen APK?
- Chessmen APK is a puzzle game developed by Brodski Software, a company that specializes in creating games and apps for Android devices. The game was released in 2014 and has received positive reviews from users and critics alike. It has been downloaded over 100,000 times from the Google Play Store, where it has a rating of 4.4 out of 5 stars.
-chessmen apk Download ✯ https://gohhs.com/2uPqFU
- The concept and gameplay of Chessmen APK
- The concept of Chessmen APK is simple but ingenious. You are presented with a chessboard with white and black pieces on it. Your goal is to swap the positions of the white and black pieces, so that the white pieces are where the black pieces were, and vice versa. You can only move one piece at a time, following the rules of chess. For example, you can only move a rook horizontally or vertically, a bishop diagonally, a knight in an L-shape, etc. You can also capture an enemy piece by moving your piece to its square, but you cannot capture your own piece. You have to complete each level in as few moves as possible, which is shown by a counter at the top of the screen. The game has 60 levels of increasing difficulty, ranging from easy to expert.
- The features and benefits of Chessmen APK
- Chessmen APK is not just a fun and challenging puzzle game, but also a great way to improve your chess skills and knowledge. Here are some of the features and benefits of playing Chessmen APK:
-
-It helps you practice your chess moves and learn how each piece moves.
-It tests your logic, strategy, planning, and problem-solving skills.
-It stimulates your brain and enhances your memory and concentration.
-It offers a variety of levels and puzzles to suit different skill levels and preferences.
-It has simple and intuitive controls and graphics.
-It is free to download and play, with no ads or in-app purchases.
-
- How to download and install Chessmen APK on your device?
- If you are interested in playing Chessmen APK, you can easily download and install it on your Android device. Here are the steps to do so:
- The steps to download and install Chessmen APK
-
-Go to https://apkpure.com/chessmen-club/com.brodski.android.chessmen/download/12-APK , which is a trusted source for downloading Android apps.
-Click on the green "Download APK" button to start downloading the file.
-Once the download is complete, open the file manager app on your device and locate the downloaded file.
-Tap on the file to install it. You may need to enable "Unknown Sources" in your device settings to allow the installation of apps from unknown sources.
-Follow the on-screen instructions to complete the installation.
-Launch the app and enjoy playing Chessmen APK!
-
- The requirements and compatibility of Chessmen APK
- Before you download and install Chessmen APK, you should make sure that your device meets the following requirements and compatibility:
-
-Your device should have Android 2.3 or higher.
-Your device should have at least 10 MB of free storage space.
-Your device should have a stable internet connection to download the app.
-Your device should support touch screen and sound features.
-
- How to play and improve your skills in Chessmen APK?
- Now that you have downloaded and installed Chessmen APK, you are ready to play and improve your skills in this game. Here are some tips and tricks to help you out:
- The rules and objectives of Chessmen APK
- The rules and objectives of Chessmen APK are simple but challenging. Here are the main points to remember:
-
-You have to swap the positions of the white and black pieces on the board, so that the white pieces are where the black pieces were, and vice versa.
-You can only move one piece at a time, following the rules of chess.
-You can capture an enemy piece by moving your piece to its square, but you cannot capture your own piece.
-You have to complete each level in as few moves as possible, which is shown by a counter at the top of the screen.
-You can undo your last move by tapping on the undo button at the bottom of the screen.
-You can restart the level by tapping on the restart button at the bottom of the screen.
-You can skip a level by tapping on the skip button at the bottom of the screen, but you will lose one star for doing so.
-You can earn up to three stars for each level, depending on how many moves you used to complete it.
-You can view your progress and achievements by tapping on the menu button at the top left corner of the screen.
-
- The tips and strategies for Chessmen APK
- Chessmen APK is a game that requires logic, strategy, planning, and problem-solving skills. Here are some tips and strategies to help you master this game:
-
-Plan your moves ahead. Before you make a move, think about how it will affect the board and what moves you will need to make afterwards. Try to visualize the final position of the pieces and work backwards from there.
-Use your pieces wisely. Each piece has its own advantages and disadvantages. For example, a rook can move far and fast, but it can only move horizontally or vertically. A bishop can move diagonally, but it can only stay on one color of squares. A knight can jump over other pieces, but it has a limited range. A pawn can only move forward, but it can promote to a more powerful piece if it reaches the end of the board. A king can move in any direction, but it is vulnerable to checkmate. A queen can move in any direction, but it is valuable and should be protected. Try to use your pieces according to their strengths and weaknesses.
-Look for patterns and shortcuts. Some levels have symmetrical or repetitive patterns that can help you solve them faster. For example, if you see two identical groups of pieces on opposite sides of the board, you can swap them with each other in one move. Or if you see a row or column of pieces that are all of one color, you can move them all together in one move. Look for these patterns and shortcuts and use them to your advantage.
-Learn from your mistakes. If you get stuck or make a wrong move, don't give up or get frustrated. Instead, try to analyze what went wrong and how you can avoid it in the future. You can also use the undo button to correct your mistake or the restart button to try again from scratch. You can also skip a level if you find it too hard, but remember that you will lose one star for doing so.
-
- Why should you play Chessmen APK?
- Chessmen APK is not only a fun and challenging puzzle game, but also a great way to learn more about chess and its benefits for your brain. Here are some reasons why you should play Chessmen APK:
- The brain benefits of playing chess
- Playing chess has been proven to have many positive effects on your brain and mental health. Here are some of them:
-
-It improves your memory, concentration, and attention span. Chess requires you to remember the positions and movements of the pieces, as well as the rules and strategies of the game. This helps you to enhance your short-term and long-term memory, as well as your focus and alertness.
-It develops your logic, reasoning, and problem-solving skills. Chess involves making logical and rational decisions based on the analysis of the board and the possible outcomes of each move. This helps you to improve your critical thinking, deductive reasoning, and problem-solving skills.
-It boosts your creativity and imagination. Chess encourages you to explore different possibilities and scenarios, as well as to come up with original and innovative solutions. This helps you to stimulate your creativity and imagination, as well as your lateral thinking and divergent thinking skills.
-It enhances your emotional intelligence and social skills. Chess teaches you to be patient, disciplined, respectful, and humble. It also helps you to cope with stress, frustration, failure, and success. It also helps you to interact with other players, communicate effectively, and cooperate with others.
-
- The history and trivia of chess
- Playing chess also helps you to learn more about the history and trivia of this fascinating game. Here are some interesting facts about chess:
-
-Chess is one of the oldest and most popular games in the world. It is believed to have originated in India in the 6th century AD, and then spread to Persia, Arabia, Europe, and Asia. It has been played by kings, queens, nobles, scholars, artists, and many other famous people throughout history.
-Chess is also known as the "game of kings" or the "royal game". This is because it was often used as a way of teaching warfare, strategy, and diplomacy to royalty and nobility. It was also considered a symbol of status, intelligence, and culture.
-Chess has many variations and adaptations. There are different types of chess pieces, boards, rules, and formats. For example, there are chess variants that use three-dimensional boards, multiple players, random setups, fairy pieces, etc. There are also chess puzzles, chess problems, chess compositions, chess tournaments, chess ratings, chess engines, etc.
-Chess has inspired many other games and fields of study. For example, there are games that are based on or influenced by chess, such as checkers, draughts, shogi, xiangqi, etc. There are also fields of study that use chess as a model or a tool, such as mathematics, computer science, artificial intelligence, psychology, etc.
-
- Conclusion
- Chessmen APK is a fun and challenging puzzle game for Android devices that is based on the classic board game of chess. It is a great way to practice your chess skills and knowledge, as well as to improve your brain functions and mental health. It is also a great way to learn more about the history and trivia of chess. Chessmen APK is free to download and play from the Google Play Store or from https://apkpure.com/chessmen-club/com.brodski.android.chessmen/download/12-APK. If you are looking for a new and exciting puzzle game to play on your Android device, download Chessmen APK today and see if you can swap the positions of the white and black pieces on the board in as few moves as possible!
- Frequently Asked Questions
- Here are some frequently asked questions about Chessmen APK:
- Q: How many levels are there in Chessmen APK?
-A: There are 60 levels in Chessmen APK, ranging from easy to expert. You can view your progress and achievements by tapping on the menu button at the top left corner of the screen.
- Q: How can I earn more stars in Chessmen APK?
-A: You can earn up to three stars for each level, depending on how many moves you used to complete it. The fewer moves you use, the more stars you earn. You can also earn bonus stars by completing certain achievements, such as finishing a level without capturing any piece, finishing a level with only one piece left, etc.
- Q: What are the benefits of playing Chessmen APK?
-A: Playing Chessmen APK is not only a fun and challenging puzzle game, but also a great way to improve your chess skills and knowledge, as well as your brain functions and mental health. It helps you practice your chess moves and learn how each piece moves. It tests your logic, strategy, planning, and problem-solving skills. It stimulates your brain and enhances your memory and concentration. It boosts your creativity and imagination. It enhances your emotional intelligence and social skills. It also helps you learn more about the history and trivia of chess.
- Q: Is Chessmen APK safe to download and play?
-A: Yes, Chessmen APK is safe to download and play. It does not contain any viruses, malware, spyware, or other harmful elements. It also does not require any special permissions or access to your device. It does not collect or share any personal or sensitive information from you or your device. It does not have any ads or in-app purchases that may interfere with your gameplay or privacy.
- Q: Where can I get more information or support for Chessmen APK?
-A: If you have any questions, feedback, suggestions, or issues regarding Chessmen APK, you can contact the developer by email at brodski.software@gmail.com. You can also visit their website at https://brodski-software.de/ or their Facebook page at https://www.facebook.com/BrodskiSoftware/ for more information or support.
-
-
\ No newline at end of file
diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/College Brawl The Most Violent and Fun Game for iOS and Android Users.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/College Brawl The Most Violent and Fun Game for iOS and Android Users.md
deleted file mode 100644
index cd8958f1cb3bb710a7d9abd52b83fe5562b44cfe..0000000000000000000000000000000000000000
--- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/College Brawl The Most Violent and Fun Game for iOS and Android Users.md
+++ /dev/null
@@ -1,153 +0,0 @@
-
-College Brawl APK Apple: How to Download and Play this Violent Fighting Game
- If you are looking for a fun and exciting game that will test your fighting skills and your sense of humor, you might want to check out College Brawl. This is a beat 'em up game for adults where you have to fight your way through different levels to reclaim your belongings from a notorious gang. In this article, we will tell you what College Brawl is, how to download and install it on your Apple devices, and how to play it like a pro.
- What is College Brawl?
- A brief introduction to the game and its features
- College Brawl is a violent fighting arcade game with a college theme that was released in 2021. It is developed by Redjih, a small indie studio that also created other games such as Zombie Survival and Superhero Simulator. The game is available for both Android and iOS devices, but you need to download an APK file to install it on your Apple devices.
- College Brawl has many features that make it an entertaining and addictive game, such as:
-
-Stunning graphics and animations that create a realistic and immersive experience.
-Various characters and enemies that have different abilities and personalities.
-A wide range of weapons and items that you can use to fight or heal yourself.
-A hilarious dialogue and voice acting that will make you laugh out loud.
-A catchy soundtrack and sound effects that enhance the mood and the action.
-A simple and intuitive control system that allows you to perform combos and special moves.
-A challenging difficulty level that will keep you on your toes.
-A rewarding system that gives you coins, gems, and trophies for completing levels and achievements.
-A leaderboard and a social media integration that let you compete and share your progress with other players.
-
- The plot and the gameplay of College Brawl
- The plot of College Brawl is simple but engaging. You play as a college student who has been robbed by a gang of thugs who took everything from you, including your clothes, your money, your phone, your car, and even your girlfriend. You decide to take revenge by fighting your way through different locations such as the campus, the dorms, the cafeteria, the library, the gym, the club, and the gang's hideout. Along the way, you will encounter various enemies such as bullies, cheerleaders, nerds, jocks, teachers, security guards, bikers, strippers, bouncers, bosses, and more. You will also meet some allies who will help you or join you in your quest.
- The gameplay of College Brawl is fast-paced and fun. You have to use your fists, feet, weapons, items, and special moves to defeat your enemies. You can also interact with the environment by throwing objects or using traps. You have a health bar that depletes when you get hit or when you use items. You can restore your health by picking up food or drinks or by using medkits. You also have a rage bar that fills up when you deal or receive damage. When it is full, you can unleash a powerful attack that can wipe out multiple enemies at once. You can also collect coins, gems, and trophies that you can use to buy or upgrade your weapons, items, and skills. You can also unlock new characters and outfits by completing levels and achievements.
- How to Download College Brawl APK for Apple Devices?
- The steps to download and install College Brawl APK on iOS devices
- If you want to play College Brawl on your iPhone or iPad, you need to download and install an APK file, which is a package file format for Android applications. Here are the steps to do so:
-
-Go to a trusted website that provides College Brawl APK download links, such as [APKPure] or [APKMirror].
-Find the latest version of College Brawl APK and tap on the download button.
-Wait for the download to finish and then locate the file on your device.
-Before you install the file, you need to enable the installation of apps from unknown sources on your device. To do this, go to Settings > General > Device Management and trust the profile of the app.
-Tap on the file and follow the instructions to install it on your device.
-Launch the app and enjoy playing College Brawl.
-
- The benefits and risks of using College Brawl APK
- Using College Brawl APK has some benefits and risks that you should be aware of. Here are some of them:
-
-| Benefits | Risks |
-|---|---|
-| You can play College Brawl for free without paying any fees or subscriptions. | You may encounter some bugs or glitches that affect the performance or the security of the app. |
-| You can access all the features and content of the game without any restrictions or limitations. | You may violate the terms and conditions of the game or the app store and face legal consequences. |
-| You can update the app manually whenever a new version is available. | You may expose your device to malware or viruses that can harm your data or your privacy. |
-
- Therefore, you should use College Brawl APK at your own risk and discretion. We recommend that you only download it from reputable sources and scan it with an antivirus software before installing it.
- How to Play College Brawl on Apple Devices?
- The controls and the tips for playing College Brawl
- Playing College Brawl on your Apple devices is easy and fun. You just need to use the touch screen to control your character and perform various actions. Here are some of the basic controls and tips for playing College Brawl:
-
-To move your character, use the virtual joystick on the left side of the screen.
-To punch, kick, or use a weapon, tap on the attack button on the right side of the screen.
-To block or dodge an enemy's attack, swipe on the screen in any direction.
-To use a special move, tap on the rage button when it is full.
-To pick up or throw an object, tap on it when you are near it.
-To use an item, tap on its icon on the top of the screen.
-To pause or resume the game, tap on the menu button on the top right corner of the screen.
-To change your character, outfit, weapon, item, or skill, go to the shop menu and select what you want to buy or equip.
-
- Some tips for playing College Brawl are:
-
-Try to avoid getting surrounded by enemies. Use your movement and dodging skills to create some space and attack them one by one.
-Use different weapons and items depending on the situation. Some weapons are more effective against certain enemies than others. Some items can heal you, boost your stats, or damage your enemies.
-Use your special move wisely. It can deal a lot of damage to multiple enemies at once, but it also consumes your health. Save it for when you really need it or when you have enough health to spare.
-Complete levels and achievements to earn more coins, gems, and trophies. You can use them to buy or upgrade your weapons, items, skills, characters, and outfits. You can also unlock new levels and modes by completing certain achievements.
-Compete with other players on the leaderboard and share your progress on social media. You can see how you rank among other players based on your score, level, trophies, etc. You can also post screenshots or videos of your gameplay on Facebook, Twitter, Instagram, or YouTube and show off your skills and achievements.
-
- The challenges and the rewards of playing College Brawl
- Playing College Brawl is not a walk in the park. You will face many challenges and obstacles that will test your patience and perseverance. Some of the challenges are:
-
-The enemies are tough and smart. They will not hesitate to gang up on you, use weapons or items, or call for backup. They will also adapt to your moves and try to counter them.
-The levels are long and hard. You will have to fight through many waves of enemies and bosses before you can reach the end. You will also have to deal with environmental hazards and traps that can hurt you or help you.
-The game is unpredictable and random. You never know what you will encounter in each level. The enemies, weapons, items, and objects can change every time you play. You will also face some surprises and twists that will keep you on your toes.
-
- However, playing College Brawl is also very rewarding and satisfying. You will enjoy many benefits and advantages that will make you feel proud and happy. Some of the rewards are:
-
-The game is fun and funny. You will have a blast beating up your enemies and listening to their witty remarks and reactions. You will also laugh at the absurd situations and scenarios that you will encounter.
-The game is creative and original. You will appreciate the unique and diverse characters, enemies, weapons, items, and locations that you will see in the game. You will also admire the quality and the style of the graphics, animations, sounds, and music.
-The game is challenging and rewarding. You will feel a sense of accomplishment and satisfaction when you overcome the difficulties and complete the levels. You will also enjoy the rewards and the recognition that you will receive for your efforts.
-
- Conclusion
- College Brawl is a violent fighting arcade game with a college theme that is available for both Android and iOS devices. You can download and install it on your Apple devices by using an APK file from a trusted website. You can play it by using the touch screen to control your character and perform various actions. You can also customize your character, buy or upgrade your weapons, items, and skills, compete with other players, and share your progress on social media. College Brawl is a fun and exciting game that will test your fighting skills and your sense of humor. If you are looking for a game that will make you laugh and sweat at the same time, you should give College Brawl a try.
- FAQs
- Is College Brawl free to play?
- Yes, College Brawl is free to play. However, it contains ads and in-app purchases that can enhance your gaming experience.
- Is College Brawl suitable for children?
- No, College Brawl is not suitable for children. It contains graphic violence, blood, gore, profanity, sexual references, alcohol, drugs, gambling, etc. It is rated 17+ by the app store.
- How to update College Brawl APK on Apple devices?
- To update College Brawl APK on your Apple devices, you need to download and install the latest version of the APK file from the same website that you used before. You may need to delete the old version of the app before installing the new one.
- How to uninstall College Brawl APK on Apple devices?
- To uninstall College Brawl APK on your Apple devices, you need to go to Settings > General > Device Management and delete the profile of the app. Then, you need to go to your home screen and tap and hold on the app icon until it wiggles. Then, tap on the X button to delete it.
- Where to find more information about College Brawl?
- You can find more information about College Brawl by visiting its official website [here] or by following its social media accounts on Facebook [here], Twitter [here], Instagram [here], or YouTube [here].
-
-
\ No newline at end of file
diff --git a/spaces/fffffu/bing/src/lib/isomorphic/browser.ts b/spaces/fffffu/bing/src/lib/isomorphic/browser.ts
deleted file mode 100644
index de125b1f1786d1618cb1ff47f403d76c6784f4ce..0000000000000000000000000000000000000000
--- a/spaces/fffffu/bing/src/lib/isomorphic/browser.ts
+++ /dev/null
@@ -1,11 +0,0 @@
-'use client'
-
-const debug = console.info.bind(console)
-
-class WebSocketAlias extends WebSocket {
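-  // Drop any extra constructor arguments (e.g. protocols) and always connect with the URL alone.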
- constructor(address: string | URL, ...args: any) {
- super(address)
- }
-}
-
-export default { fetch, WebSocket: WebSocketAlias, debug }
diff --git a/spaces/flax-community/Multilingual-VQA/apps/model/__init__.py b/spaces/flax-community/Multilingual-VQA/apps/model/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/flax-community/Multilingual-VQA/sections/pretraining/intro.md b/spaces/flax-community/Multilingual-VQA/sections/pretraining/intro.md
deleted file mode 100644
index 3d2f561b4dcc824869b070b5a57366dbff7a537f..0000000000000000000000000000000000000000
--- a/spaces/flax-community/Multilingual-VQA/sections/pretraining/intro.md
+++ /dev/null
@@ -1 +0,0 @@
-We follow an approach similar to [VisualBERT](https://arxiv.org/abs/1908.03557). Instead of using a FasterRCNN to get image features, we use a CLIP Vision (ViT transformer) encoder. The pre-training task is text-only MLM (Masked Language Modeling). We mask only the text tokens and try to predict the masked tokens. The VisualBERT authors also use a sentence-image matching task where two captions are matched against an image, but we skip this for the sake of simplicity.
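For readers who want to see what "mask only the text tokens" can look like in practice, here is a minimal PyTorch sketch of BERT-style masking restricted to the text input. The function name, the 15% masking rate, and the 80/10/10 replacement split are standard MLM defaults assumed here for illustration; they are not taken from the project's actual training code.

```python
import torch

def mask_text_tokens(input_ids, special_tokens_mask, mask_token_id, vocab_size, mlm_prob=0.15):
    """BERT-style masking applied to the text tokens only; image features from the
    CLIP-ViT encoder are handled separately and are never masked."""
    labels = input_ids.clone()
    probs = torch.full(input_ids.shape, mlm_prob)
    probs.masked_fill_(special_tokens_mask.bool(), 0.0)   # never mask [CLS]/[SEP]/padding
    masked = torch.bernoulli(probs).bool()
    labels[~masked] = -100                                 # loss is computed on masked positions only
    input_ids = input_ids.clone()
    replaced = torch.bernoulli(torch.full(input_ids.shape, 0.8)).bool() & masked
    input_ids[replaced] = mask_token_id                    # 80% of masked positions -> [MASK]
    randomized = torch.bernoulli(torch.full(input_ids.shape, 0.5)).bool() & masked & ~replaced
    input_ids[randomized] = torch.randint(vocab_size, input_ids.shape)[randomized]  # 10% -> random token
    return input_ids, labels                               # remaining 10% stay unchanged
```

In a VisualBERT-style setup, the masked token embeddings would then be fed to the transformer together with the unmasked CLIP-ViT image features, and a cross-entropy loss with `ignore_index=-100` keeps the objective on the masked text positions only.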
diff --git a/spaces/flowers-team/SocialAISchool/README_old.md b/spaces/flowers-team/SocialAISchool/README_old.md
deleted file mode 100644
index 9b0f6c57ff7bfc062d706d176e47276f096d98ac..0000000000000000000000000000000000000000
--- a/spaces/flowers-team/SocialAISchool/README_old.md
+++ /dev/null
@@ -1,215 +0,0 @@
-# Embodied acting and speaking
-
-This code was based on these repositories:
-
-[`gym-minigrid`](https://github.com/maximecb/gym-minigrid)
-
-[`torch-ac`](https://github.com/lcswillems/torch-ac)
-
-[`rl-starter-files`](https://github.com/lcswillems/rl-starter-files)
-
-## Features
-
-- **Script to train**, including:
- - Log in txt, CSV and Tensorboard
- - Save model
- - Stop and restart training
- - Use A2C or PPO algorithms
-- **Script to visualize**, including:
- - Act by sampling or argmax
- - Save as Gif
-- **Script to evaluate**, including:
- - Act by sampling or argmax
- - List the worst performed episodes
-
-## Installation
-
-### Option 1
-
-[comment]: <> (todo: add this part)
-[comment]: <> (Clone the repo)
-
-[comment]: <> (```)
-
-[comment]: <> (git clone https://gitlab.inria.fr/gkovac/act-and-speak.git)
-
-[comment]: <> (```)
-Create and activate your conda env
-```
-conda create --name act_and_speak python=3.6
-conda activate act_and_speak
-```
-Install the required packages
-```
-pip install -r requirements.txt
-pip install -e torch-ac
-pip install -e gym-minigrid --use-feature=2020-resolver
-```
-
-### Option 2
-Alternatively, use the conda yaml file:
-```
-TODO:
-```
-
-## Example of use
-
-Train, visualize and evaluate an agent on the `MiniGrid-DoorKey-5x5-v0` environment:
-
-
-
-1. Train the agent on the `MiniGrid-DoorKey-5x5-v0` environment with PPO algorithm:
-
-```
-python3 -m scripts.train --algo ppo --env MiniGrid-DoorKey-5x5-v0 --model DoorKey --save-interval 10 --frames 80000
-```
-
-
-
-2. Visualize agent's behavior:
-
-```
-python3 -m scripts.visualize --env MiniGrid-DoorKey-5x5-v0 --model DoorKey
-```
-
-
-
-3. Evaluate agent's performance:
-
-```
-python3 -m scripts.evaluate --env MiniGrid-DoorKey-5x5-v0 --model DoorKey
-```
-
-
-
-**Note:** More details on the commands are given below.
-
-## Other examples
-
-### Handle textual instructions
-
-In the `GoToDoor` environment, the agent receives an image along with a textual instruction. To handle the latter, add `--text` to the command:
-
-```
-python3 -m scripts.train --algo ppo --env MiniGrid-GoToDoor-5x5-v0 --model GoToDoor --text --save-interval 10 --frames 1000000
-```
-
-
-
-### Handle dialogue with a multi-headed agent
-
-In the `GoToDoorTalk` environment, the agent receives an image along with the dialogue. To handle the latter, add `--dialogue` and, to use the multi-headed agent, add `--multi-headed-agent` to the command:
-
-```
-python3 -m scripts.train --algo ppo --env MiniGrid-GoToDoorTalk-5x5-v0 --model GoToDoorMultiHead --dialogue --multi-headed-agent --save-interval 10 --frames 1000000
-```
-
-### Add memory
-
-In the `RedBlueDoors` environment, the agent has to open the red door then the blue one. To solve it efficiently, when it opens the red door, it has to remember it. To add memory to the agent, add `--recurrence X` to the command:
-
-```
-python3 -m scripts.train --algo ppo --env MiniGrid-RedBlueDoors-6x6-v0 --model RedBlueDoors --recurrence 4 --save-interval 10 --frames 1000000
-```
-
-
-
-## Files
-
-This package contains:
-- scripts to:
- - train an agent \
- in `script/train.py` ([more details](#scripts-train))
- - visualize agent's behavior \
- in `script/visualize.py` ([more details](#scripts-visualize))
- - evaluate agent's performances \
- in `script/evaluate.py` ([more details](#scripts-evaluate))
-- a default agent's model \
-in `model.py` ([more details](#model))
-- utilitarian classes and functions used by the scripts \
-in `utils`
-
-These files are suited for [`gym-minigrid`](https://github.com/maximecb/gym-minigrid) environments and [`torch-ac`](https://github.com/lcswillems/torch-ac) RL algorithms. They are easy to adapt to other environments and RL algorithms by modifying:
-- `model.py`
-- `utils/format.py`
-
-### scripts/train.py
-
-An example of use:
-
-```bash
-python3 -m scripts.train --algo ppo --env MiniGrid-DoorKey-5x5-v0 --model DoorKey --save-interval 10 --frames 80000
-```
-
-The script loads the model in `storage/DoorKey` or creates it if it doesn't exist, then trains it with the PPO algorithm on the MiniGrid DoorKey environment, and saves it every 10 updates in `storage/DoorKey`. It stops after 80 000 frames.
-
-**Note:** You can define a different storage location in the environment variable `PROJECT_STORAGE`.
-
-More generally, the script has 2 required arguments:
-- `--algo ALGO`: name of the RL algorithm used to train
-- `--env ENV`: name of the environment to train on
-
-and a bunch of optional arguments among which:
-- `--recurrence N`: gradient will be backpropagated over N timesteps. By default, N = 1. If N > 1, an LSTM is added to the model to provide memory.
-- `--text`: a GRU is added to the model to handle text input.
-- ... (see more using `--help`)
-
-During training, logs are printed in your terminal (and saved in text and CSV format):
-
-
-
-**Note:** `U` gives the update number, `F` the total number of frames, `FPS` the number of frames per second, `D` the total duration, `rR:μσmM` the mean, std, min and max reshaped return per episode, `F:μσmM` the mean, std, min and max number of frames per episode, `H` the entropy, `V` the value, `pL` the policy loss, `vL` the value loss and `∇` the gradient norm.
-
-During training, logs are also plotted in Tensorboard:
-
-
-
-### scripts/visualize.py
-
-An example of use:
-
-```
-python3 -m scripts.visualize --env MiniGrid-DoorKey-5x5-v0 --model DoorKey
-```
-
-
-
-In this use case, the script displays how the model in `storage/DoorKey` behaves on the MiniGrid DoorKey environment.
-
-More generally, the script has 2 required arguments:
-- `--env ENV`: name of the environment to act on.
-- `--model MODEL`: name of the trained model.
-
-and a bunch of optional arguments among which:
-- `--argmax`: select the action with highest probability
-- ... (see more using `--help`)
-
-### scripts/evaluate.py
-
-An example of use:
-
-```
-python3 -m scripts.evaluate --env MiniGrid-DoorKey-5x5-v0 --model DoorKey
-```
-
-
-
-In this use case, the script prints in the terminal the performance of the model in `storage/DoorKey` over 100 episodes.
-
-More generally, the script has 2 required arguments:
-- `--env ENV`: name of the environment to act on.
-- `--model MODEL`: name of the trained model.
-
-and a bunch of optional arguments among which:
-- `--episodes N`: number of episodes of evaluation. By default, N = 100.
-- ... (see more using `--help`)
-
-### model.py
-
-The default model is described by the following schema:
-
-
-
-By default, the memory part (in red) and the language part (in blue) are disabled. They can be enabled by setting the `use_memory` and `use_text` parameters of the model constructor to `True`, as illustrated in the sketch below.
-
-This model can be easily adapted to your needs.
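To make the flag-gated design concrete, here is a self-contained toy sketch of how `use_memory` and `use_text` can switch the memory and language branches on. It is illustrative only; the dimensions, names, and the 7-action output are assumptions and not the project's actual `model.py`.

```python
import torch
import torch.nn as nn

class TinyACModel(nn.Module):
    """Toy actor-critic model: the LSTM memory and GRU text branches are optional."""
    def __init__(self, n_actions=7, image_dim=64, text_vocab=100, use_memory=False, use_text=False):
        super().__init__()
        self.use_memory = use_memory
        self.use_text = use_text
        self.image_enc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 7 * 7, image_dim), nn.ReLU())
        embedding_dim = image_dim
        if use_memory:                       # the memory part (red in the schema)
            self.memory_rnn = nn.LSTMCell(image_dim, image_dim)
        if use_text:                         # the language part (blue in the schema)
            self.word_emb = nn.Embedding(text_vocab, 32)
            self.text_rnn = nn.GRU(32, 32, batch_first=True)
            embedding_dim += 32
        self.actor = nn.Linear(embedding_dim, n_actions)
        self.critic = nn.Linear(embedding_dim, 1)

    def forward(self, image, text=None, memory=None):
        x = self.image_enc(image)                          # (B, image_dim)
        if self.use_memory:
            h, c = self.memory_rnn(x, memory)              # memory=None -> zero initial state
            x, memory = h, (h, c)
        if self.use_text:
            _, h_t = self.text_rnn(self.word_emb(text))    # h_t: (1, B, 32)
            x = torch.cat([x, h_t.squeeze(0)], dim=1)
        return self.actor(x), self.critic(x), memory

# model = TinyACModel(use_memory=True, use_text=True)  # both optional branches enabled
```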
diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/cnn/bricks/wrappers.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/cnn/bricks/wrappers.py
deleted file mode 100644
index 8aebf67bf52355a513f21756ee74fe510902d075..0000000000000000000000000000000000000000
--- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/cnn/bricks/wrappers.py
+++ /dev/null
@@ -1,180 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-r"""Modified from https://github.com/facebookresearch/detectron2/blob/master/detectron2/layers/wrappers.py # noqa: E501
-
-Wrap some nn modules to support empty tensor input. Currently, these wrappers
-are mainly used in mask heads like fcn_mask_head and maskiou_heads since mask
-heads are trained on only positive RoIs.
-"""
-import math
-
-import torch
-import torch.nn as nn
-from torch.nn.modules.utils import _pair, _triple
-
-from .registry import CONV_LAYERS, UPSAMPLE_LAYERS
-
-if torch.__version__ == 'parrots':
- TORCH_VERSION = torch.__version__
-else:
- # torch.__version__ could be 1.3.1+cu92, we only need the first two
- # for comparison
- TORCH_VERSION = tuple(int(x) for x in torch.__version__.split('.')[:2])
-
-
-def obsolete_torch_version(torch_version, version_threshold):
- return torch_version == 'parrots' or torch_version <= version_threshold
-
-
-class NewEmptyTensorOp(torch.autograd.Function):
-
- @staticmethod
- def forward(ctx, x, new_shape):
- ctx.shape = x.shape
- return x.new_empty(new_shape)
-
- @staticmethod
- def backward(ctx, grad):
- shape = ctx.shape
- return NewEmptyTensorOp.apply(grad, shape), None
-
-
-@CONV_LAYERS.register_module('Conv', force=True)
-class Conv2d(nn.Conv2d):
-
- def forward(self, x):
- if x.numel() == 0 and obsolete_torch_version(TORCH_VERSION, (1, 4)):
- out_shape = [x.shape[0], self.out_channels]
- for i, k, p, s, d in zip(x.shape[-2:], self.kernel_size,
- self.padding, self.stride, self.dilation):
- o = (i + 2 * p - (d * (k - 1) + 1)) // s + 1
- out_shape.append(o)
- empty = NewEmptyTensorOp.apply(x, out_shape)
- if self.training:
- # produce dummy gradient to avoid DDP warning.
- dummy = sum(x.view(-1)[0] for x in self.parameters()) * 0.0
- return empty + dummy
- else:
- return empty
-
- return super().forward(x)
-
-
-@CONV_LAYERS.register_module('Conv3d', force=True)
-class Conv3d(nn.Conv3d):
-
- def forward(self, x):
- if x.numel() == 0 and obsolete_torch_version(TORCH_VERSION, (1, 4)):
- out_shape = [x.shape[0], self.out_channels]
- for i, k, p, s, d in zip(x.shape[-3:], self.kernel_size,
- self.padding, self.stride, self.dilation):
- o = (i + 2 * p - (d * (k - 1) + 1)) // s + 1
- out_shape.append(o)
- empty = NewEmptyTensorOp.apply(x, out_shape)
- if self.training:
- # produce dummy gradient to avoid DDP warning.
- dummy = sum(x.view(-1)[0] for x in self.parameters()) * 0.0
- return empty + dummy
- else:
- return empty
-
- return super().forward(x)
-
-
-@CONV_LAYERS.register_module()
-@CONV_LAYERS.register_module('deconv')
-@UPSAMPLE_LAYERS.register_module('deconv', force=True)
-class ConvTranspose2d(nn.ConvTranspose2d):
-
- def forward(self, x):
- if x.numel() == 0 and obsolete_torch_version(TORCH_VERSION, (1, 4)):
- out_shape = [x.shape[0], self.out_channels]
- for i, k, p, s, d, op in zip(x.shape[-2:], self.kernel_size,
- self.padding, self.stride,
- self.dilation, self.output_padding):
- out_shape.append((i - 1) * s - 2 * p + (d * (k - 1) + 1) + op)
- empty = NewEmptyTensorOp.apply(x, out_shape)
- if self.training:
- # produce dummy gradient to avoid DDP warning.
- dummy = sum(x.view(-1)[0] for x in self.parameters()) * 0.0
- return empty + dummy
- else:
- return empty
-
- return super().forward(x)
-
-
-@CONV_LAYERS.register_module()
-@CONV_LAYERS.register_module('deconv3d')
-@UPSAMPLE_LAYERS.register_module('deconv3d', force=True)
-class ConvTranspose3d(nn.ConvTranspose3d):
-
- def forward(self, x):
- if x.numel() == 0 and obsolete_torch_version(TORCH_VERSION, (1, 4)):
- out_shape = [x.shape[0], self.out_channels]
- for i, k, p, s, d, op in zip(x.shape[-3:], self.kernel_size,
- self.padding, self.stride,
- self.dilation, self.output_padding):
- out_shape.append((i - 1) * s - 2 * p + (d * (k - 1) + 1) + op)
- empty = NewEmptyTensorOp.apply(x, out_shape)
- if self.training:
- # produce dummy gradient to avoid DDP warning.
- dummy = sum(x.view(-1)[0] for x in self.parameters()) * 0.0
- return empty + dummy
- else:
- return empty
-
- return super().forward(x)
-
-
-class MaxPool2d(nn.MaxPool2d):
-
- def forward(self, x):
- # PyTorch 1.9 does not support empty tensor inference yet
- if x.numel() == 0 and obsolete_torch_version(TORCH_VERSION, (1, 9)):
- out_shape = list(x.shape[:2])
- for i, k, p, s, d in zip(x.shape[-2:], _pair(self.kernel_size),
- _pair(self.padding), _pair(self.stride),
- _pair(self.dilation)):
- o = (i + 2 * p - (d * (k - 1) + 1)) / s + 1
- o = math.ceil(o) if self.ceil_mode else math.floor(o)
- out_shape.append(o)
- empty = NewEmptyTensorOp.apply(x, out_shape)
- return empty
-
- return super().forward(x)
-
-
-class MaxPool3d(nn.MaxPool3d):
-
- def forward(self, x):
- # PyTorch 1.9 does not support empty tensor inference yet
- if x.numel() == 0 and obsolete_torch_version(TORCH_VERSION, (1, 9)):
- out_shape = list(x.shape[:2])
- for i, k, p, s, d in zip(x.shape[-3:], _triple(self.kernel_size),
- _triple(self.padding),
- _triple(self.stride),
- _triple(self.dilation)):
- o = (i + 2 * p - (d * (k - 1) + 1)) / s + 1
- o = math.ceil(o) if self.ceil_mode else math.floor(o)
- out_shape.append(o)
- empty = NewEmptyTensorOp.apply(x, out_shape)
- return empty
-
- return super().forward(x)
-
-
-class Linear(torch.nn.Linear):
-
- def forward(self, x):
- # empty tensor forward of Linear layer is supported in Pytorch 1.6
- if x.numel() == 0 and obsolete_torch_version(TORCH_VERSION, (1, 5)):
- out_shape = [x.shape[0], self.out_features]
- empty = NewEmptyTensorOp.apply(x, out_shape)
- if self.training:
- # produce dummy gradient to avoid DDP warning.
- dummy = sum(x.view(-1)[0] for x in self.parameters()) * 0.0
- return empty + dummy
- else:
- return empty
-
- return super().forward(x)
diff --git a/spaces/giorgiolatour/aqiprediction/my_functions.py b/spaces/giorgiolatour/aqiprediction/my_functions.py
deleted file mode 100644
index ed0da2b0da0a8bf85ae39cb060d5d7d4cb5ab210..0000000000000000000000000000000000000000
--- a/spaces/giorgiolatour/aqiprediction/my_functions.py
+++ /dev/null
@@ -1,107 +0,0 @@
-import requests
-import pandas as pd
-import datetime
-
-"""
-GET coordinates for a given zip code from OpenWeather. Returns the
-geocoding response as a dict containing the latitude ('lat') and longitude ('lon').
-"""
-def getCoords(zip_code: str, api_key: str) -> dict:
- geo_loc_url = f'http://api.openweathermap.org/geo/1.0/zip'
- params = {'zip': zip_code, 'appid': api_key}
-
- geo_loc_response = requests.get(geo_loc_url, params=params)
- geo_loc_response_json = geo_loc_response.json()
-
- return dict(geo_loc_response_json)
-
-"""
-GET AQI data for a given date range and coordinate. Returns
-Pandas DataFrame.
-"""
-def getAQI(start_date: datetime.datetime, end_date: datetime.datetime, lat: str, lon: str, api_key: str, start_date_id: int=0) -> pd.DataFrame:
- start_unix = int(datetime.datetime.timestamp(start_date))
- end_unix = int(datetime.datetime.timestamp(end_date))
-
- aqi_url = 'http://api.openweathermap.org/data/2.5/air_pollution/history'
- params = {'lat': lat, 'lon': lon, 'start': start_unix, 'end': end_unix, 'appid': api_key}
-
- aqi_response = requests.get(aqi_url, params=params)
- aqi_resp_json = aqi_response.json()
-
- """
- Extract data from AQI response.
- """
- coord = aqi_resp_json['coord'] # latitude and longitude from aqi response
- dates = [datetime.datetime.fromtimestamp(x['dt']) for x in aqi_resp_json['list']]
- aqis = [x['main']['aqi'] for x in aqi_resp_json['list']]
- pollutants = [x['components'] for x in aqi_resp_json['list']]
-
- data = pd.DataFrame(pollutants)
- data['datetime'] = dates
- data['lat'] = coord['lat']
- data['lon'] = coord['lon']
- data['aqi'] = aqis
- data['id'] = range(start_date_id, start_date_id+len(data['aqi']))
-
- return data
-
-
-"""
-Takes in a dataframe and cleans it according to needs. Sometimes we get
-duplicate entries in the AQI data and these need to be fixed. For example,
-these two rows:
-327.11,0.0,17.82,67.23,6.26,10.14,12.73,1.55,2021-11-07 01:00:00,2021-11-07,41.8798,-87.6285,2,8256
-317.1,0.0,15.94,67.23,6.26,9.71,11.87,1.28,2021-11-07 01:00:00,2021-11-07,41.8798,-87.6285,1,8257
-are from the same timestamp. We're dealing with thousands of data points,
-so just keep the first one and discard any duplicates.
-"""
-def cleanData(df: pd.DataFrame, start_date_id: int) -> pd.DataFrame:
- duplicates = df[df.duplicated(subset=['datetime'])]
- print('Duplicates: ', duplicates)
-
- df = df.drop_duplicates(subset=['datetime'], keep='first')
-
- df = df.drop(columns=['id'])
- df['id'] = range(start_date_id, start_date_id+len(df['aqi']))
-
- df = df.fillna(method='ffill')
-
- return df
-
-
-"""
-Perform feature engineering on the historical data and
-drop the historical features. Should have the future timestamps
-in there when calling this function so you only have to do it once.
-"""
-def createFeatures(data: pd.DataFrame) -> pd.DataFrame:
- df = data.copy()
- # add date features
- df['hour'] = df.index.hour
- df['dayofweek'] = df.index.dayofweek
- df['quarter'] = df.index.quarter
- df['month'] = df.index.month
- df['year'] = df.index.year
- df['dayofyear'] = df.index.dayofyear
- df['dayofmonth'] = df.index.day
- df['weekofyear'] = df.index.isocalendar().week.astype('int')
-
- # add lag features
- features_to_lag = ['co', 'no', 'no2', 'o3', 'so2', 'pm2_5', 'pm10', 'nh3', 'aqi']
-
- for feature in features_to_lag:
- # lag feature by 3 days
- new_feature_name = feature + '_lag3d'
- df[new_feature_name] = df[feature].shift(freq='3D', axis=0)
-
-
- window = 24 # hours
- df['aqi_max_lag_3d'] = df['aqi'].rolling(window=window).agg(['max']).shift(freq='3D', axis=0)
- df['aqi_mean_lag_3d'] = df['aqi'].rolling(window=window).agg(['mean']).shift(freq='3D', axis=0)
- df['aqi_std_lag_3d'] = df['aqi'].rolling(window=window).agg(['std']).shift(freq='3D', axis=0)
-
- # drop the historical features
- df = df.drop(columns=['co', 'no', 'no2', 'o3', 'so2', 'pm2_5', 'pm10', 'nh3'])
-
- return df
\ No newline at end of file
diff --git a/spaces/gradio/HuBERT/fairseq/models/speech_to_text/berard.py b/spaces/gradio/HuBERT/fairseq/models/speech_to_text/berard.py
deleted file mode 100644
index c505e3acaa84e5f3263ccbfaf9556f77123f09fc..0000000000000000000000000000000000000000
--- a/spaces/gradio/HuBERT/fairseq/models/speech_to_text/berard.py
+++ /dev/null
@@ -1,606 +0,0 @@
-#!/usr/bin/env python3
-
-from ast import literal_eval
-from typing import List, Tuple
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from fairseq import checkpoint_utils, utils
-from fairseq.data.data_utils import lengths_to_padding_mask
-from fairseq.models import (
- FairseqEncoder,
- FairseqEncoderDecoderModel,
- FairseqIncrementalDecoder,
- register_model,
- register_model_architecture,
-)
-
-
-@register_model("s2t_berard")
-class BerardModel(FairseqEncoderDecoderModel):
- """Implementation of a model similar to https://arxiv.org/abs/1802.04200
-
- Paper title: End-to-End Automatic Speech Translation of Audiobooks
- An implementation is available in tensorflow at
- https://github.com/eske/seq2seq
- Relevant files in this implementation are the config
- (https://github.com/eske/seq2seq/blob/master/config/LibriSpeech/AST.yaml)
- and the model code
- (https://github.com/eske/seq2seq/blob/master/translate/models.py).
- The encoder and decoder try to be close to the original implementation.
- The attention is an MLP as in Bahdanau et al.
- (https://arxiv.org/abs/1409.0473).
- There is no state initialization by averaging the encoder outputs.
- """
-
- def __init__(self, encoder, decoder):
- super().__init__(encoder, decoder)
-
- @staticmethod
- def add_args(parser):
- parser.add_argument(
- "--input-layers",
- type=str,
- metavar="EXPR",
- help="List of linear layer dimensions. These "
- "layers are applied to the input features and "
- "are followed by tanh and possibly dropout.",
- )
- parser.add_argument(
- "--dropout",
- type=float,
- metavar="D",
- help="Dropout probability to use in the encoder/decoder. "
- "Note that this parameters control dropout in various places, "
- "there is no fine-grained control for dropout for embeddings "
- "vs LSTM layers for example.",
- )
- parser.add_argument(
- "--in-channels",
- type=int,
- metavar="N",
- help="Number of encoder input channels. " "Typically value is 1.",
- )
- parser.add_argument(
- "--conv-layers",
- type=str,
- metavar="EXPR",
- help="List of conv layers " "(format: (channels, kernel, stride)).",
- )
- parser.add_argument(
- "--num-blstm-layers",
- type=int,
- metavar="N",
- help="Number of encoder bi-LSTM layers.",
- )
- parser.add_argument(
- "--lstm-size", type=int, metavar="N", help="LSTM hidden size."
- )
- parser.add_argument(
- "--decoder-embed-dim",
- type=int,
- metavar="N",
- help="Embedding dimension of the decoder target tokens.",
- )
- parser.add_argument(
- "--decoder-hidden-dim",
- type=int,
- metavar="N",
- help="Decoder LSTM hidden dimension.",
- )
- parser.add_argument(
- "--decoder-num-layers",
- type=int,
- metavar="N",
- help="Number of decoder LSTM layers.",
- )
- parser.add_argument(
- "--attention-dim",
- type=int,
- metavar="N",
- help="Hidden layer dimension in MLP attention.",
- )
- parser.add_argument(
- "--output-layer-dim",
- type=int,
- metavar="N",
- help="Hidden layer dim for linear layer prior to output projection.",
- )
- parser.add_argument(
- "--load-pretrained-encoder-from",
- type=str,
- metavar="STR",
- help="model to take encoder weights from (for initialization)",
- )
- parser.add_argument(
- "--load-pretrained-decoder-from",
- type=str,
- metavar="STR",
- help="model to take decoder weights from (for initialization)",
- )
-
- @classmethod
- def build_encoder(cls, args, task):
- encoder = BerardEncoder(
- input_layers=literal_eval(args.input_layers),
- conv_layers=literal_eval(args.conv_layers),
- in_channels=args.input_channels,
- input_feat_per_channel=args.input_feat_per_channel,
- num_blstm_layers=args.num_blstm_layers,
- lstm_size=args.lstm_size,
- dropout=args.dropout,
- )
- if getattr(args, "load_pretrained_encoder_from", None):
- encoder = checkpoint_utils.load_pretrained_component_from_model(
- component=encoder, checkpoint=args.load_pretrained_encoder_from
- )
- return encoder
-
- @classmethod
- def build_decoder(cls, args, task):
- decoder = LSTMDecoder(
- dictionary=task.target_dictionary,
- embed_dim=args.decoder_embed_dim,
- num_layers=args.decoder_num_layers,
- hidden_size=args.decoder_hidden_dim,
- dropout=args.dropout,
- encoder_output_dim=2 * args.lstm_size, # bidirectional
- attention_dim=args.attention_dim,
- output_layer_dim=args.output_layer_dim,
- )
- if getattr(args, "load_pretrained_decoder_from", None):
- decoder = checkpoint_utils.load_pretrained_component_from_model(
- component=decoder, checkpoint=args.load_pretrained_decoder_from
- )
- return decoder
-
- @classmethod
- def build_model(cls, args, task):
- """Build a new model instance."""
- encoder = cls.build_encoder(args, task)
- decoder = cls.build_decoder(args, task)
-
- return cls(encoder, decoder)
-
- def get_normalized_probs(self, net_output, log_probs, sample=None):
- # net_output['encoder_out'] is a (B, T, D) tensor
- lprobs = super().get_normalized_probs(net_output, log_probs, sample)
- # lprobs is a (B, T, D) tensor
- lprobs.batch_first = True
- return lprobs
-
-
-class BerardEncoder(FairseqEncoder):
- def __init__(
- self,
- input_layers: List[int],
- conv_layers: List[Tuple[int]],
- in_channels: int,
- input_feat_per_channel: int,
- num_blstm_layers: int,
- lstm_size: int,
- dropout: float,
- ):
- """
- Args:
- input_layers: list of linear layer dimensions. These layers are
- applied to the input features and are followed by tanh and
- possibly dropout.
- conv_layers: list of conv2d layer configurations. A configuration is
- a tuple (out_channels, conv_kernel_size, stride).
- in_channels: number of input channels.
- input_feat_per_channel: number of input features per channel. These
- are speech features, typically 40 or 80.
- num_blstm_layers: number of bidirectional LSTM layers.
- lstm_size: size of the LSTM hidden (and cell) size.
- dropout: dropout probability. Dropout can be applied after the
- linear layers and LSTM layers but not to the convolutional
- layers.
- """
- super().__init__(None)
-
- self.input_layers = nn.ModuleList()
- in_features = input_feat_per_channel
- for out_features in input_layers:
- if dropout > 0:
- self.input_layers.append(
- nn.Sequential(
- nn.Linear(in_features, out_features), nn.Dropout(p=dropout)
- )
- )
- else:
- self.input_layers.append(nn.Linear(in_features, out_features))
- in_features = out_features
-
- self.in_channels = in_channels
- self.input_dim = input_feat_per_channel
- self.conv_kernel_sizes_and_strides = []
- self.conv_layers = nn.ModuleList()
- lstm_input_dim = input_layers[-1]
- for conv_layer in conv_layers:
- out_channels, conv_kernel_size, conv_stride = conv_layer
- self.conv_layers.append(
- nn.Conv2d(
- in_channels,
- out_channels,
- conv_kernel_size,
- stride=conv_stride,
- padding=conv_kernel_size // 2,
- )
- )
- self.conv_kernel_sizes_and_strides.append((conv_kernel_size, conv_stride))
- in_channels = out_channels
- lstm_input_dim //= conv_stride
-
- lstm_input_dim *= conv_layers[-1][0]
- self.lstm_size = lstm_size
- self.num_blstm_layers = num_blstm_layers
- self.lstm = nn.LSTM(
- input_size=lstm_input_dim,
- hidden_size=lstm_size,
- num_layers=num_blstm_layers,
- dropout=dropout,
- bidirectional=True,
- )
- self.output_dim = 2 * lstm_size # bidirectional
- if dropout > 0:
- self.dropout = nn.Dropout(p=dropout)
- else:
- self.dropout = None
-
- def forward(self, src_tokens, src_lengths=None, **kwargs):
- """
- Args
- src_tokens: padded tensor (B, T, C * feat)
- src_lengths: tensor of original lengths of input utterances (B,)
- """
- bsz, max_seq_len, _ = src_tokens.size()
- # (B, C, T, feat)
- x = (
- src_tokens.view(bsz, max_seq_len, self.in_channels, self.input_dim)
- .transpose(1, 2)
- .contiguous()
- )
-
- for input_layer in self.input_layers:
- x = input_layer(x)
- x = torch.tanh(x)
-
- for conv_layer in self.conv_layers:
- x = conv_layer(x)
-
- bsz, _, output_seq_len, _ = x.size()
-
- # (B, C, T, feat) -> (B, T, C, feat) -> (T, B, C, feat) ->
- # (T, B, C * feat)
- x = x.transpose(1, 2).transpose(0, 1).contiguous().view(output_seq_len, bsz, -1)
-
- input_lengths = src_lengths.clone()
- for k, s in self.conv_kernel_sizes_and_strides:
- p = k // 2
- input_lengths = (input_lengths.float() + 2 * p - k) / s + 1
- input_lengths = input_lengths.floor().long()
-
- packed_x = nn.utils.rnn.pack_padded_sequence(x, input_lengths)
-
- h0 = x.new(2 * self.num_blstm_layers, bsz, self.lstm_size).zero_()
- c0 = x.new(2 * self.num_blstm_layers, bsz, self.lstm_size).zero_()
- packed_outs, _ = self.lstm(packed_x, (h0, c0))
-
- # unpack outputs and apply dropout
- x, output_lengths = nn.utils.rnn.pad_packed_sequence(packed_outs)
- if self.dropout is not None:
- x = self.dropout(x)
-
- encoder_padding_mask = (
- lengths_to_padding_mask(output_lengths).to(src_tokens.device).t()
- )
-
- return {
- "encoder_out": x, # (T, B, C)
- "encoder_padding_mask": encoder_padding_mask, # (T, B)
- }
-
- def reorder_encoder_out(self, encoder_out, new_order):
- encoder_out["encoder_out"] = encoder_out["encoder_out"].index_select(
- 1, new_order
- )
- encoder_out["encoder_padding_mask"] = encoder_out[
- "encoder_padding_mask"
- ].index_select(1, new_order)
- return encoder_out
-
-
-class MLPAttention(nn.Module):
- """The original attention from Badhanau et al. (2014)
-
- https://arxiv.org/abs/1409.0473, based on a Multi-Layer Perceptron.
- The attention score between position i in the encoder and position j in the
- decoder is: alpha_ij = V_a * tanh(W_ae * enc_i + W_ad * dec_j + b_a)
- """
-
- def __init__(self, decoder_hidden_state_dim, context_dim, attention_dim):
- super().__init__()
-
- self.context_dim = context_dim
- self.attention_dim = attention_dim
- # W_ae and b_a
- self.encoder_proj = nn.Linear(context_dim, self.attention_dim, bias=True)
- # W_ad
- self.decoder_proj = nn.Linear(
- decoder_hidden_state_dim, self.attention_dim, bias=False
- )
- # V_a
- self.to_scores = nn.Linear(self.attention_dim, 1, bias=False)
-
- def forward(self, decoder_state, source_hids, encoder_padding_mask):
- """The expected input dimensions are:
- decoder_state: bsz x decoder_hidden_state_dim
- source_hids: src_len x bsz x context_dim
- encoder_padding_mask: src_len x bsz
- """
- src_len, bsz, _ = source_hids.size()
- # (src_len*bsz) x context_dim (to feed through linear)
- flat_source_hids = source_hids.view(-1, self.context_dim)
- # (src_len*bsz) x attention_dim
- encoder_component = self.encoder_proj(flat_source_hids)
- # src_len x bsz x attention_dim
- encoder_component = encoder_component.view(src_len, bsz, self.attention_dim)
- # 1 x bsz x attention_dim
- decoder_component = self.decoder_proj(decoder_state).unsqueeze(0)
- # Sum with broadcasting and apply the non linearity
- # src_len x bsz x attention_dim
- hidden_att = torch.tanh(
- (decoder_component + encoder_component).view(-1, self.attention_dim)
- )
- # Project onto the reals to get attentions scores (src_len x bsz)
- attn_scores = self.to_scores(hidden_att).view(src_len, bsz)
-
- # Mask + softmax (src_len x bsz)
- if encoder_padding_mask is not None:
- attn_scores = (
- attn_scores.float()
- .masked_fill_(encoder_padding_mask, float("-inf"))
- .type_as(attn_scores)
- ) # FP16 support: cast to float and back
- # srclen x bsz
- normalized_masked_attn_scores = F.softmax(attn_scores, dim=0)
-
- # Sum weighted sources (bsz x context_dim)
- attn_weighted_context = (
- source_hids * normalized_masked_attn_scores.unsqueeze(2)
- ).sum(dim=0)
-
- return attn_weighted_context, normalized_masked_attn_scores
-
-
-class LSTMDecoder(FairseqIncrementalDecoder):
- def __init__(
- self,
- dictionary,
- embed_dim,
- num_layers,
- hidden_size,
- dropout,
- encoder_output_dim,
- attention_dim,
- output_layer_dim,
- ):
- """
- Args:
- dictionary: target text dictionary.
- embed_dim: embedding dimension for target tokens.
- num_layers: number of LSTM layers.
- hidden_size: hidden size for LSTM layers.
- dropout: dropout probability. Dropout can be applied to the
- embeddings, the LSTM layers, and the context vector.
- encoder_output_dim: encoder output dimension (hidden size of
- encoder LSTM).
- attention_dim: attention dimension for MLP attention.
- output_layer_dim: size of the linear layer prior to output
- projection.
- """
- super().__init__(dictionary)
- self.num_layers = num_layers
- self.hidden_size = hidden_size
- num_embeddings = len(dictionary)
- padding_idx = dictionary.pad()
- self.embed_tokens = nn.Embedding(num_embeddings, embed_dim, padding_idx)
- if dropout > 0:
- self.dropout = nn.Dropout(p=dropout)
- else:
- self.dropout = None
-
- self.layers = nn.ModuleList()
- for layer_id in range(num_layers):
- input_size = embed_dim if layer_id == 0 else encoder_output_dim
- self.layers.append(
- nn.LSTMCell(input_size=input_size, hidden_size=hidden_size)
- )
-
- self.context_dim = encoder_output_dim
- self.attention = MLPAttention(
- decoder_hidden_state_dim=hidden_size,
- context_dim=encoder_output_dim,
- attention_dim=attention_dim,
- )
-
- self.deep_output_layer = nn.Linear(
- hidden_size + encoder_output_dim + embed_dim, output_layer_dim
- )
- self.output_projection = nn.Linear(output_layer_dim, num_embeddings)
-
- def forward(
- self, prev_output_tokens, encoder_out=None, incremental_state=None, **kwargs
- ):
- encoder_padding_mask = encoder_out["encoder_padding_mask"]
- encoder_outs = encoder_out["encoder_out"]
-
- if incremental_state is not None:
- prev_output_tokens = prev_output_tokens[:, -1:]
- bsz, seqlen = prev_output_tokens.size()
-
- srclen = encoder_outs.size(0)
-
- # embed tokens
- embeddings = self.embed_tokens(prev_output_tokens)
- x = embeddings
- if self.dropout is not None:
- x = self.dropout(x)
-
- # B x T x C -> T x B x C
- x = x.transpose(0, 1)
-
- # initialize previous states (or get from cache during incremental
- # generation)
- cached_state = utils.get_incremental_state(
- self, incremental_state, "cached_state"
- )
- if cached_state is not None:
- prev_hiddens, prev_cells = cached_state
- else:
- prev_hiddens = [encoder_out["encoder_out"].mean(dim=0)] * self.num_layers
- prev_cells = [x.new_zeros(bsz, self.hidden_size)] * self.num_layers
-
- attn_scores = x.new_zeros(bsz, srclen)
- attention_outs = []
- outs = []
- for j in range(seqlen):
- input = x[j, :, :]
- attention_out = None
- for i, layer in enumerate(self.layers):
- # the previous state is one layer below except for the bottom
- # layer where the previous state is the state emitted by the
- # top layer
- hidden, cell = layer(
- input,
- (
- prev_hiddens[(i - 1) % self.num_layers],
- prev_cells[(i - 1) % self.num_layers],
- ),
- )
- if self.dropout is not None:
- hidden = self.dropout(hidden)
- prev_hiddens[i] = hidden
- prev_cells[i] = cell
- if attention_out is None:
- attention_out, attn_scores = self.attention(
- hidden, encoder_outs, encoder_padding_mask
- )
- if self.dropout is not None:
- attention_out = self.dropout(attention_out)
- attention_outs.append(attention_out)
- input = attention_out
-
- # collect the output of the top layer
- outs.append(hidden)
-
- # cache previous states (no-op except during incremental generation)
- utils.set_incremental_state(
- self, incremental_state, "cached_state", (prev_hiddens, prev_cells)
- )
-
- # collect outputs across time steps
- x = torch.cat(outs, dim=0).view(seqlen, bsz, self.hidden_size)
- attention_outs_concat = torch.cat(attention_outs, dim=0).view(
- seqlen, bsz, self.context_dim
- )
-
- # T x B x C -> B x T x C
- x = x.transpose(0, 1)
- attention_outs_concat = attention_outs_concat.transpose(0, 1)
-
- # concat LSTM output, attention output and embedding
- # before output projection
- x = torch.cat((x, attention_outs_concat, embeddings), dim=2)
- x = self.deep_output_layer(x)
- x = torch.tanh(x)
- if self.dropout is not None:
- x = self.dropout(x)
- # project back to size of vocabulary
- x = self.output_projection(x)
-
- # to return the full attn_scores tensor, we need to fix the decoder
- # to account for subsampling input frames
- # return x, attn_scores
- return x, None
-
- def reorder_incremental_state(self, incremental_state, new_order):
- super().reorder_incremental_state(incremental_state, new_order)
- cached_state = utils.get_incremental_state(
- self, incremental_state, "cached_state"
- )
- if cached_state is None:
- return
-
- def reorder_state(state):
- if isinstance(state, list):
- return [reorder_state(state_i) for state_i in state]
- return state.index_select(0, new_order)
-
- new_state = tuple(map(reorder_state, cached_state))
- utils.set_incremental_state(self, incremental_state, "cached_state", new_state)
-
-
-@register_model_architecture(model_name="s2t_berard", arch_name="s2t_berard")
-def berard(args):
- """The original version: "End-to-End Automatic Speech Translation of
- Audiobooks" (https://arxiv.org/abs/1802.04200)
- """
- args.input_layers = getattr(args, "input_layers", "[256, 128]")
- args.conv_layers = getattr(args, "conv_layers", "[(16, 3, 2), (16, 3, 2)]")
- args.num_blstm_layers = getattr(args, "num_blstm_layers", 3)
- args.lstm_size = getattr(args, "lstm_size", 256)
- args.dropout = getattr(args, "dropout", 0.2)
- args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 128)
- args.decoder_num_layers = getattr(args, "decoder_num_layers", 2)
- args.decoder_hidden_dim = getattr(args, "decoder_hidden_dim", 512)
- args.attention_dim = getattr(args, "attention_dim", 512)
- args.output_layer_dim = getattr(args, "output_layer_dim", 128)
- args.load_pretrained_encoder_from = getattr(
- args, "load_pretrained_encoder_from", None
- )
- args.load_pretrained_decoder_from = getattr(
- args, "load_pretrained_decoder_from", None
- )
-
-
-@register_model_architecture(model_name="s2t_berard", arch_name="s2t_berard_256_3_3")
-def berard_256_3_3(args):
- """Used in
- * "Harnessing Indirect Training Data for End-to-End Automatic Speech
- Translation: Tricks of the Trade" (https://arxiv.org/abs/1909.06515)
- * "CoVoST: A Diverse Multilingual Speech-To-Text Translation Corpus"
- (https://arxiv.org/pdf/2002.01320.pdf)
- * "Self-Supervised Representations Improve End-to-End Speech Translation"
- (https://arxiv.org/abs/2006.12124)
- """
- args.decoder_num_layers = getattr(args, "decoder_num_layers", 3)
- berard(args)
-
-
-@register_model_architecture(model_name="s2t_berard", arch_name="s2t_berard_512_3_2")
-def berard_512_3_2(args):
- args.num_blstm_layers = getattr(args, "num_blstm_layers", 3)
- args.lstm_size = getattr(args, "lstm_size", 512)
- args.dropout = getattr(args, "dropout", 0.3)
- args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 256)
- args.decoder_num_layers = getattr(args, "decoder_num_layers", 2)
- args.decoder_hidden_dim = getattr(args, "decoder_hidden_dim", 1024)
- args.attention_dim = getattr(args, "attention_dim", 512)
- args.output_layer_dim = getattr(args, "output_layer_dim", 256)
- berard(args)
-
-
-@register_model_architecture(model_name="s2t_berard", arch_name="s2t_berard_512_5_3")
-def berard_512_5_3(args):
- args.num_blstm_layers = getattr(args, "num_blstm_layers", 5)
- args.lstm_size = getattr(args, "lstm_size", 512)
- args.dropout = getattr(args, "dropout", 0.3)
- args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 256)
- args.decoder_num_layers = getattr(args, "decoder_num_layers", 3)
- args.decoder_hidden_dim = getattr(args, "decoder_hidden_dim", 1024)
- args.attention_dim = getattr(args, "attention_dim", 512)
- args.output_layer_dim = getattr(args, "output_layer_dim", 256)
- berard(args)
diff --git a/spaces/gradio/longformer/tvm/_ffi/_ctypes/node.py b/spaces/gradio/longformer/tvm/_ffi/_ctypes/node.py
deleted file mode 100644
index 39fe0ef35525b61dc1a6bccd5ccc72165083262e..0000000000000000000000000000000000000000
--- a/spaces/gradio/longformer/tvm/_ffi/_ctypes/node.py
+++ /dev/null
@@ -1,102 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements. See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership. The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License. You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied. See the License for the
-# specific language governing permissions and limitations
-# under the License.
-# pylint: disable=invalid-name, protected-access
-# pylint: disable=no-member, missing-docstring, not-callable
-from __future__ import absolute_import
-
-import ctypes
-from ..base import _LIB, check_call, c_str
-from ..node_generic import _set_class_node_base
-from .types import TVMValue, TypeCode
-from .types import RETURN_SWITCH, C_TO_PY_ARG_SWITCH, _wrap_arg_func
-
-NodeHandle = ctypes.c_void_p
-__init_by_constructor__ = None
-
-"""Maps node type to its constructor"""
-NODE_TYPE = {}
-
-def _register_node(index, cls):
- """register node class"""
- NODE_TYPE[index] = cls
-
-def _return_node(x):
- """Return node function"""
- handle = x.v_handle
- if not isinstance(handle, NodeHandle):
- handle = NodeHandle(handle)
- tindex = ctypes.c_int()
- check_call(_LIB.TVMNodeGetTypeIndex(handle, ctypes.byref(tindex)))
- cls = NODE_TYPE.get(tindex.value, NodeBase)
- # Avoid calling __init__ of cls, instead directly call __new__
- # This allows child class to implement their own __init__
- node = cls.__new__(cls)
- node.handle = handle
- return node
-
-
-RETURN_SWITCH[TypeCode.NODE_HANDLE] = _return_node
-C_TO_PY_ARG_SWITCH[TypeCode.NODE_HANDLE] = _wrap_arg_func(
- _return_node, TypeCode.NODE_HANDLE)
-
-
-class NodeBase(object):
- __slots__ = ["handle"]
- # pylint: disable=no-member
- def __del__(self):
- if _LIB is not None:
- check_call(_LIB.TVMNodeFree(self.handle))
-
- def __getattr__(self, name):
- ret_val = TVMValue()
- ret_type_code = ctypes.c_int()
- ret_success = ctypes.c_int()
- check_call(_LIB.TVMNodeGetAttr(
- self.handle, c_str(name),
- ctypes.byref(ret_val),
- ctypes.byref(ret_type_code),
- ctypes.byref(ret_success)))
- if not ret_success.value:
- raise AttributeError(
- "'%s' object has no attribute '%s'" % (str(type(self)), name))
- return RETURN_SWITCH[ret_type_code.value](ret_val)
-
- def __init_handle_by_constructor__(self, fconstructor, *args):
- """Initialize the handle by calling constructor function.
-
- Parameters
- ----------
- fconstructor : Function
- Constructor function.
-
- args: list of objects
- The arguments to the constructor
-
- Note
- ----
- We have a special calling convention to call constructor functions.
- So the return handle is directly set into the Node object
- instead of creating a new Node.
- """
- # assign handle first to avoid error raising
- self.handle = None
- handle = __init_by_constructor__(fconstructor, args)
- if not isinstance(handle, NodeHandle):
- handle = NodeHandle(handle)
- self.handle = handle
-
-_set_class_node_base(NodeBase)
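
The deleted module above follows a common FFI wrapping pattern: an integer type index maps to a Python wrapper class in a registry, and handles returned from C are wrapped by looking up that index and calling cls.__new__ so that subclass __init__ is never run. A pure-Python sketch of the registry part, with no ctypes and with illustrative names only, might look like this:

# Pure-Python sketch of the type-index registry pattern used above.
# No real FFI is involved; names and indexes are made up.
NODE_TYPE = {}

class NodeBase:
    pass

def register_node(index, cls):
    NODE_TYPE[index] = cls

def return_node(type_index, handle):
    # Unknown indexes fall back to NodeBase; __init__ is bypassed so that
    # subclasses may reserve it for constructor-driven initialisation.
    cls = NODE_TYPE.get(type_index, NodeBase)
    node = cls.__new__(cls)
    node.handle = handle
    return node

class TensorNode(NodeBase):
    pass

register_node(7, TensorNode)
print(type(return_node(7, handle=0x1234)).__name__)   # TensorNode
print(type(return_node(99, handle=0x5678)).__name__)  # NodeBase
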
diff --git a/spaces/gradio/musical_instrument_identification_main/run.py b/spaces/gradio/musical_instrument_identification_main/run.py
deleted file mode 100644
index 94d60c7d61ce02fa4bfb4c3bb9cd54a1bdb99e8c..0000000000000000000000000000000000000000
--- a/spaces/gradio/musical_instrument_identification_main/run.py
+++ /dev/null
@@ -1,51 +0,0 @@
-import gradio as gr
-import torch
-import torchaudio
-from timeit import default_timer as timer
-from data_setups import audio_preprocess, resample
-import gdown
-
-url = 'https://drive.google.com/uc?id=1X5CR18u0I-ZOi_8P0cNptCe5JGk9Ro0C'
-output = 'piano.wav'
-gdown.download(url, output, quiet=False)
-url = 'https://drive.google.com/uc?id=1W-8HwmGR5SiyDbUcGAZYYDKdCIst07__'
-output= 'torch_efficientnet_fold2_CNN.pth'
-gdown.download(url, output, quiet=False)
-device = "cuda" if torch.cuda.is_available() else "cpu"
-SAMPLE_RATE = 44100
-AUDIO_LEN = 2.90
-model = torch.load("torch_efficientnet_fold2_CNN.pth", map_location=torch.device('cpu'))
-LABELS = [
- "Cello", "Clarinet", "Flute", "Acoustic Guitar", "Electric Guitar", "Organ", "Piano", "Saxophone", "Trumpet", "Violin", "Voice"
-]
-example_list = [
- ["piano.wav"]
-]
-
-
-def predict(audio_path):
- start_time = timer()
- wavform, sample_rate = torchaudio.load(audio_path)
- wav = resample(wavform, sample_rate, SAMPLE_RATE)
- if len(wav) > int(AUDIO_LEN * SAMPLE_RATE):
- wav = wav[:int(AUDIO_LEN * SAMPLE_RATE)]
- else:
- print(f"input length {len(wav)} too small!, need over {int(AUDIO_LEN * SAMPLE_RATE)}")
- return
- img = audio_preprocess(wav, SAMPLE_RATE).unsqueeze(0)
- model.eval()
- with torch.inference_mode():
- pred_probs = torch.softmax(model(img), dim=1)
- pred_labels_and_probs = {LABELS[i]: float(pred_probs[0][i]) for i in range(len(LABELS))}
- pred_time = round(timer() - start_time, 5)
- return pred_labels_and_probs, pred_time
-
-demo = gr.Interface(fn=predict,
- inputs=gr.Audio(type="filepath"),
- outputs=[gr.Label(num_top_classes=11, label="Predictions"),
- gr.Number(label="Prediction time (s)")],
- examples=example_list,
- cache_examples=False
- )
-
-demo.launch(debug=False)
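
The Space above trims the input to a fixed 2.90 s window at 44.1 kHz, converts it to a spectrogram-like image via the Space's own data_setups helpers, and turns the softmax output into the {label: probability} dictionary that gr.Label expects. A small self-contained sketch of that last step, with a stand-in model in place of the EfficientNet checkpoint and only torch assumed available:

# Sketch of the softmax -> {label: probability} mapping fed to gr.Label above.
# fake_model is a stand-in; the real Space loads an EfficientNet checkpoint.
import torch

LABELS = ["Cello", "Clarinet", "Flute", "Piano", "Violin"]

def fake_model(x: torch.Tensor) -> torch.Tensor:
    return torch.randn(x.shape[0], len(LABELS))  # raw logits

img = torch.randn(1, 1, 128, 250)  # pretend mel-spectrogram batch of one clip
with torch.inference_mode():
    pred_probs = torch.softmax(fake_model(img), dim=1)

pred_labels_and_probs = {LABELS[i]: float(pred_probs[0][i]) for i in range(len(LABELS))}
print(pred_labels_and_probs)  # e.g. {'Cello': 0.31, 'Clarinet': 0.12, ...}
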
diff --git a/spaces/gsaivinay/Llama-2-13B-GGML-UI/types/storage.ts b/spaces/gsaivinay/Llama-2-13B-GGML-UI/types/storage.ts
deleted file mode 100644
index 1b93e8bfe5de7259e707bdafae3055e5f0181711..0000000000000000000000000000000000000000
--- a/spaces/gsaivinay/Llama-2-13B-GGML-UI/types/storage.ts
+++ /dev/null
@@ -1,21 +0,0 @@
-import { Conversation } from './chat';
-import { FolderInterface } from './folder';
-import { PluginKey } from './plugin';
-import { Prompt } from './prompt';
-
-// keep track of local storage schema
-export interface LocalStorage {
- apiKey: string;
- conversationHistory: Conversation[];
- selectedConversation: Conversation;
- theme: 'light' | 'dark';
- // added folders (3/23/23)
- folders: FolderInterface[];
- // added prompts (3/26/23)
- prompts: Prompt[];
- // added showChatbar and showPromptbar (3/26/23)
- showChatbar: boolean;
- showPromptbar: boolean;
- // added plugin keys (4/3/23)
- pluginKeys: PluginKey[];
-}
diff --git a/spaces/gventur4/recipesDaCasa/README.md b/spaces/gventur4/recipesDaCasa/README.md
deleted file mode 100644
index 78b12184f6b12171550577d142f7849af5e1dd69..0000000000000000000000000000000000000000
--- a/spaces/gventur4/recipesDaCasa/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: RecipesDaCasa
-emoji: 🏢
-colorFrom: pink
-colorTo: red
-sdk: streamlit
-sdk_version: 1.26.0
-app_file: app.py
-pinned: false
-license: cc
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/gyugnsu/DragGan-Inversion/PTI/models/e4e/encoders/__init__.py b/spaces/gyugnsu/DragGan-Inversion/PTI/models/e4e/encoders/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/h2oai/wave-tour/examples/list.py b/spaces/h2oai/wave-tour/examples/list.py
deleted file mode 100644
index 6530d38a62ff820026f6ad394b3592dbb0d31f22..0000000000000000000000000000000000000000
--- a/spaces/h2oai/wave-tour/examples/list.py
+++ /dev/null
@@ -1,24 +0,0 @@
-# Lists
-# Use list cards to lay out multiple child cards in the form of a #list.
-# ---
-import random
-
-from faker import Faker
-
-from h2o_wave import site, ui, pack, data
-
-fake = Faker()
-
-page = site['/demo']
-
-c = page.add('example', ui.list_card(
- box='1 1 2 4',
- item_view='list_item1',
- item_props=pack(dict(title='=code', caption='=currency', value='=trades', aux_value='=returns')),
- title='Exchange Rates',
- data=data('currency code trades returns', -15),
-))
-c.data = [[fake.cryptocurrency_name(), fake.cryptocurrency_code(), random.randint(100, 1000), random.randint(10, 100)]
- for _ in range(15)]
-
-page.save()
diff --git a/spaces/hOTZR/new-Bing-with_your_cookies/README.md b/spaces/hOTZR/new-Bing-with_your_cookies/README.md
deleted file mode 100644
index aa1d4bcec19b7adf9f373dd8a441290d97635403..0000000000000000000000000000000000000000
--- a/spaces/hOTZR/new-Bing-with_your_cookies/README.md
+++ /dev/null
@@ -1,18 +0,0 @@
----
-title: New-Bing-with Your Cookies
-emoji: 🐨
-colorFrom: green
-colorTo: pink
-sdk: gradio
-sdk_version: 3.23.0
-app_file: app.py
-pinned: false
-license: other
----
-## Inspired By:
-- [EdgeGPT](https://github.com/acheong08/EdgeGPT)
-- [DiscordBot-EdgeGPT](https://github.com/FuseFairy/DiscordBot-EdgeGPT)
-- [chatdemo](https://github.com/simpx/chatdemo)
-- [Chatbot](https://medium.datadriveninvestor.com/build-your-own-chatbot-using-chatgpt-for-inspiration-2a2ae6ebb288)
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/hamelcubsfan/AutoGPT/autogpt/agent/__init__.py b/spaces/hamelcubsfan/AutoGPT/autogpt/agent/__init__.py
deleted file mode 100644
index e928af2205b1c52d19dc89ec4246e8c1d2c20e3f..0000000000000000000000000000000000000000
--- a/spaces/hamelcubsfan/AutoGPT/autogpt/agent/__init__.py
+++ /dev/null
@@ -1,4 +0,0 @@
-from autogpt.agent.agent import Agent
-from autogpt.agent.agent_manager import AgentManager
-
-__all__ = ["Agent", "AgentManager"]
diff --git a/spaces/hamelcubsfan/AutoGPT/tests/milvus_memory_test.py b/spaces/hamelcubsfan/AutoGPT/tests/milvus_memory_test.py
deleted file mode 100644
index 84fd6e6d5006e781fa5e1065f949b2160537d913..0000000000000000000000000000000000000000
--- a/spaces/hamelcubsfan/AutoGPT/tests/milvus_memory_test.py
+++ /dev/null
@@ -1,72 +0,0 @@
-# sourcery skip: snake-case-functions
-"""Tests for the MilvusMemory class."""
-import os
-import sys
-import unittest
-
-try:
- from autogpt.memory.milvus import MilvusMemory
-
- def mock_config() -> dict:
- """Mock the Config class"""
- return type(
- "MockConfig",
- (object,),
- {
- "debug_mode": False,
- "continuous_mode": False,
- "speak_mode": False,
- "milvus_collection": "autogpt",
- "milvus_addr": "localhost:19530",
- },
- )
-
- class TestMilvusMemory(unittest.TestCase):
- """Tests for the MilvusMemory class."""
-
- def setUp(self) -> None:
- """Set up the test environment"""
- self.cfg = mock_config()
- self.memory = MilvusMemory(self.cfg)
-
- def test_add(self) -> None:
- """Test adding a text to the cache"""
- text = "Sample text"
- self.memory.clear()
- self.memory.add(text)
- result = self.memory.get(text)
- self.assertEqual([text], result)
-
- def test_clear(self) -> None:
- """Test clearing the cache"""
- self.memory.clear()
- self.assertEqual(self.memory.collection.num_entities, 0)
-
- def test_get(self) -> None:
- """Test getting a text from the cache"""
- text = "Sample text"
- self.memory.clear()
- self.memory.add(text)
- result = self.memory.get(text)
- self.assertEqual(result, [text])
-
- def test_get_relevant(self) -> None:
- """Test getting relevant texts from the cache"""
- text1 = "Sample text 1"
- text2 = "Sample text 2"
- self.memory.clear()
- self.memory.add(text1)
- self.memory.add(text2)
- result = self.memory.get_relevant(text1, 1)
- self.assertEqual(result, [text1])
-
- def test_get_stats(self) -> None:
- """Test getting the cache stats"""
- text = "Sample text"
- self.memory.clear()
- self.memory.add(text)
- stats = self.memory.get_stats()
- self.assertEqual(15, len(stats))
-
-except:
- print("Milvus not installed, skipping tests")
diff --git a/spaces/hari31416/Style-Transfer/Dockerfile b/spaces/hari31416/Style-Transfer/Dockerfile
deleted file mode 100644
index 212c42f8016aa43bb2c0b56a15015c13df5974ff..0000000000000000000000000000000000000000
--- a/spaces/hari31416/Style-Transfer/Dockerfile
+++ /dev/null
@@ -1,19 +0,0 @@
-FROM python:3.9
-
-WORKDIR /code
-
-COPY ./requirements.txt /code/requirements.txt
-
-RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt
-
-COPY . .
-RUN useradd -m -u 1000 user
-USER user
-ENV HOME=/home/user \
- PATH=/home/user/.local/bin:$PATH
-
-WORKDIR $HOME/app
-
-COPY --chown=user . $HOME/app
-
-CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "7860"]
\ No newline at end of file
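
The Dockerfile above installs the requirements, switches to a non-root user with uid 1000 (as Hugging Face Spaces expects), and serves app.main:app with uvicorn on port 7860. The application module itself is not part of this diff; a hypothetical minimal app/main.py that such a CMD could serve, assuming FastAPI is listed in requirements.txt, would be:

# Hypothetical app/main.py matching the CMD "uvicorn app.main:app" above.
# Assumes fastapi (and uvicorn) are in requirements.txt; the endpoint is illustrative.
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def read_root():
    return {"status": "ok"}
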
diff --git a/spaces/hayas-tohoku-workshop-2023/sample-depth-estimation/README.md b/spaces/hayas-tohoku-workshop-2023/sample-depth-estimation/README.md
deleted file mode 100644
index 246b1a25742e7ad6d2fd65984c9b68997e78a216..0000000000000000000000000000000000000000
--- a/spaces/hayas-tohoku-workshop-2023/sample-depth-estimation/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Sample Depth
-emoji: 🏆
-colorFrom: pink
-colorTo: purple
-sdk: gradio
-sdk_version: 3.35.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/hca97/Mosquito-Detection/my_models/torch_hub_cache/yolov5/models/yolo.py b/spaces/hca97/Mosquito-Detection/my_models/torch_hub_cache/yolov5/models/yolo.py
deleted file mode 100644
index 4f4d567bec735c7782af9a78da68b955ecd30ef1..0000000000000000000000000000000000000000
--- a/spaces/hca97/Mosquito-Detection/my_models/torch_hub_cache/yolov5/models/yolo.py
+++ /dev/null
@@ -1,391 +0,0 @@
-# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license
-"""
-YOLO-specific modules
-
-Usage:
- $ python models/yolo.py --cfg yolov5s.yaml
-"""
-
-import argparse
-import contextlib
-import os
-import platform
-import sys
-from copy import deepcopy
-from pathlib import Path
-
-FILE = Path(__file__).resolve()
-ROOT = FILE.parents[1] # YOLOv5 root directory
-if str(ROOT) not in sys.path:
- sys.path.append(str(ROOT)) # add ROOT to PATH
-if platform.system() != 'Windows':
- ROOT = Path(os.path.relpath(ROOT, Path.cwd())) # relative
-
-from models.common import * # noqa
-from models.experimental import * # noqa
-from utils.autoanchor import check_anchor_order
-from utils.general import LOGGER, check_version, check_yaml, make_divisible, print_args
-from utils.plots import feature_visualization
-from utils.torch_utils import (fuse_conv_and_bn, initialize_weights, model_info, profile, scale_img, select_device,
- time_sync)
-
-try:
- import thop # for FLOPs computation
-except ImportError:
- thop = None
-
-
-class Detect(nn.Module):
- # YOLOv5 Detect head for detection models
- stride = None # strides computed during build
- dynamic = False # force grid reconstruction
- export = False # export mode
-
- def __init__(self, nc=80, anchors=(), ch=(), inplace=True): # detection layer
- super().__init__()
- self.nc = nc # number of classes
- self.no = nc + 5 # number of outputs per anchor
- self.nl = len(anchors) # number of detection layers
- self.na = len(anchors[0]) // 2 # number of anchors
- self.grid = [torch.empty(0) for _ in range(self.nl)] # init grid
- self.anchor_grid = [torch.empty(0) for _ in range(self.nl)] # init anchor grid
- self.register_buffer('anchors', torch.tensor(anchors).float().view(self.nl, -1, 2)) # shape(nl,na,2)
- self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch) # output conv
- self.inplace = inplace # use inplace ops (e.g. slice assignment)
-
- def forward(self, x):
- z = [] # inference output
- for i in range(self.nl):
- x[i] = self.m[i](x[i]) # conv
- bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85)
- x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous()
-
- if not self.training: # inference
- if self.dynamic or self.grid[i].shape[2:4] != x[i].shape[2:4]:
- self.grid[i], self.anchor_grid[i] = self._make_grid(nx, ny, i)
-
- if isinstance(self, Segment): # (boxes + masks)
- xy, wh, conf, mask = x[i].split((2, 2, self.nc + 1, self.no - self.nc - 5), 4)
- xy = (xy.sigmoid() * 2 + self.grid[i]) * self.stride[i] # xy
- wh = (wh.sigmoid() * 2) ** 2 * self.anchor_grid[i] # wh
- y = torch.cat((xy, wh, conf.sigmoid(), mask), 4)
- else: # Detect (boxes only)
- xy, wh, conf = x[i].sigmoid().split((2, 2, self.nc + 1), 4)
- xy = (xy * 2 + self.grid[i]) * self.stride[i] # xy
- wh = (wh * 2) ** 2 * self.anchor_grid[i] # wh
- y = torch.cat((xy, wh, conf), 4)
- z.append(y.view(bs, self.na * nx * ny, self.no))
-
- return x if self.training else (torch.cat(z, 1), ) if self.export else (torch.cat(z, 1), x)
-
- def _make_grid(self, nx=20, ny=20, i=0, torch_1_10=check_version(torch.__version__, '1.10.0')):
- d = self.anchors[i].device
- t = self.anchors[i].dtype
- shape = 1, self.na, ny, nx, 2 # grid shape
- y, x = torch.arange(ny, device=d, dtype=t), torch.arange(nx, device=d, dtype=t)
- yv, xv = torch.meshgrid(y, x, indexing='ij') if torch_1_10 else torch.meshgrid(y, x) # torch>=0.7 compatibility
- grid = torch.stack((xv, yv), 2).expand(shape) - 0.5 # add grid offset, i.e. y = 2.0 * x - 0.5
- anchor_grid = (self.anchors[i] * self.stride[i]).view((1, self.na, 1, 1, 2)).expand(shape)
- return grid, anchor_grid
-
-
-class Segment(Detect):
- # YOLOv5 Segment head for segmentation models
- def __init__(self, nc=80, anchors=(), nm=32, npr=256, ch=(), inplace=True):
- super().__init__(nc, anchors, ch, inplace)
- self.nm = nm # number of masks
- self.npr = npr # number of protos
- self.no = 5 + nc + self.nm # number of outputs per anchor
- self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch) # output conv
- self.proto = Proto(ch[0], self.npr, self.nm) # protos
- self.detect = Detect.forward
-
- def forward(self, x):
- p = self.proto(x[0])
- x = self.detect(self, x)
- return (x, p) if self.training else (x[0], p) if self.export else (x[0], p, x[1])
-
-
-class BaseModel(nn.Module):
- # YOLOv5 base model
- def forward(self, x, profile=False, visualize=False):
- return self._forward_once(x, profile, visualize) # single-scale inference, train
-
- def _forward_once(self, x, profile=False, visualize=False):
- y, dt = [], [] # outputs
- for m in self.model:
- if m.f != -1: # if not from previous layer
- x = y[m.f] if isinstance(m.f, int) else [x if j == -1 else y[j] for j in m.f] # from earlier layers
- if profile:
- self._profile_one_layer(m, x, dt)
- x = m(x) # run
- y.append(x if m.i in self.save else None) # save output
- if visualize:
- feature_visualization(x, m.type, m.i, save_dir=visualize)
- return x
-
- def _profile_one_layer(self, m, x, dt):
- c = m == self.model[-1] # is final layer, copy input as inplace fix
- o = thop.profile(m, inputs=(x.copy() if c else x, ), verbose=False)[0] / 1E9 * 2 if thop else 0 # FLOPs
- t = time_sync()
- for _ in range(10):
- m(x.copy() if c else x)
- dt.append((time_sync() - t) * 100)
- if m == self.model[0]:
- LOGGER.info(f"{'time (ms)':>10s} {'GFLOPs':>10s} {'params':>10s} module")
- LOGGER.info(f'{dt[-1]:10.2f} {o:10.2f} {m.np:10.0f} {m.type}')
- if c:
- LOGGER.info(f"{sum(dt):10.2f} {'-':>10s} {'-':>10s} Total")
-
- def fuse(self): # fuse model Conv2d() + BatchNorm2d() layers
- LOGGER.info('Fusing layers... ')
- for m in self.model.modules():
- if isinstance(m, (Conv, DWConv)) and hasattr(m, 'bn'):
- m.conv = fuse_conv_and_bn(m.conv, m.bn) # update conv
- delattr(m, 'bn') # remove batchnorm
- m.forward = m.forward_fuse # update forward
- self.info()
- return self
-
- def info(self, verbose=False, img_size=640): # print model information
- model_info(self, verbose, img_size)
-
- def _apply(self, fn):
- # Apply to(), cpu(), cuda(), half() to model tensors that are not parameters or registered buffers
- self = super()._apply(fn)
- m = self.model[-1] # Detect()
- if isinstance(m, (Detect, Segment)):
- m.stride = fn(m.stride)
- m.grid = list(map(fn, m.grid))
- if isinstance(m.anchor_grid, list):
- m.anchor_grid = list(map(fn, m.anchor_grid))
- return self
-
-
-class DetectionModel(BaseModel):
- # YOLOv5 detection model
- def __init__(self, cfg='yolov5s.yaml', ch=3, nc=None, anchors=None): # model, input channels, number of classes
- super().__init__()
- if isinstance(cfg, dict):
- self.yaml = cfg # model dict
- else: # is *.yaml
- import yaml # for torch hub
- self.yaml_file = Path(cfg).name
- with open(cfg, encoding='ascii', errors='ignore') as f:
- self.yaml = yaml.safe_load(f) # model dict
-
- # Define model
- ch = self.yaml['ch'] = self.yaml.get('ch', ch) # input channels
- if nc and nc != self.yaml['nc']:
- LOGGER.info(f"Overriding model.yaml nc={self.yaml['nc']} with nc={nc}")
- self.yaml['nc'] = nc # override yaml value
- if anchors:
- LOGGER.info(f'Overriding model.yaml anchors with anchors={anchors}')
- self.yaml['anchors'] = round(anchors) # override yaml value
- self.model, self.save = parse_model(deepcopy(self.yaml), ch=[ch]) # model, savelist
- self.names = [str(i) for i in range(self.yaml['nc'])] # default names
- self.inplace = self.yaml.get('inplace', True)
-
- # Build strides, anchors
- m = self.model[-1] # Detect()
- if isinstance(m, (Detect, Segment)):
- s = 256 # 2x min stride
- m.inplace = self.inplace
- forward = lambda x: self.forward(x)[0] if isinstance(m, Segment) else self.forward(x)
- m.stride = torch.tensor([s / x.shape[-2] for x in forward(torch.zeros(1, ch, s, s))]) # forward
- check_anchor_order(m)
- m.anchors /= m.stride.view(-1, 1, 1)
- self.stride = m.stride
- self._initialize_biases() # only run once
-
- # Init weights, biases
- initialize_weights(self)
- self.info()
- LOGGER.info('')
-
- def forward(self, x, augment=False, profile=False, visualize=False):
- if augment:
- return self._forward_augment(x) # augmented inference, None
- return self._forward_once(x, profile, visualize) # single-scale inference, train
-
- def _forward_augment(self, x):
- img_size = x.shape[-2:] # height, width
- s = [1, 0.83, 0.67] # scales
- f = [None, 3, None] # flips (2-ud, 3-lr)
- y = [] # outputs
- for si, fi in zip(s, f):
- xi = scale_img(x.flip(fi) if fi else x, si, gs=int(self.stride.max()))
- yi = self._forward_once(xi)[0] # forward
- # cv2.imwrite(f'img_{si}.jpg', 255 * xi[0].cpu().numpy().transpose((1, 2, 0))[:, :, ::-1]) # save
- yi = self._descale_pred(yi, fi, si, img_size)
- y.append(yi)
- y = self._clip_augmented(y) # clip augmented tails
- return torch.cat(y, 1), None # augmented inference, train
-
- def _descale_pred(self, p, flips, scale, img_size):
- # de-scale predictions following augmented inference (inverse operation)
- if self.inplace:
- p[..., :4] /= scale # de-scale
- if flips == 2:
- p[..., 1] = img_size[0] - p[..., 1] # de-flip ud
- elif flips == 3:
- p[..., 0] = img_size[1] - p[..., 0] # de-flip lr
- else:
- x, y, wh = p[..., 0:1] / scale, p[..., 1:2] / scale, p[..., 2:4] / scale # de-scale
- if flips == 2:
- y = img_size[0] - y # de-flip ud
- elif flips == 3:
- x = img_size[1] - x # de-flip lr
- p = torch.cat((x, y, wh, p[..., 4:]), -1)
- return p
-
- def _clip_augmented(self, y):
- # Clip YOLOv5 augmented inference tails
- nl = self.model[-1].nl # number of detection layers (P3-P5)
- g = sum(4 ** x for x in range(nl)) # grid points
- e = 1 # exclude layer count
- i = (y[0].shape[1] // g) * sum(4 ** x for x in range(e)) # indices
- y[0] = y[0][:, :-i] # large
- i = (y[-1].shape[1] // g) * sum(4 ** (nl - 1 - x) for x in range(e)) # indices
- y[-1] = y[-1][:, i:] # small
- return y
-
- def _initialize_biases(self, cf=None): # initialize biases into Detect(), cf is class frequency
- # https://arxiv.org/abs/1708.02002 section 3.3
- # cf = torch.bincount(torch.tensor(np.concatenate(dataset.labels, 0)[:, 0]).long(), minlength=nc) + 1.
- m = self.model[-1] # Detect() module
- for mi, s in zip(m.m, m.stride): # from
- b = mi.bias.view(m.na, -1) # conv.bias(255) to (3,85)
- b.data[:, 4] += math.log(8 / (640 / s) ** 2) # obj (8 objects per 640 image)
- b.data[:, 5:5 + m.nc] += math.log(0.6 / (m.nc - 0.99999)) if cf is None else torch.log(cf / cf.sum()) # cls
- mi.bias = torch.nn.Parameter(b.view(-1), requires_grad=True)
-
-
-Model = DetectionModel # retain YOLOv5 'Model' class for backwards compatibility
-
-
-class SegmentationModel(DetectionModel):
- # YOLOv5 segmentation model
- def __init__(self, cfg='yolov5s-seg.yaml', ch=3, nc=None, anchors=None):
- super().__init__(cfg, ch, nc, anchors)
-
-
-class ClassificationModel(BaseModel):
- # YOLOv5 classification model
- def __init__(self, cfg=None, model=None, nc=1000, cutoff=10): # yaml, model, number of classes, cutoff index
- super().__init__()
- self._from_detection_model(model, nc, cutoff) if model is not None else self._from_yaml(cfg)
-
- def _from_detection_model(self, model, nc=1000, cutoff=10):
- # Create a YOLOv5 classification model from a YOLOv5 detection model
- if isinstance(model, DetectMultiBackend):
- model = model.model # unwrap DetectMultiBackend
- model.model = model.model[:cutoff] # backbone
- m = model.model[-1] # last layer
- ch = m.conv.in_channels if hasattr(m, 'conv') else m.cv1.conv.in_channels # ch into module
- c = Classify(ch, nc) # Classify()
- c.i, c.f, c.type = m.i, m.f, 'models.common.Classify' # index, from, type
- model.model[-1] = c # replace
- self.model = model.model
- self.stride = model.stride
- self.save = []
- self.nc = nc
-
- def _from_yaml(self, cfg):
- # Create a YOLOv5 classification model from a *.yaml file
- self.model = None
-
-
-def parse_model(d, ch): # model_dict, input_channels(3)
- # Parse a YOLOv5 model.yaml dictionary
- LOGGER.info(f"\n{'':>3}{'from':>18}{'n':>3}{'params':>10} {'module':<40}{'arguments':<30}")
- anchors, nc, gd, gw, act = d['anchors'], d['nc'], d['depth_multiple'], d['width_multiple'], d.get('activation')
- if act:
- Conv.default_act = eval(act) # redefine default activation, i.e. Conv.default_act = nn.SiLU()
- LOGGER.info(f"{colorstr('activation:')} {act}") # print
- na = (len(anchors[0]) // 2) if isinstance(anchors, list) else anchors # number of anchors
- no = na * (nc + 5) # number of outputs = anchors * (classes + 5)
-
- layers, save, c2 = [], [], ch[-1] # layers, savelist, ch out
- for i, (f, n, m, args) in enumerate(d['backbone'] + d['head']): # from, number, module, args
- m = eval(m) if isinstance(m, str) else m # eval strings
- for j, a in enumerate(args):
- with contextlib.suppress(NameError):
- args[j] = eval(a) if isinstance(a, str) else a # eval strings
-
- n = n_ = max(round(n * gd), 1) if n > 1 else n # depth gain
- if m in {
- Conv, GhostConv, Bottleneck, GhostBottleneck, SPP, SPPF, DWConv, MixConv2d, Focus, CrossConv,
- BottleneckCSP, C3, C3TR, C3SPP, C3Ghost, nn.ConvTranspose2d, DWConvTranspose2d, C3x}:
- c1, c2 = ch[f], args[0]
- if c2 != no: # if not output
- c2 = make_divisible(c2 * gw, 8)
-
- args = [c1, c2, *args[1:]]
- if m in {BottleneckCSP, C3, C3TR, C3Ghost, C3x}:
- args.insert(2, n) # number of repeats
- n = 1
- elif m is nn.BatchNorm2d:
- args = [ch[f]]
- elif m is Concat:
- c2 = sum(ch[x] for x in f)
- # TODO: channel, gw, gd
- elif m in {Detect, Segment}:
- args.append([ch[x] for x in f])
- if isinstance(args[1], int): # number of anchors
- args[1] = [list(range(args[1] * 2))] * len(f)
- if m is Segment:
- args[3] = make_divisible(args[3] * gw, 8)
- elif m is Contract:
- c2 = ch[f] * args[0] ** 2
- elif m is Expand:
- c2 = ch[f] // args[0] ** 2
- else:
- c2 = ch[f]
-
- m_ = nn.Sequential(*(m(*args) for _ in range(n))) if n > 1 else m(*args) # module
- t = str(m)[8:-2].replace('__main__.', '') # module type
- np = sum(x.numel() for x in m_.parameters()) # number params
- m_.i, m_.f, m_.type, m_.np = i, f, t, np # attach index, 'from' index, type, number params
- LOGGER.info(f'{i:>3}{str(f):>18}{n_:>3}{np:10.0f} {t:<40}{str(args):<30}') # print
- save.extend(x % i for x in ([f] if isinstance(f, int) else f) if x != -1) # append to savelist
- layers.append(m_)
- if i == 0:
- ch = []
- ch.append(c2)
- return nn.Sequential(*layers), sorted(save)
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--cfg', type=str, default='yolov5s.yaml', help='model.yaml')
- parser.add_argument('--batch-size', type=int, default=1, help='total batch size for all GPUs')
- parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
- parser.add_argument('--profile', action='store_true', help='profile model speed')
- parser.add_argument('--line-profile', action='store_true', help='profile model speed layer by layer')
- parser.add_argument('--test', action='store_true', help='test all yolo*.yaml')
- opt = parser.parse_args()
- opt.cfg = check_yaml(opt.cfg) # check YAML
- print_args(vars(opt))
- device = select_device(opt.device)
-
- # Create model
- im = torch.rand(opt.batch_size, 3, 640, 640).to(device)
- model = Model(opt.cfg).to(device)
-
- # Options
- if opt.line_profile: # profile layer by layer
- model(im, profile=True)
-
- elif opt.profile: # profile forward-backward
- results = profile(input=im, ops=[model], n=3)
-
- elif opt.test: # test all models
- for cfg in Path(ROOT / 'models').rglob('yolo*.yaml'):
- try:
- _ = Model(cfg)
- except Exception as e:
- print(f'Error in {cfg}: {e}')
-
- else: # report fused model summary
- model.fuse()
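
In the inference branch of Detect.forward above, raw outputs are decoded with xy = (xy.sigmoid() * 2 + grid) * stride and wh = (wh.sigmoid() * 2) ** 2 * anchor_grid, where the grid already carries a -0.5 offset and anchor_grid holds anchor sizes in input-image pixels. A standalone sketch of that decoding for a single grid cell and anchor, with made-up numbers, makes the formula concrete:

# Standalone sketch of the YOLOv5 box decoding used in Detect.forward above.
# Values are made up; shapes are reduced to one grid cell and one anchor.
import torch

raw_xy = torch.tensor([0.2, -0.5])        # raw network outputs for x, y
raw_wh = torch.tensor([0.1, 0.4])         # raw network outputs for w, h
grid_xy = torch.tensor([3.0, 7.0]) - 0.5  # grid cell coordinates with the -0.5 offset
stride = 8.0                              # stride of this detection layer
anchor_wh = torch.tensor([10.0, 13.0])    # anchor size in pixels at this stride

xy = (raw_xy.sigmoid() * 2 + grid_xy) * stride  # box centre in input-image pixels
wh = (raw_wh.sigmoid() * 2) ** 2 * anchor_wh    # box width/height in pixels
print(xy, wh)
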
diff --git a/spaces/hebert2099/MusicGen/audiocraft/modules/codebooks_patterns.py b/spaces/hebert2099/MusicGen/audiocraft/modules/codebooks_patterns.py
deleted file mode 100644
index c5b35cbea8cff84aa56116dbdd860fc72a913a13..0000000000000000000000000000000000000000
--- a/spaces/hebert2099/MusicGen/audiocraft/modules/codebooks_patterns.py
+++ /dev/null
@@ -1,539 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-from collections import namedtuple
-from dataclasses import dataclass
-from functools import lru_cache
-import logging
-import typing as tp
-
-from abc import ABC, abstractmethod
-import torch
-
-LayoutCoord = namedtuple('LayoutCoord', ['t', 'q']) # (timestep, codebook index)
-PatternLayout = tp.List[tp.List[LayoutCoord]] # Sequence of coordinates
-logger = logging.getLogger(__name__)
-
-
-@dataclass
-class Pattern:
- """Base implementation of a pattern over a sequence with multiple codebooks.
-
- The codebook pattern consists in a layout, defining for each sequence step
- the list of coordinates of each codebook timestep in the resulting interleaved sequence.
- The first item of the pattern is always an empty list in order to properly insert a special token
- to start with. For convenience, we also keep track of ``n_q`` the number of codebooks used for the pattern
- and ``timesteps`` the number of timesteps corresponding to the original sequence.
-
- The pattern provides convenient methods to build and revert interleaved sequences from it:
- ``build_pattern_sequence`` maps a given dense input tensor of a multi-codebook sequence from [B, K, T]
- to the interleaved sequence of shape [B, K, S] applying the pattern, with B being the batch size,
- K being the number of codebooks, T the number of original timesteps and S the number of sequence steps
- for the output sequence. The unfilled positions are replaced with a special token and the built sequence
- is returned along with a mask indicating valid tokens.
- ``revert_pattern_sequence`` maps back an interleaved sequence of shape [B, K, S] to the original alignment
- of codebooks across timesteps to an output tensor of shape [B, K, T], using again a special token and a mask
- to fill and specify invalid positions if needed.
- See the dedicated methods for more details.
- """
- # Pattern layout, for each sequence step, we have a list of coordinates
- # corresponding to the original codebook timestep and position.
- # The first list is always an empty list in order to properly insert
- # a special token to start with.
- layout: PatternLayout
- timesteps: int
- n_q: int
-
- def __post_init__(self):
- assert len(self.layout) > 0
- assert self.layout[0] == []
- self._validate_layout()
- self._build_reverted_sequence_scatter_indexes = lru_cache(100)(self._build_reverted_sequence_scatter_indexes)
- self._build_pattern_sequence_scatter_indexes = lru_cache(100)(self._build_pattern_sequence_scatter_indexes)
- logger.info("New pattern, time steps: %d, sequence steps: %d", self.timesteps, len(self.layout))
-
- def _validate_layout(self):
- """Runs checks on the layout to ensure a valid pattern is defined.
- A pattern is considered invalid if:
- - Multiple timesteps for a same codebook are defined in the same sequence step
- - The timesteps for a given codebook are not in ascending order as we advance in the sequence
- (this would mean that we have future timesteps before past timesteps).
- """
- q_timesteps = {q: 0 for q in range(self.n_q)}
- for s, seq_coords in enumerate(self.layout):
- if len(seq_coords) > 0:
- qs = set()
- for coord in seq_coords:
- qs.add(coord.q)
- last_q_timestep = q_timesteps[coord.q]
- assert coord.t >= last_q_timestep, \
- f"Past timesteps are found in the sequence for codebook = {coord.q} at step {s}"
- q_timesteps[coord.q] = coord.t
- # each sequence step contains at max 1 coordinate per codebook
- assert len(qs) == len(seq_coords), \
- f"Multiple entries for a same codebook are found at step {s}"
-
- @property
- def num_sequence_steps(self):
- return len(self.layout) - 1
-
- @property
- def max_delay(self):
- max_t_in_seq_coords = 0
- for seq_coords in self.layout[1:]:
- for coords in seq_coords:
- max_t_in_seq_coords = max(max_t_in_seq_coords, coords.t + 1)
- return max_t_in_seq_coords - self.timesteps
-
- @property
- def valid_layout(self):
- valid_step = len(self.layout) - self.max_delay
- return self.layout[:valid_step]
-
- def get_sequence_coords_with_timestep(self, t: int, q: tp.Optional[int] = None):
- """Get codebook coordinates in the layout that corresponds to the specified timestep t
- and optionally to the codebook q. Coordinates are returned as a tuple with the sequence step
- and the actual codebook coordinates.
- """
- assert t <= self.timesteps, "provided timesteps is greater than the pattern's number of timesteps"
- if q is not None:
- assert q <= self.n_q, "provided number of codebooks is greater than the pattern's number of codebooks"
- coords = []
- for s, seq_codes in enumerate(self.layout):
- for code in seq_codes:
- if code.t == t and (q is None or code.q == q):
- coords.append((s, code))
- return coords
-
- def get_steps_with_timestep(self, t: int, q: tp.Optional[int] = None) -> tp.List[int]:
- return [step for step, coords in self.get_sequence_coords_with_timestep(t, q)]
-
- def get_first_step_with_timesteps(self, t: int, q: tp.Optional[int] = None) -> tp.Optional[int]:
- steps_with_timesteps = self.get_steps_with_timestep(t, q)
- return steps_with_timesteps[0] if len(steps_with_timesteps) > 0 else None
-
- def _build_pattern_sequence_scatter_indexes(self, timesteps: int, n_q: int, keep_only_valid_steps: bool,
- device: tp.Union[torch.device, str] = 'cpu'):
- """Build scatter indexes corresponding to the pattern, up to the provided sequence_steps.
-
- Args:
- timesteps (int): Maximum number of timesteps steps to consider.
- keep_only_valid_steps (bool): Restrict the pattern layout to match only valid steps.
- device (Union[torch.device, str]): Device for created tensors.
- Returns:
- indexes (torch.Tensor): Indexes corresponding to the sequence, of shape [K, S].
- mask (torch.Tensor): Mask corresponding to indexes that matches valid indexes, of shape [K, S].
- """
- assert n_q == self.n_q, f"invalid number of codebooks for the sequence and the pattern: {n_q} != {self.n_q}"
- assert timesteps <= self.timesteps, "invalid number of timesteps used to build the sequence from the pattern"
- # use the proper layout based on whether we limit ourselves to valid steps only or not,
- # note that using the valid_layout will result in a truncated sequence up to the valid steps
- ref_layout = self.valid_layout if keep_only_valid_steps else self.layout
- # single item indexing being super slow with pytorch vs. numpy, so we use numpy here
- indexes = torch.zeros(n_q, len(ref_layout), dtype=torch.long).numpy()
- mask = torch.zeros(n_q, len(ref_layout), dtype=torch.bool).numpy()
- # fill indexes with last sequence step value that will correspond to our special token
- # the last value is n_q * timesteps as we have flattened z and append special token as the last token
- # which will correspond to the index: n_q * timesteps
- indexes[:] = n_q * timesteps
- # iterate over the pattern and fill scattered indexes and mask
- for s, sequence_coords in enumerate(ref_layout):
- for coords in sequence_coords:
- if coords.t < timesteps:
- indexes[coords.q, s] = coords.t + coords.q * timesteps
- mask[coords.q, s] = 1
- indexes = torch.from_numpy(indexes).to(device)
- mask = torch.from_numpy(mask).to(device)
- return indexes, mask
-
- def build_pattern_sequence(self, z: torch.Tensor, special_token: int, keep_only_valid_steps: bool = False):
- """Build sequence corresponding to the pattern from the input tensor z.
- The sequence is built using up to sequence_steps if specified, and non-pattern
- coordinates are filled with the special token.
-
- Args:
- z (torch.Tensor): Input tensor of multi-codebooks sequence, of shape [B, K, T].
- special_token (int): Special token used to fill non-pattern coordinates in the new sequence.
- keep_only_valid_steps (bool): Build a sequence from the pattern up to valid (= fully defined) steps.
- Steps that are beyond valid steps will be replaced by the special_token in that case.
- Returns:
- values (torch.Tensor): Interleaved sequence matching the pattern, of shape [B, K, S] with S
- corresponding either to the sequence_steps if provided, otherwise to the length of the pattern.
- indexes (torch.Tensor): Indexes corresponding to the interleaved sequence, of shape [K, S].
- mask (torch.Tensor): Mask corresponding to indexes that matches valid indexes of shape [K, S].
- """
- B, K, T = z.shape
- indexes, mask = self._build_pattern_sequence_scatter_indexes(
- T, K, keep_only_valid_steps=keep_only_valid_steps, device=str(z.device)
- )
- z = z.view(B, -1)
- # we append the special token as the last index of our flattened z tensor
- z = torch.cat([z, torch.zeros_like(z[:, :1]) + special_token], dim=1)
- values = z[:, indexes.view(-1)]
- values = values.view(B, K, indexes.shape[-1])
- return values, indexes, mask
-
- def _build_reverted_sequence_scatter_indexes(self, sequence_steps: int, n_q: int,
- keep_only_valid_steps: bool = False,
- is_model_output: bool = False,
- device: tp.Union[torch.device, str] = 'cpu'):
- """Builds scatter indexes required to retrieve the original multi-codebook sequence
- from interleaving pattern.
-
- Args:
- sequence_steps (int): Sequence steps.
- n_q (int): Number of codebooks.
- keep_only_valid_steps (bool): Build a sequence from the pattern up to valid (= fully defined) steps.
- Steps that are beyond valid steps will be replaced by the special_token in that case.
- is_model_output (bool): Whether to keep the sequence item corresponding to initial special token or not.
- device (Union[torch.device, str]): Device for created tensors.
- Returns:
- torch.Tensor: Indexes for reconstructing the output, of shape [K, T].
- mask (torch.Tensor): Mask corresponding to indexes that matches valid indexes of shape [K, T].
- """
- ref_layout = self.valid_layout if keep_only_valid_steps else self.layout
- # TODO(jade): Do we want to further truncate to only valid timesteps here as well?
- timesteps = self.timesteps
- assert n_q == self.n_q, f"invalid number of codebooks for the sequence and the pattern: {n_q} != {self.n_q}"
- assert sequence_steps <= len(ref_layout), \
- f"sequence to revert is longer than the defined pattern: {sequence_steps} > {len(ref_layout)}"
-
- # ensure we take the appropriate indexes to keep the model output from the first special token as well
- if is_model_output:
- ref_layout = ref_layout[1:]
-
- # single item indexing being super slow with pytorch vs. numpy, so we use numpy here
- indexes = torch.zeros(n_q, timesteps, dtype=torch.long).numpy()
- mask = torch.zeros(n_q, timesteps, dtype=torch.bool).numpy()
- # fill indexes with last sequence step value that will correspond to our special token
- indexes[:] = n_q * sequence_steps
- for s, sequence_codes in enumerate(ref_layout):
- if s < sequence_steps:
- for code in sequence_codes:
- if code.t < timesteps:
- indexes[code.q, code.t] = s + code.q * sequence_steps
- mask[code.q, code.t] = 1
- indexes = torch.from_numpy(indexes).to(device)
- mask = torch.from_numpy(mask).to(device)
- return indexes, mask
-
- def revert_pattern_sequence(self, s: torch.Tensor, special_token: int, keep_only_valid_steps: bool = False):
- """Revert a sequence built from the pattern back to the original multi-codebook sequence without interleaving.
- The sequence is reverted using up to timesteps if specified, and non-pattern coordinates
- are filled with the special token.
-
- Args:
- s (torch.Tensor): Interleaved sequence tensor obtained from the pattern, of shape [B, K, S].
- special_token (int or float): Special token used to fill non-pattern coordinates in the new sequence.
- Returns:
- values (torch.Tensor): Interleaved sequence matching the pattern, of shape [B, K, T] with T
- corresponding either to the timesteps if provided, or the total timesteps in pattern otherwise.
- indexes (torch.Tensor): Indexes corresponding to the interleaved sequence, of shape [K, T].
- mask (torch.Tensor): Mask corresponding to indexes that matches valid indexes of shape [K, T].
- """
- B, K, S = s.shape
- indexes, mask = self._build_reverted_sequence_scatter_indexes(
- S, K, keep_only_valid_steps, is_model_output=False, device=str(s.device)
- )
- s = s.view(B, -1)
- # we append the special token as the last index of our flattened z tensor
- s = torch.cat([s, torch.zeros_like(s[:, :1]) + special_token], dim=1)
- values = s[:, indexes.view(-1)]
- values = values.view(B, K, indexes.shape[-1])
- return values, indexes, mask
-
- def revert_pattern_logits(self, logits: torch.Tensor, special_token: float, keep_only_valid_steps: bool = False):
- """Revert model logits obtained on a sequence built from the pattern
- back to a tensor matching the original sequence.
-
- This method is similar to ``revert_pattern_sequence`` with the following specificities:
- 1. It is designed to work with the extra cardinality dimension
- 2. We return the logits for the first sequence item that matches the special_token and
- which matching target in the original sequence is the first item of the sequence,
- while we skip the last logits as there is no matching target
- """
- B, card, K, S = logits.shape
- indexes, mask = self._build_reverted_sequence_scatter_indexes(
- S, K, keep_only_valid_steps, is_model_output=True, device=logits.device
- )
- logits = logits.reshape(B, card, -1)
- # we append the special token as the last index of our flattened z tensor
- logits = torch.cat([logits, torch.zeros_like(logits[:, :, :1]) + special_token], dim=-1) # [B, card, K x S]
- values = logits[:, :, indexes.view(-1)]
- values = values.view(B, card, K, indexes.shape[-1])
- return values, indexes, mask
-
-
-class CodebooksPatternProvider(ABC):
- """Abstraction around providing pattern for interleaving codebooks.
-
- The CodebooksPatternProvider abstraction allows to implement various strategies to
- define interleaving pattern of sequences composed of multiple codebooks. For a given
- number of codebooks `n_q`, the pattern provider can generate a specified pattern
- corresponding to a sequence of `T` timesteps with `n_q` parallel codebooks. This pattern
- can be used to construct a new sequence from the original codes respecting the specified
- pattern. The pattern is defined as a list of list of code coordinates, code coordinate
- being a tuple with the original timestep and codebook to build the new sequence.
- Note that all patterns must start with an empty list that is then used to insert a first
- sequence step of special tokens in the newly generated sequence.
-
- Args:
- n_q (int): number of codebooks.
- cached (bool): if True, patterns for a given length are cached. In general
- that should be true for efficiency reason to avoid synchronization points.
- """
- def __init__(self, n_q: int, cached: bool = True):
- assert n_q > 0
- self.n_q = n_q
- self.get_pattern = lru_cache(100)(self.get_pattern) # type: ignore
-
- @abstractmethod
- def get_pattern(self, timesteps: int) -> Pattern:
- """Builds pattern with specific interleaving between codebooks.
-
- Args:
- timesteps (int): Total number of timesteps.
- """
- raise NotImplementedError()
-
-
-class DelayedPatternProvider(CodebooksPatternProvider):
- """Provider for delayed pattern across delayed codebooks.
- Codebooks are delayed in the sequence and sequence steps will contain codebooks
- from different timesteps.
-
- Example:
- Taking timesteps=4 and n_q=3, delays=None, the multi-codebook sequence:
- [[1, 2, 3, 4],
- [1, 2, 3, 4],
- [1, 2, 3, 4]]
- The resulting sequence obtained from the returned pattern is:
- [[S, 1, 2, 3, 4],
- [S, S, 1, 2, 3],
- [S, S, S, 1, 2]]
- (with S being a special token)
-
- Args:
- n_q (int): Number of codebooks.
- delays (Optional[List[int]]): Delay for each of the codebooks.
- If delays not defined, each codebook is delayed by 1 compared to the previous one.
- flatten_first (int): Flatten the first N timesteps.
- empty_initial (int): Prepend with N empty list of coordinates.
- """
- def __init__(self, n_q: int, delays: tp.Optional[tp.List[int]] = None,
- flatten_first: int = 0, empty_initial: int = 0):
- super().__init__(n_q)
- if delays is None:
- delays = list(range(n_q))
- self.delays = delays
- self.flatten_first = flatten_first
- self.empty_initial = empty_initial
- assert len(self.delays) == self.n_q
- assert sorted(self.delays) == self.delays
-
- def get_pattern(self, timesteps: int) -> Pattern:
- out: PatternLayout = [[]]
- max_delay = max(self.delays)
- if self.empty_initial:
- out += [[] for _ in range(self.empty_initial)]
- if self.flatten_first:
- for t in range(min(timesteps, self.flatten_first)):
- for q in range(self.n_q):
- out.append([LayoutCoord(t, q)])
- for t in range(self.flatten_first, timesteps + max_delay):
- v = []
- for q, delay in enumerate(self.delays):
- t_for_q = t - delay
- if t_for_q >= self.flatten_first:
- v.append(LayoutCoord(t_for_q, q))
- out.append(v)
- return Pattern(out, n_q=self.n_q, timesteps=timesteps)
-
-
-class ParallelPatternProvider(DelayedPatternProvider):
- """Provider for parallel pattern across codebooks.
- This pattern provider is a special case of the delayed pattern with actually no delay,
- hence delays=repeat(0, n_q).
-
- Args:
- n_q (int): Number of codebooks.
- """
- def __init__(self, n_q: int):
- super().__init__(n_q, [0] * n_q)
-
-
-class UnrolledPatternProvider(CodebooksPatternProvider):
- """Provider for unrolling codebooks pattern.
- This pattern provider makes it possible to represent the codebooks fully flattened or only flattened to some extent,
- while also specifying a given delay between the flattened codebook representations, allowing to
- unroll the codebooks in the sequence.
-
- Example:
- 1. Flattening of the codebooks.
- By default, the pattern provider will fully flatten the codebooks such as flattening=range(n_q),
- taking n_q = 3 and timesteps = 4:
- [[1, 2, 3, 4],
- [1, 2, 3, 4],
- [1, 2, 3, 4]]
- will result into:
- [[S, S, 1, S, S, 2, S, S, 3, S, S, 4],
- [S, 1, S, S, 2, S, S, 3, S, S, 4, S],
- [1, S, S, 2, S, S, 3, S, S, 4, S, S]]
- 2. Partial flattening of the codebooks. The ``flattening`` parameter allows to specify the inner step
- for each of the codebooks, allowing to define which codebook to flatten (or keep in parallel), for example
- taking n_q = 3, timesteps = 4 and flattening = [0, 1, 1]:
- [[1, 2, 3, 4],
- [1, 2, 3, 4],
- [1, 2, 3, 4]]
- will result into:
- [[S, 1, S, S, 2, S, S, 3, S, S, 4, S],
- [S, 1, S, S, 2, S, S, 3, S, S, 4, S],
- [1, S, S, 2, S, S, 3, S, S, 4, S, S]]
- 3. Flattening with delay. The ``delay`` parameter allows to further unroll the sequence of codebooks
- allowing to specify the delay per codebook. Note that the delay between codebooks flattened to the
- same inner timestep should be coherent. For example, taking n_q = 3, timesteps = 4, flattening = [0, 1, 1]
- and delays = [0, 3, 3]:
- [[1, 2, 3, 4],
- [1, 2, 3, 4],
- [1, 2, 3, 4]]
- will result into:
- [[S, S, S, 1, S, 2, S, 3, S, 4],
- [S, S, S, 1, S, 2, S, 3, S, 4],
- [1, 2, 3, S, 4, S, 5, S, 6, S]]
-
- Args:
- n_q (int): Number of codebooks.
- flattening (Optional[List[int]]): Flattening schema over the codebooks. If not defined,
- the codebooks will be flattened to 1 codebook per step, meaning that the sequence will
- have n_q extra steps for each timestep.
- delays (Optional[List[int]]): Delay for each of the codebooks. If not defined,
- no delay is added and therefore will default to [0] * ``n_q``.
- Note that two codebooks that will be flattened to the same inner step
- should have the same delay, otherwise the pattern is considered as invalid.
- """
- FlattenedCodebook = namedtuple('FlattenedCodebook', ['codebooks', 'delay'])
-
- def __init__(self, n_q: int, flattening: tp.Optional[tp.List[int]] = None,
- delays: tp.Optional[tp.List[int]] = None):
- super().__init__(n_q)
- if flattening is None:
- flattening = list(range(n_q))
- if delays is None:
- delays = [0] * n_q
- assert len(flattening) == n_q
- assert len(delays) == n_q
- assert sorted(flattening) == flattening
- assert sorted(delays) == delays
- self._flattened_codebooks = self._build_flattened_codebooks(delays, flattening)
- self.max_delay = max(delays)
-
- def _build_flattened_codebooks(self, delays: tp.List[int], flattening: tp.List[int]):
- """Build a flattened codebooks representation as a dictionary of inner step
- and the actual codebook indices corresponding to the flattened codebook. For convenience, we
- also store the delay associated to the flattened codebook to avoid maintaining an extra mapping.
- """
- flattened_codebooks: dict = {}
- for q, (inner_step, delay) in enumerate(zip(flattening, delays)):
- if inner_step not in flattened_codebooks:
- flat_codebook = UnrolledPatternProvider.FlattenedCodebook(codebooks=[q], delay=delay)
- else:
- flat_codebook = flattened_codebooks[inner_step]
- assert flat_codebook.delay == delay, (
- "Delay and flattening between codebooks is inconsistent: ",
- "two codebooks flattened to the same position should have the same delay."
- )
- flat_codebook.codebooks.append(q)
- flattened_codebooks[inner_step] = flat_codebook
- return flattened_codebooks
-
- @property
- def _num_inner_steps(self):
- """Number of inner steps to unroll between timesteps in order to flatten the codebooks.
- """
- return max([inner_step for inner_step in self._flattened_codebooks.keys()]) + 1
-
- def num_virtual_steps(self, timesteps: int) -> int:
- return timesteps * self._num_inner_steps + 1
-
- def get_pattern(self, timesteps: int) -> Pattern:
- """Builds pattern for delay across codebooks.
-
- Args:
- timesteps (int): Total number of timesteps.
- """
- # the PatternLayout is built as a tuple of sequence position and list of coordinates
- # so that it can be reordered properly given the required delay between codebooks of given timesteps
- indexed_out: list = [(-1, [])]
- max_timesteps = timesteps + self.max_delay
- for t in range(max_timesteps):
- # for each timestep, we unroll the flattened codebooks,
- # emitting the sequence step with the corresponding delay
- for step in range(self._num_inner_steps):
- if step in self._flattened_codebooks:
- # we have codebooks at this virtual step to emit
- step_codebooks = self._flattened_codebooks[step]
- t_for_q = t + step_codebooks.delay
- coords = [LayoutCoord(t, q) for q in step_codebooks.codebooks]
- if t_for_q < max_timesteps and t < max_timesteps:
- indexed_out.append((t_for_q, coords))
- else:
- # there is no codebook in this virtual step so we emit an empty list
- indexed_out.append((t, []))
- out = [coords for _, coords in sorted(indexed_out)]
- return Pattern(out, n_q=self.n_q, timesteps=timesteps)
-
-
-class VALLEPattern(CodebooksPatternProvider):
- """Almost VALL-E style pattern. We futher allow some delays for the
- codebooks other than the first one.
-
- Args:
- n_q (int): Number of codebooks.
- delays (Optional[List[int]]): Delay for each of the codebooks.
- If delays not defined, each codebook is delayed by 1 compared to the previous one.
- """
- def __init__(self, n_q: int, delays: tp.Optional[tp.List[int]] = None):
- super().__init__(n_q)
- if delays is None:
- delays = [0] * (n_q - 1)
- self.delays = delays
- assert len(self.delays) == self.n_q - 1
- assert sorted(self.delays) == self.delays
-
- def get_pattern(self, timesteps: int) -> Pattern:
- out: PatternLayout = [[]]
- for t in range(timesteps):
- out.append([LayoutCoord(t, 0)])
- max_delay = max(self.delays)
- for t in range(timesteps + max_delay):
- v = []
- for q, delay in enumerate(self.delays):
- t_for_q = t - delay
- if t_for_q >= 0:
- v.append(LayoutCoord(t_for_q, q + 1))
- out.append(v)
- return Pattern(out, n_q=self.n_q, timesteps=timesteps)
-
-
-class MusicLMPattern(CodebooksPatternProvider):
- """Almost MusicLM style pattern. This is equivalent to full flattening
- but in a different order.
-
- Args:
- n_q (int): Number of codebooks.
- group_by (int): Number of codebooks to group together.
- """
- def __init__(self, n_q: int, group_by: int = 2):
- super().__init__(n_q)
- self.group_by = group_by
-
- def get_pattern(self, timesteps: int) -> Pattern:
- out: PatternLayout = [[]]
- for offset in range(0, self.n_q, self.group_by):
- for t in range(timesteps):
- for q in range(offset, offset + self.group_by):
- out.append([LayoutCoord(t, q)])
- return Pattern(out, n_q=self.n_q, timesteps=timesteps)
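
The DelayedPatternProvider docstring above shows codebook k being shifted by k steps and padded with a special token. A short sketch of building and applying such a pattern to a toy [B, K, T] code tensor, assuming the audiocraft package from this Space is importable (otherwise the classes defined above can be used directly):

# Sketch: apply a delayed interleaving pattern to a toy [B, K, T] code tensor.
import torch
from audiocraft.modules.codebooks_patterns import DelayedPatternProvider

B, K, T = 1, 3, 4
z = torch.arange(1, T + 1).repeat(B, K, 1)  # every codebook holds [1, 2, 3, 4]

provider = DelayedPatternProvider(n_q=K)    # default delays = [0, 1, 2]
pattern = provider.get_pattern(T)

special_token = 0
values, indexes, mask = pattern.build_pattern_sequence(z, special_token)
print(values[0])
# Expected interleaved layout (0 marks the special token):
# [[0, 1, 2, 3, 4, 0, 0],
#  [0, 0, 1, 2, 3, 4, 0],
#  [0, 0, 0, 1, 2, 3, 4]]
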
diff --git a/spaces/hf4h/biomedical-language-models/app.py b/spaces/hf4h/biomedical-language-models/app.py
deleted file mode 100644
index 67d3c4219e1d77d4edaf0836a296606f05a6c8ab..0000000000000000000000000000000000000000
--- a/spaces/hf4h/biomedical-language-models/app.py
+++ /dev/null
@@ -1,92 +0,0 @@
-#!/usr/bin/env python
-
-from __future__ import annotations
-
-import gradio as gr
-
-from model_list import ModelList
-
-DESCRIPTION = '# Explore Biomedical Language Models'
-NOTES = '''
-- Stanford HAI Article, ["The Shaky Foundations of Foundation Models in Healthcare"](https://hai.stanford.edu/news/shaky-foundations-foundation-models-healthcare)
-'''
-FOOTER = ''''''
-
-def main():
- model_list = ModelList()
-
- with gr.Blocks(css='style.css') as demo:
- gr.Markdown(DESCRIPTION)
-
- search_box = gr.Textbox(
- label='Search Model Name',
- placeholder=
- 'You can search for titles with regular expressions. e.g. (? 0:
- self.dropout = self.dropout_op(**self.dropout_op_kwargs)
- else:
- self.dropout = None
- self.instnorm = self.norm_op(output_channels, **self.norm_op_kwargs)
- self.lrelu = self.nonlin(**self.nonlin_kwargs)
-
- def forward(self, x):
- x = self.conv(x)
- if self.dropout is not None:
- x = self.dropout(x)
- return self.lrelu(self.instnorm(x))
-
-
-class ConvDropoutNonlinNorm(ConvDropoutNormNonlin):
- def forward(self, x):
- x = self.conv(x)
- if self.dropout is not None:
- x = self.dropout(x)
- return self.instnorm(self.lrelu(x))
-
-
-class StackedConvLayers(nn.Module):
- def __init__(self, input_feature_channels, output_feature_channels, num_convs,
- conv_op=nn.Conv2d, conv_kwargs=None,
- norm_op=nn.BatchNorm2d, norm_op_kwargs=None,
- dropout_op=nn.Dropout2d, dropout_op_kwargs=None,
- nonlin=nn.LeakyReLU, nonlin_kwargs=None, first_stride=None, basic_block=ConvDropoutNormNonlin):
- '''
- stacks ConvDropoutNormNonlin layers. first_stride will only be applied to the first layer in the stack. The other parameters affect all layers
- :param input_feature_channels:
- :param output_feature_channels:
- :param num_convs:
- :param dilation:
- :param kernel_size:
- :param padding:
- :param dropout:
- :param initial_stride:
- :param conv_op:
- :param norm_op:
- :param dropout_op:
- :param inplace:
- :param neg_slope:
- :param norm_affine:
- :param conv_bias:
- '''
- self.input_channels = input_feature_channels
- self.output_channels = output_feature_channels
-
- if nonlin_kwargs is None:
- nonlin_kwargs = {'negative_slope': 1e-2, 'inplace': True}
- if dropout_op_kwargs is None:
- dropout_op_kwargs = {'p': 0.5, 'inplace': True}
- if norm_op_kwargs is None:
- norm_op_kwargs = {'eps': 1e-5, 'affine': True, 'momentum': 0.1}
- if conv_kwargs is None:
- conv_kwargs = {'kernel_size': 3, 'stride': 1, 'padding': 1, 'dilation': 1, 'bias': True}
-
- self.nonlin_kwargs = nonlin_kwargs
- self.nonlin = nonlin
- self.dropout_op = dropout_op
- self.dropout_op_kwargs = dropout_op_kwargs
- self.norm_op_kwargs = norm_op_kwargs
- self.conv_kwargs = conv_kwargs
- self.conv_op = conv_op
- self.norm_op = norm_op
-
- if first_stride is not None:
- self.conv_kwargs_first_conv = deepcopy(conv_kwargs)
- self.conv_kwargs_first_conv['stride'] = first_stride
- else:
- self.conv_kwargs_first_conv = conv_kwargs
-
- super(StackedConvLayers, self).__init__()
- self.blocks = nn.Sequential(
- *([basic_block(input_feature_channels, output_feature_channels, self.conv_op,
- self.conv_kwargs_first_conv,
- self.norm_op, self.norm_op_kwargs, self.dropout_op, self.dropout_op_kwargs,
- self.nonlin, self.nonlin_kwargs)] +
- [basic_block(output_feature_channels, output_feature_channels, self.conv_op,
- self.conv_kwargs,
- self.norm_op, self.norm_op_kwargs, self.dropout_op, self.dropout_op_kwargs,
- self.nonlin, self.nonlin_kwargs) for _ in range(num_convs - 1)]))
-
- def forward(self, x):
- return self.blocks(x)
-
-
-def print_module_training_status(module):
- if isinstance(module, nn.Conv2d) or isinstance(module, nn.Conv3d) or isinstance(module, nn.Dropout3d) or \
- isinstance(module, nn.Dropout2d) or isinstance(module, nn.Dropout) or isinstance(module, nn.InstanceNorm3d) \
- or isinstance(module, nn.InstanceNorm2d) or isinstance(module, nn.InstanceNorm1d) \
- or isinstance(module, nn.BatchNorm2d) or isinstance(module, nn.BatchNorm3d) or isinstance(module,
- nn.BatchNorm1d):
- print(str(module), module.training)
-
-
-class Upsample(nn.Module):
- def __init__(self, size=None, scale_factor=None, mode='nearest', align_corners=False):
- super(Upsample, self).__init__()
- self.align_corners = align_corners
- self.mode = mode
- self.scale_factor = scale_factor
- self.size = size
-
- def forward(self, x):
- return nn.functional.interpolate(x, size=self.size, scale_factor=self.scale_factor, mode=self.mode,
- align_corners=self.align_corners)
-
-
-class Generic_UNet(SegmentationNetwork):
- DEFAULT_BATCH_SIZE_3D = 2
- DEFAULT_PATCH_SIZE_3D = (64, 192, 160)
- SPACING_FACTOR_BETWEEN_STAGES = 2
- BASE_NUM_FEATURES_3D = 30
- MAX_NUMPOOL_3D = 999
- MAX_NUM_FILTERS_3D = 320
-
- DEFAULT_PATCH_SIZE_2D = (256, 256)
- BASE_NUM_FEATURES_2D = 30
- DEFAULT_BATCH_SIZE_2D = 50
- MAX_NUMPOOL_2D = 999
- MAX_FILTERS_2D = 480
-
- use_this_for_batch_size_computation_2D = 19739648
- use_this_for_batch_size_computation_3D = 520000000 # 505789440
-
- def __init__(self, input_channels, base_num_features, num_classes, num_pool, num_conv_per_stage=2,
- feat_map_mul_on_downscale=2, conv_op=nn.Conv2d,
- norm_op=nn.BatchNorm2d, norm_op_kwargs=None,
- dropout_op=nn.Dropout2d, dropout_op_kwargs=None,
- nonlin=nn.LeakyReLU, nonlin_kwargs=None, deep_supervision=True, dropout_in_localization=False,
- final_nonlin=softmax_helper, weightInitializer=InitWeights_He(1e-2), pool_op_kernel_sizes=None,
- conv_kernel_sizes=None,
- upscale_logits=False, convolutional_pooling=False, convolutional_upsampling=False,
- max_num_features=None, basic_block=ConvDropoutNormNonlin,
- seg_output_use_bias=False):
- """
- basically more flexible than v1, architecture is the same
-
- Does this look complicated? Nah bro. Functionality > usability
-
- This does everything you need, including world peace.
-
- Questions? -> f.isensee@dkfz.de
- """
- super(Generic_UNet, self).__init__()
- self.convolutional_upsampling = convolutional_upsampling
- self.convolutional_pooling = convolutional_pooling
- self.upscale_logits = upscale_logits
- if nonlin_kwargs is None:
- nonlin_kwargs = {'negative_slope': 1e-2, 'inplace': True}
- if dropout_op_kwargs is None:
- dropout_op_kwargs = {'p': 0.5, 'inplace': True}
- if norm_op_kwargs is None:
- norm_op_kwargs = {'eps': 1e-5, 'affine': True, 'momentum': 0.1}
-
- self.conv_kwargs = {'stride': 1, 'dilation': 1, 'bias': True}
-
- self.nonlin = nonlin
- self.nonlin_kwargs = nonlin_kwargs
- self.dropout_op_kwargs = dropout_op_kwargs
- self.norm_op_kwargs = norm_op_kwargs
- self.weightInitializer = weightInitializer
- self.conv_op = conv_op
- self.norm_op = norm_op
- self.dropout_op = dropout_op
- self.num_classes = num_classes
- self.final_nonlin = final_nonlin
- self._deep_supervision = deep_supervision
- self.do_ds = deep_supervision
-
- if conv_op == nn.Conv2d:
- upsample_mode = 'bilinear'
- pool_op = nn.MaxPool2d
- transpconv = nn.ConvTranspose2d
- if pool_op_kernel_sizes is None:
- pool_op_kernel_sizes = [(2, 2)] * num_pool
- if conv_kernel_sizes is None:
- conv_kernel_sizes = [(3, 3)] * (num_pool + 1)
- elif conv_op == nn.Conv3d:
- upsample_mode = 'trilinear'
- pool_op = nn.MaxPool3d
- transpconv = nn.ConvTranspose3d
- if pool_op_kernel_sizes is None:
- pool_op_kernel_sizes = [(2, 2, 2)] * num_pool
- if conv_kernel_sizes is None:
- conv_kernel_sizes = [(3, 3, 3)] * (num_pool + 1)
- else:
- raise ValueError("unknown convolution dimensionality, conv op: %s" % str(conv_op))
-
- self.input_shape_must_be_divisible_by = np.prod(pool_op_kernel_sizes, 0, dtype=np.int64)
- self.pool_op_kernel_sizes = pool_op_kernel_sizes
- self.conv_kernel_sizes = conv_kernel_sizes
-
- self.conv_pad_sizes = []
- for krnl in self.conv_kernel_sizes:
- self.conv_pad_sizes.append([1 if i == 3 else 0 for i in krnl])
-
- if max_num_features is None:
- if self.conv_op == nn.Conv3d:
- self.max_num_features = self.MAX_NUM_FILTERS_3D
- else:
- self.max_num_features = self.MAX_FILTERS_2D
- else:
- self.max_num_features = max_num_features
-
- self.conv_blocks_context = []
- self.conv_blocks_localization = []
- self.td = []
- self.tu = []
- self.seg_outputs = []
-
- output_features = base_num_features
- input_features = input_channels
-
- for d in range(num_pool):
- # determine the first stride
- if d != 0 and self.convolutional_pooling:
- first_stride = pool_op_kernel_sizes[d - 1]
- else:
- first_stride = None
-
- self.conv_kwargs['kernel_size'] = self.conv_kernel_sizes[d]
- self.conv_kwargs['padding'] = self.conv_pad_sizes[d]
- # add convolutions
- self.conv_blocks_context.append(StackedConvLayers(input_features, output_features, num_conv_per_stage,
- self.conv_op, self.conv_kwargs, self.norm_op,
- self.norm_op_kwargs, self.dropout_op,
- self.dropout_op_kwargs, self.nonlin, self.nonlin_kwargs,
- first_stride, basic_block=basic_block))
- if not self.convolutional_pooling:
- self.td.append(pool_op(pool_op_kernel_sizes[d]))
- input_features = output_features
- output_features = int(np.round(output_features * feat_map_mul_on_downscale))
-
- output_features = min(output_features, self.max_num_features)
-
- # now the bottleneck.
- # determine the first stride
- if self.convolutional_pooling:
- first_stride = pool_op_kernel_sizes[-1]
- else:
- first_stride = None
-
- # the output of the last conv must match the number of features from the skip connection if we are not using
- # convolutional upsampling. If we use convolutional upsampling then the reduction in feature maps will be
- # done by the transposed conv
- if self.convolutional_upsampling:
- final_num_features = output_features
- else:
- final_num_features = self.conv_blocks_context[-1].output_channels
-
- self.conv_kwargs['kernel_size'] = self.conv_kernel_sizes[num_pool]
- self.conv_kwargs['padding'] = self.conv_pad_sizes[num_pool]
- self.conv_blocks_context.append(nn.Sequential(
- StackedConvLayers(input_features, output_features, num_conv_per_stage - 1, self.conv_op, self.conv_kwargs,
- self.norm_op, self.norm_op_kwargs, self.dropout_op, self.dropout_op_kwargs, self.nonlin,
- self.nonlin_kwargs, first_stride, basic_block=basic_block),
- StackedConvLayers(output_features, final_num_features, 1, self.conv_op, self.conv_kwargs,
- self.norm_op, self.norm_op_kwargs, self.dropout_op, self.dropout_op_kwargs, self.nonlin,
- self.nonlin_kwargs, basic_block=basic_block)))
-
- # if we don't want to do dropout in the localization pathway then we set the dropout prob to zero here
- if not dropout_in_localization:
- old_dropout_p = self.dropout_op_kwargs['p']
- self.dropout_op_kwargs['p'] = 0.0
-
- # now lets build the localization pathway
- for u in range(num_pool):
- nfeatures_from_down = final_num_features
- nfeatures_from_skip = self.conv_blocks_context[
- -(2 + u)].output_channels # self.conv_blocks_context[-1] is bottleneck, so start with -2
- n_features_after_tu_and_concat = nfeatures_from_skip * 2
-
- # the first conv reduces the number of features to match those of skip
- # the following convs work on that number of features
- # if not convolutional upsampling then the final conv reduces the num of features again
- if u != num_pool - 1 and not self.convolutional_upsampling:
- final_num_features = self.conv_blocks_context[-(3 + u)].output_channels
- else:
- final_num_features = nfeatures_from_skip
-
- if not self.convolutional_upsampling:
- self.tu.append(Upsample(scale_factor=pool_op_kernel_sizes[-(u + 1)], mode=upsample_mode))
- else:
- self.tu.append(transpconv(nfeatures_from_down, nfeatures_from_skip, pool_op_kernel_sizes[-(u + 1)],
- pool_op_kernel_sizes[-(u + 1)], bias=False))
-
- self.conv_kwargs['kernel_size'] = self.conv_kernel_sizes[- (u + 1)]
- self.conv_kwargs['padding'] = self.conv_pad_sizes[- (u + 1)]
- self.conv_blocks_localization.append(nn.Sequential(
- StackedConvLayers(n_features_after_tu_and_concat, nfeatures_from_skip, num_conv_per_stage - 1,
- self.conv_op, self.conv_kwargs, self.norm_op, self.norm_op_kwargs, self.dropout_op,
- self.dropout_op_kwargs, self.nonlin, self.nonlin_kwargs, basic_block=basic_block),
- StackedConvLayers(nfeatures_from_skip, final_num_features, 1, self.conv_op, self.conv_kwargs,
- self.norm_op, self.norm_op_kwargs, self.dropout_op, self.dropout_op_kwargs,
- self.nonlin, self.nonlin_kwargs, basic_block=basic_block)
- ))
-
- for ds in range(len(self.conv_blocks_localization)):
- self.seg_outputs.append(conv_op(self.conv_blocks_localization[ds][-1].output_channels, num_classes,
- 1, 1, 0, 1, 1, seg_output_use_bias))
-
- self.upscale_logits_ops = []
- cum_upsample = np.cumprod(np.vstack(pool_op_kernel_sizes), axis=0)[::-1]
- for usl in range(num_pool - 1):
- if self.upscale_logits:
- self.upscale_logits_ops.append(Upsample(scale_factor=tuple([int(i) for i in cum_upsample[usl + 1]]),
- mode=upsample_mode))
- else:
- self.upscale_logits_ops.append(lambda x: x)
-
- if not dropout_in_localization:
- self.dropout_op_kwargs['p'] = old_dropout_p
-
- # register all modules properly
- self.conv_blocks_localization = nn.ModuleList(self.conv_blocks_localization)
- self.conv_blocks_context = nn.ModuleList(self.conv_blocks_context)
- self.td = nn.ModuleList(self.td)
- self.tu = nn.ModuleList(self.tu)
- self.seg_outputs = nn.ModuleList(self.seg_outputs)
- if self.upscale_logits:
- self.upscale_logits_ops = nn.ModuleList(
- self.upscale_logits_ops) # lambda x:x is not a Module so we need to distinguish here
-
- if self.weightInitializer is not None:
- self.apply(self.weightInitializer)
- # self.apply(print_module_training_status)
-
- def forward(self, x):
- skips = []
- seg_outputs = []
- for d in range(len(self.conv_blocks_context) - 1):
- x = self.conv_blocks_context[d](x)
- skips.append(x)
- if not self.convolutional_pooling:
- x = self.td[d](x)
-
- x = self.conv_blocks_context[-1](x)
-
- for u in range(len(self.tu)):
- x = self.tu[u](x)
- x = torch.cat((x, skips[-(u + 1)]), dim=1)
- x = self.conv_blocks_localization[u](x)
- seg_outputs.append(self.final_nonlin(self.seg_outputs[u](x)))
-
- if self._deep_supervision and self.do_ds:
- return tuple([seg_outputs[-1]] + [i(j) for i, j in
- zip(list(self.upscale_logits_ops)[::-1], seg_outputs[:-1][::-1])])
- else:
- return seg_outputs[-1]
-
- @staticmethod
- def compute_approx_vram_consumption(patch_size, num_pool_per_axis, base_num_features, max_num_features,
- num_modalities, num_classes, pool_op_kernel_sizes, deep_supervision=False,
- conv_per_stage=2):
- """
- This only applies for num_conv_per_stage and convolutional_upsampling=True
- not real vram consumption. just a constant term to which the vram consumption will be approx proportional
- (+ offset for parameter storage)
- :param deep_supervision:
- :param patch_size:
- :param num_pool_per_axis:
- :param base_num_features:
- :param max_num_features:
- :param num_modalities:
- :param num_classes:
- :param pool_op_kernel_sizes:
- :return:
- """
- if not isinstance(num_pool_per_axis, np.ndarray):
- num_pool_per_axis = np.array(num_pool_per_axis)
-
- npool = len(pool_op_kernel_sizes)
-
- map_size = np.array(patch_size)
- tmp = np.int64((conv_per_stage * 2 + 1) * np.prod(map_size, dtype=np.int64) * base_num_features +
- num_modalities * np.prod(map_size, dtype=np.int64) +
- num_classes * np.prod(map_size, dtype=np.int64))
-
- num_feat = base_num_features
-
- for p in range(npool):
- for pi in range(len(num_pool_per_axis)):
- map_size[pi] /= pool_op_kernel_sizes[p][pi]
- num_feat = min(num_feat * 2, max_num_features)
- num_blocks = (conv_per_stage * 2 + 1) if p < (npool - 1) else conv_per_stage # conv_per_stage + conv_per_stage for the convs of encode/decode and 1 for transposed conv
- tmp += num_blocks * np.prod(map_size, dtype=np.int64) * num_feat
- if deep_supervision and p < (npool - 2):
- tmp += np.prod(map_size, dtype=np.int64) * num_classes
- # print(p, map_size, num_feat, tmp)
- return tmp
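
Since Generic_UNet takes its whole architecture as constructor arguments, a brief instantiation sketch may help; the channel, class, and pooling counts below are illustrative assumptions rather than values from this repository's plans files, and it presumes the rest of the deleted module (SegmentationNetwork, softmax_helper, InitWeights_He) is importable.

```python
import torch
import torch.nn as nn

# Hypothetical 2D configuration: 1 input channel, 3 classes, 5 pooling stages.
net = Generic_UNet(input_channels=1, base_num_features=30, num_classes=3, num_pool=5,
                   conv_op=nn.Conv2d, norm_op=nn.InstanceNorm2d, dropout_op=nn.Dropout2d,
                   deep_supervision=False, convolutional_pooling=True,
                   convolutional_upsampling=True)

# With 5 poolings of stride 2, each input spatial size must be divisible by 2**5 = 32.
with torch.no_grad():
    probs = net(torch.randn(2, 1, 256, 256))
print(probs.shape)  # torch.Size([2, 3, 256, 256]); final_nonlin (softmax) already applied

# The VRAM helper returns a unitless proportionality constant, not bytes.
budget = Generic_UNet.compute_approx_vram_consumption(
    patch_size=(256, 256), num_pool_per_axis=(5, 5), base_num_features=30,
    max_num_features=480, num_modalities=1, num_classes=3,
    pool_op_kernel_sizes=[(2, 2)] * 5, deep_supervision=False, conv_per_stage=2)
```
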
diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/scripts_new/run_glacier_mtl_late_4.sh b/spaces/ho11laqe/nnUNet_calvingfront_detection/scripts_new/run_glacier_mtl_late_4.sh
deleted file mode 100644
index 44bf4f7d40d14080b4b4562cd2fa594f1b4e2fe2..0000000000000000000000000000000000000000
--- a/spaces/ho11laqe/nnUNet_calvingfront_detection/scripts_new/run_glacier_mtl_late_4.sh
+++ /dev/null
@@ -1,20 +0,0 @@
-#!/bin/bash -l
-#SBATCH --nodes=1 --gres=gpu:1 --time=24:00:00
-#SBATCH --job-name=Task503_glacier_mtl_late_4
-
-export data_raw="/home/woody/iwi5/iwi5039h/data_raw"
-export nnUNet_raw_data_base="/home/woody/iwi5/iwi5039h/nnUNet_data/nnUNet_raw_data_base/"
-export nnUNet_preprocessed="/home/woody/iwi5/iwi5039h/nnUNet_data/nnUNet_preprocessed/"
-export RESULTS_FOLDER="/home/woody/iwi5/iwi5039h/nnUNet_data/RESULTS_FOLDER"
-
-cd nnunet_glacer
-pwd
-conda activate nnunet
-
-python3 nnunet/dataset_conversion/Task503_Glacier_mtl.py -data_percentage 100 -base $data_raw
-python3 nnunet/experiment_planning/nnUNet_plan_and_preprocess.py -t 503 -pl3d None -pl2d ExperimentPlanner2D_mtl
-
-python3 nnunet/run/run_training.py 2d nnUNetTrainerMTLlate 503 4 -p nnUNetPlans_mtl --disable_postprocessing_on_folds
-python3 nnunet/inference/predict_simple.py -i $nnUNet_raw_data_base/nnUNet_raw_data/Task503_Glacier_mtl/imagesTs -o $RESULTS_FOLDER/test_predictions/Task503_Glacier_mtl_late/fold_4 -t 503 -m 2d -f 4 -p nnUNetPlans_mtl -tr nnUNetTrainerMTLlate
-python3 nnunet/dataset_conversion/Task503_Glacier_mtl_reverse.py -i $RESULTS_FOLDER/test_predictions/Task503_Glacier_mtl_late/fold_4
-python3 ./evaluate_nnUNet.py --predictions $RESULTS_FOLDER/test_predictions/Task503_Glacier_mtl_late/fold_4/pngs --labels_fronts $data_raw/fronts/test --labels_zones $data_raw/zones/test --sar_images $data_raw/sar_images/test
diff --git a/spaces/huangbatian/newbing/Dockerfile b/spaces/huangbatian/newbing/Dockerfile
deleted file mode 100644
index c1ba952a6cfcc7d248b1b223055cffbd6bd9c833..0000000000000000000000000000000000000000
--- a/spaces/huangbatian/newbing/Dockerfile
+++ /dev/null
@@ -1,34 +0,0 @@
-# Build Stage
-# Use golang:alpine as the base image for the build stage
-FROM golang:alpine AS builder
-
-# Install git so the project can be cloned from GitHub
-RUN apk --no-cache add git
-
-# Clone the go-proxy-bingai project from GitHub into /workspace/app
-RUN git clone https://github.com/Harry-zklcdc/go-proxy-bingai.git /workspace/app
-
-# Set the working directory to the project cloned above
-WORKDIR /workspace/app
-
-# Build the Go project. -ldflags="-s -w" strips symbols to reduce the binary size
-RUN go build -ldflags="-s -w" -tags netgo -trimpath -o go-proxy-bingai main.go
-
-# Runtime Stage
-# Use the lightweight alpine image as the runtime base image
-FROM alpine
-
-# Set the working directory
-WORKDIR /workspace/app
-
-# Copy the compiled binary from the build stage into the runtime image
-COPY --from=builder /workspace/app/go-proxy-bingai .
-
-# Set the environment variable; the value here is a random string
-ENV Go_Proxy_BingAI_USER_TOKEN_1="asd9rQ92ncMjLaoQWYtX5rG6yE3fZ4iO"
-
-# Expose port 8080
-EXPOSE 8080
-
-# Command to run when the container starts
-CMD ["/workspace/app/go-proxy-bingai"]
\ No newline at end of file
diff --git a/spaces/huggan/FastGan/utils.py b/spaces/huggan/FastGan/utils.py
deleted file mode 100644
index b091dd3ad228face4166a5018bbddf4e13e790e8..0000000000000000000000000000000000000000
--- a/spaces/huggan/FastGan/utils.py
+++ /dev/null
@@ -1,87 +0,0 @@
-import torch
-import torch.nn as nn
-from enum import Enum
-
-import base64
-import json
-from io import BytesIO
-from PIL import Image
-import requests
-import re
-from copy import deepcopy
-
-class ImageType(Enum):
- REAL_UP_L = 0
- REAL_UP_R = 1
- REAL_DOWN_R = 2
- REAL_DOWN_L = 3
- FAKE = 4
-
-
-def crop_image_part(image: torch.Tensor,
- part: ImageType) -> torch.Tensor:
- size = image.shape[2] // 2
-
- if part == ImageType.REAL_UP_L:
- return image[:, :, :size, :size]
-
- elif part == ImageType.REAL_UP_R:
- return image[:, :, :size, size:]
-
- elif part == ImageType.REAL_DOWN_L:
- return image[:, :, size:, :size]
-
- elif part == ImageType.REAL_DOWN_R:
- return image[:, :, size:, size:]
-
- else:
- raise ValueError('invalid part')
-
-
-def init_weights(module: nn.Module):
- if isinstance(module, nn.Conv2d):
- torch.nn.init.normal_(module.weight, 0.0, 0.02)
-
- if isinstance(module, nn.BatchNorm2d):
- torch.nn.init.normal_(module.weight, 1.0, 0.02)
- module.bias.data.fill_(0)
-
-def load_image_from_local(image_path, image_resize=None):
- image = Image.open(image_path)
-
- if isinstance(image_resize, tuple):
- image = image.resize(image_resize)
- return image
-
-def load_image_from_url(image_url, rgba_mode=False, image_resize=None, default_image=None):
- try:
- image = Image.open(requests.get(image_url, stream=True).raw)
-
- if rgba_mode:
- image = image.convert("RGBA")
-
- if isinstance(image_resize, tuple):
- image = image.resize(image_resize)
-
- except Exception as e:
- image = None
- if default_image:
- image = load_image_from_local(default_image, image_resize=image_resize)
-
- return image
-
-def image_to_base64(image_array):
- buffered = BytesIO()
- image_array.save(buffered, format="PNG")
- image_b64 = base64.b64encode(buffered.getvalue()).decode("utf-8")
- return f"data:image/png;base64, {image_b64}"
-
-
-def copy_G_params(model):
- flatten = deepcopy(list(p.data for p in model.parameters()))
- return flatten
-
-
-def load_params(model, new_param):
- for p, new_p in zip(model.parameters(), new_param):
- p.data.copy_(new_p)
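
The helpers above are small but easy to misread, so here is a minimal usage sketch; the stand-in generator and the EMA factor 0.999 are illustrative assumptions, not values taken from this Space.

```python
import torch
import torch.nn as nn

images = torch.randn(4, 3, 256, 256)                     # a batch of images
top_left = crop_image_part(images, ImageType.REAL_UP_L)  # shape (4, 3, 128, 128)

netG = nn.Sequential(nn.Conv2d(3, 3, 1))                 # stand-in generator
netG.apply(init_weights)                                 # normal-init conv weights

avg_params = copy_G_params(netG)                         # detached snapshot of the weights
for p, avg_p in zip(netG.parameters(), avg_params):
    avg_p.mul_(0.999).add_(p.data, alpha=0.001)          # exponential moving average step
load_params(netG, avg_params)                            # swap the averaged weights back in
```
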
diff --git a/spaces/huggingface-projects/stable-diffusion-multiplayer/stablediffusion-infinity/PyPatchMatch/csrc/masked_image.cpp b/spaces/huggingface-projects/stable-diffusion-multiplayer/stablediffusion-infinity/PyPatchMatch/csrc/masked_image.cpp
deleted file mode 100644
index 448a776b3cda9f39f4dd0ad908f1b135c647ca8f..0000000000000000000000000000000000000000
--- a/spaces/huggingface-projects/stable-diffusion-multiplayer/stablediffusion-infinity/PyPatchMatch/csrc/masked_image.cpp
+++ /dev/null
@@ -1,138 +0,0 @@
-#include "masked_image.h"
-#include
-#include
-
-const cv::Size MaskedImage::kDownsampleKernelSize = cv::Size(6, 6);
-const int MaskedImage::kDownsampleKernel[6] = {1, 5, 10, 10, 5, 1};
-
-bool MaskedImage::contains_mask(int y, int x, int patch_size) const {
- auto mask_size = size();
- for (int dy = -patch_size; dy <= patch_size; ++dy) {
- for (int dx = -patch_size; dx <= patch_size; ++dx) {
- int yy = y + dy, xx = x + dx;
- if (yy >= 0 && yy < mask_size.height && xx >= 0 && xx < mask_size.width) {
- if (is_masked(yy, xx) && !is_globally_masked(yy, xx)) return true;
- }
- }
- }
- return false;
-}
-
-MaskedImage MaskedImage::downsample() const {
- const auto &kernel_size = MaskedImage::kDownsampleKernelSize;
- const auto &kernel = MaskedImage::kDownsampleKernel;
-
- const auto size = this->size();
- const auto new_size = cv::Size(size.width / 2, size.height / 2);
-
- auto ret = MaskedImage(new_size.width, new_size.height);
- if (!m_global_mask.empty()) ret.init_global_mask_mat();
- for (int y = 0; y < size.height - 1; y += 2) {
- for (int x = 0; x < size.width - 1; x += 2) {
- int r = 0, g = 0, b = 0, ksum = 0;
- bool is_gmasked = true;
-
- for (int dy = -kernel_size.height / 2 + 1; dy <= kernel_size.height / 2; ++dy) {
- for (int dx = -kernel_size.width / 2 + 1; dx <= kernel_size.width / 2; ++dx) {
- int yy = y + dy, xx = x + dx;
- if (yy >= 0 && yy < size.height && xx >= 0 && xx < size.width) {
- if (!is_globally_masked(yy, xx)) {
- is_gmasked = false;
- }
- if (!is_masked(yy, xx)) {
- auto source_ptr = get_image(yy, xx);
- int k = kernel[kernel_size.height / 2 - 1 + dy] * kernel[kernel_size.width / 2 - 1 + dx];
- r += source_ptr[0] * k, g += source_ptr[1] * k, b += source_ptr[2] * k;
- ksum += k;
- }
- }
- }
- }
-
- if (ksum > 0) r /= ksum, g /= ksum, b /= ksum;
-
- if (!m_global_mask.empty()) {
- ret.set_global_mask(y / 2, x / 2, is_gmasked);
- }
- if (ksum > 0) {
- auto target_ptr = ret.get_mutable_image(y / 2, x / 2);
- target_ptr[0] = r, target_ptr[1] = g, target_ptr[2] = b;
- ret.set_mask(y / 2, x / 2, 0);
- } else {
- ret.set_mask(y / 2, x / 2, 1);
- }
- }
- }
-
- return ret;
-}
-
-MaskedImage MaskedImage::upsample(int new_w, int new_h) const {
- const auto size = this->size();
- auto ret = MaskedImage(new_w, new_h);
- if (!m_global_mask.empty()) ret.init_global_mask_mat();
- for (int y = 0; y < new_h; ++y) {
- for (int x = 0; x < new_w; ++x) {
- int yy = y * size.height / new_h;
- int xx = x * size.width / new_w;
-
- if (is_globally_masked(yy, xx)) {
- ret.set_global_mask(y, x, 1);
- ret.set_mask(y, x, 1);
- } else {
- if (!m_global_mask.empty()) ret.set_global_mask(y, x, 0);
-
- if (is_masked(yy, xx)) {
- ret.set_mask(y, x, 1);
- } else {
- auto source_ptr = get_image(yy, xx);
- auto target_ptr = ret.get_mutable_image(y, x);
- for (int c = 0; c < 3; ++c)
- target_ptr[c] = source_ptr[c];
- ret.set_mask(y, x, 0);
- }
- }
- }
- }
-
- return ret;
-}
-
-MaskedImage MaskedImage::upsample(int new_w, int new_h, const cv::Mat &new_global_mask) const {
- auto ret = upsample(new_w, new_h);
- ret.set_global_mask_mat(new_global_mask);
- return ret;
-}
-
-void MaskedImage::compute_image_gradients() {
- if (m_image_grad_computed) {
- return;
- }
-
- const auto size = m_image.size();
- m_image_grady = cv::Mat(size, CV_8UC3);
- m_image_gradx = cv::Mat(size, CV_8UC3);
- m_image_grady = cv::Scalar::all(0);
- m_image_gradx = cv::Scalar::all(0);
-
- for (int i = 1; i < size.height - 1; ++i) {
- const auto *ptr = m_image.ptr(i, 0);
- const auto *ptry1 = m_image.ptr(i + 1, 0);
- const auto *ptry2 = m_image.ptr(i - 1, 0);
- const auto *ptrx1 = m_image.ptr(i, 0) + 3;
- const auto *ptrx2 = m_image.ptr(i, 0) - 3;
- auto *mptry = m_image_grady.ptr(i, 0);
- auto *mptrx = m_image_gradx.ptr(i, 0);
- for (int j = 3; j < size.width * 3 - 3; ++j) {
- mptry[j] = (ptry1[j] / 2 - ptry2[j] / 2) + 128;
- mptrx[j] = (ptrx1[j] / 2 - ptrx2[j] / 2) + 128;
- }
- }
-
- m_image_grad_computed = true;
-}
-
-void MaskedImage::compute_image_gradients() const {
-const_cast<MaskedImage *>(this)->compute_image_gradients();
-}
-
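
The least obvious part of this file is the mask-aware 2x downsampling in MaskedImage::downsample, so here is a rough NumPy re-expression of the same idea (global-mask bookkeeping omitted); it is a readability sketch, not code from this repository.

```python
import numpy as np

KERNEL = np.array([1, 5, 10, 10, 5, 1], dtype=np.int64)  # kDownsampleKernel

def downsample_masked(image, mask):
    """image: (H, W, 3) uint8, mask: (H, W) bool where True marks a hole."""
    h, w = mask.shape
    out_img = np.zeros((h // 2, w // 2, 3), dtype=np.uint8)
    out_mask = np.ones((h // 2, w // 2), dtype=bool)
    for y in range(0, h - 1, 2):
        for x in range(0, w - 1, 2):
            acc, ksum = np.zeros(3, dtype=np.int64), 0
            for dy in range(-2, 4):            # offsets -2..3, matching the C++ loops
                for dx in range(-2, 4):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w and not mask[yy, xx]:
                        k = KERNEL[dy + 2] * KERNEL[dx + 2]
                        acc += k * image[yy, xx].astype(np.int64)
                        ksum += k
            if ksum > 0:                       # at least one unmasked pixel contributed
                out_img[y // 2, x // 2] = acc // ksum
                out_mask[y // 2, x // 2] = False
    return out_img, out_mask
```
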
diff --git a/spaces/huy-ha/semabs-relevancy/CLIP/clip/auxiliary.py b/spaces/huy-ha/semabs-relevancy/CLIP/clip/auxiliary.py
deleted file mode 100644
index 0f732508de85f3a4d6efd8d64006045577470dc6..0000000000000000000000000000000000000000
--- a/spaces/huy-ha/semabs-relevancy/CLIP/clip/auxiliary.py
+++ /dev/null
@@ -1,545 +0,0 @@
-# adding hooks, copied from: https://github.com/hila-chefer/Transformer-MM-Explainability/blob/e63b4ab0d0722faa11ff2f7549c4f88074e7edd7/CLIP/clip/auxilary.py
-import torch
-import warnings
-from typing import Tuple, Optional
-
-import torch
-from torch import Tensor
-from torch.nn.init import xavier_uniform_
-from torch.nn.init import constant_
-from torch.nn.init import xavier_normal_
-from torch.nn.parameter import Parameter
-from torch.nn import functional as F
-from math import ceil, floor
-
-# We define this function as _pad because it takes an argument
-# named pad, which clobbers the recursive reference to the pad
-# function needed for __torch_function__ support
-pad = F.pad
-
-# This class exists solely for Transformer; it has an annotation stating
-# that bias is never None, which appeases TorchScript
-
-
-def interpolate_positional_emb(positional_embedding, target_seq_len):
- interpolated_positional_emb = torch.zeros_like(positional_embedding[0])[
- None, :
- ].repeat(target_seq_len, 1)
- for i in range(target_seq_len):
- i3 = float(i) / (target_seq_len / 50)
- i1 = floor(i3)
- i2 = ceil(i3)
- if i2 < len(positional_embedding):
- interpolated_positional_emb[i] = torch.lerp(
- positional_embedding[i1, :], positional_embedding[i2, :], i3 - i1
- )
- else:
- interpolated_positional_emb[i] = positional_embedding[-1, :]
- return interpolated_positional_emb
-
-
-class _LinearWithBias(torch.nn.Linear):
- bias: Tensor
-
- def __init__(self, in_features: int, out_features: int) -> None:
- super().__init__(in_features, out_features, bias=True)
-
-
-def multi_head_attention_forward(
- query: Tensor,
- key: Tensor,
- value: Tensor,
- embed_dim_to_check: int,
- num_heads: int,
- in_proj_weight: Tensor,
- in_proj_bias: Tensor,
- bias_k: Optional[Tensor],
- bias_v: Optional[Tensor],
- add_zero_attn: bool,
- dropout_p: float,
- out_proj_weight: Tensor,
- out_proj_bias: Tensor,
- training: bool = True,
- key_padding_mask: Optional[Tensor] = None,
- need_weights: bool = True,
- attn_mask: Optional[Tensor] = None,
- use_separate_proj_weight: bool = False,
- q_proj_weight: Optional[Tensor] = None,
- k_proj_weight: Optional[Tensor] = None,
- v_proj_weight: Optional[Tensor] = None,
- static_k: Optional[Tensor] = None,
- static_v: Optional[Tensor] = None,
- attention_probs_forward_hook=None,
- attention_probs_backwards_hook=None,
-) -> Tuple[Tensor, Optional[Tensor]]:
- if not torch.jit.is_scripting():
- tens_ops = (
- query,
- key,
- value,
- in_proj_weight,
- in_proj_bias,
- bias_k,
- bias_v,
- out_proj_weight,
- out_proj_bias,
- )
- if any([type(t) is not Tensor for t in tens_ops]) and F.has_torch_function(
- tens_ops
- ):
- return F.handle_torch_function(
- multi_head_attention_forward,
- tens_ops,
- query,
- key,
- value,
- embed_dim_to_check,
- num_heads,
- in_proj_weight,
- in_proj_bias,
- bias_k,
- bias_v,
- add_zero_attn,
- dropout_p,
- out_proj_weight,
- out_proj_bias,
- training=training,
- key_padding_mask=key_padding_mask,
- need_weights=need_weights,
- attn_mask=attn_mask,
- use_separate_proj_weight=use_separate_proj_weight,
- q_proj_weight=q_proj_weight,
- k_proj_weight=k_proj_weight,
- v_proj_weight=v_proj_weight,
- static_k=static_k,
- static_v=static_v,
- )
- tgt_len, bsz, embed_dim = query.size()
- assert embed_dim == embed_dim_to_check
- # allow MHA to have different sizes for the feature dimension
- assert key.size(0) == value.size(0) and key.size(1) == value.size(1)
-
- head_dim = embed_dim // num_heads
- assert head_dim * num_heads == embed_dim, "embed_dim must be divisible by num_heads"
- scaling = float(head_dim) ** -0.5
-
- if not use_separate_proj_weight:
- if torch.equal(query, key) and torch.equal(key, value):
- # self-attention
- q, k, v = F.linear(query, in_proj_weight, in_proj_bias).chunk(3, dim=-1)
-
- elif torch.equal(key, value):
- # encoder-decoder attention
- # This is inline in_proj function with in_proj_weight and in_proj_bias
- _b = in_proj_bias
- _start = 0
- _end = embed_dim
- _w = in_proj_weight[_start:_end, :]
- if _b is not None:
- _b = _b[_start:_end]
- q = F.linear(query, _w, _b)
-
- if key is None:
- assert value is None
- k = None
- v = None
- else:
-
- # This is inline in_proj function with in_proj_weight and in_proj_bias
- _b = in_proj_bias
- _start = embed_dim
- _end = None
- _w = in_proj_weight[_start:, :]
- if _b is not None:
- _b = _b[_start:]
- k, v = F.linear(key, _w, _b).chunk(2, dim=-1)
-
- else:
- # This is inline in_proj function with in_proj_weight and in_proj_bias
- _b = in_proj_bias
- _start = 0
- _end = embed_dim
- _w = in_proj_weight[_start:_end, :]
- if _b is not None:
- _b = _b[_start:_end]
- q = F.linear(query, _w, _b)
-
- # This is inline in_proj function with in_proj_weight and in_proj_bias
- _b = in_proj_bias
- _start = embed_dim
- _end = embed_dim * 2
- _w = in_proj_weight[_start:_end, :]
- if _b is not None:
- _b = _b[_start:_end]
- k = F.linear(key, _w, _b)
-
- # This is inline in_proj function with in_proj_weight and in_proj_bias
- _b = in_proj_bias
- _start = embed_dim * 2
- _end = None
- _w = in_proj_weight[_start:, :]
- if _b is not None:
- _b = _b[_start:]
- v = F.linear(value, _w, _b)
- else:
- q_proj_weight_non_opt = torch.jit._unwrap_optional(q_proj_weight)
- len1, len2 = q_proj_weight_non_opt.size()
- assert len1 == embed_dim and len2 == query.size(-1)
-
- k_proj_weight_non_opt = torch.jit._unwrap_optional(k_proj_weight)
- len1, len2 = k_proj_weight_non_opt.size()
- assert len1 == embed_dim and len2 == key.size(-1)
-
- v_proj_weight_non_opt = torch.jit._unwrap_optional(v_proj_weight)
- len1, len2 = v_proj_weight_non_opt.size()
- assert len1 == embed_dim and len2 == value.size(-1)
-
- if in_proj_bias is not None:
- q = F.linear(query, q_proj_weight_non_opt, in_proj_bias[0:embed_dim])
- k = F.linear(
- key, k_proj_weight_non_opt, in_proj_bias[embed_dim : (embed_dim * 2)]
- )
- v = F.linear(value, v_proj_weight_non_opt, in_proj_bias[(embed_dim * 2) :])
- else:
- q = F.linear(query, q_proj_weight_non_opt, in_proj_bias)
- k = F.linear(key, k_proj_weight_non_opt, in_proj_bias)
- v = F.linear(value, v_proj_weight_non_opt, in_proj_bias)
- q = q * scaling
-
- if attn_mask is not None:
- assert (
- attn_mask.dtype == torch.float32
- or attn_mask.dtype == torch.float64
- or attn_mask.dtype == torch.float16
- or attn_mask.dtype == torch.uint8
- or attn_mask.dtype == torch.bool
- ), "Only float, byte, and bool types are supported for attn_mask, not {}".format(
- attn_mask.dtype
- )
- if attn_mask.dtype == torch.uint8:
- warnings.warn(
- "Byte tensor for attn_mask in nn.MultiheadAttention is deprecated. Use bool tensor instead."
- )
- attn_mask = attn_mask.to(torch.bool)
-
- if attn_mask.dim() == 2:
- attn_mask = attn_mask.unsqueeze(0)
- if list(attn_mask.size()) != [1, query.size(0), key.size(0)]:
- raise RuntimeError("The size of the 2D attn_mask is not correct.")
- elif attn_mask.dim() == 3:
- if list(attn_mask.size()) != [bsz * num_heads, query.size(0), key.size(0)]:
- raise RuntimeError("The size of the 3D attn_mask is not correct.")
- else:
- raise RuntimeError(
- "attn_mask's dimension {} is not supported".format(attn_mask.dim())
- )
- # attn_mask's dim is 3 now.
-
- # convert ByteTensor key_padding_mask to bool
- if key_padding_mask is not None and key_padding_mask.dtype == torch.uint8:
- warnings.warn(
- "Byte tensor for key_padding_mask in nn.MultiheadAttention is deprecated. Use bool tensor instead."
- )
- key_padding_mask = key_padding_mask.to(torch.bool)
-
- if bias_k is not None and bias_v is not None:
- if static_k is None and static_v is None:
- k = torch.cat([k, bias_k.repeat(1, bsz, 1)])
- v = torch.cat([v, bias_v.repeat(1, bsz, 1)])
- if attn_mask is not None:
- attn_mask = pad(attn_mask, (0, 1))
- if key_padding_mask is not None:
- key_padding_mask = pad(key_padding_mask, (0, 1))
- else:
- assert static_k is None, "bias cannot be added to static key."
- assert static_v is None, "bias cannot be added to static value."
- else:
- assert bias_k is None
- assert bias_v is None
-
- q = q.contiguous().view(tgt_len, bsz * num_heads, head_dim).transpose(0, 1)
- if k is not None:
- k = k.contiguous().view(-1, bsz * num_heads, head_dim).transpose(0, 1)
- if v is not None:
- v = v.contiguous().view(-1, bsz * num_heads, head_dim).transpose(0, 1)
-
- if static_k is not None:
- assert static_k.size(0) == bsz * num_heads
- assert static_k.size(2) == head_dim
- k = static_k
-
- if static_v is not None:
- assert static_v.size(0) == bsz * num_heads
- assert static_v.size(2) == head_dim
- v = static_v
-
- src_len = k.size(1)
-
- if key_padding_mask is not None:
- assert key_padding_mask.size(0) == bsz
- assert key_padding_mask.size(1) == src_len
-
- if add_zero_attn:
- src_len += 1
- k = torch.cat(
- [
- k,
- torch.zeros(
- (k.size(0), 1) + k.size()[2:], dtype=k.dtype, device=k.device
- ),
- ],
- dim=1,
- )
- v = torch.cat(
- [
- v,
- torch.zeros(
- (v.size(0), 1) + v.size()[2:], dtype=v.dtype, device=v.device
- ),
- ],
- dim=1,
- )
- if attn_mask is not None:
- attn_mask = pad(attn_mask, (0, 1))
- if key_padding_mask is not None:
- key_padding_mask = pad(key_padding_mask, (0, 1))
-
- attn_output_weights = torch.bmm(q, k.transpose(1, 2))
- assert list(attn_output_weights.size()) == [bsz * num_heads, tgt_len, src_len]
-
- if attn_mask is not None:
- if attn_mask.dtype == torch.bool:
- attn_output_weights.masked_fill_(attn_mask, float("-inf"))
- else:
- attn_output_weights += attn_mask
-
- if key_padding_mask is not None:
- attn_output_weights = attn_output_weights.view(bsz, num_heads, tgt_len, src_len)
- attn_output_weights = attn_output_weights.masked_fill(
- key_padding_mask.unsqueeze(1).unsqueeze(2),
- float("-inf"),
- )
- attn_output_weights = attn_output_weights.view(
- bsz * num_heads, tgt_len, src_len
- )
-
- attn_output_weights = F.softmax(attn_output_weights, dim=-1)
- attn_output_weights = F.dropout(attn_output_weights, p=dropout_p, training=training)
-
- # use hooks for the attention weights if necessary
- if (
- attention_probs_forward_hook is not None
- and attention_probs_backwards_hook is not None
- ):
- attention_probs_forward_hook(attn_output_weights)
- # attn_output_weights.register_hook(attention_probs_backwards_hook)
-
- attn_output = torch.bmm(attn_output_weights, v)
- assert list(attn_output.size()) == [bsz * num_heads, tgt_len, head_dim]
- attn_output = attn_output.transpose(0, 1).contiguous().view(tgt_len, bsz, embed_dim)
- attn_output = F.linear(attn_output, out_proj_weight, out_proj_bias)
-
- if need_weights:
- # average attention weights over heads
- attn_output_weights = attn_output_weights.view(bsz, num_heads, tgt_len, src_len)
- return attn_output, attn_output_weights.sum(dim=1) / num_heads
- else:
- return attn_output, None
-
-
-class MultiheadAttention(torch.nn.Module):
- r"""Allows the model to jointly attend to information
- from different representation subspaces.
- See reference: Attention Is All You Need
- .. math::
- \text{MultiHead}(Q, K, V) = \text{Concat}(head_1,\dots,head_h)W^O
- \text{where} head_i = \text{Attention}(QW_i^Q, KW_i^K, VW_i^V)
- Args:
- embed_dim: total dimension of the model.
- num_heads: parallel attention heads.
- dropout: a Dropout layer on attn_output_weights. Default: 0.0.
- bias: add bias as module parameter. Default: True.
- add_bias_kv: add bias to the key and value sequences at dim=0.
- add_zero_attn: add a new batch of zeros to the key and
- value sequences at dim=1.
- kdim: total number of features in key. Default: None.
- vdim: total number of features in value. Default: None.
- Note: if kdim and vdim are None, they will be set to embed_dim such that
- query, key, and value have the same number of features.
- Examples::
- >>> multihead_attn = nn.MultiheadAttention(embed_dim, num_heads)
- >>> attn_output, attn_output_weights = multihead_attn(query, key, value)
- """
- bias_k: Optional[torch.Tensor]
- bias_v: Optional[torch.Tensor]
-
- def __init__(
- self,
- embed_dim,
- num_heads,
- dropout=0.0,
- bias=True,
- add_bias_kv=False,
- add_zero_attn=False,
- kdim=None,
- vdim=None,
- ):
- super(MultiheadAttention, self).__init__()
- self.embed_dim = embed_dim
- self.kdim = kdim if kdim is not None else embed_dim
- self.vdim = vdim if vdim is not None else embed_dim
- self._qkv_same_embed_dim = self.kdim == embed_dim and self.vdim == embed_dim
-
- self.num_heads = num_heads
- self.dropout = dropout
- self.head_dim = embed_dim // num_heads
- assert (
- self.head_dim * num_heads == self.embed_dim
- ), "embed_dim must be divisible by num_heads"
-
- if self._qkv_same_embed_dim is False:
- self.q_proj_weight = Parameter(torch.Tensor(embed_dim, embed_dim))
- self.k_proj_weight = Parameter(torch.Tensor(embed_dim, self.kdim))
- self.v_proj_weight = Parameter(torch.Tensor(embed_dim, self.vdim))
- self.register_parameter("in_proj_weight", None)
- else:
- self.in_proj_weight = Parameter(torch.empty(3 * embed_dim, embed_dim))
- self.register_parameter("q_proj_weight", None)
- self.register_parameter("k_proj_weight", None)
- self.register_parameter("v_proj_weight", None)
-
- if bias:
- self.in_proj_bias = Parameter(torch.empty(3 * embed_dim))
- else:
- self.register_parameter("in_proj_bias", None)
- self.out_proj = _LinearWithBias(embed_dim, embed_dim)
-
- if add_bias_kv:
- self.bias_k = Parameter(torch.empty(1, 1, embed_dim))
- self.bias_v = Parameter(torch.empty(1, 1, embed_dim))
- else:
- self.bias_k = self.bias_v = None
-
- self.add_zero_attn = add_zero_attn
-
- self._reset_parameters()
-
- def _reset_parameters(self):
- if self._qkv_same_embed_dim:
- xavier_uniform_(self.in_proj_weight)
- else:
- xavier_uniform_(self.q_proj_weight)
- xavier_uniform_(self.k_proj_weight)
- xavier_uniform_(self.v_proj_weight)
-
- if self.in_proj_bias is not None:
- constant_(self.in_proj_bias, 0.0)
- constant_(self.out_proj.bias, 0.0)
- if self.bias_k is not None:
- xavier_normal_(self.bias_k)
- if self.bias_v is not None:
- xavier_normal_(self.bias_v)
-
- def __setstate__(self, state):
- # Support loading old MultiheadAttention checkpoints generated by v1.1.0
- if "_qkv_same_embed_dim" not in state:
- state["_qkv_same_embed_dim"] = True
-
- super(MultiheadAttention, self).__setstate__(state)
-
- def forward(
- self,
- query,
- key,
- value,
- key_padding_mask=None,
- need_weights=True,
- attn_mask=None,
- attention_probs_forward_hook=None,
- attention_probs_backwards_hook=None,
- ):
- r"""
- Args:
- query, key, value: map a query and a set of key-value pairs to an output.
- See "Attention Is All You Need" for more details.
- key_padding_mask: if provided, specified padding elements in the key will
- be ignored by the attention. When given a binary mask and a value is True,
- the corresponding value on the attention layer will be ignored. When given
- a byte mask and a value is non-zero, the corresponding value on the attention
- layer will be ignored
- need_weights: output attn_output_weights.
- attn_mask: 2D or 3D mask that prevents attention to certain positions. A 2D mask will be broadcasted for all
- the batches while a 3D mask allows to specify a different mask for the entries of each batch.
- Shape:
- - Inputs:
- - query: :math:`(L, N, E)` where L is the target sequence length, N is the batch size, E is
- the embedding dimension.
- - key: :math:`(S, N, E)`, where S is the source sequence length, N is the batch size, E is
- the embedding dimension.
- - value: :math:`(S, N, E)` where S is the source sequence length, N is the batch size, E is
- the embedding dimension.
- - key_padding_mask: :math:`(N, S)` where N is the batch size, S is the source sequence length.
- If a ByteTensor is provided, the non-zero positions will be ignored while the position
- with the zero positions will be unchanged. If a BoolTensor is provided, the positions with the
- value of ``True`` will be ignored while the position with the value of ``False`` will be unchanged.
- - attn_mask: 2D mask :math:`(L, S)` where L is the target sequence length, S is the source sequence length.
- 3D mask :math:`(N*num_heads, L, S)` where N is the batch size, L is the target sequence length,
- S is the source sequence length. attn_mask ensure that position i is allowed to attend the unmasked
- positions. If a ByteTensor is provided, the non-zero positions are not allowed to attend
- while the zero positions will be unchanged. If a BoolTensor is provided, positions with ``True``
- is not allowed to attend while ``False`` values will be unchanged. If a FloatTensor
- is provided, it will be added to the attention weight.
- - Outputs:
- - attn_output: :math:`(L, N, E)` where L is the target sequence length, N is the batch size,
- E is the embedding dimension.
- - attn_output_weights: :math:`(N, L, S)` where N is the batch size,
- L is the target sequence length, S is the source sequence length.
- """
- if not self._qkv_same_embed_dim:
- return multi_head_attention_forward(
- query,
- key,
- value,
- self.embed_dim,
- self.num_heads,
- self.in_proj_weight,
- self.in_proj_bias,
- self.bias_k,
- self.bias_v,
- self.add_zero_attn,
- self.dropout,
- self.out_proj.weight,
- self.out_proj.bias,
- training=self.training,
- key_padding_mask=key_padding_mask,
- need_weights=need_weights,
- attn_mask=attn_mask,
- use_separate_proj_weight=True,
- q_proj_weight=self.q_proj_weight,
- k_proj_weight=self.k_proj_weight,
- v_proj_weight=self.v_proj_weight,
- attention_probs_forward_hook=attention_probs_forward_hook,
- attention_probs_backwards_hook=attention_probs_backwards_hook,
- )
- else:
- return multi_head_attention_forward(
- query,
- key,
- value,
- self.embed_dim,
- self.num_heads,
- self.in_proj_weight,
- self.in_proj_bias,
- self.bias_k,
- self.bias_v,
- self.add_zero_attn,
- self.dropout,
- self.out_proj.weight,
- self.out_proj.bias,
- training=self.training,
- key_padding_mask=key_padding_mask,
- need_weights=need_weights,
- attn_mask=attn_mask,
- attention_probs_forward_hook=attention_probs_forward_hook,
- attention_probs_backwards_hook=attention_probs_backwards_hook,
- )
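
The point of this patched attention layer is the pair of hook arguments, so a short capture sketch may be useful; the dimensions below are arbitrary assumptions and the backwards hook is a no-op placeholder (the forward hook only fires when both hooks are supplied).

```python
import torch

embed_dim, num_heads, seq_len, batch = 64, 8, 10, 2
mha = MultiheadAttention(embed_dim, num_heads)

x = torch.randn(seq_len, batch, embed_dim)   # (L, N, E) layout, as documented above
captured = []
attn_output, avg_weights = mha(
    x, x, x,                                           # self-attention
    attention_probs_forward_hook=captured.append,      # receives the per-head probabilities
    attention_probs_backwards_hook=lambda grad: None,  # no-op placeholder
)
print(attn_output.shape)  # torch.Size([10, 2, 64])
print(captured[0].shape)  # torch.Size([16, 10, 10]) = (batch * heads, tgt_len, src_len)
```
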
diff --git a/spaces/hysts/ControlNet/app.py b/spaces/hysts/ControlNet/app.py
deleted file mode 100644
index 36ecbf56cddaa2a2033f77f91d8ab19f48a38c77..0000000000000000000000000000000000000000
--- a/spaces/hysts/ControlNet/app.py
+++ /dev/null
@@ -1,157 +0,0 @@
-#!/usr/bin/env python
-
-from __future__ import annotations
-
-import os
-import pathlib
-import shlex
-import subprocess
-
-import gradio as gr
-import torch
-
-if os.getenv('SYSTEM') == 'spaces':
- with open('patch') as f:
- subprocess.run(shlex.split('patch -p1'), stdin=f, cwd='ControlNet')
-
-base_url = 'https://huggingface.co/lllyasviel/ControlNet/resolve/main/annotator/ckpts/'
-names = [
- 'body_pose_model.pth',
- 'dpt_hybrid-midas-501f0c75.pt',
- 'hand_pose_model.pth',
- 'mlsd_large_512_fp32.pth',
- 'mlsd_tiny_512_fp32.pth',
- 'network-bsds500.pth',
- 'upernet_global_small.pth',
-]
-for name in names:
- command = f'wget https://huggingface.co/lllyasviel/ControlNet/resolve/main/annotator/ckpts/{name} -O {name}'
- out_path = pathlib.Path(f'ControlNet/annotator/ckpts/{name}')
- if out_path.exists():
- continue
- subprocess.run(shlex.split(command), cwd='ControlNet/annotator/ckpts/')
-
-from app_canny import create_demo as create_demo_canny
-from app_depth import create_demo as create_demo_depth
-from app_fake_scribble import create_demo as create_demo_fake_scribble
-from app_hed import create_demo as create_demo_hed
-from app_hough import create_demo as create_demo_hough
-from app_normal import create_demo as create_demo_normal
-from app_pose import create_demo as create_demo_pose
-from app_scribble import create_demo as create_demo_scribble
-from app_scribble_interactive import \
- create_demo as create_demo_scribble_interactive
-from app_seg import create_demo as create_demo_seg
-from model import Model, download_all_controlnet_weights
-
-DESCRIPTION = '''# [ControlNet v1.0](https://github.com/lllyasviel/ControlNet)
-
-New ControlNet v1.1 is available here.
-'''
-
-SPACE_ID = os.getenv('SPACE_ID')
-ALLOW_CHANGING_BASE_MODEL = SPACE_ID != 'hysts/ControlNet'
-
-if SPACE_ID is not None:
- DESCRIPTION += f'\nFor faster inference without waiting in queue, you may duplicate the space and upgrade to GPU in settings.'
-if not torch.cuda.is_available():
- DESCRIPTION += '\nRunning on CPU 🥶 This demo does not work on CPU.'
-
-if torch.cuda.is_available():
- if os.getenv('SYSTEM') == 'spaces':
- download_all_controlnet_weights()
-
-MAX_IMAGES = int(os.getenv('MAX_IMAGES', '3'))
-DEFAULT_NUM_IMAGES = min(MAX_IMAGES, int(os.getenv('DEFAULT_NUM_IMAGES', '1')))
-
-DEFAULT_MODEL_ID = os.getenv('DEFAULT_MODEL_ID',
- 'runwayml/stable-diffusion-v1-5')
-model = Model(base_model_id=DEFAULT_MODEL_ID, task_name='canny')
-
-with gr.Blocks(css='style.css') as demo:
- gr.Markdown(DESCRIPTION)
- with gr.Tabs():
- with gr.TabItem('Canny'):
- create_demo_canny(model.process_canny,
- max_images=MAX_IMAGES,
- default_num_images=DEFAULT_NUM_IMAGES)
- with gr.TabItem('Hough'):
- create_demo_hough(model.process_hough,
- max_images=MAX_IMAGES,
- default_num_images=DEFAULT_NUM_IMAGES)
- with gr.TabItem('HED'):
- create_demo_hed(model.process_hed,
- max_images=MAX_IMAGES,
- default_num_images=DEFAULT_NUM_IMAGES)
- with gr.TabItem('Scribble'):
- create_demo_scribble(model.process_scribble,
- max_images=MAX_IMAGES,
- default_num_images=DEFAULT_NUM_IMAGES)
- with gr.TabItem('Scribble Interactive'):
- create_demo_scribble_interactive(
- model.process_scribble_interactive,
- max_images=MAX_IMAGES,
- default_num_images=DEFAULT_NUM_IMAGES)
- with gr.TabItem('Fake Scribble'):
- create_demo_fake_scribble(model.process_fake_scribble,
- max_images=MAX_IMAGES,
- default_num_images=DEFAULT_NUM_IMAGES)
- with gr.TabItem('Pose'):
- create_demo_pose(model.process_pose,
- max_images=MAX_IMAGES,
- default_num_images=DEFAULT_NUM_IMAGES)
- with gr.TabItem('Segmentation'):
- create_demo_seg(model.process_seg,
- max_images=MAX_IMAGES,
- default_num_images=DEFAULT_NUM_IMAGES)
- with gr.TabItem('Depth'):
- create_demo_depth(model.process_depth,
- max_images=MAX_IMAGES,
- default_num_images=DEFAULT_NUM_IMAGES)
- with gr.TabItem('Normal map'):
- create_demo_normal(model.process_normal,
- max_images=MAX_IMAGES,
- default_num_images=DEFAULT_NUM_IMAGES)
-
- with gr.Accordion(label='Base model', open=False):
- with gr.Row():
- with gr.Column():
- current_base_model = gr.Text(label='Current base model')
- with gr.Column(scale=0.3):
- check_base_model_button = gr.Button('Check current base model')
- with gr.Row():
- with gr.Column():
- new_base_model_id = gr.Text(
- label='New base model',
- max_lines=1,
- placeholder='runwayml/stable-diffusion-v1-5',
- info=
- 'The base model must be compatible with Stable Diffusion v1.5.',
- interactive=ALLOW_CHANGING_BASE_MODEL)
- with gr.Column(scale=0.3):
- change_base_model_button = gr.Button(
- 'Change base model', interactive=ALLOW_CHANGING_BASE_MODEL)
- if not ALLOW_CHANGING_BASE_MODEL:
- gr.Markdown(
- '''The base model is not allowed to be changed in this Space so as not to slow down the demo, but it can be changed if you duplicate the Space. '''
- )
-
- gr.Markdown('''### Related Spaces
-
-- [Space using Anything-v4.0 as base model](https://huggingface.co/spaces/hysts/ControlNet-with-Anything-v4)
-- https://huggingface.co/spaces/jonigata/PoseMaker2
-- https://huggingface.co/spaces/diffusers/controlnet-openpose
-- https://huggingface.co/spaces/diffusers/controlnet-canny
-''')
-
- check_base_model_button.click(fn=lambda: model.base_model_id,
- outputs=current_base_model,
- queue=False)
- new_base_model_id.submit(fn=model.set_base_model,
- inputs=new_base_model_id,
- outputs=current_base_model)
- change_base_model_button.click(fn=model.set_base_model,
- inputs=new_base_model_id,
- outputs=current_base_model)
-
-demo.queue(api_open=False, max_size=10).launch()
diff --git a/spaces/iamstolas/STOLAS/postcss.config.js b/spaces/iamstolas/STOLAS/postcss.config.js
deleted file mode 100644
index 33ad091d26d8a9dc95ebdf616e217d985ec215b8..0000000000000000000000000000000000000000
--- a/spaces/iamstolas/STOLAS/postcss.config.js
+++ /dev/null
@@ -1,6 +0,0 @@
-module.exports = {
- plugins: {
- tailwindcss: {},
- autoprefixer: {},
- },
-}
diff --git a/spaces/imseldrith/DeepFakeAI/DeepFakeAI/uis/layouts/benchmark.py b/spaces/imseldrith/DeepFakeAI/DeepFakeAI/uis/layouts/benchmark.py
deleted file mode 100644
index f58e47a7a0dc5b681fa78a0276df1b482c8c532d..0000000000000000000000000000000000000000
--- a/spaces/imseldrith/DeepFakeAI/DeepFakeAI/uis/layouts/benchmark.py
+++ /dev/null
@@ -1,37 +0,0 @@
-import gradio
-
-from DeepFakeAI.uis.components import about, processors, execution, benchmark
-from DeepFakeAI.utilities import conditional_download
-
-
-def pre_check() -> bool:
- conditional_download('.assets/examples',
- [
- 'https://github.com/DeepFakeAI/DeepFakeAI-assets/releases/download/examples/source.jpg',
- 'https://github.com/DeepFakeAI/DeepFakeAI-assets/releases/download/examples/target-240p.mp4',
- 'https://github.com/DeepFakeAI/DeepFakeAI-assets/releases/download/examples/target-360p.mp4',
- 'https://github.com/DeepFakeAI/DeepFakeAI-assets/releases/download/examples/target-540p.mp4',
- 'https://github.com/DeepFakeAI/DeepFakeAI-assets/releases/download/examples/target-720p.mp4',
- 'https://github.com/DeepFakeAI/DeepFakeAI-assets/releases/download/examples/target-1080p.mp4',
- 'https://github.com/DeepFakeAI/DeepFakeAI-assets/releases/download/examples/target-1440p.mp4',
- 'https://github.com/DeepFakeAI/DeepFakeAI-assets/releases/download/examples/target-2160p.mp4'
- ])
- return True
-
-
-def render() -> gradio.Blocks:
- with gradio.Blocks() as layout:
- with gradio.Row():
- with gradio.Column(scale = 2):
- about.render()
- processors.render()
- execution.render()
- with gradio.Column(scale= 5):
- benchmark.render()
- return layout
-
-
-def listen() -> None:
- processors.listen()
- execution.listen()
- benchmark.listen()
diff --git a/spaces/inamXcontru/PoeticTTS/Catastrophe Season 2 Torrent Everything You Need to Know About the Second Season of the Emmy-Nominated Series.md b/spaces/inamXcontru/PoeticTTS/Catastrophe Season 2 Torrent Everything You Need to Know About the Second Season of the Emmy-Nominated Series.md
deleted file mode 100644
index f5a887b1b45308f8bf34f024fb8edd391e4fe169..0000000000000000000000000000000000000000
--- a/spaces/inamXcontru/PoeticTTS/Catastrophe Season 2 Torrent Everything You Need to Know About the Second Season of the Emmy-Nominated Series.md
+++ /dev/null
@@ -1,18 +0,0 @@
-
-Though Catastrophe is still a very funny show, season two, more than the first, seems happy to serve up scenes of pure drama and it does it well, to boot. It is, at times, genuinely intense and, just like a good drama, some episodes employ effective cliffhangers, leaving you eager to see how things will resolve (or dissolve).
-Catastrophe Season 2 Torrent Download ->->->-> https://gohhs.com/2uz4Ra
-In Season 2, Prem faces a new adversary: evangelist preacher Herbert Todd. When an outbreak of smallpox threatens to bring catastrophe to the village, Prem finds himself fighting prejudice and incompetence while he tries to win the hearts and minds of the villagers over the bull-headed Todd. Also, Prem and his wife Kamini nervously await the arrival of his dreaded mother-in-law, Pushpa.
-At least 1,061 people have been killed amid the deluges that began with the seasonal monsoon rains in mid-June, and that toll is set to rise further as many communities in the mountainous northern regions remain cut off by flood-swollen rivers that washed away roads and bridges.
-Pakistan's climate minister has warned that a third of the country could be underwater by the time this year's "monster monsoon" flooding recedes. Pakistan is hit, on average, with three or four spells of monsoon rains per season, but this year has been wicked. The country is currently in the grips of its eighth spell of relentless rainfall of the summer.
-Abra-Catastrophe! is a television film initially released as the seventh, eighth, and ninth episodes of the third season of The Fairly OddParents . It was originally broadcast on Nickelodeon in the United States on July 12, 2003.
-
-"Abra-Catastrophe!" premiered on July 12, 2003.[1] Attracting over 4 million views, the television film was the highest rated film on basic cable on the week it premiered.[2] "Abra-Catastrophe" was released on a DVD and VHS tape of the same name on July 15, 2003, by Nickelodeon and Paramount Home Entertainment.[3] The DVD version includes the episode itself and some bonus materials. It was also put on the season 3 DVD in 2011.
-66 million years ago a seven-mile-wide asteroid collided with Earth, triggering a chain of events suspected of ending the dinosaurs' reign. But experts have long debated exactly what happened when the asteroid struck and how the giant beasts met their end. Now, scientists have uncovered compelling new clues about the catastrophe - from New Jersey to the wilds of Patagonia, and an international expedition of scientists has drilled into the impact crater off the coast of Mexico, recovering crucial direct evidence of the searing energy and giant tsunami unleashed by the asteroid. Join NOVA as scientists piece together a chillingly precise unfolding of the Earth's biggest cataclysm, moment by moment. And discover how our early mammalian ancestors managed to survive and repopulate the Earth.
-Ivelin Zvezdov is a financial and insurance economist by training. He has masters' degrees from the Universities of St. Andrews and Oxford. He works on natural and man-made catastrophe modeling and product development for the (re)insurance industry. His research interests include climate change and environmental risk, contagion and propagation of systemic risk, sustainable and ecosystems approaches to managing natural resources. Mr. Zvezdov has published research papers on financial and insurance quantitative methods for risk management, and on environmental and biodiversity risk estimation.
-A team of Ukrainian cyber-activists has thought of a simple yet potentially effective way to spread uncensored information in Russia: bundling torrents with text and video files pretending to include installation instructions.
-The initiative creates torrents that contain a text file with a list of credible news sources that Russians can trust and instructions on downloading and installing a VPN to secure anonymity from ISPs.
-Enclosed videos show a graphic representation of the situation in Ukraine, highlighting examples of physical catastrophe and human suffering, the results of a military operation that Russian media present as a liberating intervention.
-The torrents are uploaded to popular torrent tracking platforms that pirates use for searching, and thanks to volunteers who seed them aggressively, they rise in popularity and rank high in tracker results.
-As Jack Ryan begins season three of the action-thriller series, he finds himself on the run and in a race against time. A massive conspiracy wrongly implicates him, and he suddenly finds himself on the run.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/inamXcontru/PoeticTTS/Chhota Bheem Sky Dragon Full Movie Download WORK.md b/spaces/inamXcontru/PoeticTTS/Chhota Bheem Sky Dragon Full Movie Download WORK.md
deleted file mode 100644
index 343a50b76a0144f1b2544c033f65fb7052c15ef5..0000000000000000000000000000000000000000
--- a/spaces/inamXcontru/PoeticTTS/Chhota Bheem Sky Dragon Full Movie Download WORK.md
+++ /dev/null
@@ -1,5 +0,0 @@
-
-download Bheem Vs Kirmada unlimited Movies and videos Download Here.Bheem Vs Kirmada Hd,3gp. mp4 320p and More Videos You Can Download Easyly. tamilrockers and movierulz, tamilgun, filmywap, and pagalworld videos and Movies download.
-Chhota Bheem Sky Dragon Full Movie Download Download File ✏ ✏ ✏ https://gohhs.com/2uz38F
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/inamXcontru/PoeticTTS/Dialibatoul Marakhib En Francais Pdf 65 Les Diffrentes Versions et les Variantes de ce Pome dans les Manuscrits Anciens.md b/spaces/inamXcontru/PoeticTTS/Dialibatoul Marakhib En Francais Pdf 65 Les Diffrentes Versions et les Variantes de ce Pome dans les Manuscrits Anciens.md
deleted file mode 100644
index af5a08ad0a991a82aaa4afa1840b24db2a60ddcb..0000000000000000000000000000000000000000
--- a/spaces/inamXcontru/PoeticTTS/Dialibatoul Marakhib En Francais Pdf 65 Les Diffrentes Versions et les Variantes de ce Pome dans les Manuscrits Anciens.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Dialibatoul Marakhib En Francais Pdf 65 Download Zip ✸ https://gohhs.com/2uz5KE
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Chacha Bhatija Aur Mayavi Rakshas (Hindi) (Diamond Comics Chacha Bhatija Book 2).md b/spaces/inplisQlawa/anything-midjourney-v4-1/Chacha Bhatija Aur Mayavi Rakshas (Hindi) (Diamond Comics Chacha Bhatija Book 2).md
deleted file mode 100644
index 292ab215055656d862f5ced640bf7a69cd2025cd..0000000000000000000000000000000000000000
--- a/spaces/inplisQlawa/anything-midjourney-v4-1/Chacha Bhatija Aur Mayavi Rakshas (Hindi) (Diamond Comics Chacha Bhatija Book 2).md
+++ /dev/null
@@ -1,11 +0,0 @@
-
-Chacha Bhatija aur Mayavi Rakshas: A Review
-Chacha Bhatija aur Mayavi Rakshas is the second book in the Diamond Comics Chacha Bhatija series. It is a comic book in Hindi that features the adventures of Chacha and Bhatija, a pair of uncle and nephew who solve mysteries and fight crime. In this book, they encounter a mysterious monster who terrorizes a village and kidnaps a princess. They have to use their wit and courage to rescue her and defeat the monster.
-Chacha Bhatija aur Mayavi Rakshas (Hindi) (Diamond Comics Chacha Bhatija Book 2) Download » https://urlin.us/2uEyub
-The comic book is based on the popular Indian comic characters Chacha and Bhatija, who were created by Diamond Comics, the largest comic book publisher and distributor in India[^2^]. Chacha and Bhatija are also featured in a 1977 Bollywood movie of the same name, starring Dharmendra, Hema Malini, Randhir Kapoor and Yogeeta Bali[^3^]. The comic book is suitable for children and adults alike, as it has humor, action, suspense and fantasy elements.
-Chacha Bhatija aur Mayavi Rakshas is a fun and entertaining read that showcases the bond between Chacha and Bhatija, as well as their bravery and intelligence. The comic book has colorful illustrations and catchy dialogues that capture the essence of the characters and the story. The comic book is available on Amazon Kindle and other online platforms.
The comic book also has some interesting trivia and facts about Chacha and Bhatija, as well as the Diamond Comics company. For example, did you know that Chacha means uncle and Bhatija means nephew in Hindi? Or that Diamond Comics was founded by Gulshan Rai in 1978 and has published over 2500 titles in various languages? Or that Chacha and Bhatija have a pet dog named Rocket who helps them in their missions?
-If you are a fan of Chacha and Bhatija, or if you are looking for a comic book that is fun, engaging and educational, then you should definitely check out Chacha Bhatija aur Mayavi Rakshas. It is a comic book that will make you laugh, thrill you and inspire you.
Chacha Bhatija aur Mayavi Rakshas is not the only comic book in the Diamond Comics Chacha Bhatija series. There are many other books that feature the adventures of Chacha and Bhatija in different settings and scenarios. Some of the titles include Chacha Bhatija aur Bhooton ka Desh, Chacha Bhatija aur Jadui Chirag, Chacha Bhatija aur Shaitani Shakti and Chacha Bhatija aur Antariksh Yatra. You can find these books on Amazon Kindle and other online platforms as well.
-
-Chacha and Bhatija are not only comic book characters, but also cultural icons in India. They have been loved and admired by generations of readers for their humor, wisdom and heroism. They have also inspired many other comic book creators and artists to create their own characters and stories. Chacha and Bhatija are truly the pride of Diamond Comics and the Indian comic industry.
In conclusion, Chacha Bhatija aur Mayavi Rakshas is a comic book you should not miss if you are a fan of Chacha and Bhatija, or if you want a read that is fun, engaging and educational. It showcases the duo's bond, bravery and intelligence, blends humor, action, suspense and fantasy, and will make you laugh, thrill you and inspire you.
d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Jam Origin Midi Guitar Crack Mac.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Jam Origin Midi Guitar Crack Mac.md
deleted file mode 100644
index 3d24eb90f79dac5be76fba8d173ca8d20844dd91..0000000000000000000000000000000000000000
--- a/spaces/inplisQlawa/anything-midjourney-v4-1/Jam Origin Midi Guitar Crack Mac.md
+++ /dev/null
@@ -1,8 +0,0 @@
-
- jam origin midi guitar crack: JamOrigin MIDI Guitar is the world's first real-time polyphonic audio-tracking guitar-to-MIDI software. It has been built by musicians with audio experience, so you know it will work brilliantly. VST, AU, and RTAS versions are included so you can use it in any DAW. JamOrigin MIDI Guitar automatically captures your guitar playing and transcribes it into MIDI on the computer.
-No special awkward microphones and no physical mods: our solution is a pure software solution that will work on any of your guitars, not just one. It integrates seamlessly with your digital audio workstation and can process old recordings as well as live playing. It is the world's first real-time polyphonic sound tracker: it tracks finger playing and complex chords as well as monophonic leads, detects hammer-ons, pull-offs, slides, and bends, and transparently deals with different pickup types, intonations, and fret noise.
-jam origin midi guitar crack mac DOWNLOAD ✯ https://urlin.us/2uEwcC
-No special awkward microphones and no physical mods: our pure software solution works on any of your guitars, not just one. It integrates seamlessly with your digital audio workstation and can process old recordings as well as live playing. It tracks finger playing and complex chords as well as monophonic leads, detects hammer-ons, pull-offs, slides, and bends, and transparently deals with different pickup types, intonations, and fret noise.
-Jam Origin MIDI Guitar is the world's first true polyphonic MIDI guitar solution. Our audio recognition and transcription technology has been in development for 8 years (patent pending). It is truly unique and the world's first low-latency polyphonic audio transcription solution, and it is currently being tested by thousands of beta testers.
899543212b
-
-
\ No newline at end of file
diff --git a/spaces/inreVtussa/clothingai/Examples/AdobePhotoshopLightroomCC20181085CrackVERIFIED Freedownload.md b/spaces/inreVtussa/clothingai/Examples/AdobePhotoshopLightroomCC20181085CrackVERIFIED Freedownload.md
deleted file mode 100644
index 705cc8a0f6054af12d262bb87aecaa32abb9523e..0000000000000000000000000000000000000000
--- a/spaces/inreVtussa/clothingai/Examples/AdobePhotoshopLightroomCC20181085CrackVERIFIED Freedownload.md
+++ /dev/null
@@ -1,11 +0,0 @@
-AdobePhotoshopLightroomCC20181085Crackfreedownload Download Zip ☆ https://tiurll.com/2uCjdG
-
-6. Label. Montag, July 30, 2018 Der Ewige Kreis...for sharing the very best photos you have...://trello.com/c/FPq534sx/49-hack-adobe-photoshop-lightroom-612-cc -for-mac-top- ... Download Adobe Photoshop Lightroom CC for macOS free [PDF], ...
-Mac OSX / Mac OS X. Download Adobe Photoshop Lightroom 6.2.
-6.0. Download Adobe Photoshop Lightroom 4.4.
-5.7. Download Adobe Photoshop Lightroom...
-Adobe Photoshop Lightroom is a powerful application that allows you to edit images and import them from storage, ...
-Adobe Photoshop Lightroom is a program designed to organize and ... Bug fixes and stability improvements have been made in version 6.4. 8a78ff9644
-
-
-
diff --git a/spaces/inreVtussa/clothingai/Examples/Ambulimama Stories In Tamil Pdf 34 HOT!.md b/spaces/inreVtussa/clothingai/Examples/Ambulimama Stories In Tamil Pdf 34 HOT!.md
deleted file mode 100644
index b6626da3c7eeb4e00e39083d88b63a747ee46be2..0000000000000000000000000000000000000000
--- a/spaces/inreVtussa/clothingai/Examples/Ambulimama Stories In Tamil Pdf 34 HOT!.md
+++ /dev/null
@@ -1,16 +0,0 @@
-Ambulimama Stories In Tamil Pdf 34 Download File 🆓 https://tiurll.com/2uClvV
-
-The Towenkaksharal is a free and open-source knowledge base for Tamil-medium education and culture, based on the Towen kaksharal project and the Towen kaksharal site. The series of Ambulimama stories in Tamil has been published in 'Ambulimama'.
-
-Archives
-
-Metropolitan Assurance Inc. (formerly Assurance Insurance Agency) is located in the former home of the late Harvey DeMott, the State Architect of the State of Vermont, at 113 Main St, Montpelier, Vermont 05601. (Barney's location)
-
-In many applications it is necessary to obtain information regarding the elasticity of a material. For example, in the medical field, it is often necessary to know the stiffness of certain body organs, including the heart and the arteries, in order to detect and diagnose diseases.
-
-One well known technique for measuring the elasticity of a material is to cause a tensile force to be exerted in a known direction, on a sample of the material, and to measure the resulting tension force. The magnitude of the tensile force is a function of the stiffness of the sample.
-
-An example of a material testing instrument that is commonly used to determine the tensile stiffness of an object is disclosed in the U.S. Pat. No. 4,452,152 (Malekian). The patent discloses a tension measuring arrangement which measures the tensile stiffness of a material by applying a known force to a sample of the material, and measuring the resulting tensile force.
-
-Although tension testing instruments provide an effective means of measuring the stiffness of an object, they are typically relatively expensive and complicated to use. For example, the tensile testing arrangement disclosed in the Malekian patent requires a drive mechanism which applies a tensile 4fefd39f24
-
-
-
diff --git a/spaces/isaakkamau/whisper-video-caption/README.md b/spaces/isaakkamau/whisper-video-caption/README.md
deleted file mode 100644
index 8c36da3f3d35d571b00ddcac8f38066fef1936ec..0000000000000000000000000000000000000000
--- a/spaces/isaakkamau/whisper-video-caption/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Whisper Video Caption
-emoji: 🐢
-colorFrom: pink
-colorTo: red
-sdk: gradio
-sdk_version: 3.32.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/jbilcke-hf/VideoChain-UI/src/server/index.ts b/spaces/jbilcke-hf/VideoChain-UI/src/server/index.ts
deleted file mode 100644
index 30006b5d2cfaf30676d5de415f181e68841e94cc..0000000000000000000000000000000000000000
--- a/spaces/jbilcke-hf/VideoChain-UI/src/server/index.ts
+++ /dev/null
@@ -1,69 +0,0 @@
-"use server"
-
-import { revalidatePath } from "next/cache"
-
-import { Video, VideoAPIRequest, GenericAPIResponse, VideoStatusRequest, VideoStatus } from "@/app/types"
-
-import { GET, POST, DELETE, PATCH } from "./base"
-
-
-// note: for security purposes we do not directly expose the VideoChain API:
-// all calls are protected with a token; that way the VideoChain API can stay
-// lightweight, while security and quotas are handled outside
-
-// this should be used by the admin only
-export const getAllVideos = async () => {
- const tasks = await GET("", [])
-
- return tasks
-}
-
-// return all tasks of an owner
-export const getVideos = async (ownerId: string) => {
- const tasks = await GET(ownerId, [])
-
- return tasks
-}
-
-export const getVideo = async (ownerId: string, videoId: string) => {
- const task = await GET(`${ownerId}/${videoId}`, null as unknown as Video)
-
- return task
-}
-
-export const setVideoStatus = async (ownerId: string, videoId: string, status: VideoStatus) => {
- const task = await PATCH(`${ownerId}/${videoId}`, { status }, null as unknown as Video)
-
- revalidatePath(`/studio/${ownerId}`)
-
- return task
-}
-
-/*
-export const deleteVideo = async (ownerId: string, videoId: string) => {
- const task = await DELETE(`${ownerId}/${videoId}`, { success: false })
- return task
-}
-
-*/
-/*
-export async function deleteVideos(ownerId: string, videoIds: string[]) {
- const task = await DELETE(ownerAndVideoId, { success: true })
-
- return task
-}
-*/
-
-export const createNewVideo = async (ownerId: string, taskRequest: VideoAPIRequest) => {
- console.log("create new video")
- const task = await POST(
- ownerId,
- taskRequest,
- null as unknown as Video
- )
-
- // for doc see https://nextjs.org/docs/app/building-your-application/data-fetching/server-actions
- revalidatePath(`/studio/${ownerId}`)
- return task
-}
-
diff --git a/spaces/jbilcke-hf/ai-clip-factory/src/app/server/utils/deleteFileIfExists.ts b/spaces/jbilcke-hf/ai-clip-factory/src/app/server/utils/deleteFileIfExists.ts
deleted file mode 100644
index 0361bde97137c2f91cf1dcf300b0c16ea7599190..0000000000000000000000000000000000000000
--- a/spaces/jbilcke-hf/ai-clip-factory/src/app/server/utils/deleteFileIfExists.ts
+++ /dev/null
@@ -1,19 +0,0 @@
-import { existsSync, promises as fs } from "node:fs"
-
-export const deleteFileIfExists = async (filePath: string) => {
-
- const safePath = filePath.trim()
- // just a sanity check
- if (safePath.includes("*") || safePath === "/" || safePath === "~" || safePath === ".") {
- throw new Error(`lol, no.`)
- }
- if (existsSync(safePath)) {
- try {
- await fs.unlink(safePath)
- return true
- } catch (err) {
- console.log(`failed to delete file ${safePath}`)
- }
- }
- return false
-}
\ No newline at end of file
diff --git a/spaces/jbilcke-hf/observer/src/app/engine/see.ts b/spaces/jbilcke-hf/observer/src/app/engine/see.ts
deleted file mode 100644
index f8a5868d5e8e694162e94e5aec4380db5b656f0e..0000000000000000000000000000000000000000
--- a/spaces/jbilcke-hf/observer/src/app/engine/see.ts
+++ /dev/null
@@ -1,55 +0,0 @@
-"use server"
-
-import { ImageAnalysisRequest, ImageAnalysisResponse } from "@/types"
-
-const apiUrl = `${process.env.RENDERING_ENGINE_API || ""}`
-
-export async function see({
- prompt,
- imageBase64
-}: {
- prompt: string
- imageBase64: string
-}): Promise {
- if (!prompt) {
- console.error(`cannot call the API without a prompt, aborting..`)
- throw new Error(`cannot call the API without a prompt, aborting..`)
- }
-
- try {
- const request = {
- prompt,
- image: imageBase64
-
- } as ImageAnalysisRequest
-
- console.log(`calling ${apiUrl}/analyze with: `, {
- prompt: request.prompt,
- image: request.image.slice(0, 20)
- })
-
- const res = await fetch(`${apiUrl}/analyze`, {
- method: "POST",
- headers: {
- Accept: "application/json",
- "Content-Type": "application/json",
- // Authorization: `Bearer ${process.env.VC_SECRET_ACCESS_TOKEN}`,
- },
- body: JSON.stringify(request),
- cache: 'no-store',
- // we can also use this (see https://vercel.com/blog/vercel-cache-api-nextjs-cache)
- // next: { revalidate: 1 }
- })
-
- if (res.status !== 200) {
- throw new Error('Failed to fetch data')
- }
-
- const response = (await res.json()) as ImageAnalysisResponse
-
- return response.result.replaceAll("The image shows", "")
- } catch (err) {
- console.error(err)
- return ""
- }
-}
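For illustration, the same request could be issued from Python against the rendering engine's /analyze endpoint used above (a sketch only: the endpoint, payload shape, and "result" field are taken from the TypeScript wrapper, the image path is a placeholder, and the base URL is read from the same environment variable the wrapper uses):

# Illustrative Python equivalent of the see() helper above.
import base64
import os
import requests  # third-party HTTP client, assumed available

api_url = os.environ.get("RENDERING_ENGINE_API", "")

with open("frame.png", "rb") as f:                       # placeholder image file
    image_b64 = base64.b64encode(f.read()).decode("ascii")

payload = {"prompt": "What is happening in this picture?", "image": image_b64}
res = requests.post(f"{api_url}/analyze", json=payload, timeout=60)
res.raise_for_status()
# Mirror the wrapper's post-processing of the "result" field.
result = res.json().get("result", "").replace("The image shows", "")
print(result)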
diff --git a/spaces/jimschat/VITS-Umamusume-voice-synthesizer/text/sanskrit.py b/spaces/jimschat/VITS-Umamusume-voice-synthesizer/text/sanskrit.py
deleted file mode 100644
index 0223aaac384a2f850f5bc20651fc18eb964607d0..0000000000000000000000000000000000000000
--- a/spaces/jimschat/VITS-Umamusume-voice-synthesizer/text/sanskrit.py
+++ /dev/null
@@ -1,62 +0,0 @@
-import re
-from indic_transliteration import sanscript
-
-
-# List of (iast, ipa) pairs:
-_iast_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('a', 'ə'),
- ('ā', 'aː'),
- ('ī', 'iː'),
- ('ū', 'uː'),
- ('ṛ', 'ɹ`'),
- ('ṝ', 'ɹ`ː'),
- ('ḷ', 'l`'),
- ('ḹ', 'l`ː'),
- ('e', 'eː'),
- ('o', 'oː'),
- ('k', 'k⁼'),
- ('k⁼h', 'kʰ'),
- ('g', 'g⁼'),
- ('g⁼h', 'gʰ'),
- ('ṅ', 'ŋ'),
- ('c', 'ʧ⁼'),
- ('ʧ⁼h', 'ʧʰ'),
- ('j', 'ʥ⁼'),
- ('ʥ⁼h', 'ʥʰ'),
- ('ñ', 'n^'),
- ('ṭ', 't`⁼'),
- ('t`⁼h', 't`ʰ'),
- ('ḍ', 'd`⁼'),
- ('d`⁼h', 'd`ʰ'),
- ('ṇ', 'n`'),
- ('t', 't⁼'),
- ('t⁼h', 'tʰ'),
- ('d', 'd⁼'),
- ('d⁼h', 'dʰ'),
- ('p', 'p⁼'),
- ('p⁼h', 'pʰ'),
- ('b', 'b⁼'),
- ('b⁼h', 'bʰ'),
- ('y', 'j'),
- ('ś', 'ʃ'),
- ('ṣ', 's`'),
- ('r', 'ɾ'),
- ('l̤', 'l`'),
- ('h', 'ɦ'),
- ("'", ''),
- ('~', '^'),
- ('ṃ', '^')
-]]
-
-
-def devanagari_to_ipa(text):
- text = text.replace('ॐ', 'ओम्')
- text = re.sub(r'\s*।\s*$', '.', text)
- text = re.sub(r'\s*।\s*', ', ', text)
- text = re.sub(r'\s*॥', '.', text)
- text = sanscript.transliterate(text, sanscript.DEVANAGARI, sanscript.IAST)
- for regex, replacement in _iast_to_ipa:
- text = re.sub(regex, replacement, text)
- text = re.sub('(.)[`ː]*ḥ', lambda x: x.group(0)
- [:-1]+'h'+x.group(1)+'*', text)
- return text
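For reference, a minimal usage sketch of the devanagari_to_ipa helper above (illustrative only: it assumes the indic_transliteration dependency is installed and that the text/sanskrit.py module shown here is importable as in the original VITS text-cleaning pipeline; the sample string and the expected output are assumptions derived from the rule table, not taken from the repository):

# Illustrative usage of devanagari_to_ipa() from text/sanskrit.py above.
from text.sanskrit import devanagari_to_ipa  # hypothetical import path, mirrors the file location

sample = "संस्कृतम्"             # "Sanskrit" written in Devanagari
ipa = devanagari_to_ipa(sample)
# sanscript first yields the IAST form 'saṃskṛtam'; the ordered _iast_to_ipa
# rules then map it to roughly 'sə^sk⁼ɹ`t⁼əm'.
print(ipa)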
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/svcbbase.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/svcbbase.py
deleted file mode 100644
index ba5b53d2cb7d0e25d0437b2193f0aae4a3324f26..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/svcbbase.py
+++ /dev/null
@@ -1,563 +0,0 @@
-# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
-
-import base64
-import enum
-import io
-import struct
-
-import dns.enum
-import dns.exception
-import dns.immutable
-import dns.ipv4
-import dns.ipv6
-import dns.name
-import dns.rdata
-import dns.rdtypes.util
-import dns.tokenizer
-import dns.wire
-
-# Until there is an RFC, this module is experimental and may be changed in
-# incompatible ways.
-
-
-class UnknownParamKey(dns.exception.DNSException):
- """Unknown SVCB ParamKey"""
-
-
-class ParamKey(dns.enum.IntEnum):
- """SVCB ParamKey"""
-
- MANDATORY = 0
- ALPN = 1
- NO_DEFAULT_ALPN = 2
- PORT = 3
- IPV4HINT = 4
- ECH = 5
- IPV6HINT = 6
- DOHPATH = 7
-
- @classmethod
- def _maximum(cls):
- return 65535
-
- @classmethod
- def _short_name(cls):
- return "SVCBParamKey"
-
- @classmethod
- def _prefix(cls):
- return "KEY"
-
- @classmethod
- def _unknown_exception_class(cls):
- return UnknownParamKey
-
-
-class Emptiness(enum.IntEnum):
- NEVER = 0
- ALWAYS = 1
- ALLOWED = 2
-
-
-def _validate_key(key):
- force_generic = False
- if isinstance(key, bytes):
- # We decode to latin-1 so we get 0-255 as valid and do NOT interpret
- # UTF-8 sequences
- key = key.decode("latin-1")
- if isinstance(key, str):
- if key.lower().startswith("key"):
- force_generic = True
- if key[3:].startswith("0") and len(key) != 4:
- # key has leading zeros
- raise ValueError("leading zeros in key")
- key = key.replace("-", "_")
- return (ParamKey.make(key), force_generic)
-
-
-def key_to_text(key):
- return ParamKey.to_text(key).replace("_", "-").lower()
-
-
-# Like rdata escapify, but escapes ',' too.
-
-_escaped = b'",\\'
-
-
-def _escapify(qstring):
- text = ""
- for c in qstring:
- if c in _escaped:
- text += "\\" + chr(c)
- elif c >= 0x20 and c < 0x7F:
- text += chr(c)
- else:
- text += "\\%03d" % c
- return text
-
-
-def _unescape(value):
- if value == "":
- return value
- unescaped = b""
- l = len(value)
- i = 0
- while i < l:
- c = value[i]
- i += 1
- if c == "\\":
- if i >= l: # pragma: no cover (can't happen via tokenizer get())
- raise dns.exception.UnexpectedEnd
- c = value[i]
- i += 1
- if c.isdigit():
- if i >= l:
- raise dns.exception.UnexpectedEnd
- c2 = value[i]
- i += 1
- if i >= l:
- raise dns.exception.UnexpectedEnd
- c3 = value[i]
- i += 1
- if not (c2.isdigit() and c3.isdigit()):
- raise dns.exception.SyntaxError
- codepoint = int(c) * 100 + int(c2) * 10 + int(c3)
- if codepoint > 255:
- raise dns.exception.SyntaxError
- unescaped += b"%c" % (codepoint)
- continue
- unescaped += c.encode()
- return unescaped
-
-
-def _split(value):
- l = len(value)
- i = 0
- items = []
- unescaped = b""
- while i < l:
- c = value[i]
- i += 1
- if c == ord("\\"):
- if i >= l: # pragma: no cover (can't happen via tokenizer get())
- raise dns.exception.UnexpectedEnd
- c = value[i]
- i += 1
- unescaped += b"%c" % (c)
- elif c == ord(","):
- items.append(unescaped)
- unescaped = b""
- else:
- unescaped += b"%c" % (c)
- items.append(unescaped)
- return items
-
-
-@dns.immutable.immutable
-class Param:
- """Abstract base class for SVCB parameters"""
-
- @classmethod
- def emptiness(cls):
- return Emptiness.NEVER
-
-
-@dns.immutable.immutable
-class GenericParam(Param):
- """Generic SVCB parameter"""
-
- def __init__(self, value):
- self.value = dns.rdata.Rdata._as_bytes(value, True)
-
- @classmethod
- def emptiness(cls):
- return Emptiness.ALLOWED
-
- @classmethod
- def from_value(cls, value):
- if value is None or len(value) == 0:
- return None
- else:
- return cls(_unescape(value))
-
- def to_text(self):
- return '"' + dns.rdata._escapify(self.value) + '"'
-
- @classmethod
- def from_wire_parser(cls, parser, origin=None): # pylint: disable=W0613
- value = parser.get_bytes(parser.remaining())
- if len(value) == 0:
- return None
- else:
- return cls(value)
-
- def to_wire(self, file, origin=None): # pylint: disable=W0613
- file.write(self.value)
-
-
-@dns.immutable.immutable
-class MandatoryParam(Param):
- def __init__(self, keys):
- # check for duplicates
- keys = sorted([_validate_key(key)[0] for key in keys])
- prior_k = None
- for k in keys:
- if k == prior_k:
- raise ValueError(f"duplicate key {k:d}")
- prior_k = k
- if k == ParamKey.MANDATORY:
- raise ValueError("listed the mandatory key as mandatory")
- self.keys = tuple(keys)
-
- @classmethod
- def from_value(cls, value):
- keys = [k.encode() for k in value.split(",")]
- return cls(keys)
-
- def to_text(self):
- return '"' + ",".join([key_to_text(key) for key in self.keys]) + '"'
-
- @classmethod
- def from_wire_parser(cls, parser, origin=None): # pylint: disable=W0613
- keys = []
- last_key = -1
- while parser.remaining() > 0:
- key = parser.get_uint16()
- if key < last_key:
- raise dns.exception.FormError("mandatory keys not ascending")
- last_key = key
- keys.append(key)
- return cls(keys)
-
- def to_wire(self, file, origin=None): # pylint: disable=W0613
- for key in self.keys:
- file.write(struct.pack("!H", key))
-
-
-@dns.immutable.immutable
-class ALPNParam(Param):
- def __init__(self, ids):
- self.ids = dns.rdata.Rdata._as_tuple(
- ids, lambda x: dns.rdata.Rdata._as_bytes(x, True, 255, False)
- )
-
- @classmethod
- def from_value(cls, value):
- return cls(_split(_unescape(value)))
-
- def to_text(self):
- value = ",".join([_escapify(id) for id in self.ids])
- return '"' + dns.rdata._escapify(value.encode()) + '"'
-
- @classmethod
- def from_wire_parser(cls, parser, origin=None): # pylint: disable=W0613
- ids = []
- while parser.remaining() > 0:
- id = parser.get_counted_bytes()
- ids.append(id)
- return cls(ids)
-
- def to_wire(self, file, origin=None): # pylint: disable=W0613
- for id in self.ids:
- file.write(struct.pack("!B", len(id)))
- file.write(id)
-
-
-@dns.immutable.immutable
-class NoDefaultALPNParam(Param):
- # We don't ever expect to instantiate this class, but we need
- # a from_value() and a from_wire_parser(), so we just return None
- # from the class methods when things are OK.
-
- @classmethod
- def emptiness(cls):
- return Emptiness.ALWAYS
-
- @classmethod
- def from_value(cls, value):
- if value is None or value == "":
- return None
- else:
- raise ValueError("no-default-alpn with non-empty value")
-
- def to_text(self):
- raise NotImplementedError # pragma: no cover
-
- @classmethod
- def from_wire_parser(cls, parser, origin=None): # pylint: disable=W0613
- if parser.remaining() != 0:
- raise dns.exception.FormError
- return None
-
- def to_wire(self, file, origin=None): # pylint: disable=W0613
- raise NotImplementedError # pragma: no cover
-
-
-@dns.immutable.immutable
-class PortParam(Param):
- def __init__(self, port):
- self.port = dns.rdata.Rdata._as_uint16(port)
-
- @classmethod
- def from_value(cls, value):
- value = int(value)
- return cls(value)
-
- def to_text(self):
- return f'"{self.port}"'
-
- @classmethod
- def from_wire_parser(cls, parser, origin=None): # pylint: disable=W0613
- port = parser.get_uint16()
- return cls(port)
-
- def to_wire(self, file, origin=None): # pylint: disable=W0613
- file.write(struct.pack("!H", self.port))
-
-
-@dns.immutable.immutable
-class IPv4HintParam(Param):
- def __init__(self, addresses):
- self.addresses = dns.rdata.Rdata._as_tuple(
- addresses, dns.rdata.Rdata._as_ipv4_address
- )
-
- @classmethod
- def from_value(cls, value):
- addresses = value.split(",")
- return cls(addresses)
-
- def to_text(self):
- return '"' + ",".join(self.addresses) + '"'
-
- @classmethod
- def from_wire_parser(cls, parser, origin=None): # pylint: disable=W0613
- addresses = []
- while parser.remaining() > 0:
- ip = parser.get_bytes(4)
- addresses.append(dns.ipv4.inet_ntoa(ip))
- return cls(addresses)
-
- def to_wire(self, file, origin=None): # pylint: disable=W0613
- for address in self.addresses:
- file.write(dns.ipv4.inet_aton(address))
-
-
-@dns.immutable.immutable
-class IPv6HintParam(Param):
- def __init__(self, addresses):
- self.addresses = dns.rdata.Rdata._as_tuple(
- addresses, dns.rdata.Rdata._as_ipv6_address
- )
-
- @classmethod
- def from_value(cls, value):
- addresses = value.split(",")
- return cls(addresses)
-
- def to_text(self):
- return '"' + ",".join(self.addresses) + '"'
-
- @classmethod
- def from_wire_parser(cls, parser, origin=None): # pylint: disable=W0613
- addresses = []
- while parser.remaining() > 0:
- ip = parser.get_bytes(16)
- addresses.append(dns.ipv6.inet_ntoa(ip))
- return cls(addresses)
-
- def to_wire(self, file, origin=None): # pylint: disable=W0613
- for address in self.addresses:
- file.write(dns.ipv6.inet_aton(address))
-
-
-@dns.immutable.immutable
-class ECHParam(Param):
- def __init__(self, ech):
- self.ech = dns.rdata.Rdata._as_bytes(ech, True)
-
- @classmethod
- def from_value(cls, value):
- if "\\" in value:
- raise ValueError("escape in ECH value")
- value = base64.b64decode(value.encode())
- return cls(value)
-
- def to_text(self):
- b64 = base64.b64encode(self.ech).decode("ascii")
- return f'"{b64}"'
-
- @classmethod
- def from_wire_parser(cls, parser, origin=None): # pylint: disable=W0613
- value = parser.get_bytes(parser.remaining())
- return cls(value)
-
- def to_wire(self, file, origin=None): # pylint: disable=W0613
- file.write(self.ech)
-
-
-_class_for_key = {
- ParamKey.MANDATORY: MandatoryParam,
- ParamKey.ALPN: ALPNParam,
- ParamKey.NO_DEFAULT_ALPN: NoDefaultALPNParam,
- ParamKey.PORT: PortParam,
- ParamKey.IPV4HINT: IPv4HintParam,
- ParamKey.ECH: ECHParam,
- ParamKey.IPV6HINT: IPv6HintParam,
-}
-
-
-def _validate_and_define(params, key, value):
- (key, force_generic) = _validate_key(_unescape(key))
- if key in params:
- raise SyntaxError(f'duplicate key "{key:d}"')
- cls = _class_for_key.get(key, GenericParam)
- emptiness = cls.emptiness()
- if value is None:
- if emptiness == Emptiness.NEVER:
- raise SyntaxError("value cannot be empty")
- value = cls.from_value(value)
- else:
- if force_generic:
- value = cls.from_wire_parser(dns.wire.Parser(_unescape(value)))
- else:
- value = cls.from_value(value)
- params[key] = value
-
-
-@dns.immutable.immutable
-class SVCBBase(dns.rdata.Rdata):
-
- """Base class for SVCB-like records"""
-
- # see: draft-ietf-dnsop-svcb-https-11
-
- __slots__ = ["priority", "target", "params"]
-
- def __init__(self, rdclass, rdtype, priority, target, params):
- super().__init__(rdclass, rdtype)
- self.priority = self._as_uint16(priority)
- self.target = self._as_name(target)
- for k, v in params.items():
- k = ParamKey.make(k)
- if not isinstance(v, Param) and v is not None:
- raise ValueError(f"{k:d} not a Param")
- self.params = dns.immutable.Dict(params)
- # Make sure any parameter listed as mandatory is present in the
- # record.
- mandatory = params.get(ParamKey.MANDATORY)
- if mandatory:
- for key in mandatory.keys:
- # Note we have to say "not in" as we have None as a value
- # so a get() and a not None test would be wrong.
- if key not in params:
- raise ValueError(f"key {key:d} declared mandatory but not present")
- # The no-default-alpn parameter requires the alpn parameter.
- if ParamKey.NO_DEFAULT_ALPN in params:
- if ParamKey.ALPN not in params:
- raise ValueError("no-default-alpn present, but alpn missing")
-
- def to_text(self, origin=None, relativize=True, **kw):
- target = self.target.choose_relativity(origin, relativize)
- params = []
- for key in sorted(self.params.keys()):
- value = self.params[key]
- if value is None:
- params.append(key_to_text(key))
- else:
- kv = key_to_text(key) + "=" + value.to_text()
- params.append(kv)
- if len(params) > 0:
- space = " "
- else:
- space = ""
- return "%d %s%s%s" % (self.priority, target, space, " ".join(params))
-
- @classmethod
- def from_text(
- cls, rdclass, rdtype, tok, origin=None, relativize=True, relativize_to=None
- ):
- priority = tok.get_uint16()
- target = tok.get_name(origin, relativize, relativize_to)
- if priority == 0:
- token = tok.get()
- if not token.is_eol_or_eof():
- raise SyntaxError("parameters in AliasMode")
- tok.unget(token)
- params = {}
- while True:
- token = tok.get()
- if token.is_eol_or_eof():
- tok.unget(token)
- break
- if token.ttype != dns.tokenizer.IDENTIFIER:
- raise SyntaxError("parameter is not an identifier")
- equals = token.value.find("=")
- if equals == len(token.value) - 1:
- # 'key=', so next token should be a quoted string without
- # any intervening whitespace.
- key = token.value[:-1]
- token = tok.get(want_leading=True)
- if token.ttype != dns.tokenizer.QUOTED_STRING:
- raise SyntaxError("whitespace after =")
- value = token.value
- elif equals > 0:
- # key=value
- key = token.value[:equals]
- value = token.value[equals + 1 :]
- elif equals == 0:
- # =key
- raise SyntaxError('parameter cannot start with "="')
- else:
- # key
- key = token.value
- value = None
- _validate_and_define(params, key, value)
- return cls(rdclass, rdtype, priority, target, params)
-
- def _to_wire(self, file, compress=None, origin=None, canonicalize=False):
- file.write(struct.pack("!H", self.priority))
- self.target.to_wire(file, None, origin, False)
- for key in sorted(self.params):
- file.write(struct.pack("!H", key))
- value = self.params[key]
- # placeholder for length (or actual length of empty values)
- file.write(struct.pack("!H", 0))
- if value is None:
- continue
- else:
- start = file.tell()
- value.to_wire(file, origin)
- end = file.tell()
- assert end - start < 65536
- file.seek(start - 2)
- stuff = struct.pack("!H", end - start)
- file.write(stuff)
- file.seek(0, io.SEEK_END)
-
- @classmethod
- def from_wire_parser(cls, rdclass, rdtype, parser, origin=None):
- priority = parser.get_uint16()
- target = parser.get_name(origin)
- if priority == 0 and parser.remaining() != 0:
- raise dns.exception.FormError("parameters in AliasMode")
- params = {}
- prior_key = -1
- while parser.remaining() > 0:
- key = parser.get_uint16()
- if key < prior_key:
- raise dns.exception.FormError("keys not in order")
- prior_key = key
- vlen = parser.get_uint16()
- pcls = _class_for_key.get(key, GenericParam)
- with parser.restrict_to(vlen):
- value = pcls.from_wire_parser(parser, origin)
- params[key] = value
- return cls(rdclass, rdtype, priority, target, params)
-
- def _processing_priority(self):
- return self.priority
-
- @classmethod
- def _processing_order(cls, iterable):
- return dns.rdtypes.util.priority_processing_order(iterable)
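A small sketch of how the parameter helpers above behave (illustrative only: these classes normally back dnspython's SVCB/HTTPS rdata types rather than being used directly, and the values in the comments are what the code above implies rather than captured output):

# Illustrative use of the SVCB parameter classes defined in dns/rdtypes/svcbbase.py.
from dns.rdtypes.svcbbase import ALPNParam, PortParam, ParamKey, key_to_text

alpn = ALPNParam.from_value("h2,h3")        # parses the comma-separated ALPN id list
print(alpn.to_text())                       # -> '"h2,h3"'

port = PortParam.from_value("443")
print(port.to_text())                       # -> '"443"'

# ParamKey names are rendered lower-case with '-' instead of '_':
print(key_to_text(ParamKey.NO_DEFAULT_ALPN))    # -> 'no-default-alpn'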
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fastapi/templating.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fastapi/templating.py
deleted file mode 100644
index 0cb868486edd9dda38f90c65f314597813128cf8..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fastapi/templating.py
+++ /dev/null
@@ -1 +0,0 @@
-from starlette.templating import Jinja2Templates as Jinja2Templates # noqa
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttLib/tables/_m_o_r_x.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttLib/tables/_m_o_r_x.py
deleted file mode 100644
index da299c6d85893e4113c459d503d77c6a120128ae..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttLib/tables/_m_o_r_x.py
+++ /dev/null
@@ -1,6 +0,0 @@
-from .otBase import BaseTTXConverter
-
-
-# https://developer.apple.com/fonts/TrueType-Reference-Manual/RM06/Chap6morx.html
-class table__m_o_r_x(BaseTTXConverter):
- pass
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/constants.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/constants.py
deleted file mode 100644
index d619a2462ae933d7bdde127f86bde58dac29b79b..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/constants.py
+++ /dev/null
@@ -1,5 +0,0 @@
-"""Set of constants."""
-
-MAX_CHUNK_SIZE = 3900
-MAX_CHUNK_OVERLAP = 200
-NUM_OUTPUTS = 256
diff --git a/spaces/jordonpeter01/MusicGen2/audiocraft/models/musicgen.py b/spaces/jordonpeter01/MusicGen2/audiocraft/models/musicgen.py
deleted file mode 100644
index 007dd9e0ed1cfd359fb4889e7f4108248e189941..0000000000000000000000000000000000000000
--- a/spaces/jordonpeter01/MusicGen2/audiocraft/models/musicgen.py
+++ /dev/null
@@ -1,362 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Main model for using MusicGen. This will combine all the required components
-and provide easy access to the generation API.
-"""
-
-import os
-import typing as tp
-
-import torch
-
-from .encodec import CompressionModel
-from .lm import LMModel
-from .builders import get_debug_compression_model, get_debug_lm_model
-from .loaders import load_compression_model, load_lm_model, HF_MODEL_CHECKPOINTS_MAP
-from ..data.audio_utils import convert_audio
-from ..modules.conditioners import ConditioningAttributes, WavCondition
-from ..utils.autocast import TorchAutocast
-
-
-MelodyList = tp.List[tp.Optional[torch.Tensor]]
-MelodyType = tp.Union[torch.Tensor, MelodyList]
-
-
-class MusicGen:
- """MusicGen main model with convenient generation API.
-
- Args:
- name (str): name of the model.
- compression_model (CompressionModel): Compression model
- used to map audio to invertible discrete representations.
- lm (LMModel): Language model over discrete representations.
- """
- def __init__(self, name: str, compression_model: CompressionModel, lm: LMModel,
- max_duration: float = 30):
- self.name = name
- self.compression_model = compression_model
- self.lm = lm
- self.max_duration = max_duration
- self.device = next(iter(lm.parameters())).device
- self.generation_params: dict = {}
- self.set_generation_params(duration=15) # 15 seconds by default
- self._progress_callback: tp.Optional[tp.Callable[[int, int], None]] = None
- if self.device.type == 'cpu':
- self.autocast = TorchAutocast(enabled=False)
- else:
- self.autocast = TorchAutocast(
- enabled=True, device_type=self.device.type, dtype=torch.float16)
-
- @property
- def frame_rate(self) -> int:
- """Roughly the number of AR steps per second."""
- return self.compression_model.frame_rate
-
- @property
- def sample_rate(self) -> int:
- """Sample rate of the generated audio."""
- return self.compression_model.sample_rate
-
- @property
- def audio_channels(self) -> int:
- """Audio channels of the generated audio."""
- return self.compression_model.channels
-
- @staticmethod
- def get_pretrained(name: str = 'melody', device=None):
- """Return pretrained model, we provide four models:
- - small (300M), text to music, # see: https://huggingface.co/facebook/musicgen-small
- - medium (1.5B), text to music, # see: https://huggingface.co/facebook/musicgen-medium
- - melody (1.5B) text to music and text+melody to music, # see: https://huggingface.co/facebook/musicgen-melody
- - large (3.3B), text to music, # see: https://huggingface.co/facebook/musicgen-large
- """
-
- if device is None:
- if torch.cuda.device_count():
- device = 'cuda'
- else:
- device = 'cpu'
-
- if name == 'debug':
- # used only for unit tests
- compression_model = get_debug_compression_model(device)
- lm = get_debug_lm_model(device)
- return MusicGen(name, compression_model, lm)
-
- if name not in HF_MODEL_CHECKPOINTS_MAP:
- if not os.path.isfile(name) and not os.path.isdir(name):
- raise ValueError(
- f"{name} is not a valid checkpoint name. "
- f"Choose one of {', '.join(HF_MODEL_CHECKPOINTS_MAP.keys())}"
- )
-
- cache_dir = os.environ.get('MUSICGEN_ROOT', None)
- compression_model = load_compression_model(name, device=device, cache_dir=cache_dir)
- lm = load_lm_model(name, device=device, cache_dir=cache_dir)
- if name == 'melody':
- lm.condition_provider.conditioners['self_wav'].match_len_on_eval = True
-
- return MusicGen(name, compression_model, lm)
-
- def set_generation_params(self, use_sampling: bool = True, top_k: int = 250,
- top_p: float = 0.0, temperature: float = 1.0,
- duration: float = 30.0, cfg_coef: float = 3.0,
- two_step_cfg: bool = False, extend_stride: float = 18):
- """Set the generation parameters for MusicGen.
-
- Args:
- use_sampling (bool, optional): Use sampling if True, else do argmax decoding. Defaults to True.
- top_k (int, optional): top_k used for sampling. Defaults to 250.
- top_p (float, optional): top_p used for sampling, when set to 0 top_k is used. Defaults to 0.0.
- temperature (float, optional): Softmax temperature parameter. Defaults to 1.0.
- duration (float, optional): Duration of the generated waveform. Defaults to 30.0.
- cfg_coef (float, optional): Coefficient used for classifier free guidance. Defaults to 3.0.
- two_step_cfg (bool, optional): If True, performs 2 forward for Classifier Free Guidance,
- instead of batching together the two. This has some impact on how things
- are padded but seems to have little impact in practice.
- extend_stride: when doing extended generation (i.e. more than 30 seconds), by how much
- should we extend the audio each time. Larger values will mean less context is
- preserved, and shorter values will require extra computations.
- """
- assert extend_stride < self.max_duration, "Cannot stride by more than max generation duration."
- self.extend_stride = extend_stride
- self.duration = duration
- self.generation_params = {
- 'use_sampling': use_sampling,
- 'temp': temperature,
- 'top_k': top_k,
- 'top_p': top_p,
- 'cfg_coef': cfg_coef,
- 'two_step_cfg': two_step_cfg,
- }
-
- def set_custom_progress_callback(self, progress_callback: tp.Optional[tp.Callable[[int, int], None]] = None):
- """Override the default progress callback."""
- self._progress_callback = progress_callback
-
- def generate_unconditional(self, num_samples: int, progress: bool = False) -> torch.Tensor:
- """Generate samples in an unconditional manner.
-
- Args:
- num_samples (int): Number of samples to be generated.
- progress (bool, optional): Flag to display progress of the generation process. Defaults to False.
- """
- descriptions: tp.List[tp.Optional[str]] = [None] * num_samples
- attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions, None)
- return self._generate_tokens(attributes, prompt_tokens, progress)
-
- def generate(self, descriptions: tp.List[str], progress: bool = False) -> torch.Tensor:
- """Generate samples conditioned on text.
-
- Args:
- descriptions (tp.List[str]): A list of strings used as text conditioning.
- progress (bool, optional): Flag to display progress of the generation process. Defaults to False.
- """
- attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions, None)
- assert prompt_tokens is None
- return self._generate_tokens(attributes, prompt_tokens, progress)
-
- def generate_with_chroma(self, descriptions: tp.List[str], melody_wavs: MelodyType,
- melody_sample_rate: int, progress: bool = False) -> torch.Tensor:
- """Generate samples conditioned on text and melody.
-
- Args:
- descriptions (tp.List[str]): A list of strings used as text conditioning.
- melody_wavs: (torch.Tensor or list of Tensor): A batch of waveforms used as
- melody conditioning. Should have shape [B, C, T] with B matching the description length,
- C=1 or 2. It can be [C, T] if there is a single description. It can also be
- a list of [C, T] tensors.
- melody_sample_rate: (int): Sample rate of the melody waveforms.
- progress (bool, optional): Flag to display progress of the generation process. Defaults to False.
- """
- if isinstance(melody_wavs, torch.Tensor):
- if melody_wavs.dim() == 2:
- melody_wavs = melody_wavs[None]
- if melody_wavs.dim() != 3:
- raise ValueError("Melody wavs should have a shape [B, C, T].")
- melody_wavs = list(melody_wavs)
- else:
- for melody in melody_wavs:
- if melody is not None:
- assert melody.dim() == 2, "One melody in the list has the wrong number of dims."
-
- melody_wavs = [
- convert_audio(wav, melody_sample_rate, self.sample_rate, self.audio_channels)
- if wav is not None else None
- for wav in melody_wavs]
- attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions=descriptions, prompt=None,
- melody_wavs=melody_wavs)
- assert prompt_tokens is None
- return self._generate_tokens(attributes, prompt_tokens, progress)
-
- def generate_continuation(self, prompt: torch.Tensor, prompt_sample_rate: int,
- descriptions: tp.Optional[tp.List[tp.Optional[str]]] = None,
- progress: bool = False) -> torch.Tensor:
- """Generate samples conditioned on audio prompts.
-
- Args:
- prompt (torch.Tensor): A batch of waveforms used for continuation.
- Prompt should be [B, C, T], or [C, T] if only one sample is generated.
- prompt_sample_rate (int): Sampling rate of the given audio waveforms.
- descriptions (tp.List[str], optional): A list of strings used as text conditioning. Defaults to None.
- progress (bool, optional): Flag to display progress of the generation process. Defaults to False.
- """
- if prompt.dim() == 2:
- prompt = prompt[None]
- if prompt.dim() != 3:
- raise ValueError("prompt should have 3 dimensions: [B, C, T] (C = 1).")
- prompt = convert_audio(prompt, prompt_sample_rate, self.sample_rate, self.audio_channels)
- if descriptions is None:
- descriptions = [None] * len(prompt)
- attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions, prompt)
- assert prompt_tokens is not None
- return self._generate_tokens(attributes, prompt_tokens, progress)
-
- @torch.no_grad()
- def _prepare_tokens_and_attributes(
- self,
- descriptions: tp.Sequence[tp.Optional[str]],
- prompt: tp.Optional[torch.Tensor],
- melody_wavs: tp.Optional[MelodyList] = None,
- ) -> tp.Tuple[tp.List[ConditioningAttributes], tp.Optional[torch.Tensor]]:
- """Prepare model inputs.
-
- Args:
- descriptions (tp.List[str]): A list of strings used as text conditioning.
- prompt (torch.Tensor): A batch of waveforms used for continuation.
- melody_wavs (tp.Optional[torch.Tensor], optional): A batch of waveforms
- used as melody conditioning. Defaults to None.
- """
- attributes = [
- ConditioningAttributes(text={'description': description})
- for description in descriptions]
-
- if melody_wavs is None:
- for attr in attributes:
- attr.wav['self_wav'] = WavCondition(
- torch.zeros((1, 1), device=self.device),
- torch.tensor([0], device=self.device),
- path='null_wav') # type: ignore
- else:
- if self.name != "melody":
- raise RuntimeError("This model doesn't support melody conditioning. "
- "Use the `melody` model.")
- assert len(melody_wavs) == len(descriptions), \
- f"number of melody wavs must match number of descriptions! " \
- f"got melody len={len(melody_wavs)}, and descriptions len={len(descriptions)}"
- for attr, melody in zip(attributes, melody_wavs):
- if melody is None:
- attr.wav['self_wav'] = WavCondition(
- torch.zeros((1, 1), device=self.device),
- torch.tensor([0], device=self.device),
- path='null_wav') # type: ignore
- else:
- attr.wav['self_wav'] = WavCondition(
- melody.to(device=self.device),
- torch.tensor([melody.shape[-1]], device=self.device))
-
- if prompt is not None:
- if descriptions is not None:
- assert len(descriptions) == len(prompt), "Prompt and nb. descriptions don't match"
- prompt = prompt.to(self.device)
- prompt_tokens, scale = self.compression_model.encode(prompt)
- assert scale is None
- else:
- prompt_tokens = None
- return attributes, prompt_tokens
-
- def _generate_tokens(self, attributes: tp.List[ConditioningAttributes],
- prompt_tokens: tp.Optional[torch.Tensor], progress: bool = False) -> torch.Tensor:
- """Generate discrete audio tokens given audio prompt and/or conditions.
-
- Args:
- attributes (tp.List[ConditioningAttributes]): Conditions used for generation (text/melody).
- prompt_tokens (tp.Optional[torch.Tensor]): Audio prompt used for continuation.
- progress (bool, optional): Flag to display progress of the generation process. Defaults to False.
- Returns:
- torch.Tensor: Generated audio, of shape [B, C, T], T is defined by the generation params.
- """
- total_gen_len = int(self.duration * self.frame_rate)
- max_prompt_len = int(min(self.duration, self.max_duration) * self.frame_rate)
- current_gen_offset: int = 0
-
- def _progress_callback(generated_tokens: int, tokens_to_generate: int):
- generated_tokens += current_gen_offset
- if self._progress_callback is not None:
- # Note that total_gen_len might be quite wrong depending on the
- # codebook pattern used, but with delay it is almost accurate.
- self._progress_callback(generated_tokens, total_gen_len)
- else:
- print(f'{generated_tokens: 6d} / {total_gen_len: 6d}', end='\r')
-
- if prompt_tokens is not None:
- assert max_prompt_len >= prompt_tokens.shape[-1], \
- "Prompt is longer than audio to generate"
-
- callback = None
- if progress:
- callback = _progress_callback
-
- if self.duration <= self.max_duration:
- # generate by sampling from LM, simple case.
- with self.autocast:
- gen_tokens = self.lm.generate(
- prompt_tokens, attributes,
- callback=callback, max_gen_len=total_gen_len, **self.generation_params)
-
- else:
- # now this gets a bit messier, we need to handle prompts,
- # melody conditioning etc.
- ref_wavs = [attr.wav['self_wav'] for attr in attributes]
- all_tokens = []
- if prompt_tokens is None:
- prompt_length = 0
- else:
- all_tokens.append(prompt_tokens)
- prompt_length = prompt_tokens.shape[-1]
-
- stride_tokens = int(self.frame_rate * self.extend_stride)
-
- while current_gen_offset + prompt_length < total_gen_len:
- time_offset = current_gen_offset / self.frame_rate
- chunk_duration = min(self.duration - time_offset, self.max_duration)
- max_gen_len = int(chunk_duration * self.frame_rate)
- for attr, ref_wav in zip(attributes, ref_wavs):
- wav_length = ref_wav.length.item()
- if wav_length == 0:
- continue
- # We will extend the wav periodically if it is not long enough.
- # we have to do it here rather than in conditioners.py as otherwise
- # we wouldn't have the full wav.
- initial_position = int(time_offset * self.sample_rate)
- wav_target_length = int(self.max_duration * self.sample_rate)
- print(initial_position / self.sample_rate, wav_target_length / self.sample_rate)
- positions = torch.arange(initial_position,
- initial_position + wav_target_length, device=self.device)
- attr.wav['self_wav'] = WavCondition(
- ref_wav[0][:, positions % wav_length],
- torch.full_like(ref_wav[1], wav_target_length))
- with self.autocast:
- gen_tokens = self.lm.generate(
- prompt_tokens, attributes,
- callback=callback, max_gen_len=max_gen_len, **self.generation_params)
- if prompt_tokens is None:
- all_tokens.append(gen_tokens)
- else:
- all_tokens.append(gen_tokens[:, :, prompt_tokens.shape[-1]:])
- prompt_tokens = gen_tokens[:, :, stride_tokens:]
- prompt_length = prompt_tokens.shape[-1]
- current_gen_offset += stride_tokens
-
- gen_tokens = torch.cat(all_tokens, dim=-1)
-
- # generate audio
- assert gen_tokens.dim() == 3
- with torch.no_grad():
- gen_audio = self.compression_model.decode(gen_tokens, None)
- return gen_audio
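For orientation, a minimal text-to-music sketch against the API above (a rough sketch, not the project's own example: downloading a checkpoint needs network access, a GPU is strongly recommended, and audio_write is the helper documented by audiocraft rather than something defined in this file):

# Illustrative usage of the MusicGen wrapper defined above.
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write  # assumed audiocraft helper for writing wavs

model = MusicGen.get_pretrained('small')       # 300M text-to-music checkpoint (see get_pretrained docstring)
model.set_generation_params(duration=8)        # generate 8 seconds per sample
wavs = model.generate(['lo-fi hip hop beat with warm bass'], progress=True)  # [B, C, T] tensor

for idx, one_wav in enumerate(wavs):
    # one_wav is [C, T]; write it at the model's native sample rate
    audio_write(f'sample_{idx}', one_wav.cpu(), model.sample_rate, strategy="loudness")

generate_with_chroma() and generate_continuation() follow the same pattern, adding a melody waveform or an audio prompt as conditioning.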
diff --git a/spaces/josuelmet/Metal_Music_Interpolator/_Compressor.py b/spaces/josuelmet/Metal_Music_Interpolator/_Compressor.py
deleted file mode 100644
index e8d93a95ad680e8e3a8f341ffd7bb4d732480eaa..0000000000000000000000000000000000000000
--- a/spaces/josuelmet/Metal_Music_Interpolator/_Compressor.py
+++ /dev/null
@@ -1,208 +0,0 @@
-'''
-Imports
-'''
-import guitarpro
-from guitarpro import *
-import numpy as np
-import os
-import pickle
-from tqdm import tqdm
-
-from keras.utils import np_utils
-
-from _NoteData import NoteData
-
-
-'''
-Constants
-'''
-# PITCH[i] = the pitch associated with midi note number i.
-# For example, PITCH[69] = 'A4'
-PITCH = {val : str(GuitarString(number=0, value=val)) for val in range(128)}
-# MIDI[string] = the midi number associated with the note described by string.
-# For example, MIDI['A4'] = 69.
-MIDI = {str(GuitarString(number=0, value=val)) : val for val in range(128)}
-
-
-
-
-'''
-process_notes function
-'''
-def process_notes(beat, tuning, as_fingerings=True):
-
- noteData = NoteData()
-
- duration = (beat.duration.value, beat.duration.isDotted)
-
- # Tuplets are cool but rare.
- # If a tuplet is found, simply halve its play time (by doubling its duration value) to simplify things.
- if beat.duration.tuplet.enters != 1 or beat.duration.tuplet.times != 1:
- duration = (duration[0] * 2, duration[1]) # Tuples aren't mutable, so just re-assign the tuple.
-
- noteData.duration = duration[0]
- noteData.isDotted = duration[1]
-
- if len(beat.notes) == 0:
- # return 'rest', duration[0], duration[1], False
- noteData.value = 'rest'
- return noteData
-
- noteData.palmMute = beat.notes[0].effect.palmMute
-
-
- note_types = [note.type for note in beat.notes]
-
-
- if all(note_type == NoteType.rest for note_type in note_types):
- #return 'rest', duration[0], duration[1], False
- noteData.value = 'rest'
- return noteData
-
- if all(note_type == NoteType.tie for note_type in note_types):
- #return 'tied', duration[0], duration[1], False
- noteData.value = 'tied'
- return noteData
-
- if all(note_type == NoteType.dead for note_type in note_types):
- # return 'dead', duration[0], duration[1], False
- noteData.value = 'dead'
- return noteData
-
-
-
- lowest_string = len(tuning)
-
-
- if as_fingerings:
- # NEW CODE: Represent each pitch as its distance (in semitones) from the tuning of the lowest string.
- pitches = np.array([note.value + tuning[note.string] - tuning[lowest_string] for note in beat.notes if note.type == NoteType.normal])
- else:
- # note_number = MIDI note number, where A4 = 440 Hz = note 69
- # OLD CODE:
- pitches = np.array([note.value + tuning[note.string] for note in beat.notes if note.type == NoteType.normal])
-
- # Remove any possible NaN values.
- pitches = pitches[~np.isnan(pitches)]
-
-
- # Pitches are often stored in descending order, but we want to make sure they're in ascending order.
- # Thus, we flip the pitches before sorting, so as to help the algorithm.
- pitches = np.sort(pitches[::-1])
-
- if len(pitches) == 0:
- #return 'rest', duration[0], duration[1]
- noteData.value = 'rest'
- return noteData
-
- if len(pitches) == 1:
- if as_fingerings:
- # NEW CODE:
- # return str(pitches[0]), duration[0], duration[1]
- noteData.value = str(pitches[0])
- return noteData
- else:
- # OLD CODE:
- # return PITCH[pitches[0]], duration[0], duration[1]
- noteData.value = PITCH[pitches[0]]
- return noteData
-
- # Look at the pitch intervals in the lowest 3 notes that are being played.
- # Usually, chords will start at the lowest 2 notes.
- # However, sometimes players will strum the open lowest string constantly throughout the song.
- # (see: 'Be Quiet and Drive', 'Kaiowas')
- # Thus, the next-highest pair of notes should be considered when labeling a chord.
- if len(pitches) == 2:
- note_pairs = [(0, 1)]
- if len(pitches) == 3:
- note_pairs = [(0, 1), (0, 2), (1, 2)]
- elif len(pitches) >= 4:
- note_pairs = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
-
- for idx1, idx2 in note_pairs:
-
- interval = pitches[idx2] - pitches[idx1]
-
- if interval == 12 or interval == 7:
- # Return a power chord associated with pitches[idx1]
- if as_fingerings:
- # NEW CODE:
- # return str(pitches[idx1]) + '_5', duration[0], duration[1]
- noteData.value = str(pitches[idx1]) + '_5'
- return noteData
- else:
- # OLD CODE:
- # return PITCH[pitches[idx1]] + '_5', duration[0], duration[1]
- noteData.value = PITCH[pitches[idx1]] + '_5'
- return noteData
-
- if interval == 6:
- # Return a tritone chord associated with pitches[idx1]
- if as_fingerings:
- # NEW CODE:
- # return str(pitches[idx1]) + '_dim5', duration[0], duration[1]
- noteData.value = str(pitches[idx1]) + '_dim5'
- return noteData
- else:
- # OLD CODE:
- # return PITCH[pitches[idx1]] + '_dim5', duration[0], duration[1]
- noteData.value = PITCH[pitches[idx1]] + '_dim5'
- return noteData
-
- if interval == 5:
- # Return a P4 chord associated with pitches[idx1]
- if as_fingerings:
- # return str(pitches[idx1]) + '_4', duration[0], duration[1]
- noteData.value = str(pitches[idx1]) + '_4'
- return noteData
- else:
- # return PITCH[pitches[idx1]] + '_4', duration[0], duration[1]
- noteData.value = PITCH[pitches[idx1]] + '_4'
- return noteData
-
-
-
- if as_fingerings:
- # NEW CODE:
- #return str(pitches[0]), duration[0], duration[1]
- noteData.value = str(pitches[0])
- return noteData
- else:
- # OLD CODE:
- # return PITCH[pitches[0]], duration[0], duration[1]
- noteData.value = PITCH[pitches[0]]
- return noteData
-
-
-
-
-'''
-compress_track function
-'''
-def compress_track(track, as_fingerings=True):
- # 'song' contains the compressed representation of track.
- song = np.empty(len(track.measures), dtype=object)
-
- # Get the tuning and lowest string of the instrument in this track.
- tuning = {string.number : string.value for string in track.strings}
- lowest_string = len(tuning) # Basses have 4-6 strings, while metal guitars have 6-8 strings.
-
- #print(f'Tuning = {[PITCH[x] for x in tuning.values()]}')
-
- for m_i, measure in enumerate(track.measures):
- '''
- Upon inspection of some of the most popular Songsterr .gp5 tabs,
- it turns out that each measure always has two Voices.
- The first Voice (index 0) always contains music, while
- the second Voice (index 1) always just contains an empty Beat with no notes.
-
- Therefore, only the first Voice (index 0) actually matters.
- '''
- song[m_i] = []
-
- #print(m_i+1)
- for b_i, beat in enumerate(measure.voices[0].beats):
- song[m_i].append(process_notes(beat, tuning, as_fingerings).as_tuple())
- #print('\t', song[m_i][b_i], '\t', beat.duration)
-
- return song
\ No newline at end of file
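A short usage sketch for the compressor above (illustrative only: the .gp5 path is a placeholder, guitarpro.parse is PyGuitarPro's loader implied by the imports, and the exact shape of each tuple comes from NoteData.as_tuple(), which lives in the separate _NoteData module):

# Illustrative usage of compress_track() from _Compressor.py above.
import guitarpro
from _Compressor import compress_track

song = guitarpro.parse('example_song.gp5')   # placeholder path to a Guitar Pro 5 tab
guitar_track = song.tracks[0]                # pick the first track (often the rhythm guitar)

compressed = compress_track(guitar_track, as_fingerings=True)
print(len(compressed), 'measures')
print(compressed[0])                         # one NoteData.as_tuple() entry per beat in measure 0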
diff --git a/spaces/jskalbg/ChatDev01/chatdev/phase.py b/spaces/jskalbg/ChatDev01/chatdev/phase.py
deleted file mode 100644
index fbf181e3aca6999d49cd07a02864924d6a5c8d3f..0000000000000000000000000000000000000000
--- a/spaces/jskalbg/ChatDev01/chatdev/phase.py
+++ /dev/null
@@ -1,597 +0,0 @@
-import os
-import re
-from abc import ABC, abstractmethod
-
-from camel.agents import RolePlaying
-from camel.messages import ChatMessage
-from camel.typing import TaskType, ModelType
-from chatdev.chat_env import ChatEnv
-from chatdev.statistics import get_info
-from chatdev.utils import log_and_print_online, log_arguments
-
-
-class Phase(ABC):
-
- def __init__(self,
- assistant_role_name,
- user_role_name,
- phase_prompt,
- role_prompts,
- phase_name,
- model_type,
- log_filepath):
- """
-
- Args:
- assistant_role_name: who receives chat in a phase
- user_role_name: who starts the chat in a phase
- phase_prompt: prompt of this phase
- role_prompts: prompts of all roles
- phase_name: name of this phase
- """
- self.seminar_conclusion = None
- self.assistant_role_name = assistant_role_name
- self.user_role_name = user_role_name
- self.phase_prompt = phase_prompt
- self.phase_env = dict()
- self.phase_name = phase_name
- self.assistant_role_prompt = role_prompts[assistant_role_name]
- self.user_role_prompt = role_prompts[user_role_name]
- self.ceo_prompt = role_prompts["Chief Executive Officer"]
- self.counselor_prompt = role_prompts["Counselor"]
- self.timeout_seconds = 1.0
- self.max_retries = 3
- self.reflection_prompt = """Here is a conversation between two roles: {conversations} {question}"""
- self.model_type = model_type
- self.log_filepath = log_filepath
-
- @log_arguments
- def chatting(
- self,
- chat_env,
- task_prompt: str,
- assistant_role_name: str,
- user_role_name: str,
- phase_prompt: str,
- phase_name: str,
- assistant_role_prompt: str,
- user_role_prompt: str,
- task_type=TaskType.CHATDEV,
- need_reflect=False,
- with_task_specify=False,
- model_type=ModelType.GPT_3_5_TURBO,
- placeholders=None,
- chat_turn_limit=10
- ) -> str:
- """
-
- Args:
- chat_env: global chatchain environment TODO: only for employee detection, can be deleted
- task_prompt: user query prompt for building the software
- assistant_role_name: who receives the chat
- user_role_name: who starts the chat
- phase_prompt: prompt of the phase
- phase_name: name of the phase
- assistant_role_prompt: prompt of assistant role
- user_role_prompt: prompt of user role
- task_type: task type
- need_reflect: flag for checking reflection
- with_task_specify: with task specify
- model_type: model type
- placeholders: placeholders for phase environment to generate phase prompt
- chat_turn_limit: turn limits in each chat
-
- Returns:
-
- """
-
- if placeholders is None:
- placeholders = {}
- assert 1 <= chat_turn_limit <= 100
-
- if not chat_env.exist_employee(assistant_role_name):
- raise ValueError(f"{assistant_role_name} not recruited in ChatEnv.")
- if not chat_env.exist_employee(user_role_name):
- raise ValueError(f"{user_role_name} not recruited in ChatEnv.")
-
- # init role play
- role_play_session = RolePlaying(
- assistant_role_name=assistant_role_name,
- user_role_name=user_role_name,
- assistant_role_prompt=assistant_role_prompt,
- user_role_prompt=user_role_prompt,
- task_prompt=task_prompt,
- task_type=task_type,
- with_task_specify=with_task_specify,
- model_type=model_type,
- )
-
- # log_and_print_online("System", role_play_session.assistant_sys_msg)
- # log_and_print_online("System", role_play_session.user_sys_msg)
-
- # start the chat
- _, input_user_msg = role_play_session.init_chat(None, placeholders, phase_prompt)
- seminar_conclusion = None
-
- # handle chats
- # the purpose of the chatting in one phase is to get a seminar conclusion
- # there are two types of conclusion
- # 1. with "" mark
- # 1.1 get seminar conclusion flag (ChatAgent.info) from assistant or user role, which means there exist special "" mark in the conversation
- # 1.2 add "" to the reflected content of the chat (which may be terminated chat without "" mark)
- # 2. without "" mark, which means the chat is terminated or normally ended without generating a marked conclusion, and there is no need to reflect
- for i in range(chat_turn_limit):
-            # start the chat: we represent the user and send a message to the assistant
-            # 1. so input_user_msg should be assistant_role_prompt + phase_prompt
-            # 2. input_user_msg is sent to the LLM to get assistant_response
-            # 3. we then represent the assistant and send a message to the user, so input_assistant_msg is user_role_prompt + assistant_response
-            # 4. input_assistant_msg is sent to the LLM to get user_response
-            # all of the above happens inside role_play_session.step, which contains two interactions with the LLM
-            # the first interaction is logged in role_play_session.init_chat
- assistant_response, user_response = role_play_session.step(input_user_msg, chat_turn_limit == 1)
-
- conversation_meta = "**" + assistant_role_name + "<->" + user_role_name + " on : " + str(
- phase_name) + ", turn " + str(i) + "**\n\n"
-
- # TODO: max_tokens_exceeded errors here
- if isinstance(assistant_response.msg, ChatMessage):
- # we log the second interaction here
- log_and_print_online(role_play_session.assistant_agent.role_name,
- conversation_meta + "[" + role_play_session.user_agent.system_message.content + "]\n\n" + assistant_response.msg.content)
- if role_play_session.assistant_agent.info:
- seminar_conclusion = assistant_response.msg.content
- break
- if assistant_response.terminated:
- break
-
- if isinstance(user_response.msg, ChatMessage):
- # here is the result of the second interaction, which may be used to start the next chat turn
- log_and_print_online(role_play_session.user_agent.role_name,
- conversation_meta + "[" + role_play_session.assistant_agent.system_message.content + "]\n\n" + user_response.msg.content)
- if role_play_session.user_agent.info:
- seminar_conclusion = user_response.msg.content
- break
- if user_response.terminated:
- break
-
- # continue the chat
- if chat_turn_limit > 1 and isinstance(user_response.msg, ChatMessage):
- input_user_msg = user_response.msg
- else:
- break
-
- # conduct self reflection
- if need_reflect:
-            if seminar_conclusion in [None, ""]:
-                seminar_conclusion = "<INFO> " + self.self_reflection(task_prompt, role_play_session, phase_name,
-                                                                      chat_env)
-            if "recruiting" in phase_name:
-                if "Yes".lower() not in seminar_conclusion.lower() and "No".lower() not in seminar_conclusion.lower():
-                    seminar_conclusion = "<INFO> " + self.self_reflection(task_prompt, role_play_session,
-                                                                          phase_name,
-                                                                          chat_env)
-        elif seminar_conclusion in [None, ""]:
-            seminar_conclusion = "<INFO> " + self.self_reflection(task_prompt, role_play_session, phase_name,
-                                                                  chat_env)
- else:
- seminar_conclusion = assistant_response.msg.content
-
- log_and_print_online("**[Seminar Conclusion]**:\n\n {}".format(seminar_conclusion))
-        seminar_conclusion = seminar_conclusion.split("<INFO>")[-1]
- return seminar_conclusion
-
- def self_reflection(self,
- task_prompt: str,
- role_play_session: RolePlaying,
- phase_name: str,
- chat_env: ChatEnv) -> str:
- """
-
- Args:
- task_prompt: user query prompt for building the software
- role_play_session: role play session from the chat phase which needs reflection
- phase_name: name of the chat phase which needs reflection
-            chat_env: global chat chain environment
-
- Returns:
- reflected_content: str, reflected results
-
- """
- messages = role_play_session.assistant_agent.stored_messages if len(
- role_play_session.assistant_agent.stored_messages) >= len(
- role_play_session.user_agent.stored_messages) else role_play_session.user_agent.stored_messages
- messages = ["{}: {}".format(message.role_name, message.content.replace("\n\n", "\n")) for message in messages]
- messages = "\n\n".join(messages)
-
- if "recruiting" in phase_name:
- question = """Answer their final discussed conclusion (Yes or No) in the discussion without any other words, e.g., "Yes" """
- elif phase_name == "DemandAnalysis":
- question = """Answer their final product modality in the discussion without any other words, e.g., "PowerPoint" """
- # elif phase_name in [PhaseType.BRAINSTORMING]:
- # question = """Conclude three most creative and imaginative brainstorm ideas from the whole discussion, in the format: "1) *; 2) *; 3) *; where '*' represents a suggestion." """
- elif phase_name == "LanguageChoose":
- question = """Conclude the programming language being discussed for software development, in the format: "*" where '*' represents a programming language." """
- elif phase_name == "EnvironmentDoc":
- question = """According to the codes and file format listed above, write a requirements.txt file to specify the dependencies or packages required for the project to run properly." """
- else:
- raise ValueError(f"Reflection of phase {phase_name}: Not Assigned.")
-
-        # Reflection is actually a special phase between the CEO and the Counselor:
-        # they read the whole chat history of this phase and give a refined conclusion for it
- reflected_content = \
- self.chatting(chat_env=chat_env,
- task_prompt=task_prompt,
- assistant_role_name="Chief Executive Officer",
- user_role_name="Counselor",
- phase_prompt=self.reflection_prompt,
- phase_name="Reflection",
- assistant_role_prompt=self.ceo_prompt,
- user_role_prompt=self.counselor_prompt,
- placeholders={"conversations": messages, "question": question},
- need_reflect=False,
- chat_turn_limit=1,
- model_type=self.model_type)
-
- if "recruiting" in phase_name:
- if "Yes".lower() in reflected_content.lower():
- return "Yes"
- return "No"
- else:
- return reflected_content
-
- @abstractmethod
- def update_phase_env(self, chat_env):
- """
-        update self.phase_env (if needed) using chat_env, so that chatting() can use self.phase_env to follow the context and fill the placeholders in the phase prompt
-        must be implemented in each customized phase
- the usual format is just like:
- ```
- self.phase_env.update({key:chat_env[key]})
- ```
- Args:
- chat_env: global chat chain environment
-
- Returns: None
-
- """
- pass
-
- @abstractmethod
- def update_chat_env(self, chat_env) -> ChatEnv:
- """
-        update chat_env based on the result of self.execute, which is self.seminar_conclusion
-        must be implemented in each customized phase
- the usual format is just like:
- ```
- chat_env.xxx = some_func_for_postprocess(self.seminar_conclusion)
- ```
- Args:
- chat_env:global chat chain environment
-
- Returns:
- chat_env: updated global chat chain environment
-
- """
- pass
-
- def execute(self, chat_env, chat_turn_limit, need_reflect) -> ChatEnv:
- """
- execute the chatting in this phase
- 1. receive information from environment: update the phase environment from global environment
- 2. execute the chatting
- 3. change the environment: update the global environment using the conclusion
- Args:
- chat_env: global chat chain environment
- chat_turn_limit: turn limit in each chat
- need_reflect: flag for reflection
-
- Returns:
- chat_env: updated global chat chain environment using the conclusion from this phase execution
-
- """
- self.update_phase_env(chat_env)
- self.seminar_conclusion = \
- self.chatting(chat_env=chat_env,
- task_prompt=chat_env.env_dict['task_prompt'],
- need_reflect=need_reflect,
- assistant_role_name=self.assistant_role_name,
- user_role_name=self.user_role_name,
- phase_prompt=self.phase_prompt,
- phase_name=self.phase_name,
- assistant_role_prompt=self.assistant_role_prompt,
- user_role_prompt=self.user_role_prompt,
- chat_turn_limit=chat_turn_limit,
- placeholders=self.phase_env,
- model_type=self.model_type)
- chat_env = self.update_chat_env(chat_env)
- return chat_env
-
-
-class DemandAnalysis(Phase):
- def __init__(self, **kwargs):
- super().__init__(**kwargs)
-
- def update_phase_env(self, chat_env):
- pass
-
- def update_chat_env(self, chat_env) -> ChatEnv:
- if len(self.seminar_conclusion) > 0:
-            chat_env.env_dict['modality'] = self.seminar_conclusion.split("<INFO>")[-1].lower().replace(".", "").strip()
- return chat_env
-
-
-class LanguageChoose(Phase):
- def __init__(self, **kwargs):
- super().__init__(**kwargs)
-
- def update_phase_env(self, chat_env):
- self.phase_env.update({"task": chat_env.env_dict['task_prompt'],
- "modality": chat_env.env_dict['modality'],
- "ideas": chat_env.env_dict['ideas']})
-
- def update_chat_env(self, chat_env) -> ChatEnv:
-        if len(self.seminar_conclusion) > 0 and "<INFO>" in self.seminar_conclusion:
-            chat_env.env_dict['language'] = self.seminar_conclusion.split("<INFO>")[-1].lower().replace(".", "").strip()
- elif len(self.seminar_conclusion) > 0:
- chat_env.env_dict['language'] = self.seminar_conclusion
- else:
- chat_env.env_dict['language'] = "Python"
- return chat_env
-
-
-class Coding(Phase):
- def __init__(self, **kwargs):
- super().__init__(**kwargs)
-
- def update_phase_env(self, chat_env):
- gui = "" if not chat_env.config.gui_design \
-            else "The software should be equipped with a graphical user interface (GUI) so that the user can use it visually and graphically; you must therefore choose a GUI framework (e.g., in Python, you can implement a GUI via tkinter, Pygame, Flexx, PyGUI, etc.)."
- self.phase_env.update({"task": chat_env.env_dict['task_prompt'],
- "modality": chat_env.env_dict['modality'],
- "ideas": chat_env.env_dict['ideas'],
- "language": chat_env.env_dict['language'],
- "gui": gui})
-
- def update_chat_env(self, chat_env) -> ChatEnv:
- chat_env.update_codes(self.seminar_conclusion)
- if len(chat_env.codes.codebooks.keys()) == 0:
- raise ValueError("No Valid Codes.")
- chat_env.rewrite_codes()
- log_and_print_online("**[Software Info]**:\n\n {}".format(get_info(chat_env.env_dict['directory'],self.log_filepath)))
- return chat_env
-
-
-class ArtDesign(Phase):
- def __init__(self, **kwargs):
- super().__init__(**kwargs)
-
- def update_phase_env(self, chat_env):
- self.phase_env = {"task": chat_env.env_dict['task_prompt'],
- "language": chat_env.env_dict['language'],
- "codes": chat_env.get_codes()}
-
- def update_chat_env(self, chat_env) -> ChatEnv:
- chat_env.proposed_images = chat_env.get_proposed_images_from_message(self.seminar_conclusion)
- log_and_print_online("**[Software Info]**:\n\n {}".format(get_info(chat_env.env_dict['directory'],self.log_filepath)))
- return chat_env
-
-
-class ArtIntegration(Phase):
- def __init__(self, **kwargs):
- super().__init__(**kwargs)
-
- def update_phase_env(self, chat_env):
- self.phase_env = {"task": chat_env.env_dict['task_prompt'],
- "language": chat_env.env_dict['language'],
- "codes": chat_env.get_codes(),
- "images": "\n".join(
- ["{}: {}".format(filename, chat_env.proposed_images[filename]) for
- filename in sorted(list(chat_env.proposed_images.keys()))])}
-
- def update_chat_env(self, chat_env) -> ChatEnv:
- chat_env.update_codes(self.seminar_conclusion)
- chat_env.rewrite_codes()
- # chat_env.generate_images_from_codes()
- log_and_print_online("**[Software Info]**:\n\n {}".format(get_info(chat_env.env_dict['directory'],self.log_filepath)))
- return chat_env
-
-
-class CodeComplete(Phase):
- def __init__(self, **kwargs):
- super().__init__(**kwargs)
-
- def update_phase_env(self, chat_env):
- self.phase_env.update({"task": chat_env.env_dict['task_prompt'],
- "modality": chat_env.env_dict['modality'],
- "ideas": chat_env.env_dict['ideas'],
- "language": chat_env.env_dict['language'],
- "codes": chat_env.get_codes(),
- "unimplemented_file": ""})
- unimplemented_file = ""
- for filename in self.phase_env['pyfiles']:
- code_content = open(os.path.join(chat_env.env_dict['directory'], filename)).read()
- lines = [line.strip() for line in code_content.split("\n") if line.strip() == "pass"]
- if len(lines) > 0 and self.phase_env['num_tried'][filename] < self.phase_env['max_num_implement']:
- unimplemented_file = filename
- break
- self.phase_env['num_tried'][unimplemented_file] += 1
- self.phase_env['unimplemented_file'] = unimplemented_file
-
- def update_chat_env(self, chat_env) -> ChatEnv:
- chat_env.update_codes(self.seminar_conclusion)
- if len(chat_env.codes.codebooks.keys()) == 0:
- raise ValueError("No Valid Codes.")
- chat_env.rewrite_codes()
- log_and_print_online("**[Software Info]**:\n\n {}".format(get_info(chat_env.env_dict['directory'],self.log_filepath)))
- return chat_env
-
-
-class CodeReviewComment(Phase):
- def __init__(self, **kwargs):
- super().__init__(**kwargs)
-
- def update_phase_env(self, chat_env):
- self.phase_env.update(
- {"task": chat_env.env_dict['task_prompt'],
- "modality": chat_env.env_dict['modality'],
- "ideas": chat_env.env_dict['ideas'],
- "language": chat_env.env_dict['language'],
- "codes": chat_env.get_codes(),
- "images": ", ".join(chat_env.incorporated_images)})
-
- def update_chat_env(self, chat_env) -> ChatEnv:
- chat_env.env_dict['review_comments'] = self.seminar_conclusion
- return chat_env
-
-
-class CodeReviewModification(Phase):
- def __init__(self, **kwargs):
- super().__init__(**kwargs)
-
- def update_phase_env(self, chat_env):
- self.phase_env.update({"task": chat_env.env_dict['task_prompt'],
- "modality": chat_env.env_dict['modality'],
- "ideas": chat_env.env_dict['ideas'],
- "language": chat_env.env_dict['language'],
- "codes": chat_env.get_codes(),
- "comments": chat_env.env_dict['review_comments']})
-
- def update_chat_env(self, chat_env) -> ChatEnv:
- if "```".lower() in self.seminar_conclusion.lower():
- chat_env.update_codes(self.seminar_conclusion)
- chat_env.rewrite_codes()
- log_and_print_online("**[Software Info]**:\n\n {}".format(get_info(chat_env.env_dict['directory'],self.log_filepath)))
- self.phase_env['modification_conclusion'] = self.seminar_conclusion
- return chat_env
-
-
-class CodeReviewHuman(Phase):
- def __init__(self, **kwargs):
- super().__init__(**kwargs)
-
- def update_phase_env(self, chat_env):
- print(
- f"You can participate in the development of the software {chat_env.env_dict['task_prompt']}. Please input your feedback. (\"End\" to quit the involvement.)")
- provided_comments = input()
- self.phase_env.update({"task": chat_env.env_dict['task_prompt'],
- "modality": chat_env.env_dict['modality'],
- "ideas": chat_env.env_dict['ideas'],
- "language": chat_env.env_dict['language'],
- "codes": chat_env.get_codes(),
- "comments": provided_comments})
-
- def update_chat_env(self, chat_env) -> ChatEnv:
- if "```".lower() in self.seminar_conclusion.lower():
- chat_env.update_codes(self.seminar_conclusion)
- chat_env.rewrite_codes()
- log_and_print_online("**[Software Info]**:\n\n {}".format(get_info(chat_env.env_dict['directory'],self.log_filepath)))
- return chat_env
-
-
-class TestErrorSummary(Phase):
- def __init__(self, **kwargs):
- super().__init__(**kwargs)
-
- def update_phase_env(self, chat_env):
- chat_env.generate_images_from_codes()
- (exist_bugs_flag, test_reports) = chat_env.exist_bugs()
- self.phase_env.update({"task": chat_env.env_dict['task_prompt'],
- "modality": chat_env.env_dict['modality'],
- "ideas": chat_env.env_dict['ideas'],
- "language": chat_env.env_dict['language'],
- "codes": chat_env.get_codes(),
- "test_reports": test_reports,
- "exist_bugs_flag": exist_bugs_flag})
- log_and_print_online("**[Test Reports]**:\n\n{}".format(test_reports))
-
- def update_chat_env(self, chat_env) -> ChatEnv:
- chat_env.env_dict['error_summary'] = self.seminar_conclusion
- chat_env.env_dict['test_reports'] = self.phase_env['test_reports']
-
- return chat_env
-
- def execute(self, chat_env, chat_turn_limit, need_reflect) -> ChatEnv:
- self.update_phase_env(chat_env)
- if "ModuleNotFoundError" in self.phase_env['test_reports']:
- chat_env.fix_module_not_found_error(self.phase_env['test_reports'])
- log_and_print_online(
- f"Software Test Engineer found ModuleNotFoundError:\n{self.phase_env['test_reports']}\n")
- pip_install_content = ""
- for match in re.finditer(r"No module named '(\S+)'", self.phase_env['test_reports'], re.DOTALL):
- module = match.group(1)
- pip_install_content += "{}\n```{}\n{}\n```\n".format("cmd", "bash", f"pip install {module}")
-            log_and_print_online(f"Programmer resolves ModuleNotFoundError by:\n{pip_install_content}\n")
- self.seminar_conclusion = "nothing need to do"
- else:
- self.seminar_conclusion = \
- self.chatting(chat_env=chat_env,
- task_prompt=chat_env.env_dict['task_prompt'],
- need_reflect=need_reflect,
- assistant_role_name=self.assistant_role_name,
- user_role_name=self.user_role_name,
- phase_prompt=self.phase_prompt,
- phase_name=self.phase_name,
- assistant_role_prompt=self.assistant_role_prompt,
- user_role_prompt=self.user_role_prompt,
- chat_turn_limit=chat_turn_limit,
- placeholders=self.phase_env)
- chat_env = self.update_chat_env(chat_env)
- return chat_env
-
-
-class TestModification(Phase):
- def __init__(self, **kwargs):
- super().__init__(**kwargs)
-
- def update_phase_env(self, chat_env):
- self.phase_env.update({"task": chat_env.env_dict['task_prompt'],
- "modality": chat_env.env_dict['modality'],
- "ideas": chat_env.env_dict['ideas'],
- "language": chat_env.env_dict['language'],
- "test_reports": chat_env.env_dict['test_reports'],
- "error_summary": chat_env.env_dict['error_summary'],
- "codes": chat_env.get_codes()
- })
-
- def update_chat_env(self, chat_env) -> ChatEnv:
- if "```".lower() in self.seminar_conclusion.lower():
- chat_env.update_codes(self.seminar_conclusion)
- chat_env.rewrite_codes()
- log_and_print_online("**[Software Info]**:\n\n {}".format(get_info(chat_env.env_dict['directory'],self.log_filepath)))
- return chat_env
-
-
-class EnvironmentDoc(Phase):
- def __init__(self, **kwargs):
- super().__init__(**kwargs)
-
- def update_phase_env(self, chat_env):
- self.phase_env.update({"task": chat_env.env_dict['task_prompt'],
- "modality": chat_env.env_dict['modality'],
- "ideas": chat_env.env_dict['ideas'],
- "language": chat_env.env_dict['language'],
- "codes": chat_env.get_codes()})
-
- def update_chat_env(self, chat_env) -> ChatEnv:
- chat_env._update_requirements(self.seminar_conclusion)
- chat_env.rewrite_requirements()
- log_and_print_online("**[Software Info]**:\n\n {}".format(get_info(chat_env.env_dict['directory'],self.log_filepath)))
- return chat_env
-
-
-class Manual(Phase):
- def __init__(self, **kwargs):
- super().__init__(**kwargs)
-
- def update_phase_env(self, chat_env):
- self.phase_env.update({"task": chat_env.env_dict['task_prompt'],
- "modality": chat_env.env_dict['modality'],
- "ideas": chat_env.env_dict['ideas'],
- "language": chat_env.env_dict['language'],
- "codes": chat_env.get_codes(),
- "requirements": chat_env.get_requirements()})
-
- def update_chat_env(self, chat_env) -> ChatEnv:
- chat_env._update_manuals(self.seminar_conclusion)
- chat_env.rewrite_manuals()
- return chat_env
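
For orientation, here is a minimal sketch (not part of the deleted file) of how a concrete phase would plug into the abstract `Phase` class above: `update_phase_env()` copies the context the prompt placeholders need out of the global `ChatEnv`, `execute()` runs the chat, and `update_chat_env()` writes the post-processed conclusion back. The `TestPlanning` name and the `'test_plan'` key are hypothetical, and the snippet assumes the `Phase`/`ChatEnv` definitions from the module shown above.

```python
# Hypothetical example phase, following the update_phase_env / update_chat_env pattern above.
class TestPlanning(Phase):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)

    def update_phase_env(self, chat_env):
        # expose only the fields the phase prompt's placeholders need
        self.phase_env.update({"task": chat_env.env_dict['task_prompt'],
                               "language": chat_env.env_dict['language'],
                               "codes": chat_env.get_codes()})

    def update_chat_env(self, chat_env) -> ChatEnv:
        # post-process the raw seminar conclusion before storing it globally
        chat_env.env_dict['test_plan'] = self.seminar_conclusion.strip()  # 'test_plan' key is invented
        return chat_env
```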
diff --git a/spaces/juancopi81/sd-riffusion/README.md b/spaces/juancopi81/sd-riffusion/README.md
deleted file mode 100644
index 78f1688b5707aed9f8fb11eaa0f3b7220ce694b3..0000000000000000000000000000000000000000
--- a/spaces/juancopi81/sd-riffusion/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Sd Riffusion
-emoji: 📊
-colorFrom: indigo
-colorTo: gray
-sdk: gradio
-sdk_version: 3.14.0
-app_file: app.py
-pinned: false
-license: openrail
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/jyseo/3DFuse/vis_cam.py b/spaces/jyseo/3DFuse/vis_cam.py
deleted file mode 100644
index 6af57cdcfaa13b871d805cc5831c5256cc199161..0000000000000000000000000000000000000000
--- a/spaces/jyseo/3DFuse/vis_cam.py
+++ /dev/null
@@ -1,124 +0,0 @@
-import json
-import numpy as np
-from numpy.linalg import inv
-from pathlib import Path
-import imageio
-import open3d as o3d
-
-from hc3d.vis import CameraCone
-from hc3d.render import compute_intrinsics, unproject
-from hc3d.utils import batch_img_resize
-from fabric.utils.seed import seed_everything
-
-
-def get_K(H=500, W=500, fov=60):
- K = compute_intrinsics(W / H, fov, H)
- return K
-
-
-def shoot_rays(K, pose):
- h = 200
- pixs = np.array([
- [10, h],
- [200, h],
- [400, h]
- ])
- pts = unproject(K, pixs, depth=1.0)
- pts = np.concatenate([
- pts,
- np.array([0, 0, 0, 1]).reshape(1, -1),
- ], axis=0) # origin, followed by 4 img corners
- pts = pts @ pose.T
- pts = pts[:, :3]
- pts = pts.astype(np.float32)
-
- n = len(pixs)
- lines = np.array([
- [i, n] for i in range(n)
- ], dtype=np.int32)
-
- color = [1, 1, 0]
- colors = np.array([color] * len(lines), dtype=np.float32)
-
- lset = o3d.t.geometry.LineSet()
- lset.point['positions'] = pts
- lset.line['indices'] = lines
- lset.line['colors'] = colors
-
- return lset
-
-
-def test_rays(H, W, K):
- xs, ys = np.meshgrid(
- np.arange(W, dtype=np.float32),
- np.arange(H, dtype=np.float32), indexing='xy'
- )
- xys = np.stack([xs, ys], axis=-1)
- my_rays = unproject(K, xys.reshape(-1, 2))
- my_rays = my_rays.reshape(int(H), int(W), 4)[:, :, :3]
- return
-
-
-def plot_inward_facing_views():
- # from run_sjc import get_train_poses
- from math import pi
- from pose import Poser
- H, W = 64, 64
- poser = Poser(H, W, FoV=60, R=4)
- # K, poses = poser.sample_test(100)
- K, poses, _ = poser.sample_train(1000)
- K = K[0]
-
- cam_locs = poses[:, :3, -1]
- # radius = np.linalg.norm(cam_locs, axis=1)
- # print(f"scene radius {radius}")
-
- # test_rays(H, W, K)
-
- # K = get_K(H, W, 50)
- # NeRF blender actually follows OpenGL camera convention (except top-left corner); nice
- # but its world coordinate is z up. I find it strange.
-
- def generate_cam(po, color, im=None):
- cone = CameraCone(K, po, W, H, scale=0.1,
- top_left_corner=(0, 0), color=color)
- lset = cone.as_line_set()
- if im is None:
- return [lset]
- else:
- # o3d img tsr requires contiguous array
- im = np.ascontiguousarray(im)
- view_plane = cone.as_view_plane(im)
- return [lset, view_plane]
-
- cones = []
-
- for i in range(len(poses)):
- po = poses[i]
- geom = generate_cam(po, [1, 0, 0])
- cones.extend(geom)
- # rays = shoot_rays(K, po)
- # cones.extend([rays])
-
- o3d.visualization.draw(cones, show_skybox=False)
-
-
-def blend_rgba(img):
- img = img[..., :3] * img[..., -1:] + (1. - img[..., -1:]) # blend A to RGB
- return img
-
-
-def compare():
- import math
- import matplotlib.pyplot as plt
-
- vs = np.linspace(1e-5, math.pi - 1e-5, 500)
- phi = np.arccos(1 - 2 * (vs / math.pi))
- plt.plot(vs, phi)
- plt.show()
-
-
-if __name__ == "__main__":
- seed_everything(0)
- plot_inward_facing_views()
- # compare()
diff --git a/spaces/kamalkraj/Mega-Dalle/README.md b/spaces/kamalkraj/Mega-Dalle/README.md
deleted file mode 100644
index 6111bc8f33f4de456004aa133e68e4bf7f3bbcde..0000000000000000000000000000000000000000
--- a/spaces/kamalkraj/Mega-Dalle/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Mega Dalle
-emoji: 🏃
-colorFrom: red
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.0.22
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/kasun/comparing-captioning-models/README.md b/spaces/kasun/comparing-captioning-models/README.md
deleted file mode 100644
index 2c7b6de73fa3a62afe0d0895177cbfe7e1ac0091..0000000000000000000000000000000000000000
--- a/spaces/kasun/comparing-captioning-models/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Comparing Captioning Models
-emoji: 🔥
-colorFrom: yellow
-colorTo: pink
-sdk: gradio
-sdk_version: 3.15.0
-app_file: app.py
-pinned: false
-duplicated_from: nielsr/comparing-captioning-models
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/katasou/Music-discord-bot/ffmpeg.py b/spaces/katasou/Music-discord-bot/ffmpeg.py
deleted file mode 100644
index d3ef2a308a7f96450287a09ec9a94986ebae2d44..0000000000000000000000000000000000000000
--- a/spaces/katasou/Music-discord-bot/ffmpeg.py
+++ /dev/null
@@ -1,198 +0,0 @@
-import shlex
-import subprocess
-from discord.opus import Encoder as OpusEncoder
-import logging
-import discord
-from yt_dlp import YoutubeDL
-import asyncio
-import re
-from nicodl import NicoNico as niconico_dl  # NicoNico (Japanese video site) related
-
-log = logging.getLogger(__name__)
-
-class OriginalFFmpegPCMAudio(discord.FFmpegPCMAudio):
- def __init__(self,
- source,
- *,
- executable='ffmpeg',
- pipe=False,
- stderr=None,
- before_options=None,
- options=None):
- self.total_milliseconds = 0
- self.source = source
-
- super().__init__(source,
- executable=executable,
- pipe=pipe,
- stderr=stderr,
- before_options=before_options,
- options=options)
-
- def wait_buffer(self):
- self._stdout.peek(OpusEncoder.FRAME_SIZE)
-
- def read(self):
- ret = super().read()
-
- if ret:
- self.total_milliseconds += 20
- return ret
-
-    def get_total_millisecond(self, seek_time):
- if seek_time:
- list = reversed([int(x) for x in seek_time.split(":")])
- total = 0
- for i, x in enumerate(list):
- total += x * 3600 if i == 2 else x * 60 if i == 1 else x
- return max(1000 * total, 0)
- else:
- raise Exception()
-
- def rewind(self,
- rewind_time,
- *,
- executable='ffmpeg',
- pipe=False,
- stderr=None,
- before_options=None,
- options=None):
- seek_time = str(
- int((self.total_milliseconds -
-                 self.get_total_millisecond(rewind_time)) / 1000))
-
- self.seek(seek_time=seek_time,
- executable=executable,
- pipe=pipe,
- stderr=stderr,
- before_options=before_options,
- options=options)
-
- def seek(self,
- seek_time,
- *,
- executable='ffmpeg',
- pipe=False,
- stderr=None,
- before_options=None,
- options=None):
-        self.total_milliseconds = self.get_total_millisecond(seek_time)
- proc = self._process
- before_options = f"-ss {seek_time} " + before_options
- args = []
- subprocess_kwargs = {
- 'stdin': self.source if pipe else subprocess.DEVNULL,
- 'stderr': stderr
- }
-
- if isinstance(before_options, str):
- args.extend(shlex.split(before_options))
-
- args.append('-i')
- args.append('-' if pipe else self.source)
- args.extend(('-f', 's16le', '-ar', '48000', '-ac', '2', '-loglevel',
- 'warning'))
-
- if isinstance(options, str):
- args.extend(shlex.split(options))
-
- args.append('pipe:1')
-
- args = [executable, *args]
- kwargs = {'stdout': subprocess.PIPE}
- kwargs.update(subprocess_kwargs)
-
- self._process = self._spawn_process(args, **kwargs)
- self._stdout = self._process.stdout
- self.kill(proc)
-
- def kill(self, proc):
- if proc is None:
- return
-
- log.info('Preparing to terminate ffmpeg process %s.', proc.pid)
-
- try:
- proc.kill()
- except Exception:
- log.exception(
- "Ignoring error attempting to kill ffmpeg process %s",
- proc.pid)
-
- if proc.poll() is None:
- log.info(
- 'ffmpeg process %s has not terminated. Waiting to terminate...',
- proc.pid)
- proc.communicate()
- log.info(
- 'ffmpeg process %s should have terminated with a return code of %s.',
- proc.pid, proc.returncode)
- else:
- log.info(
- 'ffmpeg process %s successfully terminated with return code of %s.',
- proc.pid, proc.returncode)
-
-
-ytdl_format_options = {'format': 'bestaudio/best','outtmpl': '%(extractor)s-%(id)s-%(title)s.%(ext)s','restrictfilenames': True,'noplaylist': True,'nocheckcertificate': True,'ignoreerrors': False,'logtostderr': False,'quiet': True,'no_warnings': True,'default_search': 'auto','source_address':'0.0.0.0','user-agent':"Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:47.0) Gecko/20100101 Firefox/47.0"}
-ffmpeg_options = {'before_options':'-vn -reconnect 1 -reconnect_streamed 1 -reconnect_delay_max 5'}
-ytdlp = YoutubeDL(ytdl_format_options)
-
-class YTDLSource(discord.PCMVolumeTransformer):
- def __init__(self, source, *, data, volume=0.1):
- super().__init__(source, volume)
-
- self.data = data
-
- self.title = data.get('title')
- self.url = data.get('url')
-
- @classmethod
- async def from_url(cls, url, *, loop=None, stream=False, volume=0.1):
- loop = loop or asyncio.get_event_loop()
- data = await loop.run_in_executor(
- None, lambda: ytdlp.extract_info(url, download=not stream))
-
- if 'entries' in data:
- data = data['entries'][0]
-
- filename = data['url'] if stream else ytdlp.prepare_filename(data)
- source = OriginalFFmpegPCMAudio(filename, **ffmpeg_options)
- return cls(source, data=data, volume=volume)
-
-niconico_headers = {
- "Accept-Encoding": "gzip, deflate, br",
- "Accept-Language": "ja",
- "Connection": "keep-alive",
- "Host": "nvapi.nicovideo.jp",
- "Origin": "https://www.nicovideo.jp",
- "Referer": "https://www.nicovideo.jp/",
- "sec-ch-ua-mobile": "?0",
- "Sec-Fetch-Dest": "empty",
- "Sec-Fetch-Mode": "cors",
- "Sec-Fetch-Site": "same-site",
- "User-Agent":
- "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36",
- "X-Frontend-Id": "6",
- "X-Frontend-Version": "0",
- "X-Niconico-Language": "ja-jp"
-}
-
-headers = {
- "User-Agent":
- "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:47.0) Gecko/20100101 Firefox/47.0",
-}
-
-class NicoNicoDLSource(discord.PCMVolumeTransformer):
- def __init__(self, source, *, url, volume=0.1):
- super().__init__(source, volume)
-
- self.url = url
-
- @classmethod
- async def from_url(cls, url, *, log=False, volume=0.1):
- nico_id = url.split("/")[-1]
- niconico = niconico_dl(nico_id, log=log)
- stream_url = await niconico.get_download_link()
-
- source = OriginalFFmpegPCMAudio(stream_url, **ffmpeg_options)
- return (cls(source, url=stream_url, volume=volume), niconico)
\ No newline at end of file
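
As a quick aside, the seek/rewind logic in the deleted audio source above converts an "H:MM:SS"-style string into milliseconds before building ffmpeg's `-ss` argument. Below is a standalone sketch of that conversion; the helper name is ours, not part of the deleted bot.

```python
def timestamp_to_milliseconds(seek_time: str) -> int:
    # "1:02:03" -> read parts from the right: 3 s + 2 min + 1 h, then convert to ms
    parts = reversed([int(x) for x in seek_time.split(":")])
    total_seconds = 0
    for i, value in enumerate(parts):
        total_seconds += value * 3600 if i == 2 else value * 60 if i == 1 else value
    return max(1000 * total_seconds, 0)

assert timestamp_to_milliseconds("1:02:03") == 3_723_000
assert timestamp_to_milliseconds("45") == 45_000
```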
diff --git a/spaces/kazgafa/ChatGPT4/README.md b/spaces/kazgafa/ChatGPT4/README.md
deleted file mode 100644
index 7938de14e5355209aaae713f289ca469181bbb17..0000000000000000000000000000000000000000
--- a/spaces/kazgafa/ChatGPT4/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Chat-with-GPT4
-emoji: 🚀
-colorFrom: red
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.21.0
-app_file: app.py
-pinned: false
-license: mit
-duplicated_from: ysharma/ChatGPT4
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/kepl/gpt/g4f/Provider/Providers/Liaobots.py b/spaces/kepl/gpt/g4f/Provider/Providers/Liaobots.py
deleted file mode 100644
index a04b9574f60842d424712efcd8bef5f6e1e97f4f..0000000000000000000000000000000000000000
--- a/spaces/kepl/gpt/g4f/Provider/Providers/Liaobots.py
+++ /dev/null
@@ -1,64 +0,0 @@
-import os
-import uuid
-import requests
-from ...typing import sha256, Dict, get_type_hints
-
-url = 'https://liaobots.com'
-model = ['gpt-3.5-turbo', 'gpt-3.5-turbo-16k', 'gpt-4']
-supports_stream = True
-needs_auth = True
-working = False
-
-models = {
- 'gpt-4': {
- "id": "gpt-4",
- "name": "GPT-4",
- "maxLength": 24000,
- "tokenLimit": 8000
- },
- 'gpt-3.5-turbo': {
- "id": "gpt-3.5-turbo",
- "name": "GPT-3.5",
- "maxLength": 12000,
- "tokenLimit": 4000
- },
- 'gpt-3.5-turbo-16k': {
- "id": "gpt-3.5-turbo-16k",
- "name": "GPT-3.5-16k",
- "maxLength": 48000,
- "tokenLimit": 16000
- },
-}
-
-
-def _create_completion(model: str, messages: list, stream: bool, chatId: str, **kwargs):
-
- print(kwargs)
-
- headers = {
- 'authority': 'liaobots.com',
- 'content-type': 'application/json',
- 'origin': 'https://liaobots.com',
- 'referer': 'https://liaobots.com/',
- 'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36',
- 'x-auth-code': 'qlcUMVn1KLMhd'
- }
-
- json_data = {
- 'conversationId': chatId,
- 'model': models[model],
- 'messages': messages,
- 'key': '',
- 'prompt': "You are ChatGPT, a large language model trained by OpenAI. Follow the user's instructions carefully. Respond using markdown.",
- }
-
- response = requests.post('https://liaobots.com/api/chat',
- headers=headers, json=json_data, stream=True)
-
- for token in response.iter_content(chunk_size=2046):
- yield (token.decode('utf-8'))
-
-
-params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
- '(%s)' % ', '.join(
- [f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
diff --git a/spaces/keras-io/question_answering/README.md b/spaces/keras-io/question_answering/README.md
deleted file mode 100644
index 330916b034f68e93b6597f5252c350f7770a22dd..0000000000000000000000000000000000000000
--- a/spaces/keras-io/question_answering/README.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: Keras Question Answering
-emoji: ❤️
-colorFrom: indigo
-colorTo: indigo
-sdk: gradio
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio`, `streamlit`, or `static`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/kevinwang676/ChatGLM2-VC-SadTalker/src/facerender/animate.py b/spaces/kevinwang676/ChatGLM2-VC-SadTalker/src/facerender/animate.py
deleted file mode 100644
index 781f5a3318a086049cc6b74393073ddda7001d5e..0000000000000000000000000000000000000000
--- a/spaces/kevinwang676/ChatGLM2-VC-SadTalker/src/facerender/animate.py
+++ /dev/null
@@ -1,257 +0,0 @@
-import os
-import cv2
-import yaml
-import numpy as np
-import warnings
-from skimage import img_as_ubyte
-import safetensors
-import safetensors.torch
-warnings.filterwarnings('ignore')
-
-
-import imageio
-import torch
-import torchvision
-
-
-from src.facerender.modules.keypoint_detector import HEEstimator, KPDetector
-from src.facerender.modules.mapping import MappingNet
-from src.facerender.modules.generator import OcclusionAwareGenerator, OcclusionAwareSPADEGenerator
-from src.facerender.modules.make_animation import make_animation
-
-from pydub import AudioSegment
-from src.utils.face_enhancer import enhancer_generator_with_len, enhancer_list
-from src.utils.paste_pic import paste_pic
-from src.utils.videoio import save_video_with_watermark
-
-try:
- import webui # in webui
- in_webui = True
-except:
- in_webui = False
-
-class AnimateFromCoeff():
-
- def __init__(self, sadtalker_path, device):
-
- with open(sadtalker_path['facerender_yaml']) as f:
- config = yaml.safe_load(f)
-
- generator = OcclusionAwareSPADEGenerator(**config['model_params']['generator_params'],
- **config['model_params']['common_params'])
- kp_extractor = KPDetector(**config['model_params']['kp_detector_params'],
- **config['model_params']['common_params'])
- he_estimator = HEEstimator(**config['model_params']['he_estimator_params'],
- **config['model_params']['common_params'])
- mapping = MappingNet(**config['model_params']['mapping_params'])
-
- generator.to(device)
- kp_extractor.to(device)
- he_estimator.to(device)
- mapping.to(device)
- for param in generator.parameters():
- param.requires_grad = False
- for param in kp_extractor.parameters():
- param.requires_grad = False
- for param in he_estimator.parameters():
- param.requires_grad = False
- for param in mapping.parameters():
- param.requires_grad = False
-
- if sadtalker_path is not None:
- if 'checkpoint' in sadtalker_path: # use safe tensor
- self.load_cpk_facevid2vid_safetensor(sadtalker_path['checkpoint'], kp_detector=kp_extractor, generator=generator, he_estimator=None)
- else:
- self.load_cpk_facevid2vid(sadtalker_path['free_view_checkpoint'], kp_detector=kp_extractor, generator=generator, he_estimator=he_estimator)
- else:
- raise AttributeError("Checkpoint should be specified for video head pose estimator.")
-
- if sadtalker_path['mappingnet_checkpoint'] is not None:
- self.load_cpk_mapping(sadtalker_path['mappingnet_checkpoint'], mapping=mapping)
- else:
- raise AttributeError("Checkpoint should be specified for video head pose estimator.")
-
- self.kp_extractor = kp_extractor
- self.generator = generator
- self.he_estimator = he_estimator
- self.mapping = mapping
-
- self.kp_extractor.eval()
- self.generator.eval()
- self.he_estimator.eval()
- self.mapping.eval()
-
- self.device = device
-
- def load_cpk_facevid2vid_safetensor(self, checkpoint_path, generator=None,
- kp_detector=None, he_estimator=None,
- device="cpu"):
-
- checkpoint = safetensors.torch.load_file(checkpoint_path)
-
- if generator is not None:
- x_generator = {}
- for k,v in checkpoint.items():
- if 'generator' in k:
- x_generator[k.replace('generator.', '')] = v
- generator.load_state_dict(x_generator)
- if kp_detector is not None:
- x_generator = {}
- for k,v in checkpoint.items():
- if 'kp_extractor' in k:
- x_generator[k.replace('kp_extractor.', '')] = v
- kp_detector.load_state_dict(x_generator)
- if he_estimator is not None:
- x_generator = {}
- for k,v in checkpoint.items():
- if 'he_estimator' in k:
- x_generator[k.replace('he_estimator.', '')] = v
- he_estimator.load_state_dict(x_generator)
-
- return None
-
- def load_cpk_facevid2vid(self, checkpoint_path, generator=None, discriminator=None,
- kp_detector=None, he_estimator=None, optimizer_generator=None,
- optimizer_discriminator=None, optimizer_kp_detector=None,
- optimizer_he_estimator=None, device="cpu"):
- checkpoint = torch.load(checkpoint_path, map_location=torch.device(device))
- if generator is not None:
- generator.load_state_dict(checkpoint['generator'])
- if kp_detector is not None:
- kp_detector.load_state_dict(checkpoint['kp_detector'])
- if he_estimator is not None:
- he_estimator.load_state_dict(checkpoint['he_estimator'])
- if discriminator is not None:
- try:
- discriminator.load_state_dict(checkpoint['discriminator'])
- except:
-                print ('No discriminator in the state-dict. Discriminator will be randomly initialized')
- if optimizer_generator is not None:
- optimizer_generator.load_state_dict(checkpoint['optimizer_generator'])
- if optimizer_discriminator is not None:
- try:
- optimizer_discriminator.load_state_dict(checkpoint['optimizer_discriminator'])
- except RuntimeError as e:
-                print ('No discriminator optimizer in the state-dict. Optimizer will not be initialized')
- if optimizer_kp_detector is not None:
- optimizer_kp_detector.load_state_dict(checkpoint['optimizer_kp_detector'])
- if optimizer_he_estimator is not None:
- optimizer_he_estimator.load_state_dict(checkpoint['optimizer_he_estimator'])
-
- return checkpoint['epoch']
-
- def load_cpk_mapping(self, checkpoint_path, mapping=None, discriminator=None,
- optimizer_mapping=None, optimizer_discriminator=None, device='cpu'):
- checkpoint = torch.load(checkpoint_path, map_location=torch.device(device))
- if mapping is not None:
- mapping.load_state_dict(checkpoint['mapping'])
- if discriminator is not None:
- discriminator.load_state_dict(checkpoint['discriminator'])
- if optimizer_mapping is not None:
- optimizer_mapping.load_state_dict(checkpoint['optimizer_mapping'])
- if optimizer_discriminator is not None:
- optimizer_discriminator.load_state_dict(checkpoint['optimizer_discriminator'])
-
- return checkpoint['epoch']
-
- def generate(self, x, video_save_dir, pic_path, crop_info, enhancer=None, background_enhancer=None, preprocess='crop', img_size=256):
-
- source_image=x['source_image'].type(torch.FloatTensor)
- source_semantics=x['source_semantics'].type(torch.FloatTensor)
- target_semantics=x['target_semantics_list'].type(torch.FloatTensor)
- source_image=source_image.to(self.device)
- source_semantics=source_semantics.to(self.device)
- target_semantics=target_semantics.to(self.device)
- if 'yaw_c_seq' in x:
- yaw_c_seq = x['yaw_c_seq'].type(torch.FloatTensor)
- yaw_c_seq = x['yaw_c_seq'].to(self.device)
- else:
- yaw_c_seq = None
- if 'pitch_c_seq' in x:
- pitch_c_seq = x['pitch_c_seq'].type(torch.FloatTensor)
- pitch_c_seq = x['pitch_c_seq'].to(self.device)
- else:
- pitch_c_seq = None
- if 'roll_c_seq' in x:
- roll_c_seq = x['roll_c_seq'].type(torch.FloatTensor)
- roll_c_seq = x['roll_c_seq'].to(self.device)
- else:
- roll_c_seq = None
-
- frame_num = x['frame_num']
-
- predictions_video = make_animation(source_image, source_semantics, target_semantics,
- self.generator, self.kp_extractor, self.he_estimator, self.mapping,
- yaw_c_seq, pitch_c_seq, roll_c_seq, use_exp = True)
-
- predictions_video = predictions_video.reshape((-1,)+predictions_video.shape[2:])
- predictions_video = predictions_video[:frame_num]
-
- video = []
- for idx in range(predictions_video.shape[0]):
- image = predictions_video[idx]
- image = np.transpose(image.data.cpu().numpy(), [1, 2, 0]).astype(np.float32)
- video.append(image)
- result = img_as_ubyte(video)
-
-        ### the generated video is 256x256, so we keep the original aspect ratio when resizing it back
- original_size = crop_info[0]
- if original_size:
- result = [ cv2.resize(result_i,(img_size, int(img_size * original_size[1]/original_size[0]) )) for result_i in result ]
-
- video_name = x['video_name'] + '.mp4'
- path = os.path.join(video_save_dir, 'temp_'+video_name)
-
- imageio.mimsave(path, result, fps=float(25))
-
- av_path = os.path.join(video_save_dir, video_name)
- return_path = av_path
-
- audio_path = x['audio_path']
- audio_name = os.path.splitext(os.path.split(audio_path)[-1])[0]
- new_audio_path = os.path.join(video_save_dir, audio_name+'.wav')
- start_time = 0
- # cog will not keep the .mp3 filename
- sound = AudioSegment.from_file(audio_path)
- frames = frame_num
- end_time = start_time + frames*1/25*1000
- word1=sound.set_frame_rate(16000)
- word = word1[start_time:end_time]
- word.export(new_audio_path, format="wav")
-
- save_video_with_watermark(path, new_audio_path, av_path, watermark= False)
- print(f'The generated video is named {video_save_dir}/{video_name}')
-
- if 'full' in preprocess.lower():
- # only add watermark to the full image.
- video_name_full = x['video_name'] + '_full.mp4'
- full_video_path = os.path.join(video_save_dir, video_name_full)
- return_path = full_video_path
- paste_pic(path, pic_path, crop_info, new_audio_path, full_video_path, extended_crop= True if 'ext' in preprocess.lower() else False)
- print(f'The generated video is named {video_save_dir}/{video_name_full}')
- else:
- full_video_path = av_path
-
- #### paste back then enhancers
- if enhancer:
- video_name_enhancer = x['video_name'] + '_enhanced.mp4'
- enhanced_path = os.path.join(video_save_dir, 'temp_'+video_name_enhancer)
- av_path_enhancer = os.path.join(video_save_dir, video_name_enhancer)
- return_path = av_path_enhancer
-
- try:
- enhanced_images_gen_with_len = enhancer_generator_with_len(full_video_path, method=enhancer, bg_upsampler=background_enhancer)
- imageio.mimsave(enhanced_path, enhanced_images_gen_with_len, fps=float(25))
- except:
- enhanced_images_gen_with_len = enhancer_list(full_video_path, method=enhancer, bg_upsampler=background_enhancer)
- imageio.mimsave(enhanced_path, enhanced_images_gen_with_len, fps=float(25))
-
- save_video_with_watermark(enhanced_path, new_audio_path, av_path_enhancer, watermark= False)
- print(f'The generated video is named {video_save_dir}/{video_name_enhancer}')
- os.remove(enhanced_path)
-
- os.remove(path)
- os.remove(new_audio_path)
-
- return return_path
-
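
The `load_cpk_facevid2vid_safetensor` method above repeatedly filters a flat checkpoint down to one sub-module's weights. A minimal standalone sketch of that pattern follows, assuming a real `.safetensors` file; the module and file names in the comment are placeholders.

```python
import safetensors.torch
import torch

def load_submodule_weights(module: torch.nn.Module, checkpoint_path: str, prefix: str) -> None:
    # Flat checkpoint keys look like "generator.conv1.weight"; keep only the entries
    # belonging to this sub-module and strip the prefix before loading.
    checkpoint = safetensors.torch.load_file(checkpoint_path)
    filtered = {k[len(prefix):]: v for k, v in checkpoint.items() if k.startswith(prefix)}
    module.load_state_dict(filtered)

# e.g. load_submodule_weights(generator, "sadtalker_checkpoint.safetensors", "generator.")
# (file name here is illustrative)
```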
diff --git a/spaces/kevinwang676/OpenAI-TTS-Voice-Conversion/README.md b/spaces/kevinwang676/OpenAI-TTS-Voice-Conversion/README.md
deleted file mode 100644
index e5ea6b93603ca0251e911bdb861b93c970ba3f4b..0000000000000000000000000000000000000000
--- a/spaces/kevinwang676/OpenAI-TTS-Voice-Conversion/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: OpenAI TTS New
-emoji: 📊
-colorFrom: gray
-colorTo: purple
-sdk: gradio
-sdk_version: 4.1.1
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/kevinwang676/VoiceChangers/src/face3d/models/arcface_torch/configs/glint360k_r34.py b/spaces/kevinwang676/VoiceChangers/src/face3d/models/arcface_torch/configs/glint360k_r34.py
deleted file mode 100644
index fda2701758a839a7161d09c25f0ca3d26033baff..0000000000000000000000000000000000000000
--- a/spaces/kevinwang676/VoiceChangers/src/face3d/models/arcface_torch/configs/glint360k_r34.py
+++ /dev/null
@@ -1,26 +0,0 @@
-from easydict import EasyDict as edict
-
-# make training faster
-# our RAM is 256G
-# mount -t tmpfs -o size=140G tmpfs /train_tmp
-
-config = edict()
-config.loss = "cosface"
-config.network = "r34"
-config.resume = False
-config.output = None
-config.embedding_size = 512
-config.sample_rate = 1.0
-config.fp16 = True
-config.momentum = 0.9
-config.weight_decay = 5e-4
-config.batch_size = 128
-config.lr = 0.1 # batch size is 512
-
-config.rec = "/train_tmp/glint360k"
-config.num_classes = 360232
-config.num_image = 17091657
-config.num_epoch = 20
-config.warmup_epoch = -1
-config.decay_epoch = [8, 12, 15, 18]
-config.val_targets = ["lfw", "cfp_fp", "agedb_30"]
diff --git a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv/ops/__init__.py b/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv/ops/__init__.py
deleted file mode 100644
index 999e090a458ee148ceca0649f1e3806a40e909bd..0000000000000000000000000000000000000000
--- a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv/ops/__init__.py
+++ /dev/null
@@ -1,81 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from .assign_score_withk import assign_score_withk
-from .ball_query import ball_query
-from .bbox import bbox_overlaps
-from .border_align import BorderAlign, border_align
-from .box_iou_rotated import box_iou_rotated
-from .carafe import CARAFE, CARAFENaive, CARAFEPack, carafe, carafe_naive
-from .cc_attention import CrissCrossAttention
-from .contour_expand import contour_expand
-from .corner_pool import CornerPool
-from .correlation import Correlation
-from .deform_conv import DeformConv2d, DeformConv2dPack, deform_conv2d
-from .deform_roi_pool import (DeformRoIPool, DeformRoIPoolPack,
- ModulatedDeformRoIPoolPack, deform_roi_pool)
-from .deprecated_wrappers import Conv2d_deprecated as Conv2d
-from .deprecated_wrappers import ConvTranspose2d_deprecated as ConvTranspose2d
-from .deprecated_wrappers import Linear_deprecated as Linear
-from .deprecated_wrappers import MaxPool2d_deprecated as MaxPool2d
-from .focal_loss import (SigmoidFocalLoss, SoftmaxFocalLoss,
- sigmoid_focal_loss, softmax_focal_loss)
-from .furthest_point_sample import (furthest_point_sample,
- furthest_point_sample_with_dist)
-from .fused_bias_leakyrelu import FusedBiasLeakyReLU, fused_bias_leakyrelu
-from .gather_points import gather_points
-from .group_points import GroupAll, QueryAndGroup, grouping_operation
-from .info import (get_compiler_version, get_compiling_cuda_version,
- get_onnxruntime_op_path)
-from .iou3d import boxes_iou_bev, nms_bev, nms_normal_bev
-from .knn import knn
-from .masked_conv import MaskedConv2d, masked_conv2d
-from .modulated_deform_conv import (ModulatedDeformConv2d,
- ModulatedDeformConv2dPack,
- modulated_deform_conv2d)
-from .multi_scale_deform_attn import MultiScaleDeformableAttention
-from .nms import batched_nms, nms, nms_match, nms_rotated, soft_nms
-from .pixel_group import pixel_group
-from .point_sample import (SimpleRoIAlign, point_sample,
- rel_roi_point_to_rel_img_point)
-from .points_in_boxes import (points_in_boxes_all, points_in_boxes_cpu,
- points_in_boxes_part)
-from .points_sampler import PointsSampler
-from .psa_mask import PSAMask
-from .roi_align import RoIAlign, roi_align
-from .roi_align_rotated import RoIAlignRotated, roi_align_rotated
-from .roi_pool import RoIPool, roi_pool
-from .roiaware_pool3d import RoIAwarePool3d
-from .roipoint_pool3d import RoIPointPool3d
-from .saconv import SAConv2d
-from .scatter_points import DynamicScatter, dynamic_scatter
-from .sync_bn import SyncBatchNorm
-from .three_interpolate import three_interpolate
-from .three_nn import three_nn
-from .tin_shift import TINShift, tin_shift
-from .upfirdn2d import upfirdn2d
-from .voxelize import Voxelization, voxelization
-
-__all__ = [
- 'bbox_overlaps', 'CARAFE', 'CARAFENaive', 'CARAFEPack', 'carafe',
- 'carafe_naive', 'CornerPool', 'DeformConv2d', 'DeformConv2dPack',
- 'deform_conv2d', 'DeformRoIPool', 'DeformRoIPoolPack',
- 'ModulatedDeformRoIPoolPack', 'deform_roi_pool', 'SigmoidFocalLoss',
- 'SoftmaxFocalLoss', 'sigmoid_focal_loss', 'softmax_focal_loss',
- 'get_compiler_version', 'get_compiling_cuda_version',
- 'get_onnxruntime_op_path', 'MaskedConv2d', 'masked_conv2d',
- 'ModulatedDeformConv2d', 'ModulatedDeformConv2dPack',
- 'modulated_deform_conv2d', 'batched_nms', 'nms', 'soft_nms', 'nms_match',
- 'RoIAlign', 'roi_align', 'RoIPool', 'roi_pool', 'SyncBatchNorm', 'Conv2d',
- 'ConvTranspose2d', 'Linear', 'MaxPool2d', 'CrissCrossAttention', 'PSAMask',
- 'point_sample', 'rel_roi_point_to_rel_img_point', 'SimpleRoIAlign',
- 'SAConv2d', 'TINShift', 'tin_shift', 'assign_score_withk',
- 'box_iou_rotated', 'RoIPointPool3d', 'nms_rotated', 'knn', 'ball_query',
- 'upfirdn2d', 'FusedBiasLeakyReLU', 'fused_bias_leakyrelu',
- 'RoIAlignRotated', 'roi_align_rotated', 'pixel_group', 'QueryAndGroup',
- 'GroupAll', 'grouping_operation', 'contour_expand', 'three_nn',
- 'three_interpolate', 'MultiScaleDeformableAttention', 'BorderAlign',
- 'border_align', 'gather_points', 'furthest_point_sample',
- 'furthest_point_sample_with_dist', 'PointsSampler', 'Correlation',
- 'boxes_iou_bev', 'nms_bev', 'nms_normal_bev', 'Voxelization',
- 'voxelization', 'dynamic_scatter', 'DynamicScatter', 'RoIAwarePool3d',
- 'points_in_boxes_part', 'points_in_boxes_cpu', 'points_in_boxes_all'
-]
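
For reference, a short usage sketch of one op re-exported by the deleted `__init__` above, written against the upstream mmcv package (which this vendored copy mirrors); the boxes and scores are toy values.

```python
import torch
from mmcv.ops import nms

# Two heavily overlapping boxes and one separate box.
boxes = torch.tensor([[0.0, 0.0, 10.0, 10.0],
                      [1.0, 1.0, 11.0, 11.0],
                      [50.0, 50.0, 60.0, 60.0]])
scores = torch.tensor([0.9, 0.8, 0.7])

dets, keep = nms(boxes, scores, iou_threshold=0.5)
# dets stacks the kept boxes with their scores; keep holds the kept indices
print(keep)  # expected: the first and third boxes survive
```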
diff --git a/spaces/kokuma/img-to-music/app.py b/spaces/kokuma/img-to-music/app.py
deleted file mode 100644
index 71d588c6c9a55424fae557f3796742674aacc737..0000000000000000000000000000000000000000
--- a/spaces/kokuma/img-to-music/app.py
+++ /dev/null
@@ -1,96 +0,0 @@
-import gradio as gr
-import os
-import requests
-import urllib
-
-from os import path
-from pydub import AudioSegment
-
-img_to_text = gr.Blocks.load(name="spaces/fffiloni/CLIP-Interrogator-2")
-text_to_music = gr.Interface.load("spaces/fffiloni/text-2-music")
-
-def get_prompts(uploaded_image):
- prompt = img_to_text(uploaded_image, "ViT-L (best for Stable Diffusion 1.*)", "fast", fn_index=1)[0]
- music_result = get_music(prompt)
- print(f"""—————
- PROMPT: {prompt}
- ———————
- """)
- return music_result, prompt
-
-
-def get_music(prompt):
-
- result = text_to_music(prompt, fn_index=0)
-
- print(f"""—————
- MUSIC
- prompt: {result}
- ———————
- """)
-
- url = result
- save_as = "file.mp3"
-
- data = urllib.request.urlopen(url)
-
- f = open(save_as,'wb')
- f.write(data.read())
- f.close()
-
- wave_file="file.wav"
-
- sound = AudioSegment.from_mp3(save_as)
- sound.export(wave_file, format="wav")
-
- return wave_file
-
-
-css = """
-#col-container {max-width: 700px; margin-left: auto; margin-right: auto;}
-a {text-decoration-line: underline; font-weight: 600;}
-.animate-spin {
- animation: spin 1s linear infinite;
-}
-@keyframes spin {
- from {
- transform: rotate(0deg);
- }
- to {
- transform: rotate(360deg);
- }
-}
-"""
-
-with gr.Blocks(css=css) as demo:
- with gr.Column(elem_id="col-container"):
- gr.HTML("""
-
-
- Image to Music
-
-
-
- Sends an image in to CLIP Interrogator
- to generate a text prompt which is then run through
- Mubert text-to-music to generate music from the input image!
-
-
""")
-
-
- input_img = gr.Image(type="filepath", elem_id="input-img")
- generate = gr.Button("Generate Music from Image")
-
- music_output = gr.Audio(label="Result", type="filepath", elem_id="music-output")
- prompt_text = gr.Textbox(label="Prompt")
-
- generate.click(get_prompts, inputs=[input_img], outputs=[music_output, prompt_text], api_name="i2m")
-
-demo.queue(max_size=32, concurrency_count=20).launch()
\ No newline at end of file
diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fastapi/params.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fastapi/params.py
deleted file mode 100644
index 16c5c309a785f9e6d2a6f9cbd82cadd5971fa819..0000000000000000000000000000000000000000
--- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fastapi/params.py
+++ /dev/null
@@ -1,381 +0,0 @@
-from enum import Enum
-from typing import Any, Callable, Dict, Optional, Sequence
-
-from pydantic.fields import FieldInfo, Undefined
-
-
-class ParamTypes(Enum):
- query = "query"
- header = "header"
- path = "path"
- cookie = "cookie"
-
-
-class Param(FieldInfo):
- in_: ParamTypes
-
- def __init__(
- self,
- default: Any = Undefined,
- *,
- alias: Optional[str] = None,
- title: Optional[str] = None,
- description: Optional[str] = None,
- gt: Optional[float] = None,
- ge: Optional[float] = None,
- lt: Optional[float] = None,
- le: Optional[float] = None,
- min_length: Optional[int] = None,
- max_length: Optional[int] = None,
- regex: Optional[str] = None,
- example: Any = Undefined,
- examples: Optional[Dict[str, Any]] = None,
- deprecated: Optional[bool] = None,
- include_in_schema: bool = True,
- **extra: Any,
- ):
- self.deprecated = deprecated
- self.example = example
- self.examples = examples
- self.include_in_schema = include_in_schema
- super().__init__(
- default=default,
- alias=alias,
- title=title,
- description=description,
- gt=gt,
- ge=ge,
- lt=lt,
- le=le,
- min_length=min_length,
- max_length=max_length,
- regex=regex,
- **extra,
- )
-
- def __repr__(self) -> str:
- return f"{self.__class__.__name__}({self.default})"
-
-
-class Path(Param):
- in_ = ParamTypes.path
-
- def __init__(
- self,
- default: Any = ...,
- *,
- alias: Optional[str] = None,
- title: Optional[str] = None,
- description: Optional[str] = None,
- gt: Optional[float] = None,
- ge: Optional[float] = None,
- lt: Optional[float] = None,
- le: Optional[float] = None,
- min_length: Optional[int] = None,
- max_length: Optional[int] = None,
- regex: Optional[str] = None,
- example: Any = Undefined,
- examples: Optional[Dict[str, Any]] = None,
- deprecated: Optional[bool] = None,
- include_in_schema: bool = True,
- **extra: Any,
- ):
- assert default is ..., "Path parameters cannot have a default value"
- self.in_ = self.in_
- super().__init__(
- default=default,
- alias=alias,
- title=title,
- description=description,
- gt=gt,
- ge=ge,
- lt=lt,
- le=le,
- min_length=min_length,
- max_length=max_length,
- regex=regex,
- deprecated=deprecated,
- example=example,
- examples=examples,
- include_in_schema=include_in_schema,
- **extra,
- )
-
-
-class Query(Param):
- in_ = ParamTypes.query
-
- def __init__(
- self,
- default: Any = Undefined,
- *,
- alias: Optional[str] = None,
- title: Optional[str] = None,
- description: Optional[str] = None,
- gt: Optional[float] = None,
- ge: Optional[float] = None,
- lt: Optional[float] = None,
- le: Optional[float] = None,
- min_length: Optional[int] = None,
- max_length: Optional[int] = None,
- regex: Optional[str] = None,
- example: Any = Undefined,
- examples: Optional[Dict[str, Any]] = None,
- deprecated: Optional[bool] = None,
- include_in_schema: bool = True,
- **extra: Any,
- ):
- super().__init__(
- default=default,
- alias=alias,
- title=title,
- description=description,
- gt=gt,
- ge=ge,
- lt=lt,
- le=le,
- min_length=min_length,
- max_length=max_length,
- regex=regex,
- deprecated=deprecated,
- example=example,
- examples=examples,
- include_in_schema=include_in_schema,
- **extra,
- )
-
-
-class Header(Param):
- in_ = ParamTypes.header
-
- def __init__(
- self,
- default: Any = Undefined,
- *,
- alias: Optional[str] = None,
- convert_underscores: bool = True,
- title: Optional[str] = None,
- description: Optional[str] = None,
- gt: Optional[float] = None,
- ge: Optional[float] = None,
- lt: Optional[float] = None,
- le: Optional[float] = None,
- min_length: Optional[int] = None,
- max_length: Optional[int] = None,
- regex: Optional[str] = None,
- example: Any = Undefined,
- examples: Optional[Dict[str, Any]] = None,
- deprecated: Optional[bool] = None,
- include_in_schema: bool = True,
- **extra: Any,
- ):
- self.convert_underscores = convert_underscores
- super().__init__(
- default=default,
- alias=alias,
- title=title,
- description=description,
- gt=gt,
- ge=ge,
- lt=lt,
- le=le,
- min_length=min_length,
- max_length=max_length,
- regex=regex,
- deprecated=deprecated,
- example=example,
- examples=examples,
- include_in_schema=include_in_schema,
- **extra,
- )
-
-
-class Cookie(Param):
- in_ = ParamTypes.cookie
-
- def __init__(
- self,
- default: Any = Undefined,
- *,
- alias: Optional[str] = None,
- title: Optional[str] = None,
- description: Optional[str] = None,
- gt: Optional[float] = None,
- ge: Optional[float] = None,
- lt: Optional[float] = None,
- le: Optional[float] = None,
- min_length: Optional[int] = None,
- max_length: Optional[int] = None,
- regex: Optional[str] = None,
- example: Any = Undefined,
- examples: Optional[Dict[str, Any]] = None,
- deprecated: Optional[bool] = None,
- include_in_schema: bool = True,
- **extra: Any,
- ):
- super().__init__(
- default=default,
- alias=alias,
- title=title,
- description=description,
- gt=gt,
- ge=ge,
- lt=lt,
- le=le,
- min_length=min_length,
- max_length=max_length,
- regex=regex,
- deprecated=deprecated,
- example=example,
- examples=examples,
- include_in_schema=include_in_schema,
- **extra,
- )
-
-
-class Body(FieldInfo):
- def __init__(
- self,
- default: Any = Undefined,
- *,
- embed: bool = False,
- media_type: str = "application/json",
- alias: Optional[str] = None,
- title: Optional[str] = None,
- description: Optional[str] = None,
- gt: Optional[float] = None,
- ge: Optional[float] = None,
- lt: Optional[float] = None,
- le: Optional[float] = None,
- min_length: Optional[int] = None,
- max_length: Optional[int] = None,
- regex: Optional[str] = None,
- example: Any = Undefined,
- examples: Optional[Dict[str, Any]] = None,
- **extra: Any,
- ):
- self.embed = embed
- self.media_type = media_type
- self.example = example
- self.examples = examples
- super().__init__(
- default=default,
- alias=alias,
- title=title,
- description=description,
- gt=gt,
- ge=ge,
- lt=lt,
- le=le,
- min_length=min_length,
- max_length=max_length,
- regex=regex,
- **extra,
- )
-
- def __repr__(self) -> str:
- return f"{self.__class__.__name__}({self.default})"
-
-
-class Form(Body):
- def __init__(
- self,
- default: Any = Undefined,
- *,
- media_type: str = "application/x-www-form-urlencoded",
- alias: Optional[str] = None,
- title: Optional[str] = None,
- description: Optional[str] = None,
- gt: Optional[float] = None,
- ge: Optional[float] = None,
- lt: Optional[float] = None,
- le: Optional[float] = None,
- min_length: Optional[int] = None,
- max_length: Optional[int] = None,
- regex: Optional[str] = None,
- example: Any = Undefined,
- examples: Optional[Dict[str, Any]] = None,
- **extra: Any,
- ):
- super().__init__(
- default=default,
- embed=True,
- media_type=media_type,
- alias=alias,
- title=title,
- description=description,
- gt=gt,
- ge=ge,
- lt=lt,
- le=le,
- min_length=min_length,
- max_length=max_length,
- regex=regex,
- example=example,
- examples=examples,
- **extra,
- )
-
-
-class File(Form):
- def __init__(
- self,
- default: Any = Undefined,
- *,
- media_type: str = "multipart/form-data",
- alias: Optional[str] = None,
- title: Optional[str] = None,
- description: Optional[str] = None,
- gt: Optional[float] = None,
- ge: Optional[float] = None,
- lt: Optional[float] = None,
- le: Optional[float] = None,
- min_length: Optional[int] = None,
- max_length: Optional[int] = None,
- regex: Optional[str] = None,
- example: Any = Undefined,
- examples: Optional[Dict[str, Any]] = None,
- **extra: Any,
- ):
- super().__init__(
- default=default,
- media_type=media_type,
- alias=alias,
- title=title,
- description=description,
- gt=gt,
- ge=ge,
- lt=lt,
- le=le,
- min_length=min_length,
- max_length=max_length,
- regex=regex,
- example=example,
- examples=examples,
- **extra,
- )
-
-
-class Depends:
- def __init__(
- self, dependency: Optional[Callable[..., Any]] = None, *, use_cache: bool = True
- ):
- self.dependency = dependency
- self.use_cache = use_cache
-
- def __repr__(self) -> str:
- attr = getattr(self.dependency, "__name__", type(self.dependency).__name__)
- cache = "" if self.use_cache else ", use_cache=False"
- return f"{self.__class__.__name__}({attr}{cache})"
-
-
-class Security(Depends):
- def __init__(
- self,
- dependency: Optional[Callable[..., Any]] = None,
- *,
- scopes: Optional[Sequence[str]] = None,
- use_cache: bool = True,
- ):
- super().__init__(dependency=dependency, use_cache=use_cache)
- self.scopes = scopes or []
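
The Param subclasses removed above are normally reached through FastAPI's public helpers of the same names rather than instantiated directly. A minimal sketch of that usage, assuming a hypothetical items route and handler names:

from typing import Optional
from fastapi import Depends, FastAPI, Path, Query

app = FastAPI()

def pagination(limit: int = Query(10, le=100), offset: int = Query(0, ge=0)):
    # Resolved once per request and reused if declared again, since use_cache defaults to True.
    return {"limit": limit, "offset": offset}

@app.get("/items/{item_id}")
def read_item(
    item_id: int = Path(..., ge=1),                 # Path() asserts that no real default is given
    q: Optional[str] = Query(None, max_length=50),  # validated query parameter
    page: dict = Depends(pagination),
):
    return {"item_id": item_id, "q": q, **page}
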
diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/O_S_2f_2.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/O_S_2f_2.py
deleted file mode 100644
index fc50b228a8c8d73463e5a1ff9e9730812306ee85..0000000000000000000000000000000000000000
--- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/O_S_2f_2.py
+++ /dev/null
@@ -1,610 +0,0 @@
-from fontTools.misc import sstruct
-from fontTools.misc.roundTools import otRound
-from fontTools.misc.textTools import safeEval, num2binary, binary2num
-from fontTools.ttLib.tables import DefaultTable
-import bisect
-import logging
-
-
-log = logging.getLogger(__name__)
-
-# panose classification
-
-panoseFormat = """
- bFamilyType: B
- bSerifStyle: B
- bWeight: B
- bProportion: B
- bContrast: B
- bStrokeVariation: B
- bArmStyle: B
- bLetterForm: B
- bMidline: B
- bXHeight: B
-"""
-
-
-class Panose(object):
- def toXML(self, writer, ttFont):
- formatstring, names, fixes = sstruct.getformat(panoseFormat)
- for name in names:
- writer.simpletag(name, value=getattr(self, name))
- writer.newline()
-
- def fromXML(self, name, attrs, content, ttFont):
- setattr(self, name, safeEval(attrs["value"]))
-
-
-# 'sfnt' OS/2 and Windows Metrics table - 'OS/2'
-
-OS2_format_0 = """
- > # big endian
- version: H # version
- xAvgCharWidth: h # average character width
- usWeightClass: H # degree of thickness of strokes
- usWidthClass: H # aspect ratio
- fsType: H # type flags
- ySubscriptXSize: h # subscript horizontal font size
- ySubscriptYSize: h # subscript vertical font size
- ySubscriptXOffset: h # subscript x offset
- ySubscriptYOffset: h # subscript y offset
- ySuperscriptXSize: h # superscript horizontal font size
- ySuperscriptYSize: h # superscript vertical font size
- ySuperscriptXOffset: h # superscript x offset
- ySuperscriptYOffset: h # superscript y offset
- yStrikeoutSize: h # strikeout size
- yStrikeoutPosition: h # strikeout position
- sFamilyClass: h # font family class and subclass
- panose: 10s # panose classification number
- ulUnicodeRange1: L # character range
- ulUnicodeRange2: L # character range
- ulUnicodeRange3: L # character range
- ulUnicodeRange4: L # character range
- achVendID: 4s # font vendor identification
- fsSelection: H # font selection flags
- usFirstCharIndex: H # first unicode character index
- usLastCharIndex: H # last unicode character index
- sTypoAscender: h # typographic ascender
- sTypoDescender: h # typographic descender
- sTypoLineGap: h # typographic line gap
- usWinAscent: H # Windows ascender
- usWinDescent: H # Windows descender
-"""
-
-OS2_format_1_addition = """
- ulCodePageRange1: L
- ulCodePageRange2: L
-"""
-
-OS2_format_2_addition = (
- OS2_format_1_addition
- + """
- sxHeight: h
- sCapHeight: h
- usDefaultChar: H
- usBreakChar: H
- usMaxContext: H
-"""
-)
-
-OS2_format_5_addition = (
- OS2_format_2_addition
- + """
- usLowerOpticalPointSize: H
- usUpperOpticalPointSize: H
-"""
-)
-
-bigendian = " > # big endian\n"
-
-OS2_format_1 = OS2_format_0 + OS2_format_1_addition
-OS2_format_2 = OS2_format_0 + OS2_format_2_addition
-OS2_format_5 = OS2_format_0 + OS2_format_5_addition
-OS2_format_1_addition = bigendian + OS2_format_1_addition
-OS2_format_2_addition = bigendian + OS2_format_2_addition
-OS2_format_5_addition = bigendian + OS2_format_5_addition
-
-
-class table_O_S_2f_2(DefaultTable.DefaultTable):
-
- """the OS/2 table"""
-
- dependencies = ["head"]
-
- def decompile(self, data, ttFont):
- dummy, data = sstruct.unpack2(OS2_format_0, data, self)
-
- if self.version == 1:
- dummy, data = sstruct.unpack2(OS2_format_1_addition, data, self)
- elif self.version in (2, 3, 4):
- dummy, data = sstruct.unpack2(OS2_format_2_addition, data, self)
- elif self.version == 5:
- dummy, data = sstruct.unpack2(OS2_format_5_addition, data, self)
- self.usLowerOpticalPointSize /= 20
- self.usUpperOpticalPointSize /= 20
- elif self.version != 0:
- from fontTools import ttLib
-
- raise ttLib.TTLibError(
- "unknown format for OS/2 table: version %s" % self.version
- )
- if len(data):
- log.warning("too much 'OS/2' table data")
-
- self.panose = sstruct.unpack(panoseFormat, self.panose, Panose())
-
- def compile(self, ttFont):
- self.updateFirstAndLastCharIndex(ttFont)
- panose = self.panose
- head = ttFont["head"]
- if (self.fsSelection & 1) and not (head.macStyle & 1 << 1):
- log.warning(
- "fsSelection bit 0 (italic) and "
- "head table macStyle bit 1 (italic) should match"
- )
- if (self.fsSelection & 1 << 5) and not (head.macStyle & 1):
- log.warning(
- "fsSelection bit 5 (bold) and "
- "head table macStyle bit 0 (bold) should match"
- )
- if (self.fsSelection & 1 << 6) and (self.fsSelection & 1 + (1 << 5)):
- log.warning(
- "fsSelection bit 6 (regular) is set, "
- "bits 0 (italic) and 5 (bold) must be clear"
- )
- if self.version < 4 and self.fsSelection & 0b1110000000:
- log.warning(
- "fsSelection bits 7, 8 and 9 are only defined in "
- "OS/2 table version 4 and up: version %s",
- self.version,
- )
- self.panose = sstruct.pack(panoseFormat, self.panose)
- if self.version == 0:
- data = sstruct.pack(OS2_format_0, self)
- elif self.version == 1:
- data = sstruct.pack(OS2_format_1, self)
- elif self.version in (2, 3, 4):
- data = sstruct.pack(OS2_format_2, self)
- elif self.version == 5:
- d = self.__dict__.copy()
- d["usLowerOpticalPointSize"] = round(self.usLowerOpticalPointSize * 20)
- d["usUpperOpticalPointSize"] = round(self.usUpperOpticalPointSize * 20)
- data = sstruct.pack(OS2_format_5, d)
- else:
- from fontTools import ttLib
-
- raise ttLib.TTLibError(
- "unknown format for OS/2 table: version %s" % self.version
- )
- self.panose = panose
- return data
-
- def toXML(self, writer, ttFont):
- writer.comment(
- "The fields 'usFirstCharIndex' and 'usLastCharIndex'\n"
- "will be recalculated by the compiler"
- )
- writer.newline()
- if self.version == 1:
- format = OS2_format_1
- elif self.version in (2, 3, 4):
- format = OS2_format_2
- elif self.version == 5:
- format = OS2_format_5
- else:
- format = OS2_format_0
- formatstring, names, fixes = sstruct.getformat(format)
- for name in names:
- value = getattr(self, name)
- if name == "panose":
- writer.begintag("panose")
- writer.newline()
- value.toXML(writer, ttFont)
- writer.endtag("panose")
- elif name in (
- "ulUnicodeRange1",
- "ulUnicodeRange2",
- "ulUnicodeRange3",
- "ulUnicodeRange4",
- "ulCodePageRange1",
- "ulCodePageRange2",
- ):
- writer.simpletag(name, value=num2binary(value))
- elif name in ("fsType", "fsSelection"):
- writer.simpletag(name, value=num2binary(value, 16))
- elif name == "achVendID":
- writer.simpletag(name, value=repr(value)[1:-1])
- else:
- writer.simpletag(name, value=value)
- writer.newline()
-
- def fromXML(self, name, attrs, content, ttFont):
- if name == "panose":
- self.panose = panose = Panose()
- for element in content:
- if isinstance(element, tuple):
- name, attrs, content = element
- panose.fromXML(name, attrs, content, ttFont)
- elif name in (
- "ulUnicodeRange1",
- "ulUnicodeRange2",
- "ulUnicodeRange3",
- "ulUnicodeRange4",
- "ulCodePageRange1",
- "ulCodePageRange2",
- "fsType",
- "fsSelection",
- ):
- setattr(self, name, binary2num(attrs["value"]))
- elif name == "achVendID":
- setattr(self, name, safeEval("'''" + attrs["value"] + "'''"))
- else:
- setattr(self, name, safeEval(attrs["value"]))
-
- def updateFirstAndLastCharIndex(self, ttFont):
- if "cmap" not in ttFont:
- return
- codes = set()
- for table in getattr(ttFont["cmap"], "tables", []):
- if table.isUnicode():
- codes.update(table.cmap.keys())
- if codes:
- minCode = min(codes)
- maxCode = max(codes)
- # USHORT cannot hold codepoints greater than 0xFFFF
- self.usFirstCharIndex = min(0xFFFF, minCode)
- self.usLastCharIndex = min(0xFFFF, maxCode)
-
- # misspelled attributes kept for legacy reasons
-
- @property
- def usMaxContex(self):
- return self.usMaxContext
-
- @usMaxContex.setter
- def usMaxContex(self, value):
- self.usMaxContext = value
-
- @property
- def fsFirstCharIndex(self):
- return self.usFirstCharIndex
-
- @fsFirstCharIndex.setter
- def fsFirstCharIndex(self, value):
- self.usFirstCharIndex = value
-
- @property
- def fsLastCharIndex(self):
- return self.usLastCharIndex
-
- @fsLastCharIndex.setter
- def fsLastCharIndex(self, value):
- self.usLastCharIndex = value
-
- def getUnicodeRanges(self):
- """Return the set of 'ulUnicodeRange*' bits currently enabled."""
- bits = set()
- ul1, ul2 = self.ulUnicodeRange1, self.ulUnicodeRange2
- ul3, ul4 = self.ulUnicodeRange3, self.ulUnicodeRange4
- for i in range(32):
- if ul1 & (1 << i):
- bits.add(i)
- if ul2 & (1 << i):
- bits.add(i + 32)
- if ul3 & (1 << i):
- bits.add(i + 64)
- if ul4 & (1 << i):
- bits.add(i + 96)
- return bits
-
- def setUnicodeRanges(self, bits):
- """Set the 'ulUnicodeRange*' fields to the specified 'bits'."""
- ul1, ul2, ul3, ul4 = 0, 0, 0, 0
- for bit in bits:
- if 0 <= bit < 32:
- ul1 |= 1 << bit
- elif 32 <= bit < 64:
- ul2 |= 1 << (bit - 32)
- elif 64 <= bit < 96:
- ul3 |= 1 << (bit - 64)
- elif 96 <= bit < 123:
- ul4 |= 1 << (bit - 96)
- else:
- raise ValueError("expected 0 <= int <= 122, found: %r" % bit)
- self.ulUnicodeRange1, self.ulUnicodeRange2 = ul1, ul2
- self.ulUnicodeRange3, self.ulUnicodeRange4 = ul3, ul4
-
- def recalcUnicodeRanges(self, ttFont, pruneOnly=False):
- """Intersect the codepoints in the font's Unicode cmap subtables with
- the Unicode block ranges defined in the OpenType specification (v1.7),
- and set the respective 'ulUnicodeRange*' bits if there is at least ONE
- intersection.
- If 'pruneOnly' is True, only clear unused bits with NO intersection.
- """
- unicodes = set()
- for table in ttFont["cmap"].tables:
- if table.isUnicode():
- unicodes.update(table.cmap.keys())
- if pruneOnly:
- empty = intersectUnicodeRanges(unicodes, inverse=True)
- bits = self.getUnicodeRanges() - empty
- else:
- bits = intersectUnicodeRanges(unicodes)
- self.setUnicodeRanges(bits)
- return bits
-
- def recalcAvgCharWidth(self, ttFont):
- """Recalculate xAvgCharWidth using metrics from ttFont's 'hmtx' table.
-
- Set it to 0 in the unlikely event the 'hmtx' table is not found.
- """
- avg_width = 0
- hmtx = ttFont.get("hmtx")
- if hmtx is not None:
- widths = [width for width, _ in hmtx.metrics.values() if width > 0]
- if widths:
- avg_width = otRound(sum(widths) / len(widths))
- self.xAvgCharWidth = avg_width
- return avg_width
-
-
-# Unicode ranges data from the OpenType OS/2 table specification v1.7
-
-OS2_UNICODE_RANGES = (
- (("Basic Latin", (0x0000, 0x007F)),),
- (("Latin-1 Supplement", (0x0080, 0x00FF)),),
- (("Latin Extended-A", (0x0100, 0x017F)),),
- (("Latin Extended-B", (0x0180, 0x024F)),),
- (
- ("IPA Extensions", (0x0250, 0x02AF)),
- ("Phonetic Extensions", (0x1D00, 0x1D7F)),
- ("Phonetic Extensions Supplement", (0x1D80, 0x1DBF)),
- ),
- (
- ("Spacing Modifier Letters", (0x02B0, 0x02FF)),
- ("Modifier Tone Letters", (0xA700, 0xA71F)),
- ),
- (
- ("Combining Diacritical Marks", (0x0300, 0x036F)),
- ("Combining Diacritical Marks Supplement", (0x1DC0, 0x1DFF)),
- ),
- (("Greek and Coptic", (0x0370, 0x03FF)),),
- (("Coptic", (0x2C80, 0x2CFF)),),
- (
- ("Cyrillic", (0x0400, 0x04FF)),
- ("Cyrillic Supplement", (0x0500, 0x052F)),
- ("Cyrillic Extended-A", (0x2DE0, 0x2DFF)),
- ("Cyrillic Extended-B", (0xA640, 0xA69F)),
- ),
- (("Armenian", (0x0530, 0x058F)),),
- (("Hebrew", (0x0590, 0x05FF)),),
- (("Vai", (0xA500, 0xA63F)),),
- (("Arabic", (0x0600, 0x06FF)), ("Arabic Supplement", (0x0750, 0x077F))),
- (("NKo", (0x07C0, 0x07FF)),),
- (("Devanagari", (0x0900, 0x097F)),),
- (("Bengali", (0x0980, 0x09FF)),),
- (("Gurmukhi", (0x0A00, 0x0A7F)),),
- (("Gujarati", (0x0A80, 0x0AFF)),),
- (("Oriya", (0x0B00, 0x0B7F)),),
- (("Tamil", (0x0B80, 0x0BFF)),),
- (("Telugu", (0x0C00, 0x0C7F)),),
- (("Kannada", (0x0C80, 0x0CFF)),),
- (("Malayalam", (0x0D00, 0x0D7F)),),
- (("Thai", (0x0E00, 0x0E7F)),),
- (("Lao", (0x0E80, 0x0EFF)),),
- (("Georgian", (0x10A0, 0x10FF)), ("Georgian Supplement", (0x2D00, 0x2D2F))),
- (("Balinese", (0x1B00, 0x1B7F)),),
- (("Hangul Jamo", (0x1100, 0x11FF)),),
- (
- ("Latin Extended Additional", (0x1E00, 0x1EFF)),
- ("Latin Extended-C", (0x2C60, 0x2C7F)),
- ("Latin Extended-D", (0xA720, 0xA7FF)),
- ),
- (("Greek Extended", (0x1F00, 0x1FFF)),),
- (
- ("General Punctuation", (0x2000, 0x206F)),
- ("Supplemental Punctuation", (0x2E00, 0x2E7F)),
- ),
- (("Superscripts And Subscripts", (0x2070, 0x209F)),),
- (("Currency Symbols", (0x20A0, 0x20CF)),),
- (("Combining Diacritical Marks For Symbols", (0x20D0, 0x20FF)),),
- (("Letterlike Symbols", (0x2100, 0x214F)),),
- (("Number Forms", (0x2150, 0x218F)),),
- (
- ("Arrows", (0x2190, 0x21FF)),
- ("Supplemental Arrows-A", (0x27F0, 0x27FF)),
- ("Supplemental Arrows-B", (0x2900, 0x297F)),
- ("Miscellaneous Symbols and Arrows", (0x2B00, 0x2BFF)),
- ),
- (
- ("Mathematical Operators", (0x2200, 0x22FF)),
- ("Supplemental Mathematical Operators", (0x2A00, 0x2AFF)),
- ("Miscellaneous Mathematical Symbols-A", (0x27C0, 0x27EF)),
- ("Miscellaneous Mathematical Symbols-B", (0x2980, 0x29FF)),
- ),
- (("Miscellaneous Technical", (0x2300, 0x23FF)),),
- (("Control Pictures", (0x2400, 0x243F)),),
- (("Optical Character Recognition", (0x2440, 0x245F)),),
- (("Enclosed Alphanumerics", (0x2460, 0x24FF)),),
- (("Box Drawing", (0x2500, 0x257F)),),
- (("Block Elements", (0x2580, 0x259F)),),
- (("Geometric Shapes", (0x25A0, 0x25FF)),),
- (("Miscellaneous Symbols", (0x2600, 0x26FF)),),
- (("Dingbats", (0x2700, 0x27BF)),),
- (("CJK Symbols And Punctuation", (0x3000, 0x303F)),),
- (("Hiragana", (0x3040, 0x309F)),),
- (
- ("Katakana", (0x30A0, 0x30FF)),
- ("Katakana Phonetic Extensions", (0x31F0, 0x31FF)),
- ),
- (("Bopomofo", (0x3100, 0x312F)), ("Bopomofo Extended", (0x31A0, 0x31BF))),
- (("Hangul Compatibility Jamo", (0x3130, 0x318F)),),
- (("Phags-pa", (0xA840, 0xA87F)),),
- (("Enclosed CJK Letters And Months", (0x3200, 0x32FF)),),
- (("CJK Compatibility", (0x3300, 0x33FF)),),
- (("Hangul Syllables", (0xAC00, 0xD7AF)),),
- (("Non-Plane 0 *", (0xD800, 0xDFFF)),),
- (("Phoenician", (0x10900, 0x1091F)),),
- (
- ("CJK Unified Ideographs", (0x4E00, 0x9FFF)),
- ("CJK Radicals Supplement", (0x2E80, 0x2EFF)),
- ("Kangxi Radicals", (0x2F00, 0x2FDF)),
- ("Ideographic Description Characters", (0x2FF0, 0x2FFF)),
- ("CJK Unified Ideographs Extension A", (0x3400, 0x4DBF)),
- ("CJK Unified Ideographs Extension B", (0x20000, 0x2A6DF)),
- ("Kanbun", (0x3190, 0x319F)),
- ),
- (("Private Use Area (plane 0)", (0xE000, 0xF8FF)),),
- (
- ("CJK Strokes", (0x31C0, 0x31EF)),
- ("CJK Compatibility Ideographs", (0xF900, 0xFAFF)),
- ("CJK Compatibility Ideographs Supplement", (0x2F800, 0x2FA1F)),
- ),
- (("Alphabetic Presentation Forms", (0xFB00, 0xFB4F)),),
- (("Arabic Presentation Forms-A", (0xFB50, 0xFDFF)),),
- (("Combining Half Marks", (0xFE20, 0xFE2F)),),
- (
- ("Vertical Forms", (0xFE10, 0xFE1F)),
- ("CJK Compatibility Forms", (0xFE30, 0xFE4F)),
- ),
- (("Small Form Variants", (0xFE50, 0xFE6F)),),
- (("Arabic Presentation Forms-B", (0xFE70, 0xFEFF)),),
- (("Halfwidth And Fullwidth Forms", (0xFF00, 0xFFEF)),),
- (("Specials", (0xFFF0, 0xFFFF)),),
- (("Tibetan", (0x0F00, 0x0FFF)),),
- (("Syriac", (0x0700, 0x074F)),),
- (("Thaana", (0x0780, 0x07BF)),),
- (("Sinhala", (0x0D80, 0x0DFF)),),
- (("Myanmar", (0x1000, 0x109F)),),
- (
- ("Ethiopic", (0x1200, 0x137F)),
- ("Ethiopic Supplement", (0x1380, 0x139F)),
- ("Ethiopic Extended", (0x2D80, 0x2DDF)),
- ),
- (("Cherokee", (0x13A0, 0x13FF)),),
- (("Unified Canadian Aboriginal Syllabics", (0x1400, 0x167F)),),
- (("Ogham", (0x1680, 0x169F)),),
- (("Runic", (0x16A0, 0x16FF)),),
- (("Khmer", (0x1780, 0x17FF)), ("Khmer Symbols", (0x19E0, 0x19FF))),
- (("Mongolian", (0x1800, 0x18AF)),),
- (("Braille Patterns", (0x2800, 0x28FF)),),
- (("Yi Syllables", (0xA000, 0xA48F)), ("Yi Radicals", (0xA490, 0xA4CF))),
- (
- ("Tagalog", (0x1700, 0x171F)),
- ("Hanunoo", (0x1720, 0x173F)),
- ("Buhid", (0x1740, 0x175F)),
- ("Tagbanwa", (0x1760, 0x177F)),
- ),
- (("Old Italic", (0x10300, 0x1032F)),),
- (("Gothic", (0x10330, 0x1034F)),),
- (("Deseret", (0x10400, 0x1044F)),),
- (
- ("Byzantine Musical Symbols", (0x1D000, 0x1D0FF)),
- ("Musical Symbols", (0x1D100, 0x1D1FF)),
- ("Ancient Greek Musical Notation", (0x1D200, 0x1D24F)),
- ),
- (("Mathematical Alphanumeric Symbols", (0x1D400, 0x1D7FF)),),
- (
- ("Private Use (plane 15)", (0xF0000, 0xFFFFD)),
- ("Private Use (plane 16)", (0x100000, 0x10FFFD)),
- ),
- (
- ("Variation Selectors", (0xFE00, 0xFE0F)),
- ("Variation Selectors Supplement", (0xE0100, 0xE01EF)),
- ),
- (("Tags", (0xE0000, 0xE007F)),),
- (("Limbu", (0x1900, 0x194F)),),
- (("Tai Le", (0x1950, 0x197F)),),
- (("New Tai Lue", (0x1980, 0x19DF)),),
- (("Buginese", (0x1A00, 0x1A1F)),),
- (("Glagolitic", (0x2C00, 0x2C5F)),),
- (("Tifinagh", (0x2D30, 0x2D7F)),),
- (("Yijing Hexagram Symbols", (0x4DC0, 0x4DFF)),),
- (("Syloti Nagri", (0xA800, 0xA82F)),),
- (
- ("Linear B Syllabary", (0x10000, 0x1007F)),
- ("Linear B Ideograms", (0x10080, 0x100FF)),
- ("Aegean Numbers", (0x10100, 0x1013F)),
- ),
- (("Ancient Greek Numbers", (0x10140, 0x1018F)),),
- (("Ugaritic", (0x10380, 0x1039F)),),
- (("Old Persian", (0x103A0, 0x103DF)),),
- (("Shavian", (0x10450, 0x1047F)),),
- (("Osmanya", (0x10480, 0x104AF)),),
- (("Cypriot Syllabary", (0x10800, 0x1083F)),),
- (("Kharoshthi", (0x10A00, 0x10A5F)),),
- (("Tai Xuan Jing Symbols", (0x1D300, 0x1D35F)),),
- (
- ("Cuneiform", (0x12000, 0x123FF)),
- ("Cuneiform Numbers and Punctuation", (0x12400, 0x1247F)),
- ),
- (("Counting Rod Numerals", (0x1D360, 0x1D37F)),),
- (("Sundanese", (0x1B80, 0x1BBF)),),
- (("Lepcha", (0x1C00, 0x1C4F)),),
- (("Ol Chiki", (0x1C50, 0x1C7F)),),
- (("Saurashtra", (0xA880, 0xA8DF)),),
- (("Kayah Li", (0xA900, 0xA92F)),),
- (("Rejang", (0xA930, 0xA95F)),),
- (("Cham", (0xAA00, 0xAA5F)),),
- (("Ancient Symbols", (0x10190, 0x101CF)),),
- (("Phaistos Disc", (0x101D0, 0x101FF)),),
- (
- ("Carian", (0x102A0, 0x102DF)),
- ("Lycian", (0x10280, 0x1029F)),
- ("Lydian", (0x10920, 0x1093F)),
- ),
- (("Domino Tiles", (0x1F030, 0x1F09F)), ("Mahjong Tiles", (0x1F000, 0x1F02F))),
-)
-
-
-_unicodeStarts = []
-_unicodeValues = [None]
-
-
-def _getUnicodeRanges():
- # build the ranges of codepoints for each unicode range bit, and cache result
- if not _unicodeStarts:
- unicodeRanges = [
- (start, (stop, bit))
- for bit, blocks in enumerate(OS2_UNICODE_RANGES)
- for _, (start, stop) in blocks
- ]
- for start, (stop, bit) in sorted(unicodeRanges):
- _unicodeStarts.append(start)
- _unicodeValues.append((stop, bit))
- return _unicodeStarts, _unicodeValues
-
-
-def intersectUnicodeRanges(unicodes, inverse=False):
- """Intersect a sequence of (int) Unicode codepoints with the Unicode block
- ranges defined in the OpenType specification v1.7, and return the set of
- 'ulUnicodeRanges' bits for which there is at least ONE intersection.
- If 'inverse' is True, return the bits for which there is NO intersection.
-
- >>> intersectUnicodeRanges([0x0410]) == {9}
- True
- >>> intersectUnicodeRanges([0x0410, 0x1F000]) == {9, 57, 122}
- True
- >>> intersectUnicodeRanges([0x0410, 0x1F000], inverse=True) == (
- ... set(range(len(OS2_UNICODE_RANGES))) - {9, 57, 122})
- True
- """
- unicodes = set(unicodes)
- unicodestarts, unicodevalues = _getUnicodeRanges()
- bits = set()
- for code in unicodes:
- stop, bit = unicodevalues[bisect.bisect(unicodestarts, code)]
- if code <= stop:
- bits.add(bit)
- # The spec says that bit 57 ("Non Plane 0") implies that there's
- # at least one codepoint beyond the BMP; so I also include all
- # the non-BMP codepoints here
- if any(0x10000 <= code < 0x110000 for code in unicodes):
- bits.add(57)
- return set(range(len(OS2_UNICODE_RANGES))) - bits if inverse else bits
-
-
-if __name__ == "__main__":
- import doctest, sys
-
- sys.exit(doctest.testmod().failed)
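
The OS/2 helpers deleted above are usually driven through a TTFont object rather than called standalone. A short sketch, with a hypothetical font path:

from fontTools.ttLib import TTFont

font = TTFont("MyFont.ttf")  # hypothetical path to an existing TrueType/OpenType font
os2 = font["OS/2"]

# Recompute the ulUnicodeRange* bits from the Unicode cmap subtables,
# then refresh xAvgCharWidth from the hmtx metrics.
enabled_bits = os2.recalcUnicodeRanges(font)
os2.recalcAvgCharWidth(font)
print(sorted(enabled_bits), os2.xAvgCharWidth)

font.save("MyFont-updated.ttf")
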
diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/DropdownArrow-5fa4dd09.css b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/DropdownArrow-5fa4dd09.css
deleted file mode 100644
index c47d6f6f010f0626b0036068fe41d683b37b2954..0000000000000000000000000000000000000000
--- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/DropdownArrow-5fa4dd09.css
+++ /dev/null
@@ -1 +0,0 @@
-.dropdown-arrow.svelte-p5edak{fill:var(--body-text-color);margin-right:var(--size-2);width:var(--size-5)}
diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/backends/_backend_gtk.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/backends/_backend_gtk.py
deleted file mode 100644
index 1fadc49a0d372405543234b3068abb508a629d27..0000000000000000000000000000000000000000
--- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/backends/_backend_gtk.py
+++ /dev/null
@@ -1,332 +0,0 @@
-"""
-Common code for GTK3 and GTK4 backends.
-"""
-
-import logging
-import sys
-
-import matplotlib as mpl
-from matplotlib import _api, backend_tools, cbook
-from matplotlib._pylab_helpers import Gcf
-from matplotlib.backend_bases import (
- _Backend, FigureCanvasBase, FigureManagerBase, NavigationToolbar2,
- TimerBase)
-from matplotlib.backend_tools import Cursors
-
-import gi
-# The GTK3/GTK4 backends will have already called `gi.require_version` to set
-# the desired GTK.
-from gi.repository import Gdk, Gio, GLib, Gtk
-
-
-try:
- gi.require_foreign("cairo")
-except ImportError as e:
- raise ImportError("Gtk-based backends require cairo") from e
-
-_log = logging.getLogger(__name__)
-_application = None # Placeholder
-
-
-def _shutdown_application(app):
- # The application might prematurely shut down if Ctrl-C'd out of IPython,
- # so close all windows.
- for win in app.get_windows():
- win.close()
- # The PyGObject wrapper incorrectly thinks that None is not allowed, or we
- # would call this:
- # Gio.Application.set_default(None)
- # Instead, we set this property and ignore default applications with it:
- app._created_by_matplotlib = True
- global _application
- _application = None
-
-
-def _create_application():
- global _application
-
- if _application is None:
- app = Gio.Application.get_default()
- if app is None or getattr(app, '_created_by_matplotlib', False):
- # display_is_valid returns False only if on Linux and neither X11
- # nor Wayland display can be opened.
- if not mpl._c_internal_utils.display_is_valid():
- raise RuntimeError('Invalid DISPLAY variable')
- _application = Gtk.Application.new('org.matplotlib.Matplotlib3',
- Gio.ApplicationFlags.NON_UNIQUE)
- # The activate signal must be connected, but we don't care for
- # handling it, since we don't do any remote processing.
- _application.connect('activate', lambda *args, **kwargs: None)
- _application.connect('shutdown', _shutdown_application)
- _application.register()
- cbook._setup_new_guiapp()
- else:
- _application = app
-
- return _application
-
-
-def mpl_to_gtk_cursor_name(mpl_cursor):
- return _api.check_getitem({
- Cursors.MOVE: "move",
- Cursors.HAND: "pointer",
- Cursors.POINTER: "default",
- Cursors.SELECT_REGION: "crosshair",
- Cursors.WAIT: "wait",
- Cursors.RESIZE_HORIZONTAL: "ew-resize",
- Cursors.RESIZE_VERTICAL: "ns-resize",
- }, cursor=mpl_cursor)
-
-
-class TimerGTK(TimerBase):
- """Subclass of `.TimerBase` using GTK timer events."""
-
- def __init__(self, *args, **kwargs):
- self._timer = None
- super().__init__(*args, **kwargs)
-
- def _timer_start(self):
- # Need to stop it, otherwise we potentially leak a timer id that will
- # never be stopped.
- self._timer_stop()
- self._timer = GLib.timeout_add(self._interval, self._on_timer)
-
- def _timer_stop(self):
- if self._timer is not None:
- GLib.source_remove(self._timer)
- self._timer = None
-
- def _timer_set_interval(self):
- # Only stop and restart it if the timer has already been started.
- if self._timer is not None:
- self._timer_stop()
- self._timer_start()
-
- def _on_timer(self):
- super()._on_timer()
-
- # Gtk timeout_add() requires that the callback returns True if it
- # is to be called again.
- if self.callbacks and not self._single:
- return True
- else:
- self._timer = None
- return False
-
-
-class _FigureCanvasGTK(FigureCanvasBase):
- _timer_cls = TimerGTK
-
-
-class _FigureManagerGTK(FigureManagerBase):
- """
- Attributes
- ----------
- canvas : `FigureCanvas`
- The FigureCanvas instance
- num : int or str
- The Figure number
- toolbar : Gtk.Toolbar or Gtk.Box
- The toolbar
- vbox : Gtk.VBox
- The Gtk.VBox containing the canvas and toolbar
- window : Gtk.Window
- The Gtk.Window
- """
-
- def __init__(self, canvas, num):
- self._gtk_ver = gtk_ver = Gtk.get_major_version()
-
- app = _create_application()
- self.window = Gtk.Window()
- app.add_window(self.window)
- super().__init__(canvas, num)
-
- if gtk_ver == 3:
- self.window.set_wmclass("matplotlib", "Matplotlib")
- icon_ext = "png" if sys.platform == "win32" else "svg"
- self.window.set_icon_from_file(
- str(cbook._get_data_path(f"images/matplotlib.{icon_ext}")))
-
- self.vbox = Gtk.Box()
- self.vbox.set_property("orientation", Gtk.Orientation.VERTICAL)
-
- if gtk_ver == 3:
- self.window.add(self.vbox)
- self.vbox.show()
- self.canvas.show()
- self.vbox.pack_start(self.canvas, True, True, 0)
- elif gtk_ver == 4:
- self.window.set_child(self.vbox)
- self.vbox.prepend(self.canvas)
-
- # calculate size for window
- w, h = self.canvas.get_width_height()
-
- if self.toolbar is not None:
- if gtk_ver == 3:
- self.toolbar.show()
- self.vbox.pack_end(self.toolbar, False, False, 0)
- elif gtk_ver == 4:
- sw = Gtk.ScrolledWindow(vscrollbar_policy=Gtk.PolicyType.NEVER)
- sw.set_child(self.toolbar)
- self.vbox.append(sw)
- min_size, nat_size = self.toolbar.get_preferred_size()
- h += nat_size.height
-
- self.window.set_default_size(w, h)
-
- self._destroying = False
- self.window.connect("destroy", lambda *args: Gcf.destroy(self))
- self.window.connect({3: "delete_event", 4: "close-request"}[gtk_ver],
- lambda *args: Gcf.destroy(self))
- if mpl.is_interactive():
- self.window.show()
- self.canvas.draw_idle()
-
- self.canvas.grab_focus()
-
- def destroy(self, *args):
- if self._destroying:
- # Otherwise, this can be called twice when the user presses 'q',
- # which calls Gcf.destroy(self), then this destroy(), then triggers
- # Gcf.destroy(self) once again via
- # `connect("destroy", lambda *args: Gcf.destroy(self))`.
- return
- self._destroying = True
- self.window.destroy()
- self.canvas.destroy()
-
- @classmethod
- def start_main_loop(cls):
- global _application
- if _application is None:
- return
-
- try:
- _application.run() # Quits when all added windows close.
- except KeyboardInterrupt:
- # Ensure all windows can process their close event from
- # _shutdown_application.
- context = GLib.MainContext.default()
- while context.pending():
- context.iteration(True)
- raise
- finally:
- # Running after quit is undefined, so create a new one next time.
- _application = None
-
- def show(self):
- # show the figure window
- self.window.show()
- self.canvas.draw()
- if mpl.rcParams["figure.raise_window"]:
- meth_name = {3: "get_window", 4: "get_surface"}[self._gtk_ver]
- if getattr(self.window, meth_name)():
- self.window.present()
- else:
- # If this is called by a callback early during init,
- # self.window (a GtkWindow) may not have an associated
- # low-level GdkWindow (on GTK3) or GdkSurface (on GTK4) yet,
- # and present() would crash.
- _api.warn_external("Cannot raise window yet to be setup")
-
- def full_screen_toggle(self):
- is_fullscreen = {
- 3: lambda w: (w.get_window().get_state()
- & Gdk.WindowState.FULLSCREEN),
- 4: lambda w: w.is_fullscreen(),
- }[self._gtk_ver]
- if is_fullscreen(self.window):
- self.window.unfullscreen()
- else:
- self.window.fullscreen()
-
- def get_window_title(self):
- return self.window.get_title()
-
- def set_window_title(self, title):
- self.window.set_title(title)
-
- def resize(self, width, height):
- width = int(width / self.canvas.device_pixel_ratio)
- height = int(height / self.canvas.device_pixel_ratio)
- if self.toolbar:
- min_size, nat_size = self.toolbar.get_preferred_size()
- height += nat_size.height
- canvas_size = self.canvas.get_allocation()
- if self._gtk_ver >= 4 or canvas_size.width == canvas_size.height == 1:
- # A canvas size of (1, 1) cannot exist in most cases, because
- # window decorations would prevent such a small window. This call
- # must be before the window has been mapped and widgets have been
- # sized, so just change the window's starting size.
- self.window.set_default_size(width, height)
- else:
- self.window.resize(width, height)
-
-
-class _NavigationToolbar2GTK(NavigationToolbar2):
- # Must be implemented in GTK3/GTK4 backends:
- # * __init__
- # * save_figure
-
- def set_message(self, s):
- escaped = GLib.markup_escape_text(s)
- self.message.set_markup(f'{escaped} ')
-
- def draw_rubberband(self, event, x0, y0, x1, y1):
- height = self.canvas.figure.bbox.height
- y1 = height - y1
- y0 = height - y0
- rect = [int(val) for val in (x0, y0, x1 - x0, y1 - y0)]
- self.canvas._draw_rubberband(rect)
-
- def remove_rubberband(self):
- self.canvas._draw_rubberband(None)
-
- def _update_buttons_checked(self):
- for name, active in [("Pan", "PAN"), ("Zoom", "ZOOM")]:
- button = self._gtk_ids.get(name)
- if button:
- with button.handler_block(button._signal_handler):
- button.set_active(self.mode.name == active)
-
- def pan(self, *args):
- super().pan(*args)
- self._update_buttons_checked()
-
- def zoom(self, *args):
- super().zoom(*args)
- self._update_buttons_checked()
-
- def set_history_buttons(self):
- can_backward = self._nav_stack._pos > 0
- can_forward = self._nav_stack._pos < len(self._nav_stack._elements) - 1
- if 'Back' in self._gtk_ids:
- self._gtk_ids['Back'].set_sensitive(can_backward)
- if 'Forward' in self._gtk_ids:
- self._gtk_ids['Forward'].set_sensitive(can_forward)
-
-
-class RubberbandGTK(backend_tools.RubberbandBase):
- def draw_rubberband(self, x0, y0, x1, y1):
- _NavigationToolbar2GTK.draw_rubberband(
- self._make_classic_style_pseudo_toolbar(), None, x0, y0, x1, y1)
-
- def remove_rubberband(self):
- _NavigationToolbar2GTK.remove_rubberband(
- self._make_classic_style_pseudo_toolbar())
-
-
-class ConfigureSubplotsGTK(backend_tools.ConfigureSubplotsBase):
- def trigger(self, *args):
- _NavigationToolbar2GTK.configure_subplots(self, None)
-
-
-class _BackendGTK(_Backend):
- backend_version = "%s.%s.%s" % (
- Gtk.get_major_version(),
- Gtk.get_minor_version(),
- Gtk.get_micro_version(),
- )
- mainloop = _FigureManagerGTK.start_main_loop
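
This module is shared plumbing; scripts never import _backend_gtk directly but select one of the concrete GTK backends. A rough sketch, assuming PyGObject and cairo are available:

import matplotlib
matplotlib.use("GTK3Agg")  # or "GTK4Agg"; both reuse the common code shown above

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0, 1, 4])
plt.show()  # in non-interactive mode this blocks in _FigureManagerGTK.start_main_loop() until all windows close
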
diff --git a/spaces/lafi23333/aikomori/commons.py b/spaces/lafi23333/aikomori/commons.py
deleted file mode 100644
index 074888006392e956ce204d8368362dbb2cd4e304..0000000000000000000000000000000000000000
--- a/spaces/lafi23333/aikomori/commons.py
+++ /dev/null
@@ -1,188 +0,0 @@
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-def slice_pitch_segments(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- ret[i] = x[i, idx_str:idx_end]
- return ret
-
-def rand_slice_segments_with_pitch(x, pitch, x_lengths=None, segment_size=4):
- b, d, t = x.size()
- if x_lengths is None:
- x_lengths = t
- ids_str_max = x_lengths - segment_size + 1
- ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
- ret = slice_segments(x, ids_str, segment_size)
- ret_pitch = slice_pitch_segments(pitch, ids_str, segment_size)
- return ret, ret_pitch, ids_str
-
-def init_weights(m, mean=0.0, std=0.01):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- m.weight.data.normal_(mean, std)
-
-
-def get_padding(kernel_size, dilation=1):
- return int((kernel_size*dilation - dilation)/2)
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def intersperse(lst, item):
- result = [item] * (len(lst) * 2 + 1)
- result[1::2] = lst
- return result
-
-
-def kl_divergence(m_p, logs_p, m_q, logs_q):
- """KL(P||Q)"""
- kl = (logs_q - logs_p) - 0.5
- kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. * logs_q)
- return kl
-
-
-def rand_gumbel(shape):
- """Sample from the Gumbel distribution, protect from overflows."""
- uniform_samples = torch.rand(shape) * 0.99998 + 0.00001
- return -torch.log(-torch.log(uniform_samples))
-
-
-def rand_gumbel_like(x):
- g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device)
- return g
-
-
-def slice_segments(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- ret[i] = x[i, :, idx_str:idx_end]
- return ret
-
-
-def rand_slice_segments(x, x_lengths=None, segment_size=4):
- b, d, t = x.size()
- if x_lengths is None:
- x_lengths = t
- ids_str_max = x_lengths - segment_size + 1
- ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
- ret = slice_segments(x, ids_str, segment_size)
- return ret, ids_str
-
-
-def rand_spec_segments(x, x_lengths=None, segment_size=4):
- b, d, t = x.size()
- if x_lengths is None:
- x_lengths = t
- ids_str_max = x_lengths - segment_size
- ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
- ret = slice_segments(x, ids_str, segment_size)
- return ret, ids_str
-
-
-def get_timing_signal_1d(
- length, channels, min_timescale=1.0, max_timescale=1.0e4):
- position = torch.arange(length, dtype=torch.float)
- num_timescales = channels // 2
- log_timescale_increment = (
- math.log(float(max_timescale) / float(min_timescale)) /
- (num_timescales - 1))
- inv_timescales = min_timescale * torch.exp(
- torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment)
- scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1)
- signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0)
- signal = F.pad(signal, [0, 0, 0, channels % 2])
- signal = signal.view(1, channels, length)
- return signal
-
-
-def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return x + signal.to(dtype=x.dtype, device=x.device)
-
-
-def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis)
-
-
-def subsequent_mask(length):
- mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0)
- return mask
-
-
-@torch.jit.script
-def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
- n_channels_int = n_channels[0]
- in_act = input_a + input_b
- t_act = torch.tanh(in_act[:, :n_channels_int, :])
- s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
- acts = t_act * s_act
- return acts
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def shift_1d(x):
- x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1]
- return x
-
-
-def sequence_mask(length, max_length=None):
- if max_length is None:
- max_length = length.max()
- x = torch.arange(max_length, dtype=length.dtype, device=length.device)
- return x.unsqueeze(0) < length.unsqueeze(1)
-
-
-def generate_path(duration, mask):
- """
- duration: [b, 1, t_x]
- mask: [b, 1, t_y, t_x]
- """
- device = duration.device
-
- b, _, t_y, t_x = mask.shape
- cum_duration = torch.cumsum(duration, -1)
-
- cum_duration_flat = cum_duration.view(b * t_x)
- path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype)
- path = path.view(b, t_x, t_y)
- path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1]
- path = path.unsqueeze(1).transpose(2,3) * mask
- return path
-
-
-def clip_grad_value_(parameters, clip_value, norm_type=2):
- if isinstance(parameters, torch.Tensor):
- parameters = [parameters]
- parameters = list(filter(lambda p: p.grad is not None, parameters))
- norm_type = float(norm_type)
- if clip_value is not None:
- clip_value = float(clip_value)
-
- total_norm = 0
- for p in parameters:
- param_norm = p.grad.data.norm(norm_type)
- total_norm += param_norm.item() ** norm_type
- if clip_value is not None:
- p.grad.data.clamp_(min=-clip_value, max=clip_value)
- total_norm = total_norm ** (1. / norm_type)
- return total_norm
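
The slicing and masking helpers in this commons.py operate on [batch, channels, frames] tensors. A small sketch of how two of them fit together, with illustrative shapes:

import torch
from commons import rand_slice_segments, sequence_mask  # helpers defined in the file above

x = torch.randn(2, 80, 100)           # [batch, channels, frames]
lengths = torch.tensor([100, 73])

mask = sequence_mask(lengths, 100)    # bool mask of shape [2, 100]
segments, ids_str = rand_slice_segments(x, lengths, segment_size=32)
print(mask.shape, segments.shape, ids_str)  # -> [2, 100], [2, 80, 32], per-sample start indices
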
diff --git a/spaces/lewisliuX123/wechatglm_demo/scripts/shutdown.sh b/spaces/lewisliuX123/wechatglm_demo/scripts/shutdown.sh
deleted file mode 100644
index c2bf6b14adcafd46e7278ab3730ab7f78b82c593..0000000000000000000000000000000000000000
--- a/spaces/lewisliuX123/wechatglm_demo/scripts/shutdown.sh
+++ /dev/null
@@ -1,16 +0,0 @@
-#!/bin/bash
-
-#关闭服务
-cd `dirname $0`/..
-export BASE_DIR=`pwd`
-pid=`ps ax | grep -i app.py | grep "${BASE_DIR}" | grep python3 | grep -v grep | awk '{print $1}'`
-if [ -z "$pid" ] ; then
- echo "No chatgpt-on-wechat running."
- exit -1;
-fi
-
-echo "The chatgpt-on-wechat(${pid}) is running..."
-
-kill ${pid}
-
-echo "Send shutdown request to chatgpt-on-wechat(${pid}) OK"
diff --git a/spaces/lewispons/GrammarGuru/src/features/build_features.py b/spaces/lewispons/GrammarGuru/src/features/build_features.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/limcheekin/openchat_3.5-GGUF/main.py b/spaces/limcheekin/openchat_3.5-GGUF/main.py
deleted file mode 100644
index 978fc6a7d35d4512c44d5f75531c09e832c35e1f..0000000000000000000000000000000000000000
--- a/spaces/limcheekin/openchat_3.5-GGUF/main.py
+++ /dev/null
@@ -1,27 +0,0 @@
-from llama_cpp.server.app import create_app, Settings
-from fastapi.responses import HTMLResponse
-import os
-
-app = create_app(
- Settings(
- n_threads=2, # set to number of cpu cores
- model="model/gguf-model.bin",
- embedding=True
- )
-)
-
-# Read the content of index.html once and store it in memory
-with open("index.html", "r") as f:
- content = f.read()
-
-
-@app.get("/", response_class=HTMLResponse)
-async def read_items():
- return content
-
-if __name__ == "__main__":
- import uvicorn
- uvicorn.run(app,
- host=os.environ["HOST"],
- port=int(os.environ["PORT"])
- )
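
create_app() from llama-cpp-python serves an OpenAI-compatible REST API, so the Space above can be queried with any HTTP client. A hedged sketch; the base URL is an assumption standing in for the HOST/PORT environment variables used in the file:

import requests

base_url = "http://localhost:8000"  # hypothetical; in the Space it comes from HOST/PORT

resp = requests.post(
    f"{base_url}/v1/chat/completions",
    json={
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
        "max_tokens": 64,
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
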
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Deewaar Movie Mp4 Download TOP.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Deewaar Movie Mp4 Download TOP.md
deleted file mode 100644
index 8d568e7910894e0ebdcc3384cdca928f7d777c74..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/Deewaar Movie Mp4 Download TOP.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Deewaar Movie Mp4 Download DOWNLOAD 🌟 https://bytlly.com/2uGvGe
-
-Download Deewaar 1975 Full Movie Hindi 720p HDRip. Sumair ... Title .... Mp4 Movie Free Download In HindiFree Download Deewaar 3gpMoments Of .. 4d29de3e1b
-
-
-
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Hotarare Aga Dizolvare Si Lichidare Firma [UPDATED].md b/spaces/lincquiQcaudo/Top-20-Diffusion/Hotarare Aga Dizolvare Si Lichidare Firma [UPDATED].md
deleted file mode 100644
index f9bcab3e6654db0f1076b34569624735af3ecf64..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/Hotarare Aga Dizolvare Si Lichidare Firma [UPDATED].md
+++ /dev/null
@@ -1,38 +0,0 @@
-
-hotarare aga dizolvare si lichidare firma Download File ★ https://bytlly.com/2uGyvY
-Ce este o hotărâre aga de dizolvare și lichidare a firmei?
-O hotărâre aga de dizolvare și lichidare a firmei este un act juridic prin care asociații sau acționarii unei societăți comerciale decid să pună capăt existenței acesteia și să împartă patrimoniul rămas după stingerea tuturor obligațiilor. O astfel de hotărâre trebuie să fie adoptată cu majoritatea prevăzută de lege sau de actul constitutiv al societății și să cuprindă elementele esențiale pentru realizarea procedurii de dizolvare și lichidare.
-Când se poate adopta o hotărâre aga de dizolvare și lichidare a firmei?
-O hotărâre aga de dizolvare și lichidare a firmei se poate adopta în orice moment, atunci când asociații sau acționarii consideră că nu mai există motive pentru continuarea activității societății sau că aceasta nu mai este rentabilă sau viabilă. De asemenea, o hotărâre aga de dizolvare și lichidare a firmei se poate adopta în cazul în care societatea se află în una dintre situațiile prevăzute de lege pentru dizolvarea de drept, cum ar fi: expirarea duratei de funcționare, realizarea sau imposibilitatea realizării obiectului de activitate, pierderea întregului capital social, reducerea numărului asociaților sub minimul legal etc.
-Cum se înregistrează o hotărâre aga de dizolvare și lichidare a firmei?
-O hotărâre aga de dizolvare și lichidare a firmei trebuie să fie înregistrată la Oficiul Registrului Comerțului în termen de 15 zile de la data adoptării. Pentru acest scop, se depun următoarele documente: cererea de înregistrare, hotărârea aga de dizolvare și lichidare a firmei, dovada plății tarifului legal și alte documente specifice în funcție de forma juridică a societății. În urma înregistrării hotărârii aga de dizolvare și lichidare a firmei, societatea își păstrează personalitatea juridică doar pentru efectuarea operațiunilor de lichidare.
-Ce efecte are o hotărâre aga de dizolvare și lichidare a firmei?
-O hotărâre aga de dizolvare și lichidare a firmei are ca efect principal deschiderea procedurii de lichidare, care presupune realizarea tuturor operațiunilor necesare pentru încetarea activității societății și repartizarea patrimoniului rămas între asociați sau acționari. În timpul procedurii de lichidare, societatea își păstrează personalitatea juridică, dar nu mai poate desfășura acte de comerț sau alte activități decât cele legate de lichidare. De asemenea, societatea trebuie să adauge la denumirea sa mențiunea „în lichidare”.
-Cum se finalizează o hotărâre aga de dizolvare și lichidare a firmei?
-
-O hotărâre aga de dizolvare și lichidare a firmei se finalizează prin radierea societății din registrul comerțului și din alte registre publice în care este înscrisă. Pentru acest scop, se depun la Oficiul Registrului Comerțului următoarele documente: cererea de radiere, situația financiară de lichidare și repartizare a activului societății, raportul cenzorilor sau auditorilor financiari, darea de seamă a administratorilor sau directoratului, dovada plății tarifului legal și alte documente specifice în funcție de forma juridică a societății. În urma radierii societății, aceasta își pierde personalitatea juridică și încetează să existe.
-Ce obligații fiscale are o hotărâre aga de dizolvare și lichidare a firmei?
-O hotărâre aga de dizolvare și lichidare a firmei implică și anumite obligații fiscale pe care societatea trebuie să le îndeplinească înainte de a fi radiată. Astfel, societatea trebuie să depună la organul fiscal competent declarațiile fiscale corespunzătoare perioadei de lichidare, să achite eventualele datorii fiscale și să solicite eliberarea certificatului de atestare fiscală care să ateste că nu mai are obligații fiscale restante. De asemenea, societatea trebuie să anuleze codul de înregistrare fiscală și să restituie certificatul de înregistrare fiscală.
-Ce se întâmplă cu angajații unei hotărâri aga de dizolvare și lichidare a firmei?
-O hotărâre aga de dizolvare și lichidare a firmei are ca efect și încetarea raporturilor de muncă ale angajaților societății. În acest sens, societatea trebuie să respecte prevederile legale privind concedierea colectivă sau individuală a angajaților, în funcție de situație. Societatea trebuie să informeze angajații despre motivele și termenele concedierii, să le plătească salariile și drepturile bănești cuvenite, să le elibereze documentele necesare pentru înregistrarea la agențiile pentru ocuparea forței de muncă și pentru obținerea indemnizațiilor de șomaj.
-Ce riscuri are o hotărâre aga de dizolvare și lichidare a firmei?
-O hotărâre aga de dizolvare și lichidare a firmei nu este lipsită de riscuri pentru asociați sau acționari. Unul dintre riscurile majore este acela al răspunderii solidare și nelimitate a asociaților sau acționarilor pentru datoriile societății, în cazul în care acestea nu sunt stinse sau regularizate în termen de 6 luni de la data publicării hotărârii de dizolvare și lichidare. Un alt risc este acela al contestației hotărârii de dizolvare și lichidare de către creditorii societății sau de către alte persoane interesate, care pot solicita instanței să anuleze hotărârea sau să dispună alte măsuri pentru protejarea drepturilor lor.
-Ce alternative are o hotărâre aga de dizolvare și lichidare a firmei?
-O hotărâre aga de dizolvare și lichidare a firmei nu este singura soluție pentru încetarea activității unei societăți comerciale. Există și alte alternative care pot fi mai avantajoase sau mai convenabile pentru asociați sau acționari, cum ar fi: cesiunea părților sociale sau a acțiunilor, fuziunea cu o altă societate, divizarea societății în una sau mai multe societăți noi, transformarea formei juridice a societății, suspendarea temporară a activității societății etc. Fiecare dintre aceste alternative are însă propriile condiții legale și implicații fiscale, juridice și contabile, care trebuie analizate cu atenție înainte de a lua o decizie.
-Ce avantaje are o hotărâre aga de dizolvare și lichidare a firmei?
-O hotărâre aga de dizolvare și lichidare a firmei poate avea și unele avantaje pentru asociați sau acționari, în funcție de situația concretă a societății. Unul dintre avantaje este acela al simplificării procedurii de încetare a activității societății, care nu mai necesită numirea unui lichidator și efectuarea unor operațiuni complexe de lichidare. Un alt avantaj este acela al evitării unor costuri suplimentare cu plata unui lichidator, a unor taxe și impozite, a unor cheltuieli administrative etc. Un alt avantaj este acela al recuperării rapide a capitalului investit în societate și al repartizării acestuia între asociați sau acționari.
-Ce dezavantaje are o hotărâre aga de dizolvare și lichidare a firmei?
-O hotărâre aga de dizolvare și lichidare a firmei poate avea și unele dezavantaje pentru asociați sau acționari, în funcție de situația concretă a societății. Unul dintre dezavantaje este acela al pierderii oportunității de a continua activitatea societății sau de a o redresa prin alte măsuri, cum ar fi restructurarea, reorganizarea, reorientarea etc. Un alt dezavantaj este acela al posibilelor pierderi financiare generate de vânzarea activelor societății la prețuri subevaluate sau de neîncasarea unor creanțe. Un alt dezavantaj este acela al posibilelor litigii cu creditorii sau cu alte persoane interesate, care pot contesta hotărârea de dizolvare și lichidare sau pot solicita plata unor datorii.
-Ce condiții trebuie îndeplinite pentru o hotărâre aga de dizolvare și lichidare a firmei?
-O hotărâre aga de dizolvare și lichidare a firmei trebuie să respecte anumite condiții legale și statutare pentru a fi valabilă și eficientă. Printre aceste condiții se numără: convocarea și întrunirea adunării generale a asociaților sau acționarilor în conformitate cu legea și actul constitutiv; adoptarea hotărârii cu majoritatea prevăzută de lege sau de actul constitutiv; publicarea hotărârii în Monitorul Oficial al României, Partea a IV-a; depunerea hotărârii la Oficiul Registrului Comerțului pentru înregistrarea mențiunilor privind dizolvarea și lichidarea; efectuarea tuturor operațiunilor necesare pentru lichidarea patrimoniului societății și repartizarea activului rămas între asociați sau acționari; solicitarea radierii societății din registrul comerțului și din alte registre publice.
-Ce termene trebuie respectate pentru o hotărâre aga de dizolvare și lichidare a firmei?
-O hotărâre aga de dizolvare și lichidare a firmei trebuie să respecte anumite termene legale pentru a fi valabilă și eficientă. Printre aceste termene se numără: termenul de convocare a adunării generale a asociaților sau acționarilor, care nu poate fi mai mic de 15 zile de la data publicării convocatorului în Monitorul Oficial al României, Partea a IV-a; termenul de publicare a hotărârii de dizolvare și lichidare, care nu poate fi mai mare de 15 zile de la data adoptării hotărârii; termenul de depunere a hotărârii la Oficiul Registrului Comerțului, care nu poate fi mai mare de 15 zile de la data publicării hotărârii; termenul de stingere sau regularizare a datoriilor societății, care nu poate fi mai mic de 6 luni de la data publicării hotărârii; termenul de solicitare a radierii societății din registrul comerțului și din alte registre publice, care nu poate fi mai mare de 3 ani de la data publicării hotărârii.
-Ce documente trebuie depuse pentru o hotărâre aga de dizolvare și lichidare a firmei?
-O hotărâre aga de dizolvare și lichidare a firmei presupune depunerea unor documente la Oficiul Registrului Comerțului pentru înregistrarea mențiunilor privind dizolvarea și lichidarea. Printre aceste documente se numără: cererea de înregistrare, în original; hotărârea adunării generale a asociaților sau acționarilor privind dizolvarea și lichidarea societății, în original; situația financiară de lichidare și repartizare a activului societății, în copie; raportul cenzorilor sau al auditorilor financiari, dacă este cazul, în original; darea de seamă a administratorilor sau directoratului, dacă este cazul, în original; dovada plății tarifului legal; alte documente specifice în funcție de forma juridică a societății.
-Ce pași trebuie urmați pentru o hotărâre aga de dizolvare și lichidare a firmei?
-O hotărâre aga de dizolvare și lichidare a firmei presupune urmarea unor pași legali pentru încetarea activității societății. Printre acești pași se numără: convocarea și întrunirea adunării generale a asociaților sau acționarilor pentru adoptarea hotărârii de dizolvare și lichidare; publicarea hotărârii în Monitorul Oficial al României, Partea a IV-a; depunerea hotărârii la Oficiul Registrului Comerțului pentru înregistrarea mențiunilor privind dizolvarea și lichidarea; efectuarea tuturor operațiunilor necesare pentru lichidarea patrimoniului societății și repartizarea activului rămas între asociați sau acționari; solicitarea radierii societății din registrul comerțului și din alte registre publice.
-Concluzie
-O hotărâre aga de dizolvare și lichidare a firmei este o soluție legală pentru încetarea activității unei societăți comerciale, care presupune adoptarea unei hotărâri de către asociați sau acționari, publicarea acesteia în Monitorul Oficial al României, Partea a IV-a, depunerea acesteia la Oficiul Registrului Comerțului pentru înregistrarea mențiunilor privind dizolvarea și lichidarea, efectuarea tuturor operațiunilor necesare pentru lichidarea patrimoniului societății și repartizarea activului rămas între asociați sau acționari și solicitarea radierii societății din registrul comerțului și din alte registre publice. Această soluție poate avea avantaje și dezavantaje, în funcție de situația concretă a societății, și trebuie să respecte anumite condiții și termene legale pentru a fi valabilă și eficientă.
3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/IOMEGA DISCOVERY HOME TOOL DOWNLOAD 2021.md b/spaces/lincquiQcaudo/Top-20-Diffusion/IOMEGA DISCOVERY HOME TOOL DOWNLOAD 2021.md
deleted file mode 100644
index 7975351b9ab6d0035f3af16eb62f587212d38af4..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/IOMEGA DISCOVERY HOME TOOL DOWNLOAD 2021.md
+++ /dev/null
@@ -1,6 +0,0 @@
-IOMEGA DISCOVERY HOME TOOL DOWNLOAD Download Zip ✺✺✺ https://bytlly.com/2uGylw
-
-I need to find a discovery tool for the storecenter but when I go to http://iomega-discovery-tool-home.software.informer.com/download/ - it just ... 1fdad05405
-
-
-
diff --git a/spaces/lllqqq/so-vits-svc-models-pcr/diffusion/solver.py b/spaces/lllqqq/so-vits-svc-models-pcr/diffusion/solver.py
deleted file mode 100644
index aaf0b21591b42fa903424f8d44fef88d7d791e57..0000000000000000000000000000000000000000
--- a/spaces/lllqqq/so-vits-svc-models-pcr/diffusion/solver.py
+++ /dev/null
@@ -1,195 +0,0 @@
-import os
-import time
-import numpy as np
-import torch
-import librosa
-from diffusion.logger.saver import Saver
-from diffusion.logger import utils
-from torch import autocast
-from torch.cuda.amp import GradScaler
-
-def test(args, model, vocoder, loader_test, saver):
- print(' [*] testing...')
- model.eval()
-
- # losses
- test_loss = 0.
-
- # initialization
- num_batches = len(loader_test)
- rtf_all = []
-
- # run
- with torch.no_grad():
- for bidx, data in enumerate(loader_test):
- fn = data['name'][0].split("/")[-1]
- speaker = data['name'][0].split("/")[-2]
- print('--------')
- print('{}/{} - {}'.format(bidx, num_batches, fn))
-
- # unpack data
- for k in data.keys():
- if not k.startswith('name'):
- data[k] = data[k].to(args.device)
- print('>>', data['name'][0])
-
- # forward
- st_time = time.time()
- mel = model(
- data['units'],
- data['f0'],
- data['volume'],
- data['spk_id'],
- gt_spec=None,
- infer=True,
- infer_speedup=args.infer.speedup,
- method=args.infer.method)
- signal = vocoder.infer(mel, data['f0'])
- ed_time = time.time()
-
- # RTF
- run_time = ed_time - st_time
- song_time = signal.shape[-1] / args.data.sampling_rate
- rtf = run_time / song_time
- print('RTF: {} | {} / {}'.format(rtf, run_time, song_time))
- rtf_all.append(rtf)
-
- # loss
- for i in range(args.train.batch_size):
- loss = model(
- data['units'],
- data['f0'],
- data['volume'],
- data['spk_id'],
- gt_spec=data['mel'],
- infer=False)
- test_loss += loss.item()
-
- # log mel
- saver.log_spec(f"{speaker}_{fn}.wav", data['mel'], mel)
-
-            # log audio (ground truth vs. prediction)
- path_audio = data['name_ext'][0]
- audio, sr = librosa.load(path_audio, sr=args.data.sampling_rate)
- if len(audio.shape) > 1:
- audio = librosa.to_mono(audio)
- audio = torch.from_numpy(audio).unsqueeze(0).to(signal)
- saver.log_audio({f"{speaker}_{fn}_gt.wav": audio,f"{speaker}_{fn}_pred.wav": signal})
- # report
- test_loss /= args.train.batch_size
- test_loss /= num_batches
-
- # check
- print(' [test_loss] test_loss:', test_loss)
- print(' Real Time Factor', np.mean(rtf_all))
- return test_loss
-
-
-def train(args, initial_global_step, model, optimizer, scheduler, vocoder, loader_train, loader_test):
- # saver
- saver = Saver(args, initial_global_step=initial_global_step)
-
- # model size
- params_count = utils.get_network_paras_amount({'model': model})
- saver.log_info('--- model size ---')
- saver.log_info(params_count)
-
- # run
- num_batches = len(loader_train)
- model.train()
- saver.log_info('======= start training =======')
- scaler = GradScaler()
- if args.train.amp_dtype == 'fp32':
- dtype = torch.float32
- elif args.train.amp_dtype == 'fp16':
- dtype = torch.float16
- elif args.train.amp_dtype == 'bf16':
- dtype = torch.bfloat16
- else:
- raise ValueError(' [x] Unknown amp_dtype: ' + args.train.amp_dtype)
- saver.log_info("epoch|batch_idx/num_batches|output_dir|batch/s|lr|time|step")
- for epoch in range(args.train.epochs):
- for batch_idx, data in enumerate(loader_train):
- saver.global_step_increment()
- optimizer.zero_grad()
-
- # unpack data
- for k in data.keys():
- if not k.startswith('name'):
- data[k] = data[k].to(args.device)
-
- # forward
- if dtype == torch.float32:
- loss = model(data['units'].float(), data['f0'], data['volume'], data['spk_id'],
- aug_shift = data['aug_shift'], gt_spec=data['mel'].float(), infer=False)
- else:
- with autocast(device_type=args.device, dtype=dtype):
- loss = model(data['units'], data['f0'], data['volume'], data['spk_id'],
- aug_shift = data['aug_shift'], gt_spec=data['mel'], infer=False)
-
- # handle nan loss
- if torch.isnan(loss):
- raise ValueError(' [x] nan loss ')
- else:
- # backpropagate
- if dtype == torch.float32:
- loss.backward()
- optimizer.step()
- else:
- scaler.scale(loss).backward()
- scaler.step(optimizer)
- scaler.update()
- scheduler.step()
-
- # log loss
- if saver.global_step % args.train.interval_log == 0:
- current_lr = optimizer.param_groups[0]['lr']
- saver.log_info(
- 'epoch: {} | {:3d}/{:3d} | {} | batch/s: {:.2f} | lr: {:.6} | loss: {:.3f} | time: {} | step: {}'.format(
- epoch,
- batch_idx,
- num_batches,
- args.env.expdir,
- args.train.interval_log/saver.get_interval_time(),
- current_lr,
- loss.item(),
- saver.get_total_time(),
- saver.global_step
- )
- )
-
- saver.log_value({
- 'train/loss': loss.item()
- })
-
- saver.log_value({
- 'train/lr': current_lr
- })
-
- # validation
- if saver.global_step % args.train.interval_val == 0:
- optimizer_save = optimizer if args.train.save_opt else None
-
- # save latest
- saver.save_model(model, optimizer_save, postfix=f'{saver.global_step}')
- last_val_step = saver.global_step - args.train.interval_val
- if last_val_step % args.train.interval_force_save != 0:
- saver.delete_model(postfix=f'{last_val_step}')
-
- # run testing set
- test_loss = test(args, model, vocoder, loader_test, saver)
-
- # log loss
- saver.log_info(
- ' --- --- \nloss: {:.3f}. '.format(
- test_loss,
- )
- )
-
- saver.log_value({
- 'validation/loss': test_loss
- })
-
- model.train()
-
-
diff --git a/spaces/luisoala/glide-test/glide_text2im/__init__.py b/spaces/luisoala/glide-test/glide_text2im/__init__.py
deleted file mode 100644
index a3c197bb932cfc9cf3447b7a3b52ce76db262fc9..0000000000000000000000000000000000000000
--- a/spaces/luisoala/glide-test/glide_text2im/__init__.py
+++ /dev/null
@@ -1,3 +0,0 @@
-"""
-A codebase for performing model inference with a text-conditional diffusion model.
-"""
diff --git a/spaces/lychees/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/diffusion_models/controlnet/controlnet_inpaint/controlnet_inpaint_hed.py b/spaces/lychees/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/diffusion_models/controlnet/controlnet_inpaint/controlnet_inpaint_hed.py
deleted file mode 100644
index 590cb5db9213b22d00ce0e650a3e632725213a67..0000000000000000000000000000000000000000
--- a/spaces/lychees/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/diffusion_models/controlnet/controlnet_inpaint/controlnet_inpaint_hed.py
+++ /dev/null
@@ -1,223 +0,0 @@
-import gradio as gr
-import numpy as np
-import torch
-from controlnet_aux import HEDdetector
-from diffusers import ControlNetModel
-from PIL import Image
-
-from diffusion_webui.diffusion_models.controlnet.controlnet_inpaint.pipeline_stable_diffusion_controlnet_inpaint import (
- StableDiffusionControlNetInpaintPipeline,
-)
-from diffusion_webui.utils.model_list import (
- controlnet_hed_model_list,
- stable_inpiant_model_list,
-)
-from diffusion_webui.utils.scheduler_list import (
- SCHEDULER_LIST,
- get_scheduler_list,
-)
-
-# https://github.com/mikonvergence/ControlNetInpaint
-
-
-class StableDiffusionControlNetInpaintHedGenerator:
- def __init__(self):
- self.pipe = None
-
- def load_model(self, stable_model_path, controlnet_model_path, scheduler):
- if self.pipe is None:
- controlnet = ControlNetModel.from_pretrained(
- controlnet_model_path, torch_dtype=torch.float16
- )
- self.pipe = (
- StableDiffusionControlNetInpaintPipeline.from_pretrained(
- pretrained_model_name_or_path=stable_model_path,
- controlnet=controlnet,
- safety_checker=None,
- torch_dtype=torch.float16,
- )
- )
-
- self.pipe = get_scheduler_list(pipe=self.pipe, scheduler=scheduler)
- self.pipe.to("cuda")
- self.pipe.enable_xformers_memory_efficient_attention()
-
- return self.pipe
-
- def load_image(self, image_path):
- image = np.array(image_path)
- image = Image.fromarray(image)
- return image
-
- def controlnet_inpaint_hed(self, image_path: str):
- hed = HEDdetector.from_pretrained("lllyasviel/ControlNet")
- image = image_path["image"].convert("RGB").resize((512, 512))
- image = np.array(image)
- image = hed(image)
-
- return image
-
- def generate_image(
- self,
- image_path: str,
- stable_model_path: str,
- controlnet_model_path: str,
- prompt: str,
- negative_prompt: str,
- num_images_per_prompt: int,
- guidance_scale: int,
- num_inference_step: int,
- controlnet_conditioning_scale: int,
- scheduler: str,
- seed_generator: int,
- ):
- normal_image = image_path["image"].convert("RGB").resize((512, 512))
- mask_image = image_path["mask"].convert("RGB").resize((512, 512))
-
- normal_image = self.load_image(image_path=normal_image)
- mask_image = self.load_image(image_path=mask_image)
-
- control_image = self.controlnet_inpaint_hed(image_path=image_path)
-
- pipe = self.load_model(
- stable_model_path=stable_model_path,
- controlnet_model_path=controlnet_model_path,
- scheduler=scheduler,
- )
-
- if seed_generator == 0:
- random_seed = torch.randint(0, 1000000, (1,))
- generator = torch.manual_seed(random_seed)
- else:
- generator = torch.manual_seed(seed_generator)
-
- output = pipe(
- prompt=prompt,
- image=normal_image,
- mask_image=mask_image,
- control_image=control_image,
- negative_prompt=negative_prompt,
- num_images_per_prompt=num_images_per_prompt,
- num_inference_steps=num_inference_step,
- guidance_scale=guidance_scale,
- controlnet_conditioning_scale=controlnet_conditioning_scale,
- generator=generator,
- ).images
-
- return output
-
- def app():
- with gr.Blocks():
- with gr.Row():
- with gr.Column():
- controlnet_hed_inpaint_image_file = gr.Image(
- source="upload",
- tool="sketch",
- elem_id="image_upload",
- type="pil",
- label="Upload",
- )
-
- controlnet_hed_inpaint_prompt = gr.Textbox(
- lines=1, placeholder="Prompt", show_label=False
- )
-
- controlnet_hed_inpaint_negative_prompt = gr.Textbox(
- lines=1,
- show_label=False,
- placeholder="Negative Prompt",
- )
- with gr.Row():
- with gr.Column():
- controlnet_hed_inpaint_stable_model_id = (
- gr.Dropdown(
- choices=stable_inpiant_model_list,
- value=stable_inpiant_model_list[0],
- label="Stable Model Id",
- )
- )
-
- controlnet_hed_inpaint_guidance_scale = gr.Slider(
- minimum=0.1,
- maximum=15,
- step=0.1,
- value=7.5,
- label="Guidance Scale",
- )
-
- controlnet_hed_inpaint_num_inference_step = (
- gr.Slider(
- minimum=1,
- maximum=100,
- step=1,
- value=50,
- label="Num Inference Step",
- )
- )
- controlnet_hed_inpaint_num_images_per_prompt = (
- gr.Slider(
- minimum=1,
- maximum=10,
- step=1,
- value=1,
- label="Number Of Images",
- )
- )
- with gr.Row():
- with gr.Column():
- controlnet_hed_inpaint_model_id = gr.Dropdown(
- choices=controlnet_hed_model_list,
- value=controlnet_hed_model_list[0],
- label="Controlnet Model Id",
- )
- controlnet_hed_inpaint_scheduler = gr.Dropdown(
- choices=SCHEDULER_LIST,
- value=SCHEDULER_LIST[0],
- label="Scheduler",
- )
- controlnet_hed_inpaint_controlnet_conditioning_scale = gr.Slider(
- minimum=0.1,
- maximum=1.0,
- step=0.1,
- value=0.5,
- label="Controlnet Conditioning Scale",
- )
-
- controlnet_hed_inpaint_seed_generator = (
- gr.Slider(
- minimum=0,
- maximum=1000000,
- step=1,
- value=0,
- label="Seed Generator",
- )
- )
-
- controlnet_hed_inpaint_predict = gr.Button(
- value="Generator"
- )
-
- with gr.Column():
- output_image = gr.Gallery(
- label="Generated images",
- show_label=False,
- elem_id="gallery",
- ).style(grid=(1, 2))
-
- controlnet_hed_inpaint_predict.click(
- fn=StableDiffusionControlNetInpaintHedGenerator().generate_image,
- inputs=[
- controlnet_hed_inpaint_image_file,
- controlnet_hed_inpaint_stable_model_id,
- controlnet_hed_inpaint_model_id,
- controlnet_hed_inpaint_prompt,
- controlnet_hed_inpaint_negative_prompt,
- controlnet_hed_inpaint_num_images_per_prompt,
- controlnet_hed_inpaint_guidance_scale,
- controlnet_hed_inpaint_num_inference_step,
- controlnet_hed_inpaint_controlnet_conditioning_scale,
- controlnet_hed_inpaint_scheduler,
- controlnet_hed_inpaint_seed_generator,
- ],
- outputs=[output_image],
- )
diff --git a/spaces/m3hrdadfi/gpt2-persian-qa/utils.py b/spaces/m3hrdadfi/gpt2-persian-qa/utils.py
deleted file mode 100644
index e610786bbf28d2d2cd1f622fdd1b563c9ffe1e5b..0000000000000000000000000000000000000000
--- a/spaces/m3hrdadfi/gpt2-persian-qa/utils.py
+++ /dev/null
@@ -1,35 +0,0 @@
-import streamlit as st
-import json
-from PIL import Image
-
-
-def load_image(image_path, image_resize=None):
- image = Image.open(image_path)
- if isinstance(image_resize, tuple):
-        image = image.resize(image_resize)  # resize returns a new image; keep the result
- return image
-
-
-def load_text(text_path):
- text = ''
- with open(text_path) as f:
- text = f.read()
-
- return text
-
-
-def load_json(json_path):
- jdata = ''
- with open(json_path) as f:
- jdata = json.load(f)
-
- return jdata
-
-
-def local_css(css_path):
- with open(css_path) as f:
-        st.markdown(f'<style>{f.read()}</style>', unsafe_allow_html=True)
-
-
-def remote_css(css_url):
-    st.markdown(f'<link href="{css_url}" rel="stylesheet"/>', unsafe_allow_html=True)
diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/detail/generic/scatter.h b/spaces/ma-xu/LIVE/thrust/thrust/system/detail/generic/scatter.h
deleted file mode 100644
index 4a65a4cc01ea23211330192f69999532f6d60575..0000000000000000000000000000000000000000
--- a/spaces/ma-xu/LIVE/thrust/thrust/system/detail/generic/scatter.h
+++ /dev/null
@@ -1,81 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <thrust/system/detail/generic/tag.h>
-
-namespace thrust
-{
-namespace system
-{
-namespace detail
-{
-namespace generic
-{
-
-
-template<typename DerivedPolicy,
-         typename InputIterator1,
-         typename InputIterator2,
-         typename RandomAccessIterator>
-__host__ __device__
-  void scatter(thrust::execution_policy<DerivedPolicy> &exec,
- InputIterator1 first,
- InputIterator1 last,
- InputIterator2 map,
- RandomAccessIterator output);
-
-
-template<typename DerivedPolicy,
-         typename InputIterator1,
-         typename InputIterator2,
-         typename InputIterator3,
-         typename RandomAccessIterator>
-__host__ __device__
-  void scatter_if(thrust::execution_policy<DerivedPolicy> &exec,
- InputIterator1 first,
- InputIterator1 last,
- InputIterator2 map,
- InputIterator3 stencil,
- RandomAccessIterator output);
-
-
-template<typename DerivedPolicy,
-         typename InputIterator1,
-         typename InputIterator2,
-         typename InputIterator3,
-         typename RandomAccessIterator,
-         typename Predicate>
-__host__ __device__
-  void scatter_if(thrust::execution_policy<DerivedPolicy> &exec,
- InputIterator1 first,
- InputIterator1 last,
- InputIterator2 map,
- InputIterator3 stencil,
- RandomAccessIterator output,
- Predicate pred);
-
-
-} // end namespace generic
-} // end namespace detail
-} // end namespace system
-} // end namespace thrust
-
-#include <thrust/system/detail/generic/scatter.inl>
-
diff --git a/spaces/macaodha/batdetect2/bat_detect/train/readme.md b/spaces/macaodha/batdetect2/bat_detect/train/readme.md
deleted file mode 100644
index e406c7dd49235080d23b4235e90d606394e8dd85..0000000000000000000000000000000000000000
--- a/spaces/macaodha/batdetect2/bat_detect/train/readme.md
+++ /dev/null
@@ -1,18 +0,0 @@
-## How to train a model from scratch
-Run `python train_model.py data_dir annotation_dir`, e.g.
-`python train_model.py /data1/bat_data/data/ /data1/bat_data/annotations/anns/`
-
-More comprehensive instructions are provided in the finetune directory.
-
-
-## Training on your own data
-You can use the finetuning scripts to finetune from an existing training dataset; follow the instructions in the `../finetune/` directory.
-
-Alternatively, you can train from scratch. First, you will need to create your own annotation file (like in the finetune example), and then you will need to edit `train_split.py` to add your new dataset and specify which combination of files you want to train on.
-
-Note, if training from scratch and you want to include the existing data, you may need to set all the class names to the generic class name ('Bat') so that the existing species are not added to your model, but instead just used to help perform the bat/not bat task.
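-
-Putting the last two points together, a loose sketch of what such an addition could look like is shown below. It is illustrative only: the function name, dict keys, and layout are invented for this example, and the real `train_split.py` may organise things differently.
-
-```python
-# Illustrative sketch only -- not the actual batdetect2 train_split.py structure.
-def my_dataset_split(ann_dir, wav_dir):
-    # point the split at your own annotation and audio folders
-    return [{
-        "dataset_name": "my_bat_recordings",
-        "ann_path": ann_dir + "my_bat_recordings/",
-        "wav_path": wav_dir + "my_bat_recordings/",
-        "is_test": False,
-    }]
-
-def collapse_to_generic_class(annotations):
-    # when mixing in the existing data, keep it only for the bat / not-bat task
-    for ann in annotations:
-        ann["class_name"] = "Bat"
-    return annotations
-```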
-
-## Additional notes
-Having blank files with no bats in them is also useful; just make sure that the annotation files list them as annotated (i.e. `is_annotated=True`).
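-
-As a rough illustration, the entry for a blank recording just needs the annotated flag set and an empty list of calls. The field names below are placeholders rather than the exact batdetect2 schema; mirror the files produced in the finetune example for the real format.
-
-```python
-import json
-
-# Hypothetical annotation for a recording with no bat calls in it.
-# Field names are illustrative, not the exact schema.
-blank_entry = {
-    "id": "site_a_night_3.wav",
-    "is_annotated": True,  # reviewed, and confirmed to contain nothing
-    "annotation": [],      # no calls
-}
-
-with open("site_a_night_3.wav.json", "w") as f:
-    json.dump(blank_entry, f, indent=2)
-```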
-
-Training will be slow without a GPU.
diff --git a/spaces/maksymalist/junk-judge/models.py b/spaces/maksymalist/junk-judge/models.py
deleted file mode 100644
index 3cb0408fd206252add251e57a44a2546049b0668..0000000000000000000000000000000000000000
--- a/spaces/maksymalist/junk-judge/models.py
+++ /dev/null
@@ -1,88 +0,0 @@
-import torch.nn as nn
-from torchvision import models
-from torchvision.models import ConvNeXt_Base_Weights, ConvNeXt_Tiny_Weights
-
-def CONV_NN(num_classes):
-
- # load the pretrained model because what's the point of re-inventing the wheel??
- model = models.convnext_tiny(weights=ConvNeXt_Tiny_Weights.DEFAULT) # in the new version of PyTorch, you can't use pretrained=True
-
- # Freeze all the layers because we only want to train the new layers
-
- for param in model.parameters():
- # disable gradients because we don't need to include pretrained data in the backprop
- param.requires_grad = False
-
-
- # Replace the last layer with a MLP mixer with dropout
-
- model.classifier[-1] = nn.Sequential(
- nn.Linear(768, 256),
- nn.ReLU(),
- nn.Dropout(0.5),
- nn.Linear(256, num_classes)
- )
-
- return model
-
-def CONV_NN_V2(num_classes):
-
- # load the pretrained model because what's the point of re-inventing the wheel??
- model = models.convnext_base(weights=ConvNeXt_Base_Weights.DEFAULT) # in the new version of PyTorch, you can't use pretrained=True
-
- # Freeze all the layers because we only want to train the new layers
-
- for param in model.parameters():
- # disable gradients because we don't need to include pretrained data in the backprop
- param.requires_grad = False
-
-
- # Replace the last layer with a MLP mixer with dropout
-
- model.classifier[-1] = nn.Sequential(
- nn.Linear(1024, 256),
- nn.ReLU(),
- nn.Dropout(0.2),
- nn.Linear(256, num_classes)
- )
-
- return model
-
-
-class MorpheusModel(nn.Module):
- def __init__(self, num_inputs, num_outputs):
- super(MorpheusModel, self).__init__()
-
- self.l1 = nn.Linear(num_inputs, 256)
- self.relu = nn.ReLU()
- self.l2 = nn.Linear(256, num_outputs)
-
-
- def forward(self, x):
- output = self.l1(x)
- output = self.relu(output)
- output = self.l2(output)
- return output
-
-def CONV_NN_V2(num_classes):
-
- # load the pretrained model because what's the point of re-inventing the wheel??
- model = models.convnext_base(weights=ConvNeXt_Base_Weights.DEFAULT) # in the new version of PyTorch, you can't use pretrained=True
-
- # Freeze all the layers because we only want to train the new layers
-
- for param in model.parameters():
- # disable gradients because we don't need to include pretrained data in the backprop
- param.requires_grad = False
-
-
- # Replace the last layer with a MLP mixer with dropout
-
- model.classifier[-1] = nn.Sequential(
- nn.Linear(1024, 256),
- nn.ReLU(),
- nn.Dropout(0.2),
- nn.Linear(256, num_classes)
- )
-
- return model
\ No newline at end of file
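The `CONV_NN` and `CONV_NN_V2` builders above follow the usual transfer-learning recipe: freeze the pretrained ConvNeXt backbone and train only the small MLP head that replaces the final classifier layer. A minimal usage sketch follows; it assumes the file above is importable as `models`, and the class count of 6 is an arbitrary placeholder rather than anything defined in this repo.

```python
import torch

from models import CONV_NN  # the file above, assumed importable for illustration

# num_classes=6 is a made-up example value
model = CONV_NN(num_classes=6)
model.eval()

# ConvNeXt-Tiny expects 3-channel images; 224x224 is the standard input size
dummy_batch = torch.randn(2, 3, 224, 224)
with torch.no_grad():
    logits = model(dummy_batch)
print(logits.shape)  # torch.Size([2, 6])

# Only the new head has requires_grad=True, so an optimizer for finetuning
# should be built from the trainable parameters only.
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-3)
```

Because the backbone was frozen before the head was swapped in, only the newly created layers receive gradient updates, which keeps finetuning cheap.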
diff --git a/spaces/mattricesound/RemFx/scripts/eval.sh b/spaces/mattricesound/RemFx/scripts/eval.sh
deleted file mode 100644
index 96c00190fc8b6a9724f0d86442c2eb7a6d4eeeee..0000000000000000000000000000000000000000
--- a/spaces/mattricesound/RemFx/scripts/eval.sh
+++ /dev/null
@@ -1,48 +0,0 @@
-#! /bin/bash
-
-# Example usage:
-# scripts/eval.sh remfx_detect 0-0
-# scripts/eval.sh distortion_aug 0-0 -ckpt logs/ckpts/2023-01-21-12-21-44
-# First 2 arguments are required, third argument is optional
-
-# Default value for the optional parameter
-ckpt_path=""
-export DATASET_ROOT=RemFX_eval_datasets
-# Function to display script usage
-function display_usage {
- echo "Usage: $0