diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Battlefield 2 Patch 1.51 HOT Crack.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Battlefield 2 Patch 1.51 HOT Crack.md
deleted file mode 100644
index 812e31fcbb5c41f8061f4fcf46e1d646a06c78df..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Battlefield 2 Patch 1.51 HOT Crack.md
+++ /dev/null
@@ -1,76 +0,0 @@
-
-
Battlefield 2 is one of the most popular first-person shooter games ever released. It was launched in 2005 by EA Games and DICE, and it features modern warfare scenarios with realistic weapons, vehicles, and maps. The game has a single-player mode, where you can play against bots, and a multiplayer mode, where you can join online servers and fight with or against other players.
-However, to enjoy the full potential of Battlefield 2, you need to update it to the latest version, which is patch 1.51. This patch was released in 2009, and it adds new content, fixes bugs, and improves performance. Some of the new features include two new maps (Operation Blue Pearl and Highway Tampa), two new booster packs (Euro Force and Armored Fury), widescreen support, improved server browser, and more.
-DOWNLOAD ⚡ https://byltly.com/2uKz9h
But what if you don't have a legal copy of Battlefield 2? Or what if you want to play on servers that are not supported by EA Games? In that case, you might need a crack for patch 1.51. A crack is a modified version of the game executable that bypasses the copy protection or the online authentication. With a crack, you can play Battlefield 2 without a CD or DVD, or without an internet connection.
-However, using a crack also has some risks and drawbacks. You might encounter compatibility issues, viruses, malware, or legal problems. You might also miss out on some features or updates that are only available for the official version of the game. Therefore, you should use a crack at your own discretion and responsibility.
-In this article, we will show you how to download, install, and play Battlefield 2 with patch 1.51 and crack. We will also give you some tips on how to enjoy the game with mods and cheats. Follow the steps below and get ready for some intense action on the battlefield.
-Before you can use a crack for patch 1.51, you need to install the patch itself. Here are the requirements and download links for patch 1.51:
-Once you have downloaded the patches, follow these instructions to install them:
-Now that you have installed patch 1.51, you can use a crack to play Battlefield 2 without any restrictions. However, before you do that, be aware of the risks and warnings:
- -If you are still willing to use a crack for patch 1.51, here are some sources and download links for the crack files:
-Once you have downloaded the crack files, follow these instructions to use them:
-Now that you have installed patch 1.51 and used a crack, you can play Battlefield 2 with more features and options. Here are some tips on how to play the game with patch 1.51 and crack:
-In single-player mode, you can play against bots on any map and mode that you want. You can also customize the number and difficulty of the bots, as well as other settings such as friendly fire, respawn time, or ticket ratio. To do that, you need to edit the AI files in your Battlefield 2 folder. You can find detailed instructions on how to do that [here].
-In multiplayer mode, you can join online servers and play with or against other players. However, not all servers will accept your cracked version of the game. Some servers might require the official version of the game, or a specific mod or patch. To find servers that match your version of the game, you can use a server browser such as [GameRanger] or [Battlelog.co]. These server browsers will show you the server name, map, mode, players, ping, and other information. You can also filter the servers by region, game type, password, or mod.
-If you want to enhance your gaming experience with patch 1.51 and crack, you can also use mods and cheats. Mods are modifications that add new content or change existing content in the game. Cheats are codes or programs that give you an unfair advantage in the game. However, be careful when using mods and cheats, as they might cause compatibility issues, errors, crashes, or performance problems. They might also get you banned from some online servers that do not allow them.
-Some of the most popular mods for Battlefield 2 are:
-Battlefield 2 is a great game that offers a lot of fun and excitement for fans of first-person shooter games. However, to enjoy the game to the fullest, you need to update it to patch 1.51, which adds new content, fixes bugs, and improves performance. If you don't have a legal copy of the game, or if you want to play on unsupported servers, you might need a crack for patch 1.51. A crack is a modified version of the game executable that bypasses the copy protection or the online authentication. However, using a crack also has some risks and drawbacks, such as viruses, malware, compatibility issues, or legal problems.
-In this article, we have shown you how to download, install, and play Battlefield 2 with patch 1.51 and crack. We have also given you some tips on how to play the game with mods and cheats. We hope that this article was helpful and informative for you. However, we also advise you to play the game without cheating, as it is more fun and rewarding that way. Cheating can also cause problems with your game or your account, so be careful and responsible.
-If you liked this article, please share it with your friends and fellow gamers. Also, feel free to leave a comment below and let us know what you think about Battlefield 2, patch 1.51, crack, mods, or cheats. We would love to hear from you and chat with you.
-Thank you for reading this article and happy gaming!
-Here are some frequently asked questions about Battlefield 2, patch 1.51, crack, mods, or cheats:
-A: Yes, Battlefield 2 is still playable in 2023. However, you might need to use a third-party server browser such as GameRanger or Battlelog.co to find online servers that are still active.
-A: Yes, patch 1.51 is the latest official version of Battlefield 2. It was released in 2009 by EA Games and DICE.
-A: Using a crack for patch 1.51 might violate the terms of service or the end-user license agreement of EA Games or DICE. It might also infringe the intellectual property rights of the game developers or publishers. Therefore, using a crack for patch 1.51 might be illegal in some countries or regions.
-A: Some of the best mods for Battlefield 2 are Project Reality, Forgotten Hope 2, AIX2 (Allied Intent Xtended), Point of Existence 2, and Eve of Destruction 2.
-A: You can get more help or support for Battlefield 2 by visiting the official website of EA Games or DICE, or by joining online forums or communities such as [Battlefield Forums], [Battlefield Reddit], or [Battlefield Wiki].
b2dd77e56bIf you are a fan of roguelike games, you have probably heard of Binding of Isaac, one of the most popular and challenging titles in the genre. But what if you want to spice up your gameplay with some cheats and hacks? In this article, we will show you how to download and use a cheat table for Binding of Isaac Repentance, the latest expansion of the game. But before we get into that, let's take a look at what Binding of Isaac is and why it is so fun and addictive.
-Download Zip >>> https://byltly.com/2uKxvA
Binding of Isaac is a roguelike game, which means that it is a game that features randomly generated levels, permadeath, and high difficulty. The game was created by Edmund McMillen and Florian Himsl, and was released in 2011. The game is inspired by McMillen's religious upbringing and personal experiences, as well as by classic games like The Legend of Zelda and Rogue.
-The game follows the story of Isaac, a young boy who escapes to his basement after his mother hears a voice from God telling her to sacrifice him. In the basement, Isaac encounters various monsters, bosses, items, secrets, and challenges. The game is played from a top-down perspective, and the player controls Isaac with the keyboard or a controller. The player can shoot tears at enemies, as well as use bombs, keys, coins, cards, pills, and other items. The player can also collect various power-ups that alter Isaac's appearance, stats, abilities, and interactions with the environment. The game has multiple endings, depending on the player's choices and actions.
-The original version of Binding of Isaac was made with Adobe Flash, which limited its performance and content. In 2014, McMillen released The Binding of Isaac: Rebirth, a remake of the game with a new engine, graphics, music, gameplay features, items, enemies, bosses, modes, secrets, and endings. Rebirth also introduced co-op multiplayer, allowing two players to play together on the same screen. Rebirth was followed by two expansions: The Binding of Isaac: Afterbirth in 2015, which added more content and features to the game; and The Binding of Isaac: Afterbirth+ in 2017, which added even more content and features, as well as mod support. In 2021, McMillen released The Binding of Isaac: Repentance, the final expansion for Rebirth, which added new content based on a fan-made mod called Antibirth, as well as new content created by McMillen himself. Repentance is considered by many fans to be the definitive version of Binding of Isaac.
-A cheat table is a file that contains a list of cheats or hacks for a specific game or application. A cheat table can be used with a program called Cheat Engine, which is a software that allows users to modify the memory and data of any running process on their computer. Cheat Engine can scan the memory for values that correspond to certain aspects of the game or application, such as health points, money, inventory items, etc. The user can then change these values to whatever they want, giving them an advantage or altering the gameplay in various ways.
-Using cheat tables can be fun and rewarding for some players who want to experiment with different possibilities or overcome difficult challenges in their games. For example, using a cheat table for Binding of Isaac Repentance can allow you to access all the items in the game without having to unlock them first; or give you infinite health or damage; or enable you to fly over obstacles; or change your character's appearance; or spawn any enemy or boss you want; or create custom rooms; or do many other things that are normally impossible or very hard to do in the game.
-However, using cheat tables also comes with some risks and drawbacks. For one thing, cheating can ruin the fun and satisfaction that comes from playing the game legitimately. It can also make the game too easy or boring for some players who enjoy the challenge and randomness that roguelike games offer. Moreover, cheating can cause glitches or crashes in some games or applications that are not designed to handle such modifications. And finally, cheating can get you banned from some online games or platforms that have anti-cheat measures or policies.
-Binding Of Isaac Rebirth Cheat Engine Table
-Binding Of Isaac Repentance Cheat Engine Table
-Binding Of Isaac Afterbirth Plus Cheat Engine Table
-Binding Of Isaac Rebirth Fearless Cheat Engine
-Binding Of Isaac Repentance Fearless Cheat Engine
-Binding Of Isaac Afterbirth Plus Fearless Cheat Engine
-Binding Of Isaac Rebirth Platinum God Cheat Sheet
-Binding Of Isaac Repentance Platinum God Cheat Sheet
-Binding Of Isaac Afterbirth Plus Platinum God Cheat Sheet
-Binding Of Isaac Rebirth Reddit Cheat Engine
-Binding Of Isaac Repentance Reddit Cheat Engine
-Binding Of Isaac Afterbirth Plus Reddit Cheat Engine
-Binding Of Isaac Rebirth Guided Hacking Cheat Engine Table
-Binding Of Isaac Repentance Guided Hacking Cheat Engine Table
-Binding Of Isaac Afterbirth Plus Guided Hacking Cheat Engine Table
-Binding Of Isaac Rebirth DLCs Cheat Engine Table Download
-Binding Of Isaac Repentance DLCs Cheat Engine Table Download
-Binding Of Isaac Afterbirth Plus DLCs Cheat Engine Table Download
-Binding Of Isaac Rebirth Steam Cheat Engine Table Download
-Binding Of Isaac Repentance Steam Cheat Engine Table Download
-Binding Of Isaac Afterbirth Plus Steam Cheat Engine Table Download
-Binding Of Isaac Rebirth Items Cheat Engine Table Download
-Binding Of Isaac Repentance Items Cheat Engine Table Download
-Binding Of Isaac Afterbirth Plus Items Cheat Engine Table Download
-Binding Of Isaac Rebirth Trinkets Cheat Engine Table Download
-Binding Of Isaac Repentance Trinkets Cheat Engine Table Download
-Binding Of Isaac Afterbirth Plus Trinkets Cheat Engine Table Download
-Binding Of Isaac Rebirth Consumables Cheat Engine Table Download
-Binding Of Isaac Repentance Consumables Cheat Engine Table Download
-Binding Of Isaac Afterbirth Plus Consumables Cheat Engine Table Download
-Binding Of Isaac Rebirth Mods Cheat Engine Table Download
-Binding Of Isaac Repentance Mods Cheat Engine Table Download
-Binding Of Isaac Afterbirth Plus Mods Cheat Engine Table Download
-Binding Of Isaac Rebirth Infinite Health Cheat Engine Table Download
-Binding Of Isaac Repentance Infinite Health Cheat Engine Table Download
-Binding Of Isaac Afterbirth Plus Infinite Health Cheat Engine Table Download
-Binding Of Isaac Rebirth Infinite Money Cheat Engine Table Download
-Binding Of Isaac Repentance Infinite Money Cheat Engine Table Download
-Binding Of Isaac Afterbirth Plus Infinite Money Cheat Engine Table Download
-Binding Of Isaac Rebirth Infinite Keys Cheat Engine Table Download
-Binding Of Isaac Repentance Infinite Keys Cheat Engine Table Download
-Binding Of Isaac Afterbirth Plus Infinite Keys Cheat Engine Table Download
-Binding Of Isaac Rebirth Infinite Bombs Cheat Engine Table Download
-Binding Of Isaac Repentance Infinite Bombs Cheat Engine Table Download
-Binding Of Isaac Afterbirth Plus Infinite Bombs Cheat Engine Table Download
-How To Use The Binding of Isaac: Rebirth + DLCs - FearLess Cheat ...
Cheating in games is not illegal per se (unless it involves hacking or stealing someone else's data), but it can be considered unethical or immoral by some people. Some people argue that cheating is unfair to other players who play by the rules; that it violates the developer's vision and intention for their game; that it harms the gaming industry by discouraging innovation and quality; that it disrespects the artistry and effort that goes into making games; that it encourages laziness and dishonesty among gamers; that it sets a bad example for younger generations; etc.
-On the other hand, some people defend cheating as a form of personal freedom and expression; that it enhances creativity and experimentation among gamers; that it adds variety and replay value to games; that it allows players to customize their gaming experience according to their preferences; that it challenges
0a6ba089ebGrand Theft Auto V (GTA V) is one of the most popular and successful video games of all time. Released in 2013 by Rockstar Games, GTA V is an open-world action-adventure game that lets you explore a fictional version of Los Angeles called Los Santos. You can play as one of three main characters: Michael, a retired bank robber; Franklin, a street hustler; or Trevor, a psychopathic drug dealer. You can also switch between them at any time and experience different aspects of their lives.
-GTA V is known for its stunning graphics, immersive gameplay, rich story, diverse missions, online multiplayer mode, and endless possibilities for fun and chaos. However, like any other game, GTA V is not perfect and may have some bugs, glitches, or performance issues that can affect your gaming experience. That's why Rockstar Games regularly releases patches and updates to fix these problems and improve the game.
-Download ⚙⚙⚙ https://byltly.com/2uKwC5
One of these patches is the patch fix v1.0.231.0 core x, which was released in 2022 by a group of modders called Core X. This patch fix is designed to enhance GTA V's performance, stability, graphics, loading times, compatibility, and security. It also adds new content, features, and improvements to the game that make it more enjoyable and realistic.
-In this article, we will tell you everything you need to know about this patch fix: how to install it, what's new in it, and how to enjoy it. So buckle up and get ready for a wild ride!
-Grand theft auto 5 patch fix v1.0.231.0 core x download
-GTA V patch fix v1.0.231.0 core x free download
-How to install Grand theft auto v patch fix v1.0.231.0 core x
-Grand theft auto v patch fix v1.0.231.0 core x crack
-GTA 5 patch fix v1.0.231.0 core x update
-Grand theft auto v patch fix v1.0.231.0 core x error
-GTA V patch fix v1.0.231.0 core x gameplay
-Grand theft auto 5 patch fix v1.0.231.0 core x review
-GTA 5 patch fix v1.0.231.0 core x mods
-Grand theft auto v patch fix v1.0.231.0 core x cheats
-GTA V patch fix v1.0.231.0 core x trainer
-Grand theft auto 5 patch fix v1.0.231.0 core x online
-GTA 5 patch fix v1.0.231.0 core x multiplayer
-Grand theft auto v patch fix v1.0.231.0 core x steam
-GTA V patch fix v1.0.231.0 core x torrent
-Grand theft auto 5 patch fix v1.0.231.0 core x skidrow
-GTA 5 patch fix v1.0.231.0 core x reloaded
-Grand theft auto v patch fix v1.0.231.0 core x codex
-GTA V patch fix v1.0.231.0 core x fitgirl
-Grand theft auto 5 patch fix v1.0.231.0 core x repack
-GTA 5 patch fix v1.0.231.0 core x nosteam
-Grand theft auto v patch fix v1.0.231.0 core x rg mechanics
-GTA V patch fix v1.o23l.o core x cpy
-Grand theft auto 5 patch fix vl.o23l.o core x plaza
-GTA 5 patch fix vl.o23l.o core x hoodlum
-Grand theft auto v patch fix vl.o23l.o core x razor191l
-GTA V patch fix vl.o23l.o core x prophet
-Grand theft auto 5 patch fix vl.o23l.o core x elamigos
-GTA 5 patch fix vl.o23l.o core x darksiders
-Grand theft auto v patch fix vl.o23l.o core x gog
-GTA V patch fix vl.o23l.o core x epic games
-Grand theft auto 5 patch fix vl.o23l.o core x rockstar games
-GTA 5 patch fix vl.o23l.o core x social club
-Grand theft auto v patch fix vl.o23l.o core x windows 10
-GTA V patch fix vl.o23l.o core x windows 7
-Grand theft auto 5 patch fix vl.o23l.o core x windows 8
-GTA 5 patch fix vl.o23l.o core x mac os
-Grand theft auto v patch fix vl.o23l.o core x linux
-GTA V patch fix vl.o23l.o core x android
-Grand theft auto 5 patch fix vl.o23l.o core x ios
-GTA 5 patch fix vl.o23l.o core x ps4
-Grand theft auto v patch fix vl.o23l.o core x ps3
-GTA V patch fix vl.o23l.o core x xbox one
-Grand theft auto 5 patch fix vl.o23l.o core x xbox 360
-GTA 5 patch fix vl.o23l.o core x switch
-Grand theft auto v patch fix vl.o23l.o core x wii u
-GTA V patch fix vl.o23l.o core x vr
-Grand theft auto 5 patch fix vl.o23l.o core x oculus rift
-GTA 5 patch fix vl.o23l.o core x htc vive
-Grand theft auto v patch fix vl.o23l.o core x valve index
Installing the patch fix v1.0.231.0 core x is not very difficult, but it does require some attention and care. Here are the steps you need to follow:
-Note: If you encounter any errors or issues during or after installation, such as missing files, corrupted data, crashes, freezes, etc., you may need to verify your game files integrity using Steam/Rockstar Games Launcher or reinstall GTA V completely.
-The patch fix v1.0.231.0 core x brings a lot of new content, features and improvements to GTA V that make it more fun and realistic than ever before. Here are some of them:
-Now that you have installed the patch fix v1.0.231.0 core x and learned about its new content, features, and improvements, you may wonder how to enjoy it fully. Here are some tips and tricks that can help you:
-The patch fix v1.0.231.0 core x is a great addition to GTA V that enhances its performance, stability, graphics, loading times, compatibility, and security. It also adds new content, features, and improvements to the game that make it more fun and realistic than ever before. You can install the patch fix easily and enjoy it fully by joining or creating an MC, trying out new vehicles, exploring new properties, and experimenting with new features. So what are you waiting for? Download the patch fix today and experience GTA V like never before!
-Welcome to MovieMora.com with the new address Bookmark the URL, because you don't have to search to another place anymore to freely watch and download the movie Baghban. Direct link for downloading or online streaming movie Baghban on your mobile phone or laptop.
-Download File >> https://imgfil.com/2uxYNc
If you are looking for a fun and unique simulation game that lets you experience the life of a Japanese high school student, you might want to check out Sakura School Simulator. This game is developed by Garusoft Development Inc., a Japanese indie game studio, and has been downloaded over 100 million times on the Google Play Store. However, did you know that there are different versions of this game available for different regions? And that the original Japanese version has some advantages over the others? In this article, we will tell you everything you need to know about Sakura School Simulator, why you might want to download the Japanese version, and how to do it. Read on to find out more!
-Download Zip ⇒⇒⇒ https://urlin.us/2uSZJ6
Sakura School Simulator is a simulation game that lets you create your own character and explore a fictional town called Sakura. You can attend school, make friends, fall in love, join clubs, fight enemies, borrow weapons from the yakuza, fly around with a jetpack, and much more. The game has no end or goal, so you can play as you like and create your own scenarios. The game also features a lot of customization options for your character's appearance, clothes, accessories, hairstyles, etc. You can also control and change up to four characters in the same stage.
-The game is categorized as a "school simulator", but it also incorporates elements from other genres such as action, adventure, comedy, romance, fantasy, and horror. The game has a lot of humor and references to Japanese culture and anime. The game also has no blood or death, so even if you get attacked or stunned by enemies, you will wake up the next day and continue your adventure.
-One of the main reasons why you might want to download the Japanese version of Sakura School Simulator is that it has more content and updates than the other versions. The Japanese version is the original version of the game, so it gets updated more frequently and receives new features and improvements before the other versions. For example, some of the recent updates in the Japanese version include new locations such as a haunted house, a hospital, a shrine, a temple, etc., new characters such as ghosts, zombies, ninjas, monks, etc., new items such as masks, hats, glasses, etc., new vehicles such as bikes, cars, helicopters, etc., new weapons such as swords, guns, bombs, etc., new animations such as dancing, singing, playing instruments, etc., new interactions such as kissing, hugging, holding hands, etc., and much more.
-Another reason why you might want to download the Japanese version of Sakura School Simulator is that it has better graphics and performance than the other versions. The Japanese version has
higher resolution and smoother frame rate than the other versions. The Japanese version also has more options to adjust the graphics quality and performance according to your device's specifications. You can choose from low, medium, high, or ultra settings for the graphics quality, and from 30, 60, or 120 fps for the frame rate. The Japanese version also has less bugs and glitches than the other versions, as it is more stable and optimized.
-A third reason why you might want to download the Japanese version of Sakura School Simulator is that it offers a more authentic and immersive experience of Japanese culture and school life. The Japanese version has more dialogue and text in Japanese, which adds to the realism and atmosphere of the game. You can also learn some Japanese words and phrases by playing the game, as the game has a built-in dictionary that explains the meaning and pronunciation of some words. The Japanese version also has more details and features that reflect the Japanese culture and school life, such as school uniforms, school rules, school events, festivals, holidays, food, music, etc. You can also interact with more characters that have different personalities and backgrounds, such as teachers, classmates, friends, rivals, lovers, etc.
-download sakura school simulator versi jepang apk
-download sakura school simulator versi jepang mod
-download sakura school simulator versi jepang terbaru
-download sakura school simulator versi jepang offline
-download sakura school simulator versi jepang gratis
-download sakura school simulator versi jepang android
-download sakura school simulator versi jepang pc
-download sakura school simulator versi jepang google play
-download sakura school simulator versi jepang update
-download sakura school simulator versi jepang full version
-download sakura school simulator versi jepang tanpa iklan
-download sakura school simulator versi jepang unlimited money
-download sakura school simulator versi jepang no ads
-download sakura school simulator versi jepang 2023
-download sakura school simulator versi jepang latest version
-download sakura school simulator versi jepang for windows
-download sakura school simulator versi jepang for mac
-download sakura school simulator versi jepang for laptop
-download sakura school simulator versi jepang for chromebook
-download sakura school simulator versi jepang for ios
-download sakura school simulator versi jepang for iphone
-download sakura school simulator versi jepang for ipad
-download sakura school simulator versi jepang bluestacks
-download sakura school simulator versi jepang emulator
-download sakura school simulator versi jepang garusoft development inc.
-cara download sakura school simulator versi jepang
-link download sakura school simulator versi jepang
-situs download sakura school simulator versi jepang
-website download sakura school simulator versi jepang
-aplikasi download sakura school simulator versi jepang
-game download sakura school simulator versi jepang
-review download sakura school simulator versi jepang
-tips download sakura school simulator versi jepang
-tutorial download sakura school simulator versi jepang
-video download sakura school simulator versi jepang
-youtube download sakura school simulator versi jepang
-blog download sakura school simulator versi jepang
-forum download sakura school simulator versi jepang
-guide download sakura school simulator versi jepang
-walkthrough download sakura school simulator versi jepang
Before you download the Japanese version of Sakura School Simulator, you need to make sure that your device meets the requirements and that you are aware of some precautions. The requirements for downloading the Japanese version are:
-The precautions for downloading the Japanese version are:
-If you meet the requirements and are ready to take the precautions, you can follow these steps to download and install the Japanese version of Sakura School Simulator:
-Now that you have downloaded and installed the Japanese version of Sakura School Simulator, you can start playing and enjoying it. Here are some tips and tricks for playing:
-Sakura School Simulator is a simulation game that lets you experience the life of a Japanese high school student in a fictional town called Sakura. You can create your own character and explore a vast open world full of possibilities and surprises. You can also download the Japanese version of Sakura School Simulator, which has more content and updates, better graphics and performance, and more authentic and immersive experience than the other versions. To download the Japanese version, you need to meet some requirements and take some precautions, then follow some steps to download and install it. You can also use some tips and tricks to play it better and have more fun. We hope this article has helped you learn more about Sakura School Simulator and how to download the Japanese version. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!
-A: Yes, Sakura School Simulator is free to play. However, it contains ads and in-app purchases that can enhance your gameplay or remove ads.
-A: Sakura School Simulator is rated 12+ on the Google Play Store. It contains mild violence, suggestive themes, crude humor, and simulated gambling. Parental guidance is recommended for younger players.
-A: You can contact Garusoft Development Inc., the developer of Sakura School Simulator, by visiting their official website or sending them an email at garusoft@gmail.com.
-A: You can support Garusoft Development Inc., by rating and reviewing their game on the Google Play Store, sharing it with your friends, or making a donation through their official website.
-A: You can play Sakura School Simulator on PC by using an Android emulator such as BlueStacks or NoxPlayer. However, this method is not officially supported by Garusoft Development Inc., so you might encounter some issues or errors.
197e85843dMultiMAE: Multi-modal Multi-task Masked Autoencoders | \ - Github Repo
" - -css = '.output-image{height: 713px !important}' - -# Example images -#os.system("wget https://i.imgur.com/c9ObJdK.jpg") -#os.system("wget https://i.imgur.com/KTKgYKi.jpg") -#os.system("wget https://i.imgur.com/lWYuRI7.jpg") - -examples = [ - ['c9ObJdK.jpg', 15, False, 15, 15, 15, 0], - ['KTKgYKi.jpg', 15, False, 15, 15, 15, 0], - ['lWYuRI7.jpg', 15, False, 15, 15, 15, 0], -] - -gr.Interface( - fn=inference, - inputs=[ - gr.inputs.Image(label='RGB input image', type='filepath'), - gr.inputs.Slider(label='Percentage of input tokens', default=15, step=0.1, minimum=0, maximum=100), - gr.inputs.Checkbox(label='Manual mode: Check this to manually set the number of input tokens per modality using the sliders below', default=False), - gr.inputs.Slider(label='Percentage of RGB input tokens (for manual mode only)', default=15, step=0.1, minimum=0, maximum=100), - gr.inputs.Slider(label='Percentage of depth input tokens (for manual mode only)', default=15, step=0.1, minimum=0, maximum=100), - gr.inputs.Slider(label='Percentage of semantic input tokens (for manual mode only)', default=15, step=0.1, minimum=0, maximum=100), - gr.inputs.Number(label='Random seed: Change this to sample different masks (for manual mode only)', default=0), - ], - outputs=[ - gr.outputs.Image(label='MultiMAE predictions', type='filepath') - ], - css=css, - title=title, - description=description, - article=article, - examples=examples -).launch(enable_queue=True) diff --git a/spaces/Ekitl02/stabilityai-stable-diffusion-xl-base-1.0/README.md b/spaces/Ekitl02/stabilityai-stable-diffusion-xl-base-1.0/README.md deleted file mode 100644 index 32718b2b0f4a6f2f331cb263ad1d9a97d4d041d9..0000000000000000000000000000000000000000 --- a/spaces/Ekitl02/stabilityai-stable-diffusion-xl-base-1.0/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Stabilityai Stable Diffusion Xl Base 1.0 -emoji: 📉 -colorFrom: yellow -colorTo: pink -sdk: gradio -sdk_version: 3.40.1 -app_file: app.py -pinned: false -license: artistic-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/EleutherAI/VQGAN_CLIP/taming-transformers/taming/modules/vqvae/quantize.py b/spaces/EleutherAI/VQGAN_CLIP/taming-transformers/taming/modules/vqvae/quantize.py deleted file mode 100644 index 9c8caffad7fd4e90b2b5c627dda60d4c9fc496de..0000000000000000000000000000000000000000 --- a/spaces/EleutherAI/VQGAN_CLIP/taming-transformers/taming/modules/vqvae/quantize.py +++ /dev/null @@ -1,329 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -import numpy as np -from torch import einsum -from einops import rearrange - - -class VectorQuantizer(nn.Module): - """ - see https://github.com/MishaLaskin/vqvae/blob/d761a999e2267766400dc646d82d3ac3657771d4/models/quantizer.py - ____________________________________________ - Discretization bottleneck part of the VQ-VAE. - Inputs: - - n_e : number of embeddings - - e_dim : dimension of embedding - - beta : commitment cost used in loss term, beta * ||z_e(x)-sg[e]||^2 - _____________________________________________ - """ - - # NOTE: this class contains a bug regarding beta; see VectorQuantizer2 for - # a fix and use legacy=False to apply that fix. VectorQuantizer2 can be - # used wherever VectorQuantizer has been used before and is additionally - # more efficient. - def __init__(self, n_e, e_dim, beta): - super(VectorQuantizer, self).__init__() - self.n_e = n_e - self.e_dim = e_dim - self.beta = beta - - self.embedding = nn.Embedding(self.n_e, self.e_dim) - self.embedding.weight.data.uniform_(-1.0 / self.n_e, 1.0 / self.n_e) - - def forward(self, z): - """ - Inputs the output of the encoder network z and maps it to a discrete - one-hot vector that is the index of the closest embedding vector e_j - z (continuous) -> z_q (discrete) - z.shape = (batch, channel, height, width) - quantization pipeline: - 1. get encoder input (B,C,H,W) - 2. flatten input to (B*H*W,C) - """ - # reshape z -> (batch, height, width, channel) and flatten - z = z.permute(0, 2, 3, 1).contiguous() - z_flattened = z.view(-1, self.e_dim) - # distances from z to embeddings e_j (z - e)^2 = z^2 + e^2 - 2 e * z - - d = torch.sum(z_flattened ** 2, dim=1, keepdim=True) + \ - torch.sum(self.embedding.weight**2, dim=1) - 2 * \ - torch.matmul(z_flattened, self.embedding.weight.t()) - - ## could possible replace this here - # #\start... - # find closest encodings - min_encoding_indices = torch.argmin(d, dim=1).unsqueeze(1) - - min_encodings = torch.zeros( - min_encoding_indices.shape[0], self.n_e).to(z) - min_encodings.scatter_(1, min_encoding_indices, 1) - - # dtype min encodings: torch.float32 - # min_encodings shape: torch.Size([2048, 512]) - # min_encoding_indices.shape: torch.Size([2048, 1]) - - # get quantized latent vectors - z_q = torch.matmul(min_encodings, self.embedding.weight).view(z.shape) - #.........\end - - # with: - # .........\start - #min_encoding_indices = torch.argmin(d, dim=1) - #z_q = self.embedding(min_encoding_indices) - # ......\end......... (TODO) - - # compute loss for embedding - loss = torch.mean((z_q.detach()-z)**2) + self.beta * \ - torch.mean((z_q - z.detach()) ** 2) - - # preserve gradients - z_q = z + (z_q - z).detach() - - # perplexity - e_mean = torch.mean(min_encodings, dim=0) - perplexity = torch.exp(-torch.sum(e_mean * torch.log(e_mean + 1e-10))) - - # reshape back to match original input shape - z_q = z_q.permute(0, 3, 1, 2).contiguous() - - return z_q, loss, (perplexity, min_encodings, min_encoding_indices) - - def get_codebook_entry(self, indices, shape): - # shape specifying (batch, height, width, channel) - # TODO: check for more easy handling with nn.Embedding - min_encodings = torch.zeros(indices.shape[0], self.n_e).to(indices) - min_encodings.scatter_(1, indices[:,None], 1) - - # get quantized latent vectors - z_q = torch.matmul(min_encodings.float(), self.embedding.weight) - - if shape is not None: - z_q = z_q.view(shape) - - # reshape back to match original input shape - z_q = z_q.permute(0, 3, 1, 2).contiguous() - - return z_q - - -class GumbelQuantize(nn.Module): - """ - credit to @karpathy: https://github.com/karpathy/deep-vector-quantization/blob/main/model.py (thanks!) - Gumbel Softmax trick quantizer - Categorical Reparameterization with Gumbel-Softmax, Jang et al. 2016 - https://arxiv.org/abs/1611.01144 - """ - def __init__(self, num_hiddens, embedding_dim, n_embed, straight_through=True, - kl_weight=5e-4, temp_init=1.0, use_vqinterface=True, - remap=None, unknown_index="random"): - super().__init__() - - self.embedding_dim = embedding_dim - self.n_embed = n_embed - - self.straight_through = straight_through - self.temperature = temp_init - self.kl_weight = kl_weight - - self.proj = nn.Conv2d(num_hiddens, n_embed, 1) - self.embed = nn.Embedding(n_embed, embedding_dim) - - self.use_vqinterface = use_vqinterface - - self.remap = remap - if self.remap is not None: - self.register_buffer("used", torch.tensor(np.load(self.remap))) - self.re_embed = self.used.shape[0] - self.unknown_index = unknown_index # "random" or "extra" or integer - if self.unknown_index == "extra": - self.unknown_index = self.re_embed - self.re_embed = self.re_embed+1 - print(f"Remapping {self.n_embed} indices to {self.re_embed} indices. " - f"Using {self.unknown_index} for unknown indices.") - else: - self.re_embed = n_embed - - def remap_to_used(self, inds): - ishape = inds.shape - assert len(ishape)>1 - inds = inds.reshape(ishape[0],-1) - used = self.used.to(inds) - match = (inds[:,:,None]==used[None,None,...]).long() - new = match.argmax(-1) - unknown = match.sum(2)<1 - if self.unknown_index == "random": - new[unknown]=torch.randint(0,self.re_embed,size=new[unknown].shape).to(device=new.device) - else: - new[unknown] = self.unknown_index - return new.reshape(ishape) - - def unmap_to_all(self, inds): - ishape = inds.shape - assert len(ishape)>1 - inds = inds.reshape(ishape[0],-1) - used = self.used.to(inds) - if self.re_embed > self.used.shape[0]: # extra token - inds[inds>=self.used.shape[0]] = 0 # simply set to zero - back=torch.gather(used[None,:][inds.shape[0]*[0],:], 1, inds) - return back.reshape(ishape) - - def forward(self, z, temp=None, return_logits=False): - # force hard = True when we are in eval mode, as we must quantize. actually, always true seems to work - hard = self.straight_through if self.training else True - temp = self.temperature if temp is None else temp - - logits = self.proj(z) - if self.remap is not None: - # continue only with used logits - full_zeros = torch.zeros_like(logits) - logits = logits[:,self.used,...] - - soft_one_hot = F.gumbel_softmax(logits, tau=temp, dim=1, hard=hard) - if self.remap is not None: - # go back to all entries but unused set to zero - full_zeros[:,self.used,...] = soft_one_hot - soft_one_hot = full_zeros - z_q = einsum('b n h w, n d -> b d h w', soft_one_hot, self.embed.weight) - - # + kl divergence to the prior loss - qy = F.softmax(logits, dim=1) - diff = self.kl_weight * torch.sum(qy * torch.log(qy * self.n_embed + 1e-10), dim=1).mean() - - ind = soft_one_hot.argmax(dim=1) - if self.remap is not None: - ind = self.remap_to_used(ind) - if self.use_vqinterface: - if return_logits: - return z_q, diff, (None, None, ind), logits - return z_q, diff, (None, None, ind) - return z_q, diff, ind - - def get_codebook_entry(self, indices, shape): - b, h, w, c = shape - assert b*h*w == indices.shape[0] - indices = rearrange(indices, '(b h w) -> b h w', b=b, h=h, w=w) - if self.remap is not None: - indices = self.unmap_to_all(indices) - one_hot = F.one_hot(indices, num_classes=self.n_embed).permute(0, 3, 1, 2).float() - z_q = einsum('b n h w, n d -> b d h w', one_hot, self.embed.weight) - return z_q - - -class VectorQuantizer2(nn.Module): - """ - Improved version over VectorQuantizer, can be used as a drop-in replacement. Mostly - avoids costly matrix multiplications and allows for post-hoc remapping of indices. - """ - # NOTE: due to a bug the beta term was applied to the wrong term. for - # backwards compatibility we use the buggy version by default, but you can - # specify legacy=False to fix it. - def __init__(self, n_e, e_dim, beta, remap=None, unknown_index="random", - sane_index_shape=False, legacy=True): - super().__init__() - self.n_e = n_e - self.e_dim = e_dim - self.beta = beta - self.legacy = legacy - - self.embedding = nn.Embedding(self.n_e, self.e_dim) - self.embedding.weight.data.uniform_(-1.0 / self.n_e, 1.0 / self.n_e) - - self.remap = remap - if self.remap is not None: - self.register_buffer("used", torch.tensor(np.load(self.remap))) - self.re_embed = self.used.shape[0] - self.unknown_index = unknown_index # "random" or "extra" or integer - if self.unknown_index == "extra": - self.unknown_index = self.re_embed - self.re_embed = self.re_embed+1 - print(f"Remapping {self.n_e} indices to {self.re_embed} indices. " - f"Using {self.unknown_index} for unknown indices.") - else: - self.re_embed = n_e - - self.sane_index_shape = sane_index_shape - - def remap_to_used(self, inds): - ishape = inds.shape - assert len(ishape)>1 - inds = inds.reshape(ishape[0],-1) - used = self.used.to(inds) - match = (inds[:,:,None]==used[None,None,...]).long() - new = match.argmax(-1) - unknown = match.sum(2)<1 - if self.unknown_index == "random": - new[unknown]=torch.randint(0,self.re_embed,size=new[unknown].shape).to(device=new.device) - else: - new[unknown] = self.unknown_index - return new.reshape(ishape) - - def unmap_to_all(self, inds): - ishape = inds.shape - assert len(ishape)>1 - inds = inds.reshape(ishape[0],-1) - used = self.used.to(inds) - if self.re_embed > self.used.shape[0]: # extra token - inds[inds>=self.used.shape[0]] = 0 # simply set to zero - back=torch.gather(used[None,:][inds.shape[0]*[0],:], 1, inds) - return back.reshape(ishape) - - def forward(self, z, temp=None, rescale_logits=False, return_logits=False): - assert temp is None or temp==1.0, "Only for interface compatible with Gumbel" - assert rescale_logits==False, "Only for interface compatible with Gumbel" - assert return_logits==False, "Only for interface compatible with Gumbel" - # reshape z -> (batch, height, width, channel) and flatten - z = rearrange(z, 'b c h w -> b h w c').contiguous() - z_flattened = z.view(-1, self.e_dim) - # distances from z to embeddings e_j (z - e)^2 = z^2 + e^2 - 2 e * z - - d = torch.sum(z_flattened ** 2, dim=1, keepdim=True) + \ - torch.sum(self.embedding.weight**2, dim=1) - 2 * \ - torch.einsum('bd,dn->bn', z_flattened, rearrange(self.embedding.weight, 'n d -> d n')) - - min_encoding_indices = torch.argmin(d, dim=1) - z_q = self.embedding(min_encoding_indices).view(z.shape) - perplexity = None - min_encodings = None - - # compute loss for embedding - if not self.legacy: - loss = self.beta * torch.mean((z_q.detach()-z)**2) + \ - torch.mean((z_q - z.detach()) ** 2) - else: - loss = torch.mean((z_q.detach()-z)**2) + self.beta * \ - torch.mean((z_q - z.detach()) ** 2) - - # preserve gradients - z_q = z + (z_q - z).detach() - - # reshape back to match original input shape - z_q = rearrange(z_q, 'b h w c -> b c h w').contiguous() - - if self.remap is not None: - min_encoding_indices = min_encoding_indices.reshape(z.shape[0],-1) # add batch axis - min_encoding_indices = self.remap_to_used(min_encoding_indices) - min_encoding_indices = min_encoding_indices.reshape(-1,1) # flatten - - if self.sane_index_shape: - min_encoding_indices = min_encoding_indices.reshape( - z_q.shape[0], z_q.shape[2], z_q.shape[3]) - - return z_q, loss, (perplexity, min_encodings, min_encoding_indices) - - def get_codebook_entry(self, indices, shape): - # shape specifying (batch, height, width, channel) - if self.remap is not None: - indices = indices.reshape(shape[0],-1) # add batch axis - indices = self.unmap_to_all(indices) - indices = indices.reshape(-1) # flatten again - - # get quantized latent vectors - z_q = self.embedding(indices) - - if shape is not None: - z_q = z_q.view(shape) - # reshape back to match original input shape - z_q = z_q.permute(0, 3, 1, 2).contiguous() - - return z_q diff --git a/spaces/Epitech/Money-Recognition/app.py b/spaces/Epitech/Money-Recognition/app.py deleted file mode 100644 index a786afd70d20ca8bb9065b69fab87ef4a298a4b4..0000000000000000000000000000000000000000 --- a/spaces/Epitech/Money-Recognition/app.py +++ /dev/null @@ -1,134 +0,0 @@ -import os -import cv2 -import numpy as np -from collections import Counter -from time import time -import tkinter.filedialog -from tkinter import * -import sys -import gradio as gr - -def k_nearest_neighbors(predict, k): - distances = [] - for image in training_data: - distances.append([np.linalg.norm(image[0] - predict), image[1]]) # calcul de distance euclidienne - distances.sort() - votes = [i[1] for i in distances[:k]] - votes = ''.join(str(e) for e in votes) - votes = votes.replace(',', '') - votes = votes.replace(' ', '') - result = Counter(votes).most_common(1)[0][0] - return result - - -def test(): - start = time() - correct = 0 - total = 0 - skipped = 0 - for i in range(len(x_test)+1): - try: - prediction = k_nearest_neighbors(x_test[i], 5) - if int(prediction) == y_test[i]: - correct += 1 - total += 1 - except Exception as e: - print('An exception occured') - skipped += 1 - accuracy = correct/total - end = time() - print(end-start) - print(accuracy) - -def ia_handler(image): - pred = k_nearest_neighbors(img, 10) - if pred == 0: - return 'It\'s a coin' - return 'It\'s a banknote' - -def main(): - if len(sys.argv) > 1 and sys.argv[1] == '--cli': - root = Tk() - root.withdraw() - root.update() - filename = tkinter.filedialog.askopenfilename(title="Ouvrir fichier", filetypes=[('all files', '.*')]) # sélectionner la photo - src = cv2.imread(cv2.samples.findFile(filename), cv2.IMREAD_COLOR) # charger la photo - root.destroy() - img = resize_img(src) - pred = k_nearest_neighbors(img, 10) - if pred == '0': - print('Coin') - else: - print('Banknote') - else: - iface = gr.Interface(fn=ia_handler, inputs="image", outputs="text") - iface.launch() - - -def resize_img(img): - dim = (150, 150) - new_img = cv2.resize(img, dim) - return new_img - -if __name__=="__main__": - coin_datadir_train = '../coins-dataset/classified/train' - coin_datadir_test = '../coins-dataset/classified/test' - note_datadir_train = '../banknote-dataset/classified/train' - note_datadir_test = '../banknote-dataset/classified/test' - - categories = ['1c', '2c', '5c', '10c', '20c', '50c', '1e', '2e', '5e', '10e', '20e', '50e'] - coin_index = 8 - - training_data = [] - - for category in categories[:coin_index]: - path = os.path.join(coin_datadir_train, category) - label = 0 - for img in os.listdir(path): - img_array = cv2.imread(os.path.join(path, img)) - training_data.append([img_array, label]) - - for category in categories[coin_index:]: - path = os.path.join(note_datadir_train, category) - label = 1 - for img in os.listdir(path): - img_array = resize_img(cv2.imread(os.path.join(path, img))) - training_data.append([img_array, label]) - - - testing_data = [] - - for category in categories[:coin_index]: - path = os.path.join(coin_datadir_test, category) - label = 0 - for img in os.listdir(path): - img_array = cv2.imread(os.path.join(path, img)) - testing_data.append([img_array, label]) - - for category in categories[coin_index:]: - path = os.path.join(note_datadir_test, category) - label = 1 - for img in os.listdir(path): - img_array = resize_img(cv2.imread(os.path.join(path, img))) - testing_data.append([img_array, label]) - - - x_train = [] - y_train = [] - - for features, label in training_data: - x_train.append(features) - y_train.append(label) - - x_train = np.array(x_train) - - - x_test = [] - y_test = [] - - for features, label in testing_data: - x_test.append(features) - y_test.append(label) - - x_test = np.array(x_test) - main() diff --git a/spaces/Faridmaruf/RVCV2MODEL/config.py b/spaces/Faridmaruf/RVCV2MODEL/config.py deleted file mode 100644 index 6797dd748da4a2ddf57a97fd80ce9776d98ac82e..0000000000000000000000000000000000000000 --- a/spaces/Faridmaruf/RVCV2MODEL/config.py +++ /dev/null @@ -1,99 +0,0 @@ -import argparse -import sys -import torch -from multiprocessing import cpu_count - -class Config: - def __init__(self): - self.device = "cuda:0" - self.is_half = True - self.n_cpu = 0 - self.gpu_name = None - self.gpu_mem = None - ( - self.colab, - self.api, - self.unsupported - ) = self.arg_parse() - self.x_pad, self.x_query, self.x_center, self.x_max = self.device_config() - - @staticmethod - def arg_parse() -> tuple: - parser = argparse.ArgumentParser() - parser.add_argument("--colab", action="store_true", help="Launch in colab") - parser.add_argument("--api", action="store_true", help="Launch with api") - parser.add_argument("--unsupported", action="store_true", help="Enable unsupported feature") - cmd_opts = parser.parse_args() - - return ( - cmd_opts.colab, - cmd_opts.api, - cmd_opts.unsupported - ) - - # has_mps is only available in nightly pytorch (for now) and MasOS 12.3+. - # check `getattr` and try it for compatibility - @staticmethod - def has_mps() -> bool: - if not torch.backends.mps.is_available(): - return False - try: - torch.zeros(1).to(torch.device("mps")) - return True - except Exception: - return False - - def device_config(self) -> tuple: - if torch.cuda.is_available(): - i_device = int(self.device.split(":")[-1]) - self.gpu_name = torch.cuda.get_device_name(i_device) - if ( - ("16" in self.gpu_name and "V100" not in self.gpu_name.upper()) - or "P40" in self.gpu_name.upper() - or "1060" in self.gpu_name - or "1070" in self.gpu_name - or "1080" in self.gpu_name - ): - print("INFO: Found GPU", self.gpu_name, ", force to fp32") - self.is_half = False - else: - print("INFO: Found GPU", self.gpu_name) - self.gpu_mem = int( - torch.cuda.get_device_properties(i_device).total_memory - / 1024 - / 1024 - / 1024 - + 0.4 - ) - elif self.has_mps(): - print("INFO: No supported Nvidia GPU found, use MPS instead") - self.device = "mps" - self.is_half = False - else: - print("INFO: No supported Nvidia GPU found, use CPU instead") - self.device = "cpu" - self.is_half = False - - if self.n_cpu == 0: - self.n_cpu = cpu_count() - - if self.is_half: - # 6G显存配置 - x_pad = 3 - x_query = 10 - x_center = 60 - x_max = 65 - else: - # 5G显存配置 - x_pad = 1 - x_query = 6 - x_center = 38 - x_max = 41 - - if self.gpu_mem != None and self.gpu_mem <= 4: - x_pad = 1 - x_query = 5 - x_center = 30 - x_max = 32 - - return x_pad, x_query, x_center, x_max diff --git a/spaces/FrankZxShen/so-vits-svc-models-ba/inference/__init__.py b/spaces/FrankZxShen/so-vits-svc-models-ba/inference/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/FrankZxShen/so-vits-svc-models-pcr/diffusion/logger/__init__.py b/spaces/FrankZxShen/so-vits-svc-models-pcr/diffusion/logger/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Freiburg-AI-Research/dermoscopic_image_generation/glide_text2im/clip/__init__.py b/spaces/Freiburg-AI-Research/dermoscopic_image_generation/glide_text2im/clip/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/FritsLyneborg/kunstnerfrits/tools/train/train.py b/spaces/FritsLyneborg/kunstnerfrits/tools/train/train.py deleted file mode 100644 index 3e22d31d88f865d5db0d2e9fb5757d723d6f1c96..0000000000000000000000000000000000000000 --- a/spaces/FritsLyneborg/kunstnerfrits/tools/train/train.py +++ /dev/null @@ -1,1436 +0,0 @@ -#!/usr/bin/env python -# coding=utf-8 -# Copyright 2021-2022 The HuggingFace & DALL·E Mini team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" -Training DALL·E Mini. -Script adapted from run_summarization_flax.py -""" - -import io -import logging -import os -import sys -import tempfile -import time -from dataclasses import asdict, dataclass, field -from pathlib import Path -from typing import Any, Callable, NamedTuple, Optional - -import datasets -import flax -import jax -import jax.numpy as jnp -import jaxlib -import numpy as np -import optax -import transformers -import wandb -from datasets import Dataset -from flax.core.frozen_dict import FrozenDict, freeze, unfreeze -from flax.serialization import from_bytes, to_bytes -from flax.training import train_state -from flax.training.common_utils import onehot -from jax.experimental import PartitionSpec, maps -from jax.experimental.compilation_cache import compilation_cache as cc -from jax.experimental.pjit import pjit, with_sharding_constraint -from scalable_shampoo.distributed_shampoo import GraftingType, distributed_shampoo -from tqdm import tqdm -from transformers import HfArgumentParser - -import dalle_mini -from dalle_mini.data import Dataset -from dalle_mini.model import ( - DalleBart, - DalleBartConfig, - DalleBartTokenizer, - set_partitions, -) - -try: - from google.cloud import storage -except: - storage = None - -cc.initialize_cache("./jax_cache", max_cache_size_bytes=10 * 2**30) - -logger = logging.getLogger(__name__) - - -@dataclass -class ModelArguments: - """ - Arguments pertaining to which model/config/tokenizer we are going to fine-tune, or train from scratch. - """ - - model_name_or_path: Optional[str] = field( - default=None, - metadata={ - "help": "The model checkpoint for weights initialization. " - "Don't set if you want to train a model from scratch. " - "W&B artifact references are supported in addition to the sources supported by `PreTrainedModel`." - }, - ) - config_name: Optional[str] = field( - default=None, - metadata={ - "help": "Pretrained config name or path if not the same as model_name_or_path" - }, - ) - tokenizer_name: Optional[str] = field( - default=None, - metadata={ - "help": "Pretrained tokenizer name or path if not the same as model_name_or_path" - }, - ) - dtype: Optional[str] = field( - default="float32", - metadata={ - "help": "Floating-point format in which the computations will be performed (not the model weights). Choose one of `[float32, float16, bfloat16]`." - }, - ) - restore_state: Optional[bool] = field( - default=False, - metadata={ - "help": "Restore optimizer and training state. Can be True (will retrieve associated wandb artifact), a local directory or a Google bucket path." - }, - ) - - def __post_init__(self): - if self.tokenizer_name is None: - self.tokenizer_name = self.model_name_or_path - assert ( - self.tokenizer_name is not None - ), "Tokenizer name or model name/path needs to be specified" - if self.restore_state: - assert self.model_name_or_path is not None and ( - "/model-" in self.model_name_or_path - ), "Restoring state only available with W&B artifact reference" - - def get_metadata(self): - if self.restore_state: - if jax.process_index() == 0: - artifact = wandb.run.use_artifact(self.model_name_or_path) - else: - artifact = wandb.Api().artifact(self.model_name_or_path) - return artifact.metadata - else: - return dict() - - def get_opt_state(self): - with tempfile.TemporaryDirectory() as tmp_dir: # avoid multiple artifact copies - if self.restore_state is True: - # wandb artifact - state_artifact = self.model_name_or_path.replace( - "/model-", "/state-", 1 - ) - if jax.process_index() == 0: - artifact = wandb.run.use_artifact(state_artifact) - else: - artifact = wandb.Api().artifact(state_artifact) - if artifact.metadata.get("bucket_path"): - # we will read directly file contents - self.restore_state = artifact.metadata["bucket_path"] - else: - artifact_dir = artifact.download(tmp_dir) - self.restore_state = str(Path(artifact_dir) / "opt_state.msgpack") - - if self.restore_state.startswith("gs://"): - bucket_path = Path(self.restore_state[5:]) / "opt_state.msgpack" - bucket, blob_name = str(bucket_path).split("/", 1) - assert ( - storage is not None - ), 'Could not find google.storage. Install with "pip install google-cloud-storage"' - client = storage.Client() - bucket = client.bucket(bucket) - blob = bucket.blob(blob_name) - return blob.download_as_bytes() - - with Path(self.restore_state).open("rb") as f: - return f.read() - - -@dataclass -class DataTrainingArguments: - """ - Arguments pertaining to what data we are going to input our model for training and eval. - """ - - text_column: Optional[str] = field( - default="caption", - metadata={ - "help": "The name of the column in the datasets containing the full texts (for summarization)." - }, - ) - encoding_column: Optional[str] = field( - default="encoding", - metadata={ - "help": "The name of the column in the datasets containing the image encodings." - }, - ) - dataset_repo_or_path: str = field( - default=None, - metadata={"help": "The dataset repository containing encoded files."}, - ) - train_file: Optional[str] = field( - default=None, - metadata={ - "help": "The input training data file (glob & braceexpand acceptable)." - }, - ) - validation_file: Optional[str] = field( - default=None, - metadata={ - "help": "An optional input evaluation data file (glob & braceexpand acceptable)." - }, - ) - # data loading should not be a bottleneck so we use "streaming" mode by default - streaming: Optional[bool] = field( - default=True, - metadata={"help": "Whether to stream the dataset."}, - ) - use_auth_token: Optional[bool] = field( - default=False, - metadata={ - "help": "Whether to use the authentication token for private datasets." - }, - ) - shard_by_host: Optional[bool] = field( - default=False, - metadata={ - "help": "Whether to shard data files by host in multi-host environments." - }, - ) - blank_caption_prob: Optional[float] = field( - default=0.0, - metadata={ - "help": "Probability of removing some captions for classifier-free guidance." - }, - ) - clip_score_column: Optional[str] = field( - default="clip_score", - metadata={"help": "Column that containts clip score for filtering."}, - ) - min_clip_score: Optional[float] = field( - default=None, - metadata={"help": "Minimum clip score required."}, - ) - max_clip_score: Optional[float] = field( - default=None, - metadata={"help": "Maximum clip score required."}, - ) - filter_column: Optional[str] = field( - default=None, - metadata={"help": "Column that containts classes to be filtered."}, - ) - filter_value: Optional[str] = field( - default=None, - metadata={"help": "Class value to be kept during filtering."}, - ) - max_train_samples: Optional[int] = field( - default=None, - metadata={ - "help": "For debugging purposes or quicker training, truncate the number of training examples." - }, - ) - max_eval_samples: Optional[int] = field( - default=None, - metadata={ - "help": "For debugging purposes or quicker training, truncate the number of evaluation examples." - }, - ) - preprocessing_num_workers: Optional[int] = field( - default=None, - metadata={ - "help": "The number of processes to use for the preprocessing. Not used in streaming mode." - }, - ) - overwrite_cache: bool = field( - default=False, - metadata={ - "help": "Overwrite the cached training and evaluation sets. Not used in streaming mode." - }, - ) - # default seed of None ensures we don't repeat the same items if script was interrupted during an epoch - seed_dataset: int = field( - default=None, - metadata={ - "help": "Random seed for the dataset that will be set at the beginning of training." - }, - ) - - def __post_init__(self): - if self.dataset_repo_or_path is None: - raise ValueError("Need a dataset repository or path.") - - -@dataclass -class TrainingArguments: - """ - Arguments pertaining to training parameters. - """ - - output_dir: str = field( - metadata={ - "help": "The output directory where the model predictions and checkpoints will be written." - }, - ) - overwrite_output_dir: bool = field( - default=False, - metadata={ - "help": ( - "Overwrite the content of the output directory. " - "Use this to continue training if output_dir points to a checkpoint directory." - ) - }, - ) - - do_train: bool = field(default=False, metadata={"help": "Whether to run training."}) - do_eval: bool = field( - default=False, metadata={"help": "Whether to run eval on the validation set."} - ) - - per_device_train_batch_size: int = field( - default=8, - metadata={"help": "Batch size per data parallel device for training."}, - ) - per_device_eval_batch_size: Optional[int] = field( - default=None, - metadata={ - "help": "Batch size per data parallel device for evaluation. Same as training batch size if not set." - }, - ) - - gradient_accumulation_steps: int = field( - default=1, - metadata={ - "help": "Number of updates steps to accumulate before performing an update pass." - }, - ) - gradient_checkpointing: bool = field( - default=False, metadata={"help": "Use gradient checkpointing."} - ) - - learning_rate: float = field( - default=5e-5, metadata={"help": "The initial learning rate."} - ) - optim: str = field( - default="distributed_shampoo", - metadata={ - "help": 'The optimizer to use. Can be "distributed_shampoo" (default), "adam" or "adafactor"' - }, - ) - beta1: float = field( - default=0.9, - metadata={"help": "Beta1 for Adam & Distributed Shampoo."}, - ) - beta2: float = field( - default=0.999, - metadata={"help": "Beta2 for for Adam & Distributed Shampoo."}, - ) - adam_epsilon: float = field( - default=1e-8, metadata={"help": "Epsilon for AdamW optimizer."} - ) - max_grad_norm: float = field( - default=1.0, metadata={"help": "Max gradient norm for Adafactor."} - ) - block_size: int = field( - default=1024, - metadata={"help": "Chunked size for large layers with Distributed Shampoo."}, - ) - preconditioning_compute_steps: int = field( - default=10, metadata={"help": "Number of steps to update preconditioner."} - ) - skip_preconditioning_dim_size_gt: int = field( - default=4096, - metadata={"help": "Max size for preconditioning with Distributed Shampoo."}, - ) - graft_type: str = field( - default="rmsprop_normalized", - metadata={ - "help": "The type of grafting to use. Can be 'rmsprop_normalized' (default), 'rmsprop', 'adagrad', 'adagrad_normalized', 'sgd' or 'sqrt_n'" - }, - ) - optim_quantized: bool = field( - default=False, - metadata={ - "help": "Whether to quantize optimizer (only supported with Distributed Shampoo)." - }, - ) - - num_train_epochs: int = field( - default=3, metadata={"help": "Total number of training epochs to perform."} - ) - - warmup_steps: int = field( - default=0, metadata={"help": "Linear warmup over warmup_steps."} - ) - lr_decay: str = field( - default=None, - metadata={ - "help": "Decay to be used in the learning rate scheduler. Can be None (default), linear or exponential." - }, - ) - lr_transition_steps: int = field( - default=None, - metadata={ - "help": "Number of transition steps associated with learning rate decay when using exponential decay." - }, - ) - lr_decay_rate: float = field( - default=None, - metadata={ - "help": "Decay rate associated with learning rate when using exponential decay." - }, - ) - lr_staircase: bool = field( - default=False, - metadata={ - "help": "Whether to use staircase or continuous learning rate when using exponential decay." - }, - ) - - logging_steps: int = field( - default=40, metadata={"help": "Log every X updates steps."} - ) - eval_steps: int = field( - default=400, metadata={"help": "Run an evaluation every X steps."} - ) - save_steps: int = field( - default=4000, metadata={"help": "Save checkpoint every X updates steps."} - ) - log_model: bool = field( - default=False, - metadata={"help": "Log model to wandb at `save_steps` frequency."}, - ) - log_norm_steps: int = field( - default=True, - metadata={"help": "Log parameters and gradients norm at this frequency."}, - ) - log_histogram_steps: int = field( - default=False, - metadata={ - "help": "Log parameters and gradients histograms at this frequency. Slows down training." - }, - ) - - seed_model: int = field( - default=42, - metadata={ - "help": "Random seed for the model that will be set at the beginning of training." - }, - ) - - wandb_entity: Optional[str] = field( - default=None, - metadata={"help": "The wandb entity to use (for teams)."}, - ) - wandb_project: str = field( - default="dalle-mini", - metadata={"help": "The name of the wandb project."}, - ) - wandb_job_type: str = field( - default="Seq2Seq", - metadata={"help": "The name of the wandb job type."}, - ) - - assert_TPU_available: bool = field( - default=False, - metadata={"help": "Verify that TPU is not in use."}, - ) - - mp_devices: Optional[int] = field( - default=1, - metadata={ - "help": "Number of devices required for model parallelism. The other dimension of available devices is used for data parallelism." - }, - ) - - dp_devices: int = field(init=False) - - def __post_init__(self): - if self.assert_TPU_available: - assert ( - jax.local_device_count() == 8 - ), "TPUs in use, please check running processes" - if self.output_dir.startswith("gs://"): - assert ( - storage is not None - ), 'Could not find google.storage. Install with "pip install google-cloud-storage"' - assert self.optim in [ - "distributed_shampoo", - "adam", - "adafactor", - ], f"Selected optimizer not supported: {self.optim}" - assert self.graft_type in [ - "rmsprop_normalized", - "rmsprop", - "adagrad", - "adagrad_normalized", - "sgd", - "sqrt_n", - ], f"Selected graft type not supported: {self.graft_type}" - assert self.lr_decay in [ - None, - "linear", - "exponential", - ], f"Selected learning rate decay not supported: {self.lr_decay}" - if self.per_device_eval_batch_size is None: - self.per_device_eval_batch_size = self.per_device_train_batch_size - if self.log_norm_steps is True: - self.log_norm_steps = self.logging_steps - if ( - os.path.exists(self.output_dir) - and os.listdir(self.output_dir) - and self.do_train - and not self.overwrite_output_dir - ): - raise ValueError( - f"Output directory ({self.output_dir}) already exists and is not empty." - "Use --overwrite_output_dir to overcome." - ) - assert ( - self.mp_devices > 0 - ), f"Number of devices for model parallelism must be > 0" - assert ( - jax.device_count() % self.mp_devices == 0 - ), f"Number of available devices ({jax.device_count()} must be divisible by number of devices used for model parallelism ({self.mp_devices})." - self.dp_devices = jax.device_count() // self.mp_devices - - -class TrainState(train_state.TrainState): - dropout_rng: jnp.ndarray = None - epoch: int = 0 - train_time: float = 0.0 # total time the model trained - train_samples: int = 0 # number of samples seen - - -def main(): - # See all possible arguments by passing the --help flag to this script. - parser = HfArgumentParser( - (ModelArguments, DataTrainingArguments, TrainingArguments) - ) - if len(sys.argv) == 2 and sys.argv[1].endswith(".json"): - # If we pass only one argument to the script and it's the path to a json file, - # let's parse it to get our arguments. - model_args, data_args, training_args = parser.parse_json_file( - json_file=os.path.abspath(sys.argv[1]) - ) - else: - model_args, data_args, training_args = parser.parse_args_into_dataclasses() - - # Make one log on every process with the configuration for debugging. - logging.basicConfig( - format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", - datefmt="%m/%d/%Y %H:%M:%S", - level=logging.INFO, - ) - # Setup logging, we only want one process per machine to log things on the screen. - logger.setLevel(logging.INFO if jax.process_index() == 0 else logging.ERROR) - if jax.process_index() == 0: - datasets.utils.logging.set_verbosity_warning() - transformers.utils.logging.set_verbosity_info() - else: - datasets.utils.logging.set_verbosity_error() - transformers.utils.logging.set_verbosity_error() - - # Set the verbosity to info of the Transformers logger (on main process only): - logger.info(f"Training/evaluation parameters {training_args}") - - # Load dataset - dataset = Dataset( - **asdict(data_args), - do_train=training_args.do_train, - do_eval=training_args.do_eval, - ) - - logger.info(f"Local TPUs: {jax.local_device_count()}") - logger.info(f"Global TPUs: {jax.device_count()}") - - # Set up wandb run - if jax.process_index() == 0: - wandb.init( - entity=training_args.wandb_entity, - project=training_args.wandb_project, - job_type=training_args.wandb_job_type, - config=parser.parse_args(), - ) - - # Set up our new model config - if model_args.config_name: - config = DalleBartConfig.from_pretrained(model_args.config_name) - config.gradient_checkpointing = training_args.gradient_checkpointing - else: - config = None - - # Load or create new model - if model_args.model_name_or_path: - model = DalleBart.from_pretrained( - model_args.model_name_or_path, - config=config, - seed=training_args.seed_model, - dtype=getattr(jnp, model_args.dtype), - abstract_init=True, # we overwrite them with loaded checkpoint - gradient_checkpointing=training_args.gradient_checkpointing, - ) - else: - model = DalleBart( - config, - seed=training_args.seed_model, - dtype=getattr(jnp, model_args.dtype), - abstract_init=True, - ) - - # get model metadata - model_metadata = model_args.get_metadata() - - # get PartitionSpec for model params (required to be a dict) - param_spec = set_partitions(model.params) - - # convert params to frozen dict - model._params = freeze(model.params) - - # Load tokenizer - tokenizer = DalleBartTokenizer.from_pretrained( - model_args.tokenizer_name, use_fast=True - ) - - # Preprocessing the datasets. - # We need to normalize and tokenize inputs and targets. - dataset.preprocess(tokenizer=tokenizer, config=model.config) - - # Initialize our training - dropout_rng = jax.random.PRNGKey(training_args.seed_model) - - # Store some constant - num_epochs = training_args.num_train_epochs - # batch size - batch_size_per_node_per_grad_step = ( - training_args.per_device_train_batch_size - * jax.local_device_count() - // training_args.mp_devices - ) - batch_size_per_node = ( - batch_size_per_node_per_grad_step * training_args.gradient_accumulation_steps - ) - batch_size_per_step = batch_size_per_node * jax.process_count() - eval_batch_size_per_node = ( - training_args.per_device_eval_batch_size - * jax.local_device_count() - // training_args.mp_devices - ) - eval_batch_size_per_step = eval_batch_size_per_node * jax.process_count() - len_train_dataset, len_eval_dataset = dataset.length - steps_per_epoch = ( - len_train_dataset // batch_size_per_node - if len_train_dataset is not None - else None - ) - num_train_steps = ( - steps_per_epoch * num_epochs if steps_per_epoch is not None else None - ) - num_params = model.num_params - - logger.info("***** Running training *****") - logger.info(f" Num examples = {len_train_dataset}") - logger.info(f" Num Epochs = {num_epochs}") - logger.info( - f" Batch size per dp device = {training_args.per_device_train_batch_size}" - ) - logger.info(f" Number of devices = {jax.device_count()}") - logger.info( - f" Gradient accumulation steps = {training_args.gradient_accumulation_steps}" - ) - logger.info(f" Batch size per update = {batch_size_per_step}") - logger.info(f" Model parameters = {num_params:,}") - - # set up wandb run - if jax.process_index() == 0: - # set default x-axis as 'train/step' - wandb.define_metric("*", step_metric="train/step") - - # add interesting config parameters - wandb.config.update( - { - "len_train_dataset": len_train_dataset, - "len_eval_dataset": len_eval_dataset, - "batch_size_per_step": batch_size_per_step, - "num_params": num_params, - "model_config": model.config.to_dict(), - "num_devices": jax.device_count(), - "versions": { - "jax": jax.__version__, - "jaxlib": jaxlib.__version__, - "flax": flax.__version__, - "transformers": transformers.__version__, - "datasets": datasets.__version__, - "wandb": wandb.__version__, - "dalle_mini": dalle_mini.__version__, - }, - } - ) - - # Create learning rate schedule - def create_learning_rate_fn() -> Callable[[int], jnp.array]: - """Create the learning rate function.""" - warmup_fn = optax.linear_schedule( - init_value=0.0, - end_value=training_args.learning_rate, - transition_steps=training_args.warmup_steps + 1, # ensure not 0 - ) - # offset step when resuming - if model_metadata.get("step", 0): - warmup_fn = optax.join_schedules( - schedules=[optax.constant_schedule(0.0), warmup_fn], - boundaries=[model_metadata["step"]], - ) - if training_args.lr_decay is None: - return warmup_fn - elif training_args.lr_decay == "linear": - assert ( - num_train_steps is not None - ), "linear decay requires knowing the dataset length" - decay_fn = optax.linear_schedule( - init_value=training_args.learning_rate, - end_value=0, - transition_steps=num_train_steps - training_args.warmup_steps, - ) - elif training_args.lr_decay == "exponential": - decay_fn = optax.exponential_decay( - init_value=training_args.learning_rate, - transition_steps=training_args.lr_transition_steps, - decay_rate=training_args.lr_decay_rate, - staircase=training_args.lr_staircase, - ) - schedule_fn = optax.join_schedules( - schedules=[warmup_fn, decay_fn], - boundaries=[model_metadata.get("step", 0) + training_args.warmup_steps], - ) - return schedule_fn - - learning_rate_fn = create_learning_rate_fn() - - # create adam optimizer - if training_args.optim == "distributed_shampoo": - # parameters from https://github.com/tensorflow/lingvo/blob/03ee9d7cd50764b0424c7c863733c91fc0b053ec/lingvo/jax/optimizers.py#L729 - graft_type = { - "sgd": GraftingType.SGD, - "adagrad": GraftingType.ADAGRAD, - "rmsprop": GraftingType.RMSPROP, - "rmsprop_normalized": GraftingType.RMSPROP_NORMALIZED, - "sqrt_n": GraftingType.SQRT_N, - "adagrad_normalized": GraftingType.ADAGRAD_NORMALIZED, - }[training_args.graft_type] - optimizer = distributed_shampoo( - learning_rate_fn, - block_size=training_args.block_size, - beta1=training_args.beta1, - beta2=training_args.beta2, - diagonal_epsilon=1e-10, - matrix_epsilon=1e-6, - start_preconditioning_step=max( - training_args.preconditioning_compute_steps + 1, 101 - ), - preconditioning_compute_steps=training_args.preconditioning_compute_steps, - statistics_compute_steps=1, - best_effort_shape_interpretation=True, - graft_type=graft_type, - nesterov=False, - exponent_override=0, - statistics_partition_spec=PartitionSpec(None, "dp", None), - preconditioner_partition_spec=PartitionSpec("dp", None, None), - num_devices_for_pjit=training_args.dp_devices, - shard_optimizer_states=True, - inverse_failure_threshold=0.1, - moving_average_for_momentum=True, - skip_preconditioning_dim_size_gt=training_args.skip_preconditioning_dim_size_gt, - clip_by_scaled_gradient_norm=None, - precision=jax.lax.Precision.HIGHEST, - best_effort_memory_usage_reduction=training_args.optim_quantized, - ) - # get the real optimizer and helper functions - update_fn = optimizer.update - optimizer = optimizer.init(model.params) - opt_fn = NamedTuple("opt_fn", pspec_fn=Any, shape_and_dtype_fn=Any)( - optimizer.pspec_fn, optimizer.shape_and_dtype_fn - ) - optimizer = optax.GradientTransformation(optimizer.init_fn, update_fn) - - elif training_args.optim == "adam": - optimizer = optax.adamw( - learning_rate=learning_rate_fn, - b1=training_args.beta1, - b2=training_args.beta2, - eps=training_args.adam_epsilon, - ) - elif training_args.optim == "adafactor": - # We use the default parameters here to initialize adafactor, - # For more details about the parameters please check https://github.com/deepmind/optax/blob/ed02befef9bf81cbbf236be3d2b0e032e9ed4a40/optax/_src/alias.py#L74 - optimizer = optax.adafactor( - learning_rate=learning_rate_fn, - clipping_threshold=training_args.max_grad_norm, - ) - - # get PartitionSpec for optimizer state - def get_opt_state_spec_and_shape(param_spec): - # get opt_state shape without actual init - opt_state_shape = jax.eval_shape(optimizer.init, model.params) - - if training_args.optim == "adam": - - def _opt_state_spec_per_leaf(x): - if isinstance(x, FrozenDict): - # variables with same structure as params - return param_spec - else: - # other variables such as count - return None - - opt_state_spec = jax.tree_map( - _opt_state_spec_per_leaf, - opt_state_shape, - # return None spec for empty elements - is_leaf=lambda x: isinstance(x, (FrozenDict, optax.EmptyState)), - ) - - elif training_args.optim == "adafactor": - # factorized state must be replicated (rank different than params) - opt_state_spec = None - - elif training_args.optim == "distributed_shampoo": - opt_state_spec = opt_fn.pspec_fn( - params=model.params, - params_partition_spec=param_spec, - partition_spec_for_statistics=PartitionSpec(None, "dp", None), - ) - else: - raise NotImplementedError - return opt_state_spec, opt_state_shape - - opt_state_spec, opt_state_shape = get_opt_state_spec_and_shape(param_spec) - - # create a mesh - mesh_shape = (training_args.dp_devices, training_args.mp_devices) - devices = np.asarray(jax.devices()).reshape(*mesh_shape) - mesh = maps.Mesh(devices, ("dp", "mp")) - logger.info(f" Mesh shape: {mesh_shape}") - - # define state spec - state_spec = TrainState( - params=param_spec, - opt_state=opt_state_spec, - dropout_rng=None, - step=None, - epoch=None, - train_time=None, - train_samples=None, - apply_fn=model.__call__, - tx=optimizer, - ) - - # init params if not available yet - def maybe_init_params(params): - if model_args.model_name_or_path: - # model params are correctly loaded - return params - else: - # params have not been initialized yet - return model.init_weights() - - with mesh: - logger.info(" Creating state") - if not model_args.restore_state: - - def init_state(params): - return TrainState.create( - apply_fn=model.__call__, - tx=optimizer, - params=maybe_init_params(params), - dropout_rng=dropout_rng, - ) - - state = pjit( - init_state, - in_axis_resources=(param_spec,) - if model_args.model_name_or_path - else None, - out_axis_resources=state_spec, - donate_argnums=(0,), - )(model.params if model_args.model_name_or_path else None) - - else: - # load opt_state - opt_state = from_bytes(opt_state_shape, model_args.get_opt_state()) - - # restore other attributes - attr_state = { - k: model_metadata[k] - for k in ["step", "epoch", "train_time", "train_samples"] - } - - def restore_state(params, opt_state): - return TrainState( - apply_fn=model.__call__, - tx=optimizer, - params=params, - opt_state=opt_state, - dropout_rng=dropout_rng, - **attr_state, - ) - - state = pjit( - restore_state, - in_axis_resources=( - param_spec, - opt_state_spec, - ), - out_axis_resources=state_spec, - donate_argnums=(0, 1), - )(model.params, opt_state) - - # remove opt_state from CPU - del opt_state - - # free CPU memory - del model._params, opt_state_spec, opt_state_shape - - # define batch specs - batch_spec = PartitionSpec("dp") - grad_batch_spec = PartitionSpec(None, "dp") - - # define loss - def loss_fn(logits, labels): - loss = optax.softmax_cross_entropy(logits, onehot(labels, logits.shape[-1])) - loss = loss.mean() - return loss - - # "vmap trick" avoids a crash when mp_devices > 1 (not sure why it happens) - # lead to better perf: see https://wandb.ai/dalle-mini/dalle-mini/reports/JAX-pmap-vs-pjit--VmlldzoxNDg1ODA2 - use_vmap_trick = True - - # make grad_param_spec for vmap - if use_vmap_trick: - grad_param_spec = jax.tree_map( - lambda x: PartitionSpec(*("dp",) + (x if x is not None else (None,))), - param_spec, - ) - - # Define gradient update step fn - def train_step(state, batch, train_time): - - # get a minibatch (one gradient accumulation slice) - def get_minibatch(batch, grad_idx): - return jax.tree_map( - lambda x: jax.lax.dynamic_index_in_dim(x, grad_idx, keepdims=False), - batch, - ) - - def compute_loss(params, minibatch, dropout_rng): - # minibatch has dim (batch_size, ...) - minibatch, labels = minibatch.pop("labels") - logits = state.apply_fn( - **minibatch, params=params, dropout_rng=dropout_rng, train=True - )[0] - return loss_fn(logits, labels) - - grad_fn = jax.value_and_grad(compute_loss) - - def loss_and_grad(grad_idx, dropout_rng): - # minibatch at grad_idx for gradient accumulation (None otherwise) - minibatch = ( - get_minibatch(batch, grad_idx) if grad_idx is not None else batch - ) - # ensure it is sharded properly - minibatch = with_sharding_constraint(minibatch, batch_spec) - # only 1 single rng per grad step, let us handle larger batch size (not sure why) - dropout_rng, _ = jax.random.split(dropout_rng) - - if use_vmap_trick: - # "vmap trick", calculate loss and grads independently per dp_device - loss, grads = jax.vmap( - grad_fn, in_axes=(None, 0, None), out_axes=(0, 0) - )(state.params, minibatch, dropout_rng) - # ensure they are sharded correctly - loss = with_sharding_constraint(loss, batch_spec) - grads = with_sharding_constraint(grads, grad_param_spec) - # average across all devices - # Note: we could average per device only after gradient accumulation, right before params update - loss, grads = jax.tree_map(lambda x: jnp.mean(x, axis=0), (loss, grads)) - else: - # "vmap trick" does not work in multi-hosts and requires too much hbm - loss, grads = grad_fn(state.params, minibatch, dropout_rng) - # ensure grads are sharded - grads = with_sharding_constraint(grads, param_spec) - # return loss and grads - return loss, grads, dropout_rng - - if training_args.gradient_accumulation_steps == 1: - loss, grads, dropout_rng = loss_and_grad(None, state.dropout_rng) - else: - # create initial state for cumul_minibatch_step loop - init_minibatch_step = ( - 0.0, - with_sharding_constraint( - jax.tree_map(jnp.zeros_like, state.params), param_spec - ), - state.dropout_rng, - ) - - # accumulate gradients - def cumul_minibatch_step(grad_idx, cumul_loss_grad_dropout): - cumul_loss, cumul_grads, dropout_rng = cumul_loss_grad_dropout - loss, grads, dropout_rng = loss_and_grad(grad_idx, dropout_rng) - cumul_loss, cumul_grads = jax.tree_map( - jnp.add, (cumul_loss, cumul_grads), (loss, grads) - ) - cumul_grads = with_sharding_constraint(cumul_grads, param_spec) - return cumul_loss, cumul_grads, dropout_rng - - # loop over gradients - loss, grads, dropout_rng = jax.lax.fori_loop( - 0, - training_args.gradient_accumulation_steps, - cumul_minibatch_step, - init_minibatch_step, - ) - grads = with_sharding_constraint(grads, param_spec) - # sum -> mean - loss, grads = jax.tree_map( - lambda x: x / training_args.gradient_accumulation_steps, (loss, grads) - ) - - grads = with_sharding_constraint(grads, param_spec) - - # update state - state = state.apply_gradients( - grads=grads, - dropout_rng=dropout_rng, - train_time=train_time, - train_samples=state.train_samples + batch_size_per_step, - ) - - metrics = { - "loss": loss, - "learning_rate": learning_rate_fn(state.step), - } - - def maybe_fn(fn, val, zeros, freq): - """Call fn only if it is a logging step""" - return jax.lax.cond( - state.step % freq == 0, - fn, - lambda _: zeros, - val, - ) - - if training_args.log_norm_steps: - zeros_norm = jax.tree_map(lambda _: jnp.float32(0), state.params) - - def norm(val): - return jax.tree_map(lambda x: jnp.linalg.norm(x), val) - - gradients_norm = maybe_fn( - norm, grads, zeros_norm, training_args.log_norm_steps - ) - params_norm = maybe_fn( - norm, state.params, zeros_norm, training_args.log_norm_steps - ) - - metrics.update( - { - "gradients_norm": gradients_norm, - "params_norm": params_norm, - } - ) - - if training_args.log_histogram_steps: - zeros_hist = jax.tree_map( - lambda _: jnp.histogram(jnp.zeros(1), density=True), state.params - ) - - def histogram(val): - return jax.tree_map(lambda x: jnp.histogram(x, density=True), val) - - gradients_hist = maybe_fn( - histogram, grads, zeros_hist, training_args.log_histogram_steps - ) - params_hist = maybe_fn( - histogram, state.params, zeros_hist, training_args.log_histogram_steps - ) - - metrics.update( - { - "params_hist": params_hist, - "gradients_hist": gradients_hist, - } - ) - - return state, metrics - - # Define eval fn - def eval_step(state, batch): - def compute_eval_loss(batch): - batch, labels = batch.pop("labels") - logits = model(**batch, params=state.params, train=False)[0] - return loss_fn(logits, labels) - - if use_vmap_trick: - loss = jax.vmap(compute_eval_loss)(batch) - # ensure they are sharded correctly - loss = with_sharding_constraint(loss, batch_spec) - # average across all devices - loss = jnp.mean(loss) - else: - loss = compute_eval_loss(batch) - - return loss - - # Create parallel version of the train and eval step - p_train_step = pjit( - train_step, - in_axis_resources=( - state_spec, - grad_batch_spec - if training_args.gradient_accumulation_steps > 1 - else batch_spec, - None, - ), - out_axis_resources=(state_spec, None), - donate_argnums=(0,), - ) - p_eval_step = pjit( - eval_step, - in_axis_resources=(state_spec, batch_spec), - out_axis_resources=None, - ) - - # define metrics logger - class MetricsLogger: - def __init__(self, step): - # keep state - self.state_dict = {} - # estimate speed - self.step = step - self.time = time.perf_counter() - self.offset_time = 0.0 - - def update_state_metrics(self, state): - """Update internal state metrics (logged at each call to be used as x-axis)""" - self.state_dict = { - f'train/{k.split("_")[-1]}': state[k] - for k in ["step", "epoch", "train_time", "train_samples"] - } - # timing metrics - new_step = int(state["step"]) - new_time = time.perf_counter() - if new_step > self.step: - # remove time for eval & save - delta_time = new_time - self.time - self.offset_time - self.offset_time = 0 - time_per_step = delta_time / (new_step - self.step) - self.step = new_step - self.time = new_time - self.log_time("train_per_step", time_per_step, offset=False) - self.log_time("train_per_log", delta_time, offset=False) - - def log_time(self, key, duration, offset=True): - wandb.log({f"time/{key}": duration, **self.state_dict}) - if offset: - self.offset_time += duration - - def log(self, metrics, prefix=None): - if jax.process_index() == 0: - log_metrics = {} - for k, v in metrics.items(): - if "_norm" in k: - if self.step % training_args.log_norm_steps == 0: - log_metrics[f"{k}/"] = unfreeze(v) - elif "_hist" in k: - if self.step % training_args.log_histogram_steps == 0: - v = jax.tree_map(lambda x: jax.device_get(x), unfreeze(v)) - v = jax.tree_map( - lambda x: wandb.Histogram(np_histogram=x), - v, - is_leaf=lambda x: isinstance(x, tuple), - ) - log_metrics[f"{k}/"] = v - else: - if prefix is not None: - k = f"{prefix}/{k}" - log_metrics[k] = v - wandb.log({**log_metrics, **self.state_dict}) - - # keep local copy of state - local_state = { - k: jax.device_get(getattr(state, k)).item() - for k in ["step", "epoch", "train_time", "train_samples"] - } - # init variables - start_time = time.perf_counter() - local_state["train_time"] - train_metrics = None - metrics_logger = MetricsLogger(local_state["step"]) - epochs = tqdm( - range(local_state["epoch"], num_epochs), - desc=f"Epoch ... (1/{num_epochs})", - position=0, - disable=jax.process_index() > 0, - ) - - def run_evaluation(): - # ======================== Evaluating ============================== - if training_args.do_eval: - start_eval_time = time.perf_counter() - eval_loader = dataset.dataloader("eval", eval_batch_size_per_step) - eval_steps = ( - len_eval_dataset // eval_batch_size_per_step - if len_eval_dataset is not None - else None - ) - eval_loss = [] - for batch in tqdm( - eval_loader, - desc="Evaluating...", - position=2, - leave=False, - total=eval_steps, - disable=jax.process_index() > 0, - ): - # need to keep only eval_batch_size_per_node items relevant to the node - batch = jax.tree_map( - lambda x: x.reshape( - (jax.process_count(), eval_batch_size_per_node) + x.shape[1:] - ), - batch, - ) - batch = jax.tree_map(lambda x: x[jax.process_index()], batch) - - # add dp dimension when using "vmap trick" - if use_vmap_trick: - bs_shape = ( - jax.local_device_count() // training_args.mp_devices, - training_args.per_device_eval_batch_size, - ) - batch = jax.tree_map( - lambda x: x.reshape(bs_shape + x.shape[1:]), batch - ) - - # freeze batch to pass safely to jax transforms - batch = freeze(batch) - # accumulate losses async - eval_loss.append(p_eval_step(state, batch)) - - # get the mean of the loss - eval_loss = jnp.stack(eval_loss) - eval_loss = jnp.mean(eval_loss) - eval_metrics = {"loss": eval_loss} - - # log metrics - metrics_logger.log(eval_metrics, prefix="eval") - metrics_logger.log_time("eval", time.perf_counter() - start_eval_time) - - # Print metrics and update progress bar - desc = f"Epoch... ({epoch + 1}/{num_epochs} | Eval Loss: {eval_metrics['loss']})" - epochs.write(desc) - epochs.desc = desc - - return eval_metrics - - def run_save_model(state, eval_metrics=None): - if jax.process_index() == 0: - - start_save_time = time.perf_counter() - output_dir = training_args.output_dir - use_bucket = output_dir.startswith("gs://") - if use_bucket: - bucket_path = Path(output_dir[5:]) / wandb.run.id / f"step_{state.step}" - bucket, dir_path = str(bucket_path).split("/", 1) - tmp_dir = tempfile.TemporaryDirectory() - output_dir = tmp_dir.name - - # save model - params = jax.device_get(state.params) - model.save_pretrained( - output_dir, - params=params, - ) - - # save tokenizer - tokenizer.save_pretrained(output_dir) - - # copy to bucket - if use_bucket: - client = storage.Client() - bucket = client.bucket(bucket) - for filename in Path(output_dir).glob("*"): - blob_name = str(Path(dir_path) / "model" / filename.name) - blob = bucket.blob(blob_name) - blob.upload_from_filename(str(filename)) - tmp_dir.cleanup() - - # save state - opt_state = jax.device_get(state.opt_state) - if use_bucket: - blob_name = str(Path(dir_path) / "state" / "opt_state.msgpack") - blob = bucket.blob(blob_name) - blob.upload_from_file(io.BytesIO(to_bytes(opt_state))) - else: - with (Path(output_dir) / "opt_state.msgpack").open("wb") as f: - f.write(to_bytes(opt_state)) - - # save to W&B - if training_args.log_model: - # save some space - c = wandb.wandb_sdk.wandb_artifacts.get_artifacts_cache() - c.cleanup(wandb.util.from_human_size("20GB")) - - metadata = { - k: jax.device_get(getattr(state, k)).item() - for k in ["step", "epoch", "train_time", "train_samples"] - } - metadata["num_params"] = num_params - if eval_metrics is not None: - metadata["eval"] = eval_metrics - - # create model artifact - if use_bucket: - metadata["bucket_path"] = f"gs://{bucket_path}/model" - artifact = wandb.Artifact( - name=f"model-{wandb.run.id}", - type="DalleBart_model", - metadata=metadata, - ) - if use_bucket: - artifact.add_reference(metadata["bucket_path"]) - else: - for filename in [ - "config.json", - "flax_model.msgpack", - "merges.txt", - "special_tokens_map.json", - "tokenizer.json", - "tokenizer_config.json", - "vocab.json", - ]: - artifact.add_file( - f"{Path(training_args.output_dir) / filename}" - ) - wandb.run.log_artifact(artifact) - - # create state artifact - if use_bucket: - metadata["bucket_path"] = f"gs://{bucket_path}/state" - artifact_state = wandb.Artifact( - name=f"state-{wandb.run.id}", - type="DalleBart_state", - metadata=metadata, - ) - if use_bucket: - artifact_state.add_reference(metadata["bucket_path"]) - else: - artifact_state.add_file( - f"{Path(training_args.output_dir) / 'opt_state.msgpack'}" - ) - wandb.run.log_artifact(artifact_state) - metrics_logger.log_time("save_model", time.perf_counter() - start_save_time) - - logger.info(" Ready to start training") - with mesh: - for epoch in epochs: - state.replace(epoch=epoch) - local_state["epoch"] = epoch - # ======================== Training ================================ - metrics_logger.update_state_metrics(local_state) - metrics_logger.log({}) - - # Generate an epoch by shuffling sampling indices from the train dataset - train_loader = dataset.dataloader( - "train", - batch_size_per_node, - epoch, - ) - # train - for batch in tqdm( - train_loader, - desc="Training...", - position=1, - leave=False, - total=steps_per_epoch, - disable=jax.process_index() > 0, - ): - # calculate delta time (we have a lag of one step but it's ok) - train_time = time.perf_counter() - start_time - - # set correct shape to batch - # - add grad_step dim if gradient_accumulation_steps > 1 - # - split per dp device if not multi-host for vmap trick (does not work in multi-host) - bs_shape = ( - (batch_size_per_node_per_grad_step,) - if not use_vmap_trick - else ( - jax.local_device_count() - // training_args.mp_devices, # local dp devices - training_args.per_device_train_batch_size, - ) - ) - if training_args.gradient_accumulation_steps > 1: - # reshape data into (gradient_accumulation_steps, batch_per_node, ...) - # to avoid any data redistribution when sharding - bs_shape = (training_args.gradient_accumulation_steps,) + bs_shape - - # reshape batch - batch = jax.tree_map( - lambda x: x.reshape(bs_shape + x.shape[1:]), - batch, - ) - # freeze batch to pass safely to jax transforms - batch = freeze(batch) - - # train step - state, train_metrics = p_train_step(state, batch, train_time) - local_state["step"] += 1 - local_state["train_time"] = train_time - local_state["train_samples"] += batch_size_per_step - - if ( - local_state["step"] % training_args.logging_steps == 0 - and jax.process_index() == 0 - ): - metrics_logger.update_state_metrics(local_state) - metrics_logger.log(train_metrics, prefix="train") - - eval_metrics = None - if local_state["step"] % training_args.eval_steps == 0: - eval_metrics = run_evaluation() - - if local_state["step"] % training_args.save_steps == 0: - run_save_model(state, eval_metrics) - - # log final train metrics - if train_metrics is not None: - metrics_logger.update_state_metrics(state) - metrics_logger.log(train_metrics, prefix="train") - - epochs.write( - f"Epoch... ({epoch + 1}/{num_epochs} | Loss: {train_metrics['loss']}, Learning Rate: {train_metrics['learning_rate']})" - ) - - # Final evaluation - eval_metrics = run_evaluation() - - # save checkpoint after each epoch - run_save_model(state, eval_metrics) - - -if __name__ == "__main__": - main() diff --git a/spaces/FrozenWolf/Neural-Style-Transfer/loss_functions.py b/spaces/FrozenWolf/Neural-Style-Transfer/loss_functions.py deleted file mode 100644 index 20ec3ea1b072b973749c2b943918dca2308860cd..0000000000000000000000000000000000000000 --- a/spaces/FrozenWolf/Neural-Style-Transfer/loss_functions.py +++ /dev/null @@ -1,40 +0,0 @@ - -import torch -import torch.nn as nn -import torch.nn.functional as F - -class ContentLoss(nn.Module): - def __init__(self, target,): - super().__init__() - self.target = target.detach() - - def forward(self, input): - self.loss = F.mse_loss(input, self.target) - return input - - -class StyleLoss(nn.Module): - def __init__(self, target_feature): - super().__init__() - self.target = self.gram_matrix(target_feature).detach() - - def gram_matrix(self,input): - a, b, c, d = input.size() - features = input.view(a * b, c * d) - G = torch.mm(features, features.t()) - return G.div(a * b * c * d) - - def forward(self, input): - G = self.gram_matrix(input) - self.loss = F.mse_loss(G, self.target) - return input - - -class Normalization(nn.Module): - def __init__(self, mean, std): - super().__init__() - self.mean = torch.tensor(mean).view(-1, 1, 1) - self.std = torch.tensor(std).view(-1, 1, 1) - - def forward(self, img): - return (img - self.mean) / self.std \ No newline at end of file diff --git a/spaces/Fu-chiang/Bit-50-Glaucoma/README.md b/spaces/Fu-chiang/Bit-50-Glaucoma/README.md deleted file mode 100644 index 7d848c8a3bf25bbc549b8aab96aecd4c9ff6545d..0000000000000000000000000000000000000000 --- a/spaces/Fu-chiang/Bit-50-Glaucoma/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Bit 50 Glaucoma -emoji: 🐨 -colorFrom: blue -colorTo: green -sdk: gradio -sdk_version: 3.47.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Gradio-Blocks/StyleGAN-NADA/op/fused_act.py b/spaces/Gradio-Blocks/StyleGAN-NADA/op/fused_act.py deleted file mode 100644 index 8459d510d7b79684779dfe47f5b46d81c94b4a4d..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/StyleGAN-NADA/op/fused_act.py +++ /dev/null @@ -1,86 +0,0 @@ -import os - -import torch -from torch import nn -from torch.autograd import Function -from torch.utils.cpp_extension import load - - -module_path = os.path.dirname(__file__) -fused = load( - 'fused', - sources=[ - os.path.join(module_path, 'fused_bias_act.cpp'), - os.path.join(module_path, 'fused_bias_act_kernel.cu'), - ], -) - - -class FusedLeakyReLUFunctionBackward(Function): - @staticmethod - def forward(ctx, grad_output, out, negative_slope, scale): - ctx.save_for_backward(out) - ctx.negative_slope = negative_slope - ctx.scale = scale - - empty = grad_output.new_empty(0) - - grad_input = fused.fused_bias_act( - grad_output, empty, out, 3, 1, negative_slope, scale - ) - - dim = [0] - - if grad_input.ndim > 2: - dim += list(range(2, grad_input.ndim)) - - grad_bias = grad_input.sum(dim).detach() - - return grad_input, grad_bias - - @staticmethod - def backward(ctx, gradgrad_input, gradgrad_bias): - out, = ctx.saved_tensors - gradgrad_out = fused.fused_bias_act( - gradgrad_input, gradgrad_bias, out, 3, 1, ctx.negative_slope, ctx.scale - ) - - return gradgrad_out, None, None, None - - -class FusedLeakyReLUFunction(Function): - @staticmethod - def forward(ctx, input, bias, negative_slope, scale): - empty = input.new_empty(0) - out = fused.fused_bias_act(input, bias, empty, 3, 0, negative_slope, scale) - ctx.save_for_backward(out) - ctx.negative_slope = negative_slope - ctx.scale = scale - - return out - - @staticmethod - def backward(ctx, grad_output): - out, = ctx.saved_tensors - - grad_input, grad_bias = FusedLeakyReLUFunctionBackward.apply( - grad_output, out, ctx.negative_slope, ctx.scale - ) - - return grad_input, grad_bias, None, None - - -class FusedLeakyReLU(nn.Module): - def __init__(self, channel, negative_slope=0.2, scale=2 ** 0.5): - super().__init__() - - self.bias = nn.Parameter(torch.zeros(channel)) - self.negative_slope = negative_slope - self.scale = scale - - def forward(self, input): - return fused_leaky_relu(input, self.bias, self.negative_slope, self.scale) - - -def fused_leaky_relu(input, bias, negative_slope=0.2, scale=2 ** 0.5): - return FusedLeakyReLUFunction.apply(input, bias, negative_slope, scale) diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/dense_heads/yolact_head.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/dense_heads/yolact_head.py deleted file mode 100644 index 10d311f94ee99e1bf65ee3e5827f1699c28a23e3..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/dense_heads/yolact_head.py +++ /dev/null @@ -1,943 +0,0 @@ -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule, xavier_init -from mmcv.runner import force_fp32 - -from mmdet.core import build_sampler, fast_nms, images_to_levels, multi_apply -from ..builder import HEADS, build_loss -from .anchor_head import AnchorHead - - -@HEADS.register_module() -class YOLACTHead(AnchorHead): - """YOLACT box head used in https://arxiv.org/abs/1904.02689. - - Note that YOLACT head is a light version of RetinaNet head. - Four differences are described as follows: - - 1. YOLACT box head has three-times fewer anchors. - 2. YOLACT box head shares the convs for box and cls branches. - 3. YOLACT box head uses OHEM instead of Focal loss. - 4. YOLACT box head predicts a set of mask coefficients for each box. - - Args: - num_classes (int): Number of categories excluding the background - category. - in_channels (int): Number of channels in the input feature map. - anchor_generator (dict): Config dict for anchor generator - loss_cls (dict): Config of classification loss. - loss_bbox (dict): Config of localization loss. - num_head_convs (int): Number of the conv layers shared by - box and cls branches. - num_protos (int): Number of the mask coefficients. - use_ohem (bool): If true, ``loss_single_OHEM`` will be used for - cls loss calculation. If false, ``loss_single`` will be used. - conv_cfg (dict): Dictionary to construct and config conv layer. - norm_cfg (dict): Dictionary to construct and config norm layer. - """ - - def __init__(self, - num_classes, - in_channels, - anchor_generator=dict( - type='AnchorGenerator', - octave_base_scale=3, - scales_per_octave=1, - ratios=[0.5, 1.0, 2.0], - strides=[8, 16, 32, 64, 128]), - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - reduction='none', - loss_weight=1.0), - loss_bbox=dict( - type='SmoothL1Loss', beta=1.0, loss_weight=1.5), - num_head_convs=1, - num_protos=32, - use_ohem=True, - conv_cfg=None, - norm_cfg=None, - **kwargs): - self.num_head_convs = num_head_convs - self.num_protos = num_protos - self.use_ohem = use_ohem - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - super(YOLACTHead, self).__init__( - num_classes, - in_channels, - loss_cls=loss_cls, - loss_bbox=loss_bbox, - anchor_generator=anchor_generator, - **kwargs) - if self.use_ohem: - sampler_cfg = dict(type='PseudoSampler') - self.sampler = build_sampler(sampler_cfg, context=self) - self.sampling = False - - def _init_layers(self): - """Initialize layers of the head.""" - self.relu = nn.ReLU(inplace=True) - self.head_convs = nn.ModuleList() - for i in range(self.num_head_convs): - chn = self.in_channels if i == 0 else self.feat_channels - self.head_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - self.conv_cls = nn.Conv2d( - self.feat_channels, - self.num_anchors * self.cls_out_channels, - 3, - padding=1) - self.conv_reg = nn.Conv2d( - self.feat_channels, self.num_anchors * 4, 3, padding=1) - self.conv_coeff = nn.Conv2d( - self.feat_channels, - self.num_anchors * self.num_protos, - 3, - padding=1) - - def init_weights(self): - """Initialize weights of the head.""" - for m in self.head_convs: - xavier_init(m.conv, distribution='uniform', bias=0) - xavier_init(self.conv_cls, distribution='uniform', bias=0) - xavier_init(self.conv_reg, distribution='uniform', bias=0) - xavier_init(self.conv_coeff, distribution='uniform', bias=0) - - def forward_single(self, x): - """Forward feature of a single scale level. - - Args: - x (Tensor): Features of a single scale level. - - Returns: - tuple: - cls_score (Tensor): Cls scores for a single scale level \ - the channels number is num_anchors * num_classes. - bbox_pred (Tensor): Box energies / deltas for a single scale \ - level, the channels number is num_anchors * 4. - coeff_pred (Tensor): Mask coefficients for a single scale \ - level, the channels number is num_anchors * num_protos. - """ - for head_conv in self.head_convs: - x = head_conv(x) - cls_score = self.conv_cls(x) - bbox_pred = self.conv_reg(x) - coeff_pred = self.conv_coeff(x).tanh() - return cls_score, bbox_pred, coeff_pred - - @force_fp32(apply_to=('cls_scores', 'bbox_preds')) - def loss(self, - cls_scores, - bbox_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """A combination of the func:``AnchorHead.loss`` and - func:``SSDHead.loss``. - - When ``self.use_ohem == True``, it functions like ``SSDHead.loss``, - otherwise, it follows ``AnchorHead.loss``. Besides, it additionally - returns ``sampling_results``. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W) - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): Class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (None | list[Tensor]): Specify which bounding - boxes can be ignored when computing the loss. Default: None - - Returns: - tuple: - dict[str, Tensor]: A dictionary of loss components. - List[:obj:``SamplingResult``]: Sampler results for each image. - """ - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.anchor_generator.num_levels - - device = cls_scores[0].device - - anchor_list, valid_flag_list = self.get_anchors( - featmap_sizes, img_metas, device=device) - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - cls_reg_targets = self.get_targets( - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=label_channels, - unmap_outputs=not self.use_ohem, - return_sampling_results=True) - if cls_reg_targets is None: - return None - (labels_list, label_weights_list, bbox_targets_list, bbox_weights_list, - num_total_pos, num_total_neg, sampling_results) = cls_reg_targets - - if self.use_ohem: - num_images = len(img_metas) - all_cls_scores = torch.cat([ - s.permute(0, 2, 3, 1).reshape( - num_images, -1, self.cls_out_channels) for s in cls_scores - ], 1) - all_labels = torch.cat(labels_list, -1).view(num_images, -1) - all_label_weights = torch.cat(label_weights_list, - -1).view(num_images, -1) - all_bbox_preds = torch.cat([ - b.permute(0, 2, 3, 1).reshape(num_images, -1, 4) - for b in bbox_preds - ], -2) - all_bbox_targets = torch.cat(bbox_targets_list, - -2).view(num_images, -1, 4) - all_bbox_weights = torch.cat(bbox_weights_list, - -2).view(num_images, -1, 4) - - # concat all level anchors to a single tensor - all_anchors = [] - for i in range(num_images): - all_anchors.append(torch.cat(anchor_list[i])) - - # check NaN and Inf - assert torch.isfinite(all_cls_scores).all().item(), \ - 'classification scores become infinite or NaN!' - assert torch.isfinite(all_bbox_preds).all().item(), \ - 'bbox predications become infinite or NaN!' - - losses_cls, losses_bbox = multi_apply( - self.loss_single_OHEM, - all_cls_scores, - all_bbox_preds, - all_anchors, - all_labels, - all_label_weights, - all_bbox_targets, - all_bbox_weights, - num_total_samples=num_total_pos) - else: - num_total_samples = ( - num_total_pos + - num_total_neg if self.sampling else num_total_pos) - - # anchor number of multi levels - num_level_anchors = [anchors.size(0) for anchors in anchor_list[0]] - # concat all level anchors and flags to a single tensor - concat_anchor_list = [] - for i in range(len(anchor_list)): - concat_anchor_list.append(torch.cat(anchor_list[i])) - all_anchor_list = images_to_levels(concat_anchor_list, - num_level_anchors) - losses_cls, losses_bbox = multi_apply( - self.loss_single, - cls_scores, - bbox_preds, - all_anchor_list, - labels_list, - label_weights_list, - bbox_targets_list, - bbox_weights_list, - num_total_samples=num_total_samples) - - return dict( - loss_cls=losses_cls, loss_bbox=losses_bbox), sampling_results - - def loss_single_OHEM(self, cls_score, bbox_pred, anchors, labels, - label_weights, bbox_targets, bbox_weights, - num_total_samples): - """"See func:``SSDHead.loss``.""" - loss_cls_all = self.loss_cls(cls_score, labels, label_weights) - - # FG cat_id: [0, num_classes -1], BG cat_id: num_classes - pos_inds = ((labels >= 0) & (labels < self.num_classes)).nonzero( - as_tuple=False).reshape(-1) - neg_inds = (labels == self.num_classes).nonzero( - as_tuple=False).view(-1) - - num_pos_samples = pos_inds.size(0) - if num_pos_samples == 0: - num_neg_samples = neg_inds.size(0) - else: - num_neg_samples = self.train_cfg.neg_pos_ratio * num_pos_samples - if num_neg_samples > neg_inds.size(0): - num_neg_samples = neg_inds.size(0) - topk_loss_cls_neg, _ = loss_cls_all[neg_inds].topk(num_neg_samples) - loss_cls_pos = loss_cls_all[pos_inds].sum() - loss_cls_neg = topk_loss_cls_neg.sum() - loss_cls = (loss_cls_pos + loss_cls_neg) / num_total_samples - if self.reg_decoded_bbox: - # When the regression loss (e.g. `IouLoss`, `GIouLoss`) - # is applied directly on the decoded bounding boxes, it - # decodes the already encoded coordinates to absolute format. - bbox_pred = self.bbox_coder.decode(anchors, bbox_pred) - loss_bbox = self.loss_bbox( - bbox_pred, - bbox_targets, - bbox_weights, - avg_factor=num_total_samples) - return loss_cls[None], loss_bbox - - @force_fp32(apply_to=('cls_scores', 'bbox_preds', 'coeff_preds')) - def get_bboxes(self, - cls_scores, - bbox_preds, - coeff_preds, - img_metas, - cfg=None, - rescale=False): - """"Similiar to func:``AnchorHead.get_bboxes``, but additionally - processes coeff_preds. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level - with shape (N, num_anchors * num_classes, H, W) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W) - coeff_preds (list[Tensor]): Mask coefficients for each scale - level with shape (N, num_anchors * num_protos, H, W) - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - cfg (mmcv.Config | None): Test / postprocessing configuration, - if None, test_cfg would be used - rescale (bool): If True, return boxes in original image space. - Default: False. - - Returns: - list[tuple[Tensor, Tensor, Tensor]]: Each item in result_list is - a 3-tuple. The first item is an (n, 5) tensor, where the - first 4 columns are bounding box positions - (tl_x, tl_y, br_x, br_y) and the 5-th column is a score - between 0 and 1. The second item is an (n,) tensor where each - item is the predicted class label of the corresponding box. - The third item is an (n, num_protos) tensor where each item - is the predicted mask coefficients of instance inside the - corresponding box. - """ - assert len(cls_scores) == len(bbox_preds) - num_levels = len(cls_scores) - - device = cls_scores[0].device - featmap_sizes = [cls_scores[i].shape[-2:] for i in range(num_levels)] - mlvl_anchors = self.anchor_generator.grid_anchors( - featmap_sizes, device=device) - - det_bboxes = [] - det_labels = [] - det_coeffs = [] - for img_id in range(len(img_metas)): - cls_score_list = [ - cls_scores[i][img_id].detach() for i in range(num_levels) - ] - bbox_pred_list = [ - bbox_preds[i][img_id].detach() for i in range(num_levels) - ] - coeff_pred_list = [ - coeff_preds[i][img_id].detach() for i in range(num_levels) - ] - img_shape = img_metas[img_id]['img_shape'] - scale_factor = img_metas[img_id]['scale_factor'] - bbox_res = self._get_bboxes_single(cls_score_list, bbox_pred_list, - coeff_pred_list, mlvl_anchors, - img_shape, scale_factor, cfg, - rescale) - det_bboxes.append(bbox_res[0]) - det_labels.append(bbox_res[1]) - det_coeffs.append(bbox_res[2]) - return det_bboxes, det_labels, det_coeffs - - def _get_bboxes_single(self, - cls_score_list, - bbox_pred_list, - coeff_preds_list, - mlvl_anchors, - img_shape, - scale_factor, - cfg, - rescale=False): - """"Similiar to func:``AnchorHead._get_bboxes_single``, but - additionally processes coeff_preds_list and uses fast NMS instead of - traditional NMS. - - Args: - cls_score_list (list[Tensor]): Box scores for a single scale level - Has shape (num_anchors * num_classes, H, W). - bbox_pred_list (list[Tensor]): Box energies / deltas for a single - scale level with shape (num_anchors * 4, H, W). - coeff_preds_list (list[Tensor]): Mask coefficients for a single - scale level with shape (num_anchors * num_protos, H, W). - mlvl_anchors (list[Tensor]): Box reference for a single scale level - with shape (num_total_anchors, 4). - img_shape (tuple[int]): Shape of the input image, - (height, width, 3). - scale_factor (ndarray): Scale factor of the image arange as - (w_scale, h_scale, w_scale, h_scale). - cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used. - rescale (bool): If True, return boxes in original image space. - - Returns: - tuple[Tensor, Tensor, Tensor]: The first item is an (n, 5) tensor, - where the first 4 columns are bounding box positions - (tl_x, tl_y, br_x, br_y) and the 5-th column is a score between - 0 and 1. The second item is an (n,) tensor where each item is - the predicted class label of the corresponding box. The third - item is an (n, num_protos) tensor where each item is the - predicted mask coefficients of instance inside the - corresponding box. - """ - cfg = self.test_cfg if cfg is None else cfg - assert len(cls_score_list) == len(bbox_pred_list) == len(mlvl_anchors) - mlvl_bboxes = [] - mlvl_scores = [] - mlvl_coeffs = [] - for cls_score, bbox_pred, coeff_pred, anchors in \ - zip(cls_score_list, bbox_pred_list, - coeff_preds_list, mlvl_anchors): - assert cls_score.size()[-2:] == bbox_pred.size()[-2:] - cls_score = cls_score.permute(1, 2, - 0).reshape(-1, self.cls_out_channels) - if self.use_sigmoid_cls: - scores = cls_score.sigmoid() - else: - scores = cls_score.softmax(-1) - bbox_pred = bbox_pred.permute(1, 2, 0).reshape(-1, 4) - coeff_pred = coeff_pred.permute(1, 2, - 0).reshape(-1, self.num_protos) - nms_pre = cfg.get('nms_pre', -1) - if nms_pre > 0 and scores.shape[0] > nms_pre: - # Get maximum scores for foreground classes. - if self.use_sigmoid_cls: - max_scores, _ = scores.max(dim=1) - else: - # remind that we set FG labels to [0, num_class-1] - # since mmdet v2.0 - # BG cat_id: num_class - max_scores, _ = scores[:, :-1].max(dim=1) - _, topk_inds = max_scores.topk(nms_pre) - anchors = anchors[topk_inds, :] - bbox_pred = bbox_pred[topk_inds, :] - scores = scores[topk_inds, :] - coeff_pred = coeff_pred[topk_inds, :] - bboxes = self.bbox_coder.decode( - anchors, bbox_pred, max_shape=img_shape) - mlvl_bboxes.append(bboxes) - mlvl_scores.append(scores) - mlvl_coeffs.append(coeff_pred) - mlvl_bboxes = torch.cat(mlvl_bboxes) - if rescale: - mlvl_bboxes /= mlvl_bboxes.new_tensor(scale_factor) - mlvl_scores = torch.cat(mlvl_scores) - mlvl_coeffs = torch.cat(mlvl_coeffs) - if self.use_sigmoid_cls: - # Add a dummy background class to the backend when using sigmoid - # remind that we set FG labels to [0, num_class-1] since mmdet v2.0 - # BG cat_id: num_class - padding = mlvl_scores.new_zeros(mlvl_scores.shape[0], 1) - mlvl_scores = torch.cat([mlvl_scores, padding], dim=1) - det_bboxes, det_labels, det_coeffs = fast_nms(mlvl_bboxes, mlvl_scores, - mlvl_coeffs, - cfg.score_thr, - cfg.iou_thr, cfg.top_k, - cfg.max_per_img) - return det_bboxes, det_labels, det_coeffs - - -@HEADS.register_module() -class YOLACTSegmHead(nn.Module): - """YOLACT segmentation head used in https://arxiv.org/abs/1904.02689. - - Apply a semantic segmentation loss on feature space using layers that are - only evaluated during training to increase performance with no speed - penalty. - - Args: - in_channels (int): Number of channels in the input feature map. - num_classes (int): Number of categories excluding the background - category. - loss_segm (dict): Config of semantic segmentation loss. - """ - - def __init__(self, - num_classes, - in_channels=256, - loss_segm=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - loss_weight=1.0)): - super(YOLACTSegmHead, self).__init__() - self.in_channels = in_channels - self.num_classes = num_classes - self.loss_segm = build_loss(loss_segm) - self._init_layers() - self.fp16_enabled = False - - def _init_layers(self): - """Initialize layers of the head.""" - self.segm_conv = nn.Conv2d( - self.in_channels, self.num_classes, kernel_size=1) - - def init_weights(self): - """Initialize weights of the head.""" - xavier_init(self.segm_conv, distribution='uniform') - - def forward(self, x): - """Forward feature from the upstream network. - - Args: - x (Tensor): Feature from the upstream network, which is - a 4D-tensor. - - Returns: - Tensor: Predicted semantic segmentation map with shape - (N, num_classes, H, W). - """ - return self.segm_conv(x) - - @force_fp32(apply_to=('segm_pred', )) - def loss(self, segm_pred, gt_masks, gt_labels): - """Compute loss of the head. - - Args: - segm_pred (list[Tensor]): Predicted semantic segmentation map - with shape (N, num_classes, H, W). - gt_masks (list[Tensor]): Ground truth masks for each image with - the same shape of the input image. - gt_labels (list[Tensor]): Class indices corresponding to each box. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - loss_segm = [] - num_imgs, num_classes, mask_h, mask_w = segm_pred.size() - for idx in range(num_imgs): - cur_segm_pred = segm_pred[idx] - cur_gt_masks = gt_masks[idx].float() - cur_gt_labels = gt_labels[idx] - segm_targets = self.get_targets(cur_segm_pred, cur_gt_masks, - cur_gt_labels) - if segm_targets is None: - loss = self.loss_segm(cur_segm_pred, - torch.zeros_like(cur_segm_pred), - torch.zeros_like(cur_segm_pred)) - else: - loss = self.loss_segm( - cur_segm_pred, - segm_targets, - avg_factor=num_imgs * mask_h * mask_w) - loss_segm.append(loss) - return dict(loss_segm=loss_segm) - - def get_targets(self, segm_pred, gt_masks, gt_labels): - """Compute semantic segmentation targets for each image. - - Args: - segm_pred (Tensor): Predicted semantic segmentation map - with shape (num_classes, H, W). - gt_masks (Tensor): Ground truth masks for each image with - the same shape of the input image. - gt_labels (Tensor): Class indices corresponding to each box. - - Returns: - Tensor: Semantic segmentation targets with shape - (num_classes, H, W). - """ - if gt_masks.size(0) == 0: - return None - num_classes, mask_h, mask_w = segm_pred.size() - with torch.no_grad(): - downsampled_masks = F.interpolate( - gt_masks.unsqueeze(0), (mask_h, mask_w), - mode='bilinear', - align_corners=False).squeeze(0) - downsampled_masks = downsampled_masks.gt(0.5).float() - segm_targets = torch.zeros_like(segm_pred, requires_grad=False) - for obj_idx in range(downsampled_masks.size(0)): - segm_targets[gt_labels[obj_idx] - 1] = torch.max( - segm_targets[gt_labels[obj_idx] - 1], - downsampled_masks[obj_idx]) - return segm_targets - - -@HEADS.register_module() -class YOLACTProtonet(nn.Module): - """YOLACT mask head used in https://arxiv.org/abs/1904.02689. - - This head outputs the mask prototypes for YOLACT. - - Args: - in_channels (int): Number of channels in the input feature map. - proto_channels (tuple[int]): Output channels of protonet convs. - proto_kernel_sizes (tuple[int]): Kernel sizes of protonet convs. - include_last_relu (Bool): If keep the last relu of protonet. - num_protos (int): Number of prototypes. - num_classes (int): Number of categories excluding the background - category. - loss_mask_weight (float): Reweight the mask loss by this factor. - max_masks_to_train (int): Maximum number of masks to train for - each image. - """ - - def __init__(self, - num_classes, - in_channels=256, - proto_channels=(256, 256, 256, None, 256, 32), - proto_kernel_sizes=(3, 3, 3, -2, 3, 1), - include_last_relu=True, - num_protos=32, - loss_mask_weight=1.0, - max_masks_to_train=100): - super(YOLACTProtonet, self).__init__() - self.in_channels = in_channels - self.proto_channels = proto_channels - self.proto_kernel_sizes = proto_kernel_sizes - self.include_last_relu = include_last_relu - self.protonet = self._init_layers() - - self.loss_mask_weight = loss_mask_weight - self.num_protos = num_protos - self.num_classes = num_classes - self.max_masks_to_train = max_masks_to_train - self.fp16_enabled = False - - def _init_layers(self): - """A helper function to take a config setting and turn it into a - network.""" - # Possible patterns: - # ( 256, 3) -> conv - # ( 256,-2) -> deconv - # (None,-2) -> bilinear interpolate - in_channels = self.in_channels - protonets = nn.ModuleList() - for num_channels, kernel_size in zip(self.proto_channels, - self.proto_kernel_sizes): - if kernel_size > 0: - layer = nn.Conv2d( - in_channels, - num_channels, - kernel_size, - padding=kernel_size // 2) - else: - if num_channels is None: - layer = InterpolateModule( - scale_factor=-kernel_size, - mode='bilinear', - align_corners=False) - else: - layer = nn.ConvTranspose2d( - in_channels, - num_channels, - -kernel_size, - padding=kernel_size // 2) - protonets.append(layer) - protonets.append(nn.ReLU(inplace=True)) - in_channels = num_channels if num_channels is not None \ - else in_channels - if not self.include_last_relu: - protonets = protonets[:-1] - return nn.Sequential(*protonets) - - def init_weights(self): - """Initialize weights of the head.""" - for m in self.protonet: - if isinstance(m, nn.Conv2d): - xavier_init(m, distribution='uniform') - - def forward(self, x, coeff_pred, bboxes, img_meta, sampling_results=None): - """Forward feature from the upstream network to get prototypes and - linearly combine the prototypes, using masks coefficients, into - instance masks. Finally, crop the instance masks with given bboxes. - - Args: - x (Tensor): Feature from the upstream network, which is - a 4D-tensor. - coeff_pred (list[Tensor]): Mask coefficients for each scale - level with shape (N, num_anchors * num_protos, H, W). - bboxes (list[Tensor]): Box used for cropping with shape - (N, num_anchors * 4, H, W). During training, they are - ground truth boxes. During testing, they are predicted - boxes. - img_meta (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - sampling_results (List[:obj:``SamplingResult``]): Sampler results - for each image. - - Returns: - list[Tensor]: Predicted instance segmentation masks. - """ - prototypes = self.protonet(x) - prototypes = prototypes.permute(0, 2, 3, 1).contiguous() - - num_imgs = x.size(0) - # Training state - if self.training: - coeff_pred_list = [] - for coeff_pred_per_level in coeff_pred: - coeff_pred_per_level = \ - coeff_pred_per_level.permute(0, 2, 3, 1)\ - .reshape(num_imgs, -1, self.num_protos) - coeff_pred_list.append(coeff_pred_per_level) - coeff_pred = torch.cat(coeff_pred_list, dim=1) - - mask_pred_list = [] - for idx in range(num_imgs): - cur_prototypes = prototypes[idx] - cur_coeff_pred = coeff_pred[idx] - cur_bboxes = bboxes[idx] - cur_img_meta = img_meta[idx] - - # Testing state - if not self.training: - bboxes_for_cropping = cur_bboxes - else: - cur_sampling_results = sampling_results[idx] - pos_assigned_gt_inds = \ - cur_sampling_results.pos_assigned_gt_inds - bboxes_for_cropping = cur_bboxes[pos_assigned_gt_inds].clone() - pos_inds = cur_sampling_results.pos_inds - cur_coeff_pred = cur_coeff_pred[pos_inds] - - # Linearly combine the prototypes with the mask coefficients - mask_pred = cur_prototypes @ cur_coeff_pred.t() - mask_pred = torch.sigmoid(mask_pred) - - h, w = cur_img_meta['img_shape'][:2] - bboxes_for_cropping[:, 0] /= w - bboxes_for_cropping[:, 1] /= h - bboxes_for_cropping[:, 2] /= w - bboxes_for_cropping[:, 3] /= h - - mask_pred = self.crop(mask_pred, bboxes_for_cropping) - mask_pred = mask_pred.permute(2, 0, 1).contiguous() - mask_pred_list.append(mask_pred) - return mask_pred_list - - @force_fp32(apply_to=('mask_pred', )) - def loss(self, mask_pred, gt_masks, gt_bboxes, img_meta, sampling_results): - """Compute loss of the head. - - Args: - mask_pred (list[Tensor]): Predicted prototypes with shape - (num_classes, H, W). - gt_masks (list[Tensor]): Ground truth masks for each image with - the same shape of the input image. - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - img_meta (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - sampling_results (List[:obj:``SamplingResult``]): Sampler results - for each image. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - loss_mask = [] - num_imgs = len(mask_pred) - total_pos = 0 - for idx in range(num_imgs): - cur_mask_pred = mask_pred[idx] - cur_gt_masks = gt_masks[idx].float() - cur_gt_bboxes = gt_bboxes[idx] - cur_img_meta = img_meta[idx] - cur_sampling_results = sampling_results[idx] - - pos_assigned_gt_inds = cur_sampling_results.pos_assigned_gt_inds - num_pos = pos_assigned_gt_inds.size(0) - # Since we're producing (near) full image masks, - # it'd take too much vram to backprop on every single mask. - # Thus we select only a subset. - if num_pos > self.max_masks_to_train: - perm = torch.randperm(num_pos) - select = perm[:self.max_masks_to_train] - cur_mask_pred = cur_mask_pred[select] - pos_assigned_gt_inds = pos_assigned_gt_inds[select] - num_pos = self.max_masks_to_train - total_pos += num_pos - - gt_bboxes_for_reweight = cur_gt_bboxes[pos_assigned_gt_inds] - - mask_targets = self.get_targets(cur_mask_pred, cur_gt_masks, - pos_assigned_gt_inds) - if num_pos == 0: - loss = cur_mask_pred.sum() * 0. - elif mask_targets is None: - loss = F.binary_cross_entropy(cur_mask_pred, - torch.zeros_like(cur_mask_pred), - torch.zeros_like(cur_mask_pred)) - else: - cur_mask_pred = torch.clamp(cur_mask_pred, 0, 1) - loss = F.binary_cross_entropy( - cur_mask_pred, mask_targets, - reduction='none') * self.loss_mask_weight - - h, w = cur_img_meta['img_shape'][:2] - gt_bboxes_width = (gt_bboxes_for_reweight[:, 2] - - gt_bboxes_for_reweight[:, 0]) / w - gt_bboxes_height = (gt_bboxes_for_reweight[:, 3] - - gt_bboxes_for_reweight[:, 1]) / h - loss = loss.mean(dim=(1, - 2)) / gt_bboxes_width / gt_bboxes_height - loss = torch.sum(loss) - loss_mask.append(loss) - - if total_pos == 0: - total_pos += 1 # avoid nan - loss_mask = [x / total_pos for x in loss_mask] - - return dict(loss_mask=loss_mask) - - def get_targets(self, mask_pred, gt_masks, pos_assigned_gt_inds): - """Compute instance segmentation targets for each image. - - Args: - mask_pred (Tensor): Predicted prototypes with shape - (num_classes, H, W). - gt_masks (Tensor): Ground truth masks for each image with - the same shape of the input image. - pos_assigned_gt_inds (Tensor): GT indices of the corresponding - positive samples. - Returns: - Tensor: Instance segmentation targets with shape - (num_instances, H, W). - """ - if gt_masks.size(0) == 0: - return None - mask_h, mask_w = mask_pred.shape[-2:] - gt_masks = F.interpolate( - gt_masks.unsqueeze(0), (mask_h, mask_w), - mode='bilinear', - align_corners=False).squeeze(0) - gt_masks = gt_masks.gt(0.5).float() - mask_targets = gt_masks[pos_assigned_gt_inds] - return mask_targets - - def get_seg_masks(self, mask_pred, label_pred, img_meta, rescale): - """Resize, binarize, and format the instance mask predictions. - - Args: - mask_pred (Tensor): shape (N, H, W). - label_pred (Tensor): shape (N, ). - img_meta (dict): Meta information of each image, e.g., - image size, scaling factor, etc. - rescale (bool): If rescale is False, then returned masks will - fit the scale of imgs[0]. - Returns: - list[ndarray]: Mask predictions grouped by their predicted classes. - """ - ori_shape = img_meta['ori_shape'] - scale_factor = img_meta['scale_factor'] - if rescale: - img_h, img_w = ori_shape[:2] - else: - img_h = np.round(ori_shape[0] * scale_factor[1]).astype(np.int32) - img_w = np.round(ori_shape[1] * scale_factor[0]).astype(np.int32) - - cls_segms = [[] for _ in range(self.num_classes)] - if mask_pred.size(0) == 0: - return cls_segms - - mask_pred = F.interpolate( - mask_pred.unsqueeze(0), (img_h, img_w), - mode='bilinear', - align_corners=False).squeeze(0) > 0.5 - mask_pred = mask_pred.cpu().numpy().astype(np.uint8) - - for m, l in zip(mask_pred, label_pred): - cls_segms[l].append(m) - return cls_segms - - def crop(self, masks, boxes, padding=1): - """Crop predicted masks by zeroing out everything not in the predicted - bbox. - - Args: - masks (Tensor): shape [H, W, N]. - boxes (Tensor): bbox coords in relative point form with - shape [N, 4]. - - Return: - Tensor: The cropped masks. - """ - h, w, n = masks.size() - x1, x2 = self.sanitize_coordinates( - boxes[:, 0], boxes[:, 2], w, padding, cast=False) - y1, y2 = self.sanitize_coordinates( - boxes[:, 1], boxes[:, 3], h, padding, cast=False) - - rows = torch.arange( - w, device=masks.device, dtype=x1.dtype).view(1, -1, - 1).expand(h, w, n) - cols = torch.arange( - h, device=masks.device, dtype=x1.dtype).view(-1, 1, - 1).expand(h, w, n) - - masks_left = rows >= x1.view(1, 1, -1) - masks_right = rows < x2.view(1, 1, -1) - masks_up = cols >= y1.view(1, 1, -1) - masks_down = cols < y2.view(1, 1, -1) - - crop_mask = masks_left * masks_right * masks_up * masks_down - - return masks * crop_mask.float() - - def sanitize_coordinates(self, x1, x2, img_size, padding=0, cast=True): - """Sanitizes the input coordinates so that x1 < x2, x1 != x2, x1 >= 0, - and x2 <= image_size. Also converts from relative to absolute - coordinates and casts the results to long tensors. - - Warning: this does things in-place behind the scenes so - copy if necessary. - - Args: - _x1 (Tensor): shape (N, ). - _x2 (Tensor): shape (N, ). - img_size (int): Size of the input image. - padding (int): x1 >= padding, x2 <= image_size-padding. - cast (bool): If cast is false, the result won't be cast to longs. - - Returns: - tuple: - x1 (Tensor): Sanitized _x1. - x2 (Tensor): Sanitized _x2. - """ - x1 = x1 * img_size - x2 = x2 * img_size - if cast: - x1 = x1.long() - x2 = x2.long() - x1 = torch.min(x1, x2) - x2 = torch.max(x1, x2) - x1 = torch.clamp(x1 - padding, min=0) - x2 = torch.clamp(x2 + padding, max=img_size) - return x1, x2 - - -class InterpolateModule(nn.Module): - """This is a module version of F.interpolate. - - Any arguments you give it just get passed along for the ride. - """ - - def __init__(self, *args, **kwargs): - super().__init__() - - self.args = args - self.kwargs = kwargs - - def forward(self, x): - """Forward features from the upstream network.""" - return F.interpolate(x, *self.args, **self.kwargs) diff --git a/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/audiocraft/__init__.py b/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/audiocraft/__init__.py deleted file mode 100644 index 6b8594f470200ff5c000542ef115375ed69b749c..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/audiocraft/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -# flake8: noqa -from . import data, modules, models - -__version__ = '0.0.2a2' diff --git a/spaces/GroveStreet/GTA_SOVITS/modules/mel_processing.py b/spaces/GroveStreet/GTA_SOVITS/modules/mel_processing.py deleted file mode 100644 index 99c5b35beb83f3b288af0fac5b49ebf2c69f062c..0000000000000000000000000000000000000000 --- a/spaces/GroveStreet/GTA_SOVITS/modules/mel_processing.py +++ /dev/null @@ -1,112 +0,0 @@ -import math -import os -import random -import torch -from torch import nn -import torch.nn.functional as F -import torch.utils.data -import numpy as np -import librosa -import librosa.util as librosa_util -from librosa.util import normalize, pad_center, tiny -from scipy.signal import get_window -from scipy.io.wavfile import read -from librosa.filters import mel as librosa_mel_fn - -MAX_WAV_VALUE = 32768.0 - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - return spec - - -def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax): - global mel_basis - dtype_device = str(spec.dtype) + '_' + str(spec.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sr=sampling_rate, n_fft=n_fft, n_mels=num_mels, fmin=fmin, fmax=fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device) - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - return spec - - -def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global mel_basis, hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sr=sampling_rate, n_fft=n_fft, n_mels=num_mels, fmin=fmin, fmax=fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device) - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - - return spec diff --git a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/utils/pos_embed.py b/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/utils/pos_embed.py deleted file mode 100644 index 836bd43d0bfe699b0b37bfec81509e06a2a28f27..0000000000000000000000000000000000000000 --- a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/utils/pos_embed.py +++ /dev/null @@ -1,58 +0,0 @@ -# Copyright (c) EPFL VILAB. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -# -------------------------------------------------------- -# Based on BEiT, timm, DINO DeiT and MAE-priv code bases -# https://github.com/microsoft/unilm/tree/master/beit -# https://github.com/rwightman/pytorch-image-models/tree/master/timm -# https://github.com/facebookresearch/deit -# https://github.com/facebookresearch/dino -# https://github.com/BUPT-PRIV/MAE-priv -# -------------------------------------------------------- - -import re - -import torch - - -def interpolate_pos_embed_vit(model, checkpoint_model): - if 'pos_embed' in checkpoint_model: - pos_embed_checkpoint = checkpoint_model['pos_embed'] - embedding_size = pos_embed_checkpoint.shape[-1] - num_patches = model.patch_embed.num_patches - num_extra_tokens = model.pos_embed.shape[-2] - num_patches - # height (== width) for the checkpoint position embedding - orig_size = int((pos_embed_checkpoint.shape[-2] - num_extra_tokens) ** 0.5) - # height (== width) for the new position embedding - new_size = int(num_patches ** 0.5) - # class_token and dist_token are kept unchanged - if orig_size != new_size: - print("Position interpolate from %dx%d to %dx%d" % (orig_size, orig_size, new_size, new_size)) - extra_tokens = pos_embed_checkpoint[:, :num_extra_tokens] - # only the position tokens are interpolated - pos_tokens = pos_embed_checkpoint[:, num_extra_tokens:] - pos_tokens = pos_tokens.reshape(-1, orig_size, orig_size, embedding_size).permute(0, 3, 1, 2) - pos_tokens = torch.nn.functional.interpolate( - pos_tokens, size=(new_size, new_size), mode='bicubic', align_corners=False) - pos_tokens = pos_tokens.permute(0, 2, 3, 1).flatten(1, 2) - new_pos_embed = torch.cat((extra_tokens, pos_tokens), dim=1) - checkpoint_model['pos_embed'] = new_pos_embed - - -def interpolate_pos_embed_multimae(model, checkpoint_model): - pattern = "input_adapters\.(.*)\.pos_emb" - matched_keys = [k for k in checkpoint_model if bool(re.match(pattern, k))] - - for key in matched_keys: - domain = re.match(pattern, key).group(1) # group(0) is entire matched regex - if getattr(model.input_adapters, domain, None) is not None: - pos_embed_checkpoint = checkpoint_model[key] - _, _, orig_H, orig_W = pos_embed_checkpoint.shape - _, _, new_H, new_W = getattr(model.input_adapters, domain).pos_emb.shape - if (orig_H != new_H) or (orig_W != new_W): - print(f"Key {key}: Position interpolate from {orig_H}x{orig_W} to {new_H}x{new_W}") - pos_embed_checkpoint = torch.nn.functional.interpolate( - pos_embed_checkpoint, size=(new_H, new_W), mode='bicubic', align_corners=False) - checkpoint_model[key] = pos_embed_checkpoint diff --git a/spaces/HaHaBill/LandShapes-Antarctica/netdissect/tool/makesample.py b/spaces/HaHaBill/LandShapes-Antarctica/netdissect/tool/makesample.py deleted file mode 100644 index 36276267677360d8238a8dbf71e9753dcc327681..0000000000000000000000000000000000000000 --- a/spaces/HaHaBill/LandShapes-Antarctica/netdissect/tool/makesample.py +++ /dev/null @@ -1,169 +0,0 @@ -''' -A simple tool to generate sample of output of a GAN, -subject to filtering, sorting, or intervention. -''' - -import torch, numpy, os, argparse, numbers, sys, shutil -from PIL import Image -from torch.utils.data import TensorDataset -from netdissect.zdataset import standard_z_sample -from netdissect.progress import default_progress, verbose_progress -from netdissect.autoeval import autoimport_eval -from netdissect.workerpool import WorkerBase, WorkerPool -from netdissect.nethook import edit_layers, retain_layers - -def main(): - parser = argparse.ArgumentParser(description='GAN sample making utility') - parser.add_argument('--model', type=str, default=None, - help='constructor for the model to test') - parser.add_argument('--pthfile', type=str, default=None, - help='filename of .pth file for the model') - parser.add_argument('--outdir', type=str, default='images', - help='directory for image output') - parser.add_argument('--size', type=int, default=100, - help='number of images to output') - parser.add_argument('--test_size', type=int, default=None, - help='number of images to test') - parser.add_argument('--layer', type=str, default=None, - help='layer to inspect') - parser.add_argument('--seed', type=int, default=1, - help='seed') - parser.add_argument('--maximize_units', type=int, nargs='+', default=None, - help='units to maximize') - parser.add_argument('--ablate_units', type=int, nargs='+', default=None, - help='units to ablate') - parser.add_argument('--quiet', action='store_true', default=False, - help='silences console output') - if len(sys.argv) == 1: - parser.print_usage(sys.stderr) - sys.exit(1) - args = parser.parse_args() - verbose_progress(not args.quiet) - - # Instantiate the model - model = autoimport_eval(args.model) - if args.pthfile is not None: - data = torch.load(args.pthfile) - if 'state_dict' in data: - meta = {} - for key in data: - if isinstance(data[key], numbers.Number): - meta[key] = data[key] - data = data['state_dict'] - model.load_state_dict(data) - # Unwrap any DataParallel-wrapped model - if isinstance(model, torch.nn.DataParallel): - model = next(model.children()) - # Examine first conv in model to determine input feature size. - first_layer = [c for c in model.modules() - if isinstance(c, (torch.nn.Conv2d, torch.nn.ConvTranspose2d, - torch.nn.Linear))][0] - # 4d input if convolutional, 2d input if first layer is linear. - if isinstance(first_layer, (torch.nn.Conv2d, torch.nn.ConvTranspose2d)): - z_channels = first_layer.in_channels - spatialdims = (1, 1) - else: - z_channels = first_layer.in_features - spatialdims = () - # Instrument the model if needed - if args.maximize_units is not None: - retain_layers(model, [args.layer]) - model.cuda() - - # Get the sample of z vectors - if args.maximize_units is None: - indexes = torch.arange(args.size) - z_sample = standard_z_sample(args.size, z_channels, seed=args.seed) - z_sample = z_sample.view(tuple(z_sample.shape) + spatialdims) - else: - # By default, if maximizing units, get a 'top 5%' sample. - if args.test_size is None: - args.test_size = args.size * 20 - z_universe = standard_z_sample(args.test_size, z_channels, - seed=args.seed) - z_universe = z_universe.view(tuple(z_universe.shape) + spatialdims) - indexes = get_highest_znums(model, z_universe, args.maximize_units, - args.size, seed=args.seed) - z_sample = z_universe[indexes] - - if args.ablate_units: - edit_layers(model, [args.layer]) - dims = max(2, max(args.ablate_units) + 1) # >=2 to avoid broadcast - model.ablation[args.layer] = torch.zeros(dims) - model.ablation[args.layer][args.ablate_units] = 1 - - save_znum_images(args.outdir, model, z_sample, indexes, - args.layer, args.ablate_units) - copy_lightbox_to(args.outdir) - - -def get_highest_znums(model, z_universe, max_units, size, - batch_size=100, seed=1): - # The model should have been instrumented already - retained_items = list(model.retained.items()) - assert len(retained_items) == 1 - layer = retained_items[0][0] - # By default, a 10% sample - progress = default_progress() - num_units = None - with torch.no_grad(): - # Pass 1: collect max activation stats - z_loader = torch.utils.data.DataLoader(TensorDataset(z_universe), - batch_size=batch_size, num_workers=2, - pin_memory=True) - scores = [] - for [z] in progress(z_loader, desc='Finding max activations'): - z = z.cuda() - model(z) - feature = model.retained[layer] - num_units = feature.shape[1] - max_feature = feature[:, max_units, ...].view( - feature.shape[0], len(max_units), -1).max(2)[0] - total_feature = max_feature.sum(1) - scores.append(total_feature.cpu()) - scores = torch.cat(scores, 0) - highest = (-scores).sort(0)[1][:size].sort(0)[0] - return highest - - -def save_znum_images(dirname, model, z_sample, indexes, layer, ablated_units, - name_template="image_{}.png", lightbox=False, batch_size=100, seed=1): - progress = default_progress() - os.makedirs(dirname, exist_ok=True) - with torch.no_grad(): - # Pass 2: now generate images - z_loader = torch.utils.data.DataLoader(TensorDataset(z_sample), - batch_size=batch_size, num_workers=2, - pin_memory=True) - saver = WorkerPool(SaveImageWorker) - if ablated_units is not None: - dims = max(2, max(ablated_units) + 1) # >=2 to avoid broadcast - mask = torch.zeros(dims) - mask[ablated_units] = 1 - model.ablation[layer] = mask[None,:,None,None].cuda() - for batch_num, [z] in enumerate(progress(z_loader, - desc='Saving images')): - z = z.cuda() - start_index = batch_num * batch_size - im = ((model(z) + 1) / 2 * 255).clamp(0, 255).byte().permute( - 0, 2, 3, 1).cpu() - for i in range(len(im)): - index = i + start_index - if indexes is not None: - index = indexes[index].item() - filename = os.path.join(dirname, name_template.format(index)) - saver.add(im[i].numpy(), filename) - saver.join() - -def copy_lightbox_to(dirname): - srcdir = os.path.realpath( - os.path.join(os.getcwd(), os.path.dirname(__file__))) - shutil.copy(os.path.join(srcdir, 'lightbox.html'), - os.path.join(dirname, '+lightbox.html')) - -class SaveImageWorker(WorkerBase): - def work(self, data, filename): - Image.fromarray(data).save(filename, optimize=True, quality=100) - -if __name__ == '__main__': - main() diff --git a/spaces/Harshitthaa/Harshitthaamyfirstai/README.md b/spaces/Harshitthaa/Harshitthaamyfirstai/README.md deleted file mode 100644 index 8069b2a90ff774731151e2c4f68c3312b50249a0..0000000000000000000000000000000000000000 --- a/spaces/Harshitthaa/Harshitthaamyfirstai/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Harshitthaamyfirstai -emoji: 📊 -colorFrom: gray -colorTo: gray -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/templates/frontend/assets/index.b43d8183.js b/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/templates/frontend/assets/index.b43d8183.js deleted file mode 100644 index 4fb8a721b8135fcfe849a9cd9b544874c8fdf442..0000000000000000000000000000000000000000 --- a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/templates/frontend/assets/index.b43d8183.js +++ /dev/null @@ -1,2 +0,0 @@ -import{C as p}from"./Column.06c172ac.js";import"./index.396f4a72.js";const t=["static"];export{p as Component,t as modes}; -//# sourceMappingURL=index.b43d8183.js.map diff --git a/spaces/Hyperion1970/JosefJilek-loliDiffusion/app.py b/spaces/Hyperion1970/JosefJilek-loliDiffusion/app.py deleted file mode 100644 index c6fe0376ea04cdfb76e4a1e5e544b6015b99c517..0000000000000000000000000000000000000000 --- a/spaces/Hyperion1970/JosefJilek-loliDiffusion/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/JosefJilek/loliDiffusion").launch() \ No newline at end of file diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/data/encoders/characters.py b/spaces/ICML2022/OFA/fairseq/fairseq/data/encoders/characters.py deleted file mode 100644 index 494ea219392716dc75d2c1e19d71cd55b9b2f4ba..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/data/encoders/characters.py +++ /dev/null @@ -1,30 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -from fairseq.data.encoders import register_bpe - - -SPACE = chr(32) -SPACE_ESCAPE = chr(9601) - - -@register_bpe("characters") -class Characters(object): - def __init__(self, *unused): - pass - - @staticmethod - def add_args(parser): - pass - - @staticmethod - def encode(x: str) -> str: - escaped = x.replace(SPACE, SPACE_ESCAPE) - return SPACE.join(list(escaped)) - - @staticmethod - def decode(x: str) -> str: - return x.replace(SPACE, "").replace(SPACE_ESCAPE, SPACE) diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/models/bart/__init__.py b/spaces/ICML2022/OFA/fairseq/fairseq/models/bart/__init__.py deleted file mode 100644 index a701923f7e5a2a8aa9b75e5580ddea22907f53ee..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/models/bart/__init__.py +++ /dev/null @@ -1,7 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from .hub_interface import * # noqa -from .model import * # noqa diff --git a/spaces/Intel/ldm3d/static/three6dof.html b/spaces/Intel/ldm3d/static/three6dof.html deleted file mode 100644 index 9927c447a39bfeb32e0bcf15afd96e645be23828..0000000000000000000000000000000000000000 --- a/spaces/Intel/ldm3d/static/three6dof.html +++ /dev/null @@ -1,180 +0,0 @@ - - - -Instructions: Upload up to 2 photos. For optimal results, upload a clear front-facing image (see example). To do so, either drag and drop your photo or click Upload Face, then press Submit.
-Other information:
-
- Chat with GPT with your voice in your native language !
-
If it fails enter custom session key see video for reference refer
- Bhavesh Baht video
-
Decoupling Magnitude and Phase Estimation with Deep ResUNet for Music Source Separation | Github Repo
" - -examples = [['example.wav']] -gr.Interface( - inference, - gr.inputs.Audio(type="file", label="Input"), - [gr.outputs.Audio(type="file", label="Vocals"),gr.outputs.Audio(type="file", label="Accompaniment")], - title=title, - description=description, - article=article, - enable_queue=True, - examples=examples - ).launch(debug=True) \ No newline at end of file diff --git a/spaces/akhaliq/Music_Source_Separation/bytesep/data/__init__.py b/spaces/akhaliq/Music_Source_Separation/bytesep/data/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/akhaliq/multi-modal_chinese_stable_diffusion_v1.0/app.py b/spaces/akhaliq/multi-modal_chinese_stable_diffusion_v1.0/app.py deleted file mode 100644 index f818aab7add38f534726b81ef9339c29b65f31b3..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/multi-modal_chinese_stable_diffusion_v1.0/app.py +++ /dev/null @@ -1,21 +0,0 @@ -import os -# os.system("""pip install "modelscope[multi-modal]" -f https://modelscope.oss-cn-beijing.aliyuncs.com/releases/repo.html""") -import gradio as gr -import cv2 -from modelscope.pipelines import pipeline -from modelscope.utils.constant import Tasks - -text2image = pipeline(Tasks.text_to_image_synthesis,"damo/multi-modal_chinese_stable_diffusion_v1.0") - -def inference(text): - - result = text2image({"text":text}) - cv2.imwrite("result.png", result["output_imgs"][0]) - - return "result.png" - - - -title = "chinese stable diffusion" - -gr.Interface(inference, "text","image", title=title).launch() \ No newline at end of file diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/rich/_inspect.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/rich/_inspect.py deleted file mode 100644 index 262695b1c4723bfb57569f3badd6f81f1cccd3df..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/rich/_inspect.py +++ /dev/null @@ -1,210 +0,0 @@ -from __future__ import absolute_import - -from inspect import cleandoc, getdoc, getfile, isclass, ismodule, signature -from typing import Any, Iterable, Optional, Tuple - -from .console import RenderableType, Group -from .highlighter import ReprHighlighter -from .jupyter import JupyterMixin -from .panel import Panel -from .pretty import Pretty -from .table import Table -from .text import Text, TextType - - -def _first_paragraph(doc: str) -> str: - """Get the first paragraph from a docstring.""" - paragraph, _, _ = doc.partition("\n\n") - return paragraph - - -def _reformat_doc(doc: str) -> str: - """Reformat docstring.""" - doc = cleandoc(doc).strip() - return doc - - -class Inspect(JupyterMixin): - """A renderable to inspect any Python Object. - - Args: - obj (Any): An object to inspect. - title (str, optional): Title to display over inspect result, or None use type. Defaults to None. - help (bool, optional): Show full help text rather than just first paragraph. Defaults to False. - methods (bool, optional): Enable inspection of callables. Defaults to False. - docs (bool, optional): Also render doc strings. Defaults to True. - private (bool, optional): Show private attributes (beginning with underscore). Defaults to False. - dunder (bool, optional): Show attributes starting with double underscore. Defaults to False. - sort (bool, optional): Sort attributes alphabetically. Defaults to True. - all (bool, optional): Show all attributes. Defaults to False. - value (bool, optional): Pretty print value of object. Defaults to True. - """ - - def __init__( - self, - obj: Any, - *, - title: Optional[TextType] = None, - help: bool = False, - methods: bool = False, - docs: bool = True, - private: bool = False, - dunder: bool = False, - sort: bool = True, - all: bool = True, - value: bool = True, - ) -> None: - self.highlighter = ReprHighlighter() - self.obj = obj - self.title = title or self._make_title(obj) - if all: - methods = private = dunder = True - self.help = help - self.methods = methods - self.docs = docs or help - self.private = private or dunder - self.dunder = dunder - self.sort = sort - self.value = value - - def _make_title(self, obj: Any) -> Text: - """Make a default title.""" - title_str = ( - str(obj) - if (isclass(obj) or callable(obj) or ismodule(obj)) - else str(type(obj)) - ) - title_text = self.highlighter(title_str) - return title_text - - def __rich__(self) -> Panel: - return Panel.fit( - Group(*self._render()), - title=self.title, - border_style="scope.border", - padding=(0, 1), - ) - - def _get_signature(self, name: str, obj: Any) -> Optional[Text]: - """Get a signature for a callable.""" - try: - _signature = str(signature(obj)) + ":" - except ValueError: - _signature = "(...)" - except TypeError: - return None - - source_filename: Optional[str] = None - try: - source_filename = getfile(obj) - except TypeError: - pass - - callable_name = Text(name, style="inspect.callable") - if source_filename: - callable_name.stylize(f"link file://{source_filename}") - signature_text = self.highlighter(_signature) - - qualname = name or getattr(obj, "__qualname__", name) - qual_signature = Text.assemble( - ("def ", "inspect.def"), (qualname, "inspect.callable"), signature_text - ) - - return qual_signature - - def _render(self) -> Iterable[RenderableType]: - """Render object.""" - - def sort_items(item: Tuple[str, Any]) -> Tuple[bool, str]: - key, (_error, value) = item - return (callable(value), key.strip("_").lower()) - - def safe_getattr(attr_name: str) -> Tuple[Any, Any]: - """Get attribute or any exception.""" - try: - return (None, getattr(obj, attr_name)) - except Exception as error: - return (error, None) - - obj = self.obj - keys = dir(obj) - total_items = len(keys) - if not self.dunder: - keys = [key for key in keys if not key.startswith("__")] - if not self.private: - keys = [key for key in keys if not key.startswith("_")] - not_shown_count = total_items - len(keys) - items = [(key, safe_getattr(key)) for key in keys] - if self.sort: - items.sort(key=sort_items) - - items_table = Table.grid(padding=(0, 1), expand=False) - items_table.add_column(justify="right") - add_row = items_table.add_row - highlighter = self.highlighter - - if callable(obj): - signature = self._get_signature("", obj) - if signature is not None: - yield signature - yield "" - - if self.docs: - _doc = getdoc(obj) - if _doc is not None: - if not self.help: - _doc = _first_paragraph(_doc) - doc_text = Text(_reformat_doc(_doc), style="inspect.help") - doc_text = highlighter(doc_text) - yield doc_text - yield "" - - if self.value and not (isclass(obj) or callable(obj) or ismodule(obj)): - yield Panel( - Pretty(obj, indent_guides=True, max_length=10, max_string=60), - border_style="inspect.value.border", - ) - yield "" - - for key, (error, value) in items: - key_text = Text.assemble( - ( - key, - "inspect.attr.dunder" if key.startswith("__") else "inspect.attr", - ), - (" =", "inspect.equals"), - ) - if error is not None: - warning = key_text.copy() - warning.stylize("inspect.error") - add_row(warning, highlighter(repr(error))) - continue - - if callable(value): - if not self.methods: - continue - - _signature_text = self._get_signature(key, value) - if _signature_text is None: - add_row(key_text, Pretty(value, highlighter=highlighter)) - else: - if self.docs: - docs = getdoc(value) - if docs is not None: - _doc = _reformat_doc(str(docs)) - if not self.help: - _doc = _first_paragraph(_doc) - _signature_text.append("\n" if "\n" in _doc else " ") - doc = highlighter(_doc) - doc.stylize("inspect.doc") - _signature_text.append(doc) - - add_row(key_text, _signature_text) - else: - add_row(key_text, Pretty(value, highlighter=highlighter)) - if items_table.row_count: - yield items_table - else: - yield Text.from_markup( - f"[b cyan]{not_shown_count}[/][i] attribute(s) not shown.[/i] Run [b][magenta]inspect[/]([not b]inspect[/])[/b] for options." - ) diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/rich/markup.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/rich/markup.py deleted file mode 100644 index 619540202cb554f93112b7779c235cf5309d499f..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/rich/markup.py +++ /dev/null @@ -1,244 +0,0 @@ -from ast import literal_eval -from operator import attrgetter -import re -from typing import Callable, Iterable, List, Match, NamedTuple, Optional, Tuple, Union - -from .errors import MarkupError -from .style import Style -from .text import Span, Text -from .emoji import EmojiVariant -from ._emoji_replace import _emoji_replace - - -RE_TAGS = re.compile( - r"""((\\*)\[([a-z#\/@].*?)\])""", - re.VERBOSE, -) - -RE_HANDLER = re.compile(r"^([\w\.]*?)(\(.*?\))?$") - - -class Tag(NamedTuple): - """A tag in console markup.""" - - name: str - """The tag name. e.g. 'bold'.""" - parameters: Optional[str] - """Any additional parameters after the name.""" - - def __str__(self) -> str: - return ( - self.name if self.parameters is None else f"{self.name} {self.parameters}" - ) - - @property - def markup(self) -> str: - """Get the string representation of this tag.""" - return ( - f"[{self.name}]" - if self.parameters is None - else f"[{self.name}={self.parameters}]" - ) - - -_ReStringMatch = Match[str] # regex match object -_ReSubCallable = Callable[[_ReStringMatch], str] # Callable invoked by re.sub -_EscapeSubMethod = Callable[[_ReSubCallable, str], str] # Sub method of a compiled re - - -def escape( - markup: str, _escape: _EscapeSubMethod = re.compile(r"(\\*)(\[[a-z#\/@].*?\])").sub -) -> str: - """Escapes text so that it won't be interpreted as markup. - - Args: - markup (str): Content to be inserted in to markup. - - Returns: - str: Markup with square brackets escaped. - """ - - def escape_backslashes(match: Match[str]) -> str: - """Called by re.sub replace matches.""" - backslashes, text = match.groups() - return f"{backslashes}{backslashes}\\{text}" - - markup = _escape(escape_backslashes, markup) - return markup - - -def _parse(markup: str) -> Iterable[Tuple[int, Optional[str], Optional[Tag]]]: - """Parse markup in to an iterable of tuples of (position, text, tag). - - Args: - markup (str): A string containing console markup - - """ - position = 0 - _divmod = divmod - _Tag = Tag - for match in RE_TAGS.finditer(markup): - full_text, escapes, tag_text = match.groups() - start, end = match.span() - if start > position: - yield start, markup[position:start], None - if escapes: - backslashes, escaped = _divmod(len(escapes), 2) - if backslashes: - # Literal backslashes - yield start, "\\" * backslashes, None - start += backslashes * 2 - if escaped: - # Escape of tag - yield start, full_text[len(escapes) :], None - position = end - continue - text, equals, parameters = tag_text.partition("=") - yield start, None, _Tag(text, parameters if equals else None) - position = end - if position < len(markup): - yield position, markup[position:], None - - -def render( - markup: str, - style: Union[str, Style] = "", - emoji: bool = True, - emoji_variant: Optional[EmojiVariant] = None, -) -> Text: - """Render console markup in to a Text instance. - - Args: - markup (str): A string containing console markup. - emoji (bool, optional): Also render emoji code. Defaults to True. - - Raises: - MarkupError: If there is a syntax error in the markup. - - Returns: - Text: A test instance. - """ - emoji_replace = _emoji_replace - if "[" not in markup: - return Text( - emoji_replace(markup, default_variant=emoji_variant) if emoji else markup, - style=style, - ) - text = Text(style=style) - append = text.append - normalize = Style.normalize - - style_stack: List[Tuple[int, Tag]] = [] - pop = style_stack.pop - - spans: List[Span] = [] - append_span = spans.append - - _Span = Span - _Tag = Tag - - def pop_style(style_name: str) -> Tuple[int, Tag]: - """Pop tag matching given style name.""" - for index, (_, tag) in enumerate(reversed(style_stack), 1): - if tag.name == style_name: - return pop(-index) - raise KeyError(style_name) - - for position, plain_text, tag in _parse(markup): - if plain_text is not None: - append(emoji_replace(plain_text) if emoji else plain_text) - elif tag is not None: - if tag.name.startswith("/"): # Closing tag - style_name = tag.name[1:].strip() - - if style_name: # explicit close - style_name = normalize(style_name) - try: - start, open_tag = pop_style(style_name) - except KeyError: - raise MarkupError( - f"closing tag '{tag.markup}' at position {position} doesn't match any open tag" - ) from None - else: # implicit close - try: - start, open_tag = pop() - except IndexError: - raise MarkupError( - f"closing tag '[/]' at position {position} has nothing to close" - ) from None - - if open_tag.name.startswith("@"): - if open_tag.parameters: - handler_name = "" - parameters = open_tag.parameters.strip() - handler_match = RE_HANDLER.match(parameters) - if handler_match is not None: - handler_name, match_parameters = handler_match.groups() - parameters = ( - "()" if match_parameters is None else match_parameters - ) - - try: - meta_params = literal_eval(parameters) - except SyntaxError as error: - raise MarkupError( - f"error parsing {parameters!r} in {open_tag.parameters!r}; {error.msg}" - ) - except Exception as error: - raise MarkupError( - f"error parsing {open_tag.parameters!r}; {error}" - ) from None - - if handler_name: - meta_params = ( - handler_name, - meta_params - if isinstance(meta_params, tuple) - else (meta_params,), - ) - - else: - meta_params = () - - append_span( - _Span( - start, len(text), Style(meta={open_tag.name: meta_params}) - ) - ) - else: - append_span(_Span(start, len(text), str(open_tag))) - - else: # Opening tag - normalized_tag = _Tag(normalize(tag.name), tag.parameters) - style_stack.append((len(text), normalized_tag)) - - text_length = len(text) - while style_stack: - start, tag = style_stack.pop() - style = str(tag) - if style: - append_span(_Span(start, text_length, style)) - - text.spans = sorted(spans[::-1], key=attrgetter("start")) - return text - - -if __name__ == "__main__": # pragma: no cover - - MARKUP = [ - "[red]Hello World[/red]", - "[magenta]Hello [b]World[/b]", - "[bold]Bold[italic] bold and italic [/bold]italic[/italic]", - "Click [link=https://www.willmcgugan.com]here[/link] to visit my Blog", - ":warning-emoji: [bold red blink] DANGER![/]", - ] - - from pip._vendor.rich.table import Table - from pip._vendor.rich import print - - grid = Table("Markup", "Result", padding=(0, 1)) - - for markup in MARKUP: - grid.add_row(Text(markup), markup) - - print(grid) diff --git a/spaces/alistairmcleay/cambridge-masters-project/scripts/simulate_interaction.py b/spaces/alistairmcleay/cambridge-masters-project/scripts/simulate_interaction.py deleted file mode 100644 index fda8af775ed5870a0f3372f4527ce7657d0fb61f..0000000000000000000000000000000000000000 --- a/spaces/alistairmcleay/cambridge-masters-project/scripts/simulate_interaction.py +++ /dev/null @@ -1,167 +0,0 @@ -import sys -import traceback -import pandas as pd - -# from tqdm import tqdm -from UBAR_code.interaction import UBAR_interact -from user_model_code.interaction import multiwoz_interact -from UBAR_code.interaction.UBAR_interact import bcolors - - -# from tqdm import tqdm -from scripts.UBAR_code.interaction import UBAR_interact -from scripts.user_model_code.interaction import multiwoz_interact -from scripts.UBAR_code.interaction.UBAR_interact import bcolors - - -def instantiate_agents(): - - UBAR_checkpoint_path = "cambridge-masters-project/epoch50_trloss0.59_gpt2" - user_model_checkpoint_path = "cambridge-masters-project/MultiWOZ-full_checkpoint_step340k" - - sys_model = UBAR_interact.UbarSystemModel( - "UBAR_sys_model", UBAR_checkpoint_path, "cambridge-masters-project/scripts/UBAR_code/interaction/config.yaml" - ) - - user_model = multiwoz_interact.NeuralAgent( - "user", user_model_checkpoint_path, "cambridge-masters-project/scripts/user_model_code/interaction/config.yaml" - ) - - return sys_model, user_model - - -def read_multiwoz_data(): - """ - Read the multiwoz 2.0 raw data from the .json file - """ - raw_mwoz_20_path = "cambridge-masters-project/data/raw/UBAR/multi-woz/data.json" - df_raw_mwoz = pd.read_json(raw_mwoz_20_path) - return df_raw_mwoz - - -def load_test_val_lists(): - val_list_file = "cambridge-masters-project/data/raw/UBAR/multi-woz/valListFile.json" - test_list_file = "cambridge-masters-project/data/raw/UBAR/multi-woz/testListFile.json" - - -def main( - write_to_file=False, ground_truth_system_responses=False, train_only=True, n_dialogues="all", log_successes=False -): - sys_model, user_model = instantiate_agents() - - # TODO: move hardcoded vars into config file - raw_mwoz_20_path = "cambridge-masters-project/data/raw/UBAR/multi-woz/data.json" - user_utterances_out_path = "cambridge-masters-project/data/preprocessed/UBAR/user_utterances_from_simulator.txt" - logging_successes_path = "cambridge-masters-project/data/preprocessed/UBAR/logging_successes" - sys_model.print_intermediary_info = False - user_model.print_intermediary_info = False - - df_raw_mwoz = pd.read_json(raw_mwoz_20_path) - if n_dialogues == "all": - n_dialogues = len(df_raw_mwoz.columns) - - curr_dialogue_user_utterances_formatted = [] - - print("Loading goals...") - goals = multiwoz_interact.read_multiWOZ_20_goals(raw_mwoz_20_path, n_dialogues) - - # Write column headers - if write_to_file: - with open(user_utterances_out_path, "w") as f: - f.write("Dialogue #\tDialogue ID\tTurn #\tSystem Response\n") - - print("Loading data...") - df_mwoz_data = read_multiwoz_data() - val_list, test_list = load_test_val_lists() - - successful_dialogues = 0 - total_dialogues_generated = 0 # train dialogues only - for dialogue_idx, (goal, dialogue_filename) in enumerate(zip(goals, df_mwoz_data.columns)): - if log_successes: - # log successful_dialogues to logging_successes_path every 100 dialogues - if dialogue_idx % 100 == 0: - with open(logging_successes_path, "w") as f: - f.write(str(successful_dialogues) + " / " + str(total_dialogues_generated)) - - curr_dialogue_user_utterances_formatted = [] - if train_only: - if dialogue_filename in val_list or dialogue_filename in test_list: - continue - - total_dialogues_generated += 1 - print("Dialogue: {}".format(dialogue_filename)) - - # There are occasionally exceptions thrown from one of the agents, usually the user - # In this case we simply continue to the next dialogue - try: - # Reset state after each dialogue - sys_model.init_session() - user_model.init_session(ini_goal=goal) - sys_response = "" - - for turn_idx in range(50): - # Turn idx in this case represents the turn as one user utterance AND one system response - usr_response_raw_data_idx = turn_idx * 2 - sys_response_raw_data_idx = turn_idx * 2 + 1 - - user_utterance = user_model.response(sys_response) - print(bcolors.OKBLUE + "User: " + bcolors.ENDC + user_utterance) - - if write_to_file: - user_utterance = user_utterance.replace("\n", " ") - curr_dialogue_user_utterances_formatted.append( - str(dialogue_idx) - + "\t" - + dialogue_filename - + "\t" - + str(usr_response_raw_data_idx) - + "\t" - + user_utterance - + "\n" - ) - - if user_model.is_terminated(): - successful_dialogues += 1 - print(bcolors.OKCYAN + "Dialogue terminated successfully!" + bcolors.ENDC) - print(bcolors.OKCYAN + "---" * 30 + bcolors.ENDC + "\n") - if write_to_file: - # Write whole dialogue to file - with open(user_utterances_out_path, "a") as f: - for line in curr_dialogue_user_utterances_formatted: - f.write(line) - break - - # Next turn materials - if ground_truth_system_responses: - # If we are at the end of the ground truth dialogues - if len(df_mwoz_data.iloc[:, dialogue_idx].log) <= sys_response_raw_data_idx: - print(bcolors.RED + "Dialogue terminated unsuccessfully!" + bcolors.ENDC) - print(bcolors.RED + "---" * 30 + bcolors.ENDC + "\n") - break - sys_response = df_mwoz_data.iloc[:, dialogue_idx].log[sys_response_raw_data_idx]["text"] - else: - sys_response = sys_model.response(user_utterance, turn_idx) - capitalised_sys_response = sys_response[0].upper() + sys_response[1:] - print(bcolors.GREEN + "System: " + bcolors.ENDC + capitalised_sys_response) - - except Exception: - print(bcolors.RED + "*" * 30 + bcolors.ENDC) - print(bcolors.RED + "Error in dialogue {}".format(dialogue_filename) + bcolors.ENDC) - print(bcolors.RED + "*" * 30 + bcolors.ENDC) - traceback.print_exc() - continue - - print("Successful dialogues: {}".format(successful_dialogues)) - print("Total dialogues: {}".format(n_dialogues)) - print("% Successful Dialopues: {}".format(successful_dialogues / n_dialogues)) - - -if __name__ == "__main__": - # TODO: move parameters to config file - # Fix the hacky mess below - ground_truth_system_responses = sys.argv[1] - if ground_truth_system_responses == "False": - ground_truth_system_responses = False - else: - ground_truth_system_responses = True - main(write_to_file=False, ground_truth_system_responses=ground_truth_system_responses) diff --git a/spaces/allknowingroger/Image-Models-Test198/app.py b/spaces/allknowingroger/Image-Models-Test198/app.py deleted file mode 100644 index d3e94b54d6466706575a106cc7eb68690dc57261..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test198/app.py +++ /dev/null @@ -1,144 +0,0 @@ -import gradio as gr -# import os -# import sys -# from pathlib import Path -import time - -models =[ - "DARNIX/dtr2", - "artificialguybr/ColoringBookRedmond-V2", - "ramy21/braintumormodel5", - "Falah/sdlogo", - "ramy21/braintumormodel4", - "harshupanchal/my-pet-lion", - "LilyNgo/lora-trained-xl-colab", - "MdEndan/stable-diffusion-lora-fine-tuned", - "artificialguybr/TshirtDesignRedmond", -] - - -model_functions = {} -model_idx = 1 -for model_path in models: - try: - model_functions[model_idx] = gr.Interface.load(f"models/{model_path}", live=False, preprocess=True, postprocess=False) - except Exception as error: - def the_fn(txt): - return None - model_functions[model_idx] = gr.Interface(fn=the_fn, inputs=["text"], outputs=["image"]) - model_idx+=1 - - -def send_it_idx(idx): - def send_it_fn(prompt): - output = (model_functions.get(str(idx)) or model_functions.get(str(1)))(prompt) - return output - return send_it_fn - -def get_prompts(prompt_text): - return prompt_text - -def clear_it(val): - if int(val) != 0: - val = 0 - else: - val = 0 - pass - return val - -def all_task_end(cnt,t_stamp): - to = t_stamp + 60 - et = time.time() - if et > to and t_stamp != 0: - d = gr.update(value=0) - tog = gr.update(value=1) - #print(f'to: {to} et: {et}') - else: - if cnt != 0: - d = gr.update(value=et) - else: - d = gr.update(value=0) - tog = gr.update(value=0) - #print (f'passing: to: {to} et: {et}') - pass - return d, tog - -def all_task_start(): - print("\n\n\n\n\n\n\n") - t = time.gmtime() - t_stamp = time.time() - current_time = time.strftime("%H:%M:%S", t) - return gr.update(value=t_stamp), gr.update(value=t_stamp), gr.update(value=0) - -def clear_fn(): - nn = len(models) - return tuple([None, *[None for _ in range(nn)]]) - - - -with gr.Blocks(title="SD Models") as my_interface: - with gr.Column(scale=12): - # with gr.Row(): - # gr.Markdown("""- Primary prompt: 你想画的内容(英文单词,如 a cat, 加英文逗号效果更好;点 Improve 按钮进行完善)\n- Real prompt: 完善后的提示词,出现后再点右边的 Run 按钮开始运行""") - with gr.Row(): - with gr.Row(scale=6): - primary_prompt=gr.Textbox(label="Prompt", value="") - # real_prompt=gr.Textbox(label="Real prompt") - with gr.Row(scale=6): - # improve_prompts_btn=gr.Button("Improve") - with gr.Row(): - run=gr.Button("Run",variant="primary") - clear_btn=gr.Button("Clear") - with gr.Row(): - sd_outputs = {} - model_idx = 1 - for model_path in models: - with gr.Column(scale=3, min_width=320): - with gr.Box(): - sd_outputs[model_idx] = gr.Image(label=model_path) - pass - model_idx += 1 - pass - pass - - with gr.Row(visible=False): - start_box=gr.Number(interactive=False) - end_box=gr.Number(interactive=False) - tog_box=gr.Textbox(value=0,interactive=False) - - start_box.change( - all_task_end, - [start_box, end_box], - [start_box, tog_box], - every=1, - show_progress=False) - - primary_prompt.submit(all_task_start, None, [start_box, end_box, tog_box]) - run.click(all_task_start, None, [start_box, end_box, tog_box]) - runs_dict = {} - model_idx = 1 - for model_path in models: - runs_dict[model_idx] = run.click(model_functions[model_idx], inputs=[primary_prompt], outputs=[sd_outputs[model_idx]]) - model_idx += 1 - pass - pass - - # improve_prompts_btn_clicked=improve_prompts_btn.click( - # get_prompts, - # inputs=[primary_prompt], - # outputs=[primary_prompt], - # cancels=list(runs_dict.values())) - clear_btn.click( - clear_fn, - None, - [primary_prompt, *list(sd_outputs.values())], - cancels=[*list(runs_dict.values())]) - tog_box.change( - clear_it, - tog_box, - tog_box, - cancels=[*list(runs_dict.values())]) - -my_interface.queue(concurrency_count=600, status_update_rate=1) -my_interface.launch(inline=True, show_api=False) - \ No newline at end of file diff --git a/spaces/almakedon/faster-whisper-webui/src/hooks/subTaskProgressListener.py b/spaces/almakedon/faster-whisper-webui/src/hooks/subTaskProgressListener.py deleted file mode 100644 index 9a8eaa876fcd18032875d67535e0558494842c60..0000000000000000000000000000000000000000 --- a/spaces/almakedon/faster-whisper-webui/src/hooks/subTaskProgressListener.py +++ /dev/null @@ -1,37 +0,0 @@ -from src.hooks.progressListener import ProgressListener - -from typing import Union - -class SubTaskProgressListener(ProgressListener): - """ - A sub task listener that reports the progress of a sub task to a base task listener - Parameters - ---------- - base_task_listener : ProgressListener - The base progress listener to accumulate overall progress in. - base_task_total : float - The maximum total progress that will be reported to the base progress listener. - sub_task_start : float - The starting progress of a sub task, in respect to the base progress listener. - sub_task_total : float - The total amount of progress a sub task will report to the base progress listener. - """ - def __init__( - self, - base_task_listener: ProgressListener, - base_task_total: float, - sub_task_start: float, - sub_task_total: float, - ): - self.base_task_listener = base_task_listener - self.base_task_total = base_task_total - self.sub_task_start = sub_task_start - self.sub_task_total = sub_task_total - - def on_progress(self, current: Union[int, float], total: Union[int, float]): - sub_task_progress_frac = current / total - sub_task_progress = self.sub_task_start + self.sub_task_total * sub_task_progress_frac - self.base_task_listener.on_progress(sub_task_progress, self.base_task_total) - - def on_finished(self): - self.base_task_listener.on_progress(self.sub_task_start + self.sub_task_total, self.base_task_total) \ No newline at end of file diff --git a/spaces/amankishore/adept-fuyu-8b/README.md b/spaces/amankishore/adept-fuyu-8b/README.md deleted file mode 100644 index 0a5d2fe247ec6217776ba5ffca499f708c8df6c0..0000000000000000000000000000000000000000 --- a/spaces/amankishore/adept-fuyu-8b/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Adept Fuyu 8b -emoji: 🐢 -colorFrom: gray -colorTo: purple -sdk: gradio -sdk_version: 3.49.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/animeartstudio/AnimeModels/README.md b/spaces/animeartstudio/AnimeModels/README.md deleted file mode 100644 index fd58abc9b24f2c68a02207c2c219f8785ccdf2da..0000000000000000000000000000000000000000 --- a/spaces/animeartstudio/AnimeModels/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Maximum Multiplier -emoji: 🛕🛕 -colorFrom: green -colorTo: blue -sdk: gradio -sdk_version: 3.15.0 -app_file: app.py -pinned: true -duplicated_from: pulpapps/Diffusion30-Anime ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/anzorq/hf-spaces-semantic-search/styles/globals.css b/spaces/anzorq/hf-spaces-semantic-search/styles/globals.css deleted file mode 100644 index fd81e885836d815b8019694a910a93d86a43cb66..0000000000000000000000000000000000000000 --- a/spaces/anzorq/hf-spaces-semantic-search/styles/globals.css +++ /dev/null @@ -1,27 +0,0 @@ -@tailwind base; -@tailwind components; -@tailwind utilities; - -:root { - --foreground-rgb: 0, 0, 0; - --background-start-rgb: 214, 219, 220; - --background-end-rgb: 255, 255, 255; -} - -@media (prefers-color-scheme: dark) { - :root { - --foreground-rgb: 255, 255, 255; - --background-start-rgb: 0, 0, 0; - --background-end-rgb: 0, 0, 0; - } -} - -body { - color: rgb(var(--foreground-rgb)); - background: linear-gradient( - to bottom, - transparent, - rgb(var(--background-end-rgb)) - ) - rgb(var(--background-start-rgb)); -} diff --git a/spaces/aphenx/bingo/src/components/user-menu.tsx b/spaces/aphenx/bingo/src/components/user-menu.tsx deleted file mode 100644 index 9bd1edc9cf9f39b63629b021f0c1186b1a7c1341..0000000000000000000000000000000000000000 --- a/spaces/aphenx/bingo/src/components/user-menu.tsx +++ /dev/null @@ -1,113 +0,0 @@ -'use client' - -import { useEffect, useState } from 'react' -import Image from 'next/image' -import { toast } from 'react-hot-toast' -import { Button } from '@/components/ui/button' -import pkg from '../../package.json' -import { - DropdownMenu, - DropdownMenuContent, - DropdownMenuItem, - DropdownMenuSeparator, - DropdownMenuTrigger -} from '@/components/ui/dropdown-menu' -import { IconCopy, IconExternalLink, IconGitHub } from '@/components/ui/icons' -import SettingIcon from '@/assets/images/settings.svg' -import { useCopyToClipboard } from '@/lib/hooks/use-copy-to-clipboard' - -export function UserMenu() { - const [host, setHost] = useState('') - const { isCopied, copyToClipboard } = useCopyToClipboard({ timeout: 2000 }) - useEffect(() => { - setHost(location.host) - }, []) - - useEffect(() => { - if (isCopied) { - toast.success('复制成功') - } - }, [isCopied]) - return ( -Download File ———>>> https://tinurli.com/2uwiu7
Do you love simulation games that let you create and manage your own dream school? If yes, then you should definitely check out Pocket Academy 3, the latest game by Kairosoft, the makers of many popular simulation games. In this article, we will tell you what Pocket Academy 3 is, what are its features and gameplay, and why you should download the original APK file instead of the modified versions. We will also show you how to download and install Pocket Academy 3 APK original on your Android device, and give you some tips and tricks to enjoy the game to the fullest.
-Download Zip ✪✪✪ https://urlca.com/2uO4Zx
Pocket Academy 3 is a simulation game that lets you build your own school from scratch, arrange the educational process, and make your students and graduates famous all over the country. It is the third part in a series that has been popular for more than 10 years. Here, the developers have slightly improved the system of controls and added many new features, like participating in sports competitions or preparing graduates for university entrance exams.
-Pocket Academy 3 has many features and gameplay elements that make it fun and engaging. Some of them are:
-While there are many modified versions of Pocket Academy 3 APK available on the internet, we recommend that you download the original APK file from a reliable source. This is because:
-To download Pocket Academy 3 APK original, you need to follow these steps:
-After you download the APK file, you need to install it on your device. Here are the steps to do that:
-download pocket academy 3 apk original free
-download pocket academy 3 apk original full version
-download pocket academy 3 apk original mod
-download pocket academy 3 apk original latest update
-download pocket academy 3 apk original for android
-download pocket academy 3 apk original from kairosoft
-download pocket academy 3 apk original simulation game
-download pocket academy 3 apk original school festival
-download pocket academy 3 apk original offline
-download pocket academy 3 apk original no ads
-download pocket academy 3 apk original unlimited money
-download pocket academy 3 apk original cracked
-download pocket academy 3 apk original hack
-download pocket academy 3 apk original cheats
-download pocket academy 3 apk original review
-download pocket academy 3 apk original gameplay
-download pocket academy 3 apk original tips and tricks
-download pocket academy 3 apk original guide and walkthrough
-download pocket academy 3 apk original best facilities and clubs
-download pocket academy 3 apk original how to win nationals and placement tests
-download pocket academy 3 apk original fun and addictive
-download pocket academy 3 apk original high quality graphics and sound
-download pocket academy 3 apk original easy to play and control
-download pocket academy 3 apk original compatible with all devices
-download pocket academy 3 apk original safe and secure
-download pocket academy 3 apk original fast and smooth
-download pocket academy 3 apk original low storage and battery usage
-download pocket academy 3 apk original new features and improvements
-download pocket academy 3 apk original rating and feedback
-download pocket academy 3 apk original support and contact
-download pocket academy 3 apk original alternatives and similar apps
-download pocket academy 3 apk original comparison and difference with previous versions
-download pocket academy 3 apk original pros and cons
-download pocket academy 3 apk original benefits and drawbacks
-download pocket academy 3 apk original advantages and disadvantages
-download pocket academy 3 apk original challenges and achievements
-download pocket academy 3 apk original goals and objectives
-download pocket academy 3 apk original strategies and tactics
-download pocket academy 3 apk original secrets and hidden features
-download pocket academy 3 apk original bugs and glitches
-download pocket academy 3 apk original solutions and fixes
-download pocket academy 3 apk original recommendations and suggestions
-download pocket academy 3 apk original questions and answers
-download pocket academy 3 apk original FAQs and tutorials
-download pocket academy 3 apk original videos and screenshots
-download pocket academy 3 apk original news and updates
-download pocket academy 3 apk original forums and communities
-download pocket academy 3 apk original blogs and articles
Now that you have installed Pocket Academy 3 APK original on your device, you can start playing and creating your own school. Here are some tips and tricks to help you enjoy the game to the fullest:
-Pocket Academy 3 is a fun and engaging school simulator game by Kairosoft that lets you build and manage your own dream school. It has many features and gameplay elements that make it addictive and enjoyable. To download Pocket Academy 3 APK original, you need to find a reliable website that offers it for free download, such as APKMirror or APKPure. Then, you need to download and install the APK file on your device following some simple steps. Finally, you can start playing and creating your own school with some tips and tricks we provided.
-If you are a fan of simulation games or school management games, you should definitely give Pocket Academy 3 a try. It is one of the best games by Kairosoft that will keep you entertained for hours. Download Pocket Academy 3 APK original now and start building your own school today. Don't forget to share your feedback with us in the comments section below. We would love to hear from you!
-Pocket Academy 3 requires Android 4.4 or higher and at least 40 MB of free storage space on your device.
-No, Pocket Academy 3 is a paid game that costs $5.99 to download from Google Play Store. However, you can download it for free from other sources, such as APKMirror or APKPure.
-Yes, you can play Pocket Academy 3 offline without an internet connection. However, you may need an internet connection to access some features, such as cloud save or social media integration.
-No, you cannot transfer your game data from Pocket Academy 2 to Pocket Academy 3. They are separate games with different features and gameplay.
-No, Pocket Academy 3 is only available for Android devices. However, you can use an Android emulator software on your PC to run Android apps such as [BlueStacks] or [NoxPlayer]. However, this may affect the performance and compatibility of the game.
401be4b1e0If you are looking for a fun and exciting Korean series to watch, you might want to check out Treasure Keeper. This is a comic action drama that follows a mysterious thief and a team of cultural asset recovery experts as they go against the lawless villains who exploit Korea's heritage. In this article, we will tell you everything you need to know about Treasure Keeper, including what it is, why you should watch it, and how to download it legally and safely.
-Download ☑ https://urlca.com/2uOdOM
Treasure Keeper is a Korean series that premiered in April 2023 on Viu, a streaming platform that offers Asian dramas and movies. It has 12 episodes, each about an hour long. The series is based on a webtoon of the same name by Kim Young-hoon, which was published from 2018 to 2020.
-The series revolves around Skunk, a masked thief who specializes in stealing cultural assets from corrupt and powerful people. He teams up with Karma, an unofficial cultural asset recovery team that consists of a public officer, a hacker, a martial artist, and a former idol. Together, they try to restore Korea's cultural heritage and punish those who evade the law.
-The series features a talented and charismatic cast of actors who bring their characters to life. Here are some of the main cast members and their roles:
-Treasure Keeper is not your typical Korean drama. It has many elements that make it enjoyable and entertaining for different kinds of viewers. Here are some of the reasons why you should watch Treasure Keeper:
-Treasure Keeper is a comic action drama that combines humor, suspense, romance, and adventure. It has a fast-paced and thrilling plot that keeps you hooked from episode to episode. It also has a light-hearted and playful tone that makes you laugh and smile along with the characters.
-download treasure keeper korean drama
-download treasure keeper season 1
-download treasure keeper with english subtitles
-download treasure keeper joo won
-download treasure keeper 2023
-download treasure keeper pemines
-download treasure keeper mkvshows
-download treasure keeper google drive
-download treasure keeper mega links
-download treasure keeper 540p
-download treasure keeper 720p
-download treasure keeper 1080p
-download treasure keeper episode 1
-download treasure keeper episode 2
-download treasure keeper episode 3
-download treasure keeper episode 4
-download treasure keeper episode 5
-download treasure keeper episode 6
-download treasure keeper episode 7
-download treasure keeper episode 8
-download treasure keeper episode 9
-download treasure keeper episode 10
-download treasure keeper episode 11
-download treasure keeper episode 12
-download treasure keeper finale
-download stealer the treasure keeper kdrama
-download stealer the treasure keeper korea institute of fusion energy
-download stealer the treasure keeper skunk and karma
-download stealer the treasure keeper cultural assets recovery team
-download stealer the treasure keeper hwang dae myung
-download stealer the treasure keeper thief and public officer
-download stealer the treasure keeper netflix
-download stealer the treasure keeper viki
-download stealer the treasure keeper viu
-download stealer the treasure keeper dramacool
-download stealer the treasure keeper kissasian
-download stealer the treasure keeper mydramalist
-download stealer the treasure keeper asianwiki
-download stealer the treasure keeper imdb
-download stealer the treasure keeper trailer
-how to download treasure keeper for free
-where to download treasure keeper online
-best site to download treasure keeper hd quality
-watch and download treasure keeper full episodes
-review and rating of download treasure keeper
-synopsis and cast of download treasure keeper
-ost and soundtrack of download treasure keeper
-behind the scenes and bloopers of download treasure keeper
-fan art and wallpapers of download treasure keeper
-news and updates of download treasure keeper
Treasure Keeper is not only entertaining but also educational. It introduces you to various aspects of Korea's culture and history through the cultural assets that Skunk and Karma deal with. You can learn about Korea's art, architecture, literature, music, religion, folklore, and more through the series.
-Treasure Keeper also has amazing action and comedy scenes that showcase the skills and personalities of the characters. You can watch them perform impressive stunts, fight scenes, chases, and escapes as they steal and recover the cultural assets. You can also enjoy their hilarious interactions, banters, jokes, and pranks as they work together and clash with each other.
-If you are interested in watching Treasure Keeper, you might want to download it so that you can watch it anytime and anywhere you want. However, you should be careful about how you download the series, as not all methods are legal and safe. Here are some of the best ways to download Treasure Keeper:
-The most legal and safe way to download Treasure Keeper is to use the official streaming platform that offers the series, which is Viu. Viu is a subscription-based service that allows you to watch and download Asian dramas and movies on your devices. You can download up to 10 videos at a time and watch them offline for up to 30 days. You can also choose the video quality and subtitle language that suit your preferences.
-Another legal and safe way to download Treasure Keeper is to use a VPN service that can bypass geo-restrictions and access other streaming platforms that offer the series in different regions. For example, you can use a VPN to connect to a server in Korea and access KBS World, which is a free-to-air channel that broadcasts Treasure Keeper with English subtitles. You can then use a video downloader tool or extension to download the episodes from the website.
-Once you have downloaded Treasure Keeper, you might wonder what are the best platforms and devices to watch the series on. The answer depends on your personal preferences and needs, but here are some of the factors that you should consider:
-Finally, here are some of the tips and tricks that can help you enhance your viewing experience of Treasure Keeper:
-Treasure Keeper is a comic action drama series that follows a thief and a recovery team who steal and restore Korea's cultural assets. It is a fun and exciting series that offers humor, suspense, romance, adventure, culture, history, action, and comedy. It is also easy and safe to download from various platforms and devices. If you are looking for a new Korean series to watch, we highly recommend that you give Treasure Keeper a try.
-In this article, we have covered:
-Now that you have learned everything you need to know about Treasure Keeper, what are you waiting for? Download the series today and enjoy the comic action drama that will make you laugh, cry, and cheer. And don't forget to share your thoughts and opinions about the series with us in the comments section below. We would love to hear from you!
-Here are some of the frequently asked questions about Treasure Keeper:
-You can watch Treasure Keeper online on Viu, which is the official streaming platform that offers the series. You can also use a VPN service to access other streaming platforms that offer the series in different regions, such as KBS World.
-The writer of Treasure Keeper is Kim Young-hoon, who is also the author of the original webtoon. The director of Treasure Keeper is Lee Seung-hoon, who has previously directed other Korean dramas such as The Fiery Priest and Doctor Prisoner.
-No, Treasure Keeper is not based on a true story. It is a fictional story that is inspired by Korea's culture and history. However, some of the cultural assets that appear in the series are real or based on real ones.
-As of now, there is no official announcement or confirmation about a second season of Treasure Keeper. However, there is a possibility that the series will be renewed for another season, as it has received positive reviews and ratings from the viewers and critics.
-If you like Treasure Keeper, you might also like some other Korean series that are similar to it in terms of genre, theme, or style. Some of these series are: Vincenzo, Lawless Lawyer, Healer, Leverage, and Private Lives.
-If you are a fan of Marvel comics, movies, and games, you will love Marvel Contest of Champions. It is a free-to-play fighting game that features your favorite Marvel superheroes and villains in epic battles. You can collect, level up, and customize your champions, as well as team up with your friends and other players to take on various quests and challenges. But what if you want to enjoy the game without any limitations or restrictions? That's where Marvel Contest of Champions Mod APK comes in handy. In this article, we will tell you everything you need to know about this modded version of the game, including its features, benefits, and how to download and install it on your device.
-Marvel Contest of Champions is a game that was released in 2014 by Kabam Games, Inc. It is based on the Marvel comics storyline of the same name, where the cosmic entity known as The Collector summons various heroes and villains from different universes to fight in his Contest of Champions. The game follows a similar plot, where you play as a Summoner who has been chosen by The Collector to participate in his contest. You have to assemble a team of champions from different classes and factions, such as Mutant, Cosmic, Skill, Science, Mystic, and Tech, and fight against other Summoners and their champions in various locations from the Marvel universe.
-Download ✯✯✯ https://urlca.com/2uObys
The game is a 2D fighting game that uses a simple tap-and-swipe control system. You can tap on the right side of the screen to perform light attacks, swipe right to perform medium attacks, swipe left to dodge, press and hold on the left side of the screen to block, and press and hold on the right side of the screen to charge a heavy attack. You can also use special attacks that are unique to each champion when you fill up your power meter. Each champion has a signature ability that is unlocked when you get a duplicate of that champion from a crystal or an arena. You can also improve your champions' stats and abilities by leveling them up, ranking them up, applying mastery points, and using synergy bonuses.
-The game has various modes and features that you can enjoy, such as:
-Marvel Contest of Champions is a fun and addictive game, but it also has some drawbacks and limitations that can affect your gaming experience. For example:
-But what if you could play the game without any of these hassles? That's what Marvel Contest of Champions Mod APK offers you. It is a modified version of the game that gives you unlimited access to everything you need to enjoy the game to the fullest. With Marvel Contest of Champions Mod APK, you can:
-marvel contest of champions mod apk unlimited units
-marvel contest of champions mod apk latest version
-marvel contest of champions mod apk download for android
-marvel contest of champions mod apk offline
-marvel contest of champions mod apk god mode
-marvel contest of champions mod apk no root
-marvel contest of champions mod apk revdl
-marvel contest of champions mod apk unlimited money
-marvel contest of champions mod apk 40.0.0
-marvel contest of champions mod apk android 1
-marvel contest of champions mod apk hack
-marvel contest of champions mod apk obb
-marvel contest of champions mod apk rexdl
-marvel contest of champions mod apk 2023
-marvel contest of champions mod apk unlimited everything
-marvel contest of champions mod apk free shopping
-marvel contest of champions mod apk high damage
-marvel contest of champions mod apk unlimited gold
-marvel contest of champions mod apk all characters unlocked
-marvel contest of champions mod apk anti ban
-marvel contest of champions mod apk unlimited crystals
-marvel contest of champions mod apk 39.1.1
-marvel contest of champions mod apk ios
-marvel contest of champions mod apk online
-marvel contest of champions mod apk platinmods
-marvel contest of champions mod apk unlimited energy
-marvel contest of champions mod apk 38.2.0
-marvel contest of champions mod apk 2022
-marvel contest of champions mod apk data
-marvel contest of champions mod apk happymod
-marvel contest of champions mod apk mega
-marvel contest of champions mod apk one hit kill
-marvel contest of champions mod apk unlimited iso 8
-marvel contest of champions mod apk 37.2.0
-marvel contest of champions mod apk an1
-marvel contest of champions mod apk blackmod
-marvel contest of champions mod apk full unlocked
-marvel contest of champions mod apk no verification
-marvel contest of champions mod apk unlimited health
-marvel contest of champions mod apk 36.3.0
Marvel Contest of Champions Mod APK is not just a simple hack that gives you unlimited resources. It also has some amazing features that enhance your gameplay and make it more fun and exciting. Some of these features are:
-If you are interested in downloading and installing Marvel Contest of Champions Mod APK on your device, you can follow these simple steps:
-While Marvel Contest of Champions Mod APK is safe and secure to use, there are some precautions and tips that you should keep in mind to avoid any issues or problems while using it. Here are some of them:
-Marvel Contest of Champions is a great game for Marvel fans who love fighting games. It has a lot of content and features that will keep you entertained and engaged for hours. However, if you want to enjoy the game without any limitations or restrictions, you should download Marvel Contest of Champions Mod APK. It is a modified version of the game that gives you unlimited access to everything you need to have fun and win. It also has some amazing features that enhance your gameplay and make it more fun and exciting. You can download and install Marvel Contest of Champions Mod APK on your device by following the steps and tips mentioned above. So, what are you waiting for? Download Marvel Contest of Champions Mod APK now and unleash your inner superhero!
-Este site usa cookies para garantir a melhor experiência de navegação. Você pode alterar suas preferências de cookies no navegador ou nas configurações do dispositivo. Saber mais Como excluir cookiesComo excluir cookies'); var button = $('Aceitar'); button.click(handleAcceptClick); notification .append(text) .append(button); wrapper.append(notification); $('body').append(wrapper); setTimeout(function() wrapper.addClass('cookies-notification--visible'); , 1000); function handleAcceptClick() localStorage.setItem('cookiesAgreed', 'true'); wrapper.removeClass('cookies-notification--visible'); wrapper.on('transitionend', handleTransitionEnd); function handleTransitionEnd() wrapper.remove(); }); var swRegisterManager = goals: [], add: function(swGoalRegister) this.goals.push(swGoalRegister); , registerGoals: function() while(this.goals.length) this.goals.shift().call(); ; window.swPostRegister = swRegisterManager.registerGoals.bind(swRegisterManager); Tonyeletric INICIO AUDIO AIWA TV CCE TV CINERAL TV GRADIENTE TV LG TV PANASONIC TV PHILCO TV PHILIPS TV SANYO TV SONY TV SAMSUNG TV SHARP TV TOSHIBA TV MONITORES ESQUEMAS MICROONDAS CURSOS E LIVROS COLETANEA DE AVARIAS EM TV CCE (resolvido) AVARIAS TV MITSUBISHI DESTRAVAR DVDs PLAYERS MODO DE SERVIÇO TV FLY-BACK EQUIVALÊNCIA PARA TV E MONITORES CLIPS - VIDEOS - MÚSICAS PLAYER DE MUSICAS - DOWNLOAD RADIO ON-LINE GAMES GRÁTIS APLICATIVOS PARA CELULARES PROGRAMAS PARA PC /* jshint ignore:start */$(document).ready(function() flyoutMenu.initFlyoutMenu( ["name":"INICIO","title":"Esquemas - Manuais - Tv - Audio - Digital - Tonyeletric","href":".\/","children":[],"name":"AUDIO","title":"Esquemas-Audio-System-Amplificadores-Som-Aparerelhos","href":"esquemas-audio.php","children":[],"name":"AIWA TV","title":"Aiwa Esquemas TV-Manuais de Serviço-TV-Audio","href":"esquemas-tv-aiwa.php","children":[],"name":"CCE TV","title":"cce esquemas-tv-manuais de serviço-audio-diagramas-esquemas","href":"esquemas-tv-cce.php","children":[],"name":"CINERAL TV","title":"Cineral Esquemas-TV-Manuais de Serviço-Audio-Diagramas TV-Elétricos","href":"esquemas-tv-cineral.php","children":[],"name":"GRADIENTE TV","title":"Esquemas TV-Gradiente TV,-Diagramas-Manuais de Serviço-Audio TV","href":"esquemas-tv-gradiente.php","children":[],"name":"LG TV","title":"Esquemas de TV-Manuais-LG-Diagramas-Audio-TV-","href":"esquemas-tv-lg.php","children":[],"name":"PANASONIC TV","title":"Esquemas TV-Panasonic-Manuais de Serviço-Audio-Diagramas","href":"esquemas-tv-panasonic.php","children":[],"name":"PHILCO TV","title":"Esquemas-TV Philco-Diagramas-TV-Audio-Manuais de Serviço","href":"esquemas-tv-philco.php","children":[],"name":"PHILIPS TV","title":"Esquemas-TV-Philips-Diagramas-Manuais de Serviço-Audio","href":"esquemas-tv-philips.php","children":[],"name":"SANYO TV","title":"Esquemas-TV Sanyo-Manuais de TV-Audio-Diagramas","href":"esquemas-tv-sanyo.php","children":[],"name":"SONY TV","title":"Esquemas-TV-Esquema Sony-Manuais-Sony-Audio TV","href":"esquemas-tv-sony.php","children":[],"name":"SAMSUNG TV","title":"Samsung Esquemas-TV-Manuais de Serviço TV-Audio-Diagramas","href":"esquemas-tv-samsung.php","children":[],"name":"SHARP TV","title":"Esquemas-TV-Sharp-Manuais de Serviço-Audio-Esquemas TV","href":"esquemas-tv-sharp.php","children":[],"name":"TOSHIBA TV","title":"Toshiba-Esquemas-TV-Manuais de Serviço-Diagramas-","href":"esquemas-tv-toshiba.php","children":[],"name":"MONITORES ESQUEMAS","title":"MONITORES ESQUEMAS","href":"monitor-varios.php","children":[],"name":"MICROONDAS ","title":"esquemas-microondas-manuais-diagramas","href":"esquemas-microondas.php","children":[],"name":"CURSOS E LIVROS","title":"Cursos Grátis-Livros-Apostilas Gratis-Curso-Apostila","href":"cursos.php","children":[],"name":"COLETANEA DE AVARIAS EM TV CCE (resolvido)","title":"DEFEITO RESOLVIDO","href":"resolvido.php","children":[],"name":"AVARIAS TV MITSUBISHI","title":"avarias-tv-Mitsubishi","href":"avarias-tv.php","children":[],"name":"DESTRAVAR DVDs PLAYERS","title":"dvds players","href":"dvds-players.php","children":[],"name":"MODO DE SERVIÇO TV","title":"modo service tv","href":"modo-service.php","children":[],"name":"FLY-BACK EQUIVALÊNCIA PARA TV E MONITORES","title":"FLYBACK","href":"flayback.php","children":[],"name":"CLIPS - VIDEOS - MÚSICAS","title":"Clips-Musicas-Clips Musicais-Video-Tutorial-Aula-Curso de-Pdf","href":"player-tv-music-clip.php","children":[],"name":"PLAYER DE MUSICAS - DOWNLOAD","title":"MUSICAS MP3","href":"musicas-mp3.php","children":[],"name":"RADIO ON-LINE","title":"RADIO ON-LINE","href":"radio-online.php","children":[],"name":"GAMES GRÁTIS","title":"GAMES GRATIS","href":"games-gratis.php","children":[],"name":"APLICATIVOS PARA CELULARES","title":"Celulares--Desbloqueio Nokia-On Line-Programas-Aplicativos-Jogos-Games","href":"celulares.php","children":[],"name":"PROGRAMAS PARA PC","title":"FERRAMENTAS PARA PC GRÁTIS","href":"ferramentas-para-pc.php","children":[]] , 'flyover'););/* jshint ignore:end */ INICIO AUDIO AIWA TV CCE TV CINERAL TV GRADIENTE TV LG TV PANASONIC TV PHILCO TV PHILIPS TV SANYO TV SONY TV SAMSUNG TV SHARP TV TOSHIBA TV MONITORES ESQUEMAS MICROONDAS CURSOS E LIVROS COLETANEA DE AVARIAS EM TV CCE (resolvido) AVARIAS TV MITSUBISHI DESTRAVAR DVDs PLAYERS MODO DE SERVIÇO TV FLY-BACK EQUIVALÊNCIA PARA TV E MONITORES CLIPS - VIDEOS - MÚSICAS PLAYER DE MUSICAS - DOWNLOAD RADIO ON-LINE GAMES GRÁTIS APLICATIVOS PARA CELULARES PROGRAMAS PARA PC .layout_1-column width: 100%; padding: 0; margin: 0; .layout_1-column:after content: ""; display: table; clear: both; .zone_top margin: 0; padding: 5px; vertical-align: top; line-height: normal; min-width: 100px; /* The CSS class needs to be applied to the parent of the adsense widget, which is different depending on whether the widget is inside SBUI or in a published site. widgetIdElWrap only exists in SBUI. If it exists, .hide-x-spill is applied to it. If not, the widget must be in a published site, and the class is applied to widgetIdEl instead. */ (function($, widgetId) var widgetIdEl = $('#' + widgetId.id); var widgetIdElWrap = $('#' + widgetId.id + '_wrap') if (widgetIdElWrap.length === 0) widgetIdEl.addClass('hide-x-spill'); else widgetIdElWrap.addClass('hide-x-spill'); )($, I5643); (adsbygoogle = window.adsbygoogle || []).push();.old_text_widget img max-width: 100%;height: auto;.old_text_wdiget margin: 0;padding: 0;overflow: hidden;color: ;font: ;background-color: ;
-Download Zip ❤❤❤ https://ssurll.com/2uzyKu
Tv C-20LV33D-00-LA3-A.zipTv C14EA13EX_ShassisA7.zipTv C2161TX_2165TX_2180TXA__2096_EC-4A.zipTv C33LJ13-C33LJ26-C29LH93--CHASSIS LB1-A.zipTv EB5-A-Sanyo-2103-Mod.-CE25FN1.pdfTv EC7-A-CE14AT2-ok.zipTV SANYO 14MT1 C14EA95.pdfTV SANYO C14EA13EX Chassis A7-A.pdfTV SANYO C14EA13EX.zipTV SANYO C2161TXC CHASSIS EC4A.zipTV SANYO C21EF63_97 Chassis A7-A.zipTV SANYO C2858 CHASIS 2084.zipSANYO C2858.zipSANYO CE14SA4 CE14SA4R.zipSANYO CE28C7A.zipSANYO CEP2576D CHASSIS EDO TV D.pdf.zipSANYO CHASIS 2070 C2580.zipSANYO CLP-1451B-00(2).zipSANYO CLP-1451B-00.zipSANYO CTP-2051.zipSANYO CTP-3791.pdfSANYO CTP-6756U.pdfsanyo-2130-3011-1454.zipSANYO-AA1A-C25EG57.zipSanyo-WB6A-CE32FWH2F-B.pdfCBP3012_CHASSIS_A3_password_29022.rarSANYO_CEP3024D.zipSanyo_CTP3771.rarSANYO_TVP_CE25DN3_B_CHASSI_.zipC2998 C33LJ13-C33LJ26-C29LH93--CHASSIS LB1-A.zip
DOWNLOAD ✵ https://gohhs.com/2uFVzc
Download > https://gohhs.com/2uFUse
the first problem you'll encounter is that, if you're going to get this game, you're going to need to download a hefty file in order to play it. the next problem you'll encounter is that you'll spend a fair amount of time waiting for the game to download and install. the third problem is that, unless you're lucky enough to own the original game on disk, the expansion pack is on a 12-hour download. the fourth problem is that, once you've gotten the game and the expansion pack installed, they don't really integrate together, meaning you can only play the expansion if you've got the original game installed. the fifth problem is that the game itself is a little on the slow side. but the sixth problem is that the physics are definitely the game's saving grace. the bike handles amazingly well, and the amount of strategy involved in the game make it addictive without being frustrating. the seventh problem is that the game doesn't really have the depth to keep you hooked for very long. it's a good arcade game, but don't expect to be spending months enjoying this one.
-DOWNLOAD ★★★ https://gohhs.com/2uFVoO
the opening sequence of the game is a little long and dragged, but after that everything is ok. it is worth mentioning that each motorcycle has its own sound effect, which is one of the best features of this game. the level is not very long and difficult, but if you are looking for something with a strong challenge, you will be pleased with this game.
-you may find a few minor bugs (and i'm not talking about the drivers). the game crashes at times, and you may find yourself unable to load an 'unlimited' game. if this happens to you, please try to reinstall the game (as i mentioned, this can be caused by the uninstallation of other programs), and if it doesn't work, try to reinstall the game. but please do not ask me to re-upload the game again. thank you! i hope you enjoy playing this game! :)
899543212bDownload ⚙⚙⚙ https://gohhs.com/2uFUd9
Download File >>> https://gohhs.com/2uFTgJ
Pandaga Chesko is a 2015 Telugu movie directed by Gopichand Malineni and starring Ram, Rakul Preet Singh, Sonal Chauhan, and Brahmanandam. The movie is about a money-minded businessman, Karthik (Ram), who gets engaged to another money-minded girl, Anushka (Sonal Chauhan). However, he falls in love with Divya (Rakul Preet Singh), the daughter of his rival businessman, Surya Narayana (Sai Kumar). To win her heart, he has to unite his estranged family and solve their problems.
-The movie is a mix of comedy, romance, drama, and action. It has some hilarious scenes involving Brahmanandam as Karthik's uncle and Adithya as his cousin. The movie also has some emotional moments as Karthik tries to reconcile with his father (Rao Ramesh) and grandmother (Pavitra Lokesh). The movie also has some thrilling sequences as Karthik faces the wrath of Surya Narayana and his henchmen (Sampath Raj and Abhimanyu Singh).
-Download File ✵ https://gohhs.com/2uFTZv
Pandaga Chesko was a commercial success at the box office. It received positive reviews from critics and audiences for its entertainment value. The movie was praised for its performances, especially by Ram and Brahmanandam. The movie also had a catchy soundtrack composed by S. Thaman.
-If you are looking for a fun-filled family entertainer with comedy and romance, you can watch Pandaga Chesko online. You can download the full movie in 720p HD quality from various websites. However, we advise you to watch the movie legally on streaming platforms like JustWatch[^1^] or YouTube Movies.
- -Pandaga Chesko is not just a comedy movie, but also a movie with a message. It shows the importance of family values and relationships. It also shows how money is not everything in life and how love can overcome all obstacles. The movie also has a social message about environmental conservation and corporate responsibility. The movie shows how Karthik uses his business skills to help the villagers who are affected by Surya Narayana's illegal mining activities.
-Pandaga Chesko is a movie that can be enjoyed by everyone. It has something for everyone: comedy, romance, drama, action, and music. It is a movie that will make you laugh, cry, and cheer. It is a movie that will make you celebrate life.
- -Pandaga Chesko is a movie that has many memorable dialogues and scenes. Some of the popular dialogues from the movie are:
-Some of the popular scenes from the movie are:
-Download >>> https://urlca.com/2uDbTQ
If you are a fan of Call of Duty 4: Modern Warfare, you might be wondering how to unlock all the weapons, perks, camos, and challenges in the game without spending hours of playing online. Well, there is a simple solution for that, and it's called the Call of Duty 4 v1 7 lvl 55 hack download. In this article, we will show you what this hack is, how it works, how to use it, where to get it, and what are the risks involved. Read on to find out more.
- -The Call of Duty 4 v1 7 lvl 55 hack download is a tool that allows you to modify your COD4 profile data and set your rank to level 55 with everything unlocked. This means that you can access all the weapons, attachments, perks, killstreaks, and camos in the game without having to play through the multiplayer mode. You can also customize your classes and loadouts as you wish, and enjoy the game with full freedom.
-Download ✑ ✑ ✑ https://urlca.com/2uDcvk
The Call of Duty 4 v1 7 lvl 55 hack download works by replacing your original mpdata file with a hacked one that contains the level 55 data. The mpdata file is where your COD4 profile information is stored, such as your rank, stats, achievements, and unlocks. By swapping this file with a modified one, you can trick the game into thinking that you have reached level 55 and completed all the challenges.
- -Using the Call of Duty 4 v1 7 lvl 55 hack download is very easy and straightforward. All you need to do is follow these simple steps:
- -That's it! You have successfully used the Call of Duty 4 v1 7 lvl 55 hack download and unlocked everything in COD4. Enjoy!
- -You can get the Call of Duty 4 v1 7 lvl 55 hack download from any of these links:
- -Please note that these links are not affiliated with us and we are not responsible for their content or safety. Use them at your own risk.
- -The Call of Duty 4 v1 7 lvl 55 hack download is generally safe to use, as long as you follow the instructions carefully and don't abuse it. However, there are some risks involved with using any kind of hack or cheat in online games. These include:
- -To minimize these risks, we recommend that you:
- - -We also advise that you use the hack only for fun and not for cheating or ruining other players' experience. Remember that hacking is against the game's terms of service and can result in serious consequences.
- -The Call of Duty 4 v1 7 lvl
-There are many benefits of using the Call of Duty 4 v1 7 lvl 55 hack download, especially if you are a casual or new player who wants to enjoy the game without any hassle. Some of these benefits are:
- -Of course, these benefits come with some drawbacks, such as the risk of getting banned or infected by viruses. Therefore, you should use the hack responsibly and at your own discretion.
- -If you are not comfortable with using the Call of Duty 4 v1 7 lvl 55 hack download, or if you want to challenge yourself and earn your unlocks legitimately, there are some alternatives that you can try. Some of these alternatives are:
- -These alternatives might not be as easy or convenient as using the hack, but they might offer you more variety and enjoyment in playing COD4.
- -The Call of Duty 4 v1 7 lvl 55 hack download is compatible with the latest patch for COD4, which is v1.7. This patch fixes some bugs and exploits in the game, and improves the performance and stability. However, if a new patch is released in the future, you might need to update the hack accordingly. To do that, you can follow these steps:
- -Please note that these links are not affiliated with us and we are not responsible for their content or safety. Use them at your own risk.
-The Call of Duty 4 v1 7 lvl 55 hack download is a powerful and versatile tool that offers many features and options for COD4 players. Some of these features are:
- -These features make the Call of Duty 4 v1 7 lvl 55 hack download one of the best hacks for COD4 available online.
- -If you want to use the Call of Duty 4 v1 7 lvl 55 hack download effectively and safely, you should follow some tips and tricks that can help you improve your gameplay and avoid any problems. Some of these tips and tricks are:
- -These tips and tricks can help you make the most out of the Call of Duty 4 v1 7 lvl 55 hack download and have a great time playing COD4.
-The Call of Duty 4 v1 7 lvl 55 hack download is a useful and convenient tool that lets you unlock everything in COD4 without having to play through the multiplayer mode. It works by replacing your mpdata file with a hacked one that contains level 55 data. You can use it easily and safely by following the instructions and tips provided in this article. You can also get the hack from any of the links below. However, you should remember that hacking is against the game's terms of service and can result in serious consequences. Therefore, you should use the hack responsibly and at your own discretion. We hope you enjoyed this article and found it helpful. If you have any questions or feedback, please leave a comment below. Thank you for reading.
3cee63e6c2The most common scenario with a home electric/gas drum roaster is that you want an even roast. So the first step is to achieve the best air, by turning the coffee as little as possible. When a roast is not moving, it is rotating. So that means the coffee can be moving side to side. The drum spins, while the coffee slides. The obvious way to advance the roast is by rotating the drum, also known as counter rotating (RC), or right counter clockwise. If you do this in the middle of a roast and you step on the starter button too early, you can pinch off the roast as it is just trying to accomplish a full cycle. It then has to start over, but does so within a controlled variable period. A common home scenario is to have the roast going for a while and then forgetting about it. While the roast has not moved, you return to the roaster and notice the still or sluggish coffee. You then start roasting some coffee. It slows, stops, the roaster has to start over, it cranks, it hits first crack, it slows, it stops. Then, the same scenario as before happens again.
So there is a common scenario of the roast continuing before you notice. Then when you notice it, it is too late. With Debut Video Capture, it is the exact time that it was, or it is then! So, if you are on an RC or switch the roast from in the middle, you can trigger the first crack, stop and then be the exact time it starts when you check. Or, you can start some coffee, stop it when it slows, and let it continue for a while. The other common way to remove coffee from the drum is by using a brew timer.
DOWNLOAD ⚙ https://urlca.com/2uDdwu
The reason we created Debut video capture was to allow roasters to record their roasts. At the time, we began recording, we found a myriad of problems with existing software. First, there was no way to preview the video. You had to start and stop each clip. Also, many programs could not start the video from the beginning, but had to start at the end, which meant it cut off the video and started it over.
899543212bDownload File ✪ https://urlca.com/2uDbY3
Do you want to experience the thrill of driving a car in a big city or in a country? Do you want to learn the road rules and traffic signs in a safe and fun way? Do you want to improve your driving skills and confidence in different situations and conditions? If you answered yes to any of these questions, then you should try City Car Driving 1.5.1, a realistic car driving simulator for PC.
-Download - https://urllie.com/2uNxvf
City Car Driving 1.5.1 is a new version of the popular car driving simulator game, developed by Forward Development Ltd. It was released in September 2015 and it has been updated with new features and improvements since then.
-City Car Driving 1.5.1 is designed to help users feel the car driving in a big city or in a country in different conditions or just go for a joy ride. It has a variety of different road situations and realistic car driving, as well as customizable weather and time of day, VR support, and realistic physics.
-City Car Driving 1.5.1 simulates various road situations that you may encounter in real life, such as intersections, roundabouts, traffic lights, pedestrians, cyclists, road works, accidents, etc. You can also choose from different driving conditions, such as rain, snow, fog, night, etc., and see how they affect your visibility and handling.
-City Car Driving 1.5.1 offers a range of cars to choose from, such as sedans, hatchbacks, SUVs, sports cars, etc., each with their own characteristics and performance. You can also customize your car with different colors, wheels, spoilers, etc., or download additional cars from the official website or the Steam Workshop.
-City Car Driving 1.5.1 also follows the traffic rules of different countries, such as USA, UK, Germany, France, etc., so you can learn the right-hand or left-hand drive, the speed limits, the traffic signs, etc., depending on your location.
-City Car Driving 1.5.1 allows you to change the weather and time of day according to your preference or challenge yourself with different scenarios. You can set the weather to sunny, cloudy, rainy, snowy, stormy, etc., and see how it affects the road conditions and your visibility.
-city car driving 1.5.1 download
-city car driving 1.5.1 trailer
-city car driving 1.5.1 honda fit
-city car driving 1.5.1 activation key
-city car driving 1.5.1 mods
-city car driving 1.5.1 crack
-city car driving 1.5.1 system requirements
-city car driving 1.5.1 gameplay
-city car driving 1.5.1 bmw
-city car driving 1.5.1 mercedes
-city car driving 1.5.1 audi
-city car driving 1.5.1 toyota
-city car driving 1.5.1 ford
-city car driving 1.5.1 volkswagen
-city car driving 1.5.1 nissan
-city car driving 1.5.1 hyundai
-city car driving 1.5.1 mazda
-city car driving 1.5.1 kia
-city car driving 1.5.1 chevrolet
-city car driving 1.5.1 skoda
-city car driving 1.5.1 renault
-city car driving 1.5.1 peugeot
-city car driving 1.5.1 citroen
-city car driving 1.5.1 fiat
-city car driving 1.5.1 opel
-city car driving 1.5.1 subaru
-city car driving 1.5.1 suzuki
-city car driving 1 toyota corolla auris hybrid mod download link in description youtube com watch v=2jgqyqz0gk4
-city car driving simulator home edition v15 full version free download pc game setup in single direct link for windows it is an awesome racing and simulation game oceanofgames com games simulation games page=2
-how to install mods in city car driving simulator youtube com watch v=3xqyqgjv8xu
-how to download and install city car driving simulator for free youtube com watch v=7wzv6hjyf9s
-how to update your old version of city car driving to the latest one youtube com watch v=4wv8xkz0f9i
-how to fix the error "cannot find the file bin win32 starter exe" in city car driving youtube com watch v=9xqyqgjv8xu
-how to play city car driving with a steering wheel youtube com watch v=2jgqyqz0gk4
-how to change the language of city car driving youtube com watch v=7wzv6hjyf9s
-how to get more traffic and pedestrians in city car driving youtube com watch v=4wv8xkz0f9i
-how to enable the night mode and rain mode in city car driving youtube com watch v=9xqyqgjv8xu
-how to customize your own license plate in city car driving youtube com watch v=2jgqyqz0gk4
-how to use the manual transmission and clutch in city car driving youtube com watch v=7wzv6hjyf9s
-how to drift in city car driving youtube com watch v=4wv8xkz0f9i
-how to park in parallel and perpendicular in city car driving youtube com watch v=9xqyqgjv8xu
-how to pass the exam and get the driver's license in city car driving youtube com watch v=2jgqyqz0gk4
-how to drive on the left side of the road in city car driving youtube com watch v=7wzv6hjyf9s
-how to drive on the highway and use the cruise control in city car driving youtube com watch v=4wv8xkz0f9i
-how to drive on the snow and ice in city car driving youtube com watch v=9xqyqgjv8xu
You can also set the time of day to morning, afternoon, evening, night, etc., and see how it affects the lighting and the traffic density.
-City Car Driving 1.5.1 supports VR devices such as Oculus Rift and HTC Vive, which enhance the immersion and realism of the game. You can feel like you are actually sitting behind the wheel of a car and look around the cockpit or the mirrors.
-City Car Driving 1.5.1 also has realistic physics that simulate the car's behavior, such as acceleration, braking, steering, suspension, traction, etc. You can feel the difference between front-wheel drive, rear-wheel drive, and all-wheel drive, as well as the impact of the road surface, the slope, the weight, etc.
-If you want to play City Car Driving 1.5.1, you need to have a PC that meets the minimum system requirements, which are:
-The recommended system requirements are:
-To download and install City Car Driving 1.5.1, you have two options:
-The installation steps are similar for both options:
-City Car Driving 1.5.1 is easy to play but hard to master. You need to have a keyboard, a mouse, and optionally a steering wheel or a gamepad to control your car.
-City Car Driving 1.5.1 has three game modes:
-You can customize your controls and settings in the options menu of the game. You can change the key bindings, the sensitivity, the camera angle, the sound volume, etc.
-You can also adjust the difficulty level of the game by changing the following parameters:
-To play City Car Driving 1.5.1 effectively and enjoyably, here are some tips and tricks you should follow:
-City Car Driving 1.5.1 is not just a game, but also a learning tool that can help you become a better driver in real life. Here are some of the benefits of playing City Car Driving 1.5.1:
-City Car Driving 1.5.1 can help you improve your driving skills and confidence by exposing you to various road situations and driving conditions that you may encounter in real life. You can practice how to react to different scenarios, such as traffic jams, road works, accidents, etc., and how to handle different cars, such as manual or automatic transmission, front-wheel or rear-wheel drive, etc.
-City Car Driving 1.5.1 can also help you overcome your driving fears or anxieties, such as driving at night, in bad weather, or in unfamiliar places. You can play the game in a safe and fun environment without risking your life or damaging your car.
-City Car Driving 1.5.1 can help you learn the road rules and traffic signs of different countries, such as USA, UK, Germany, France, etc. You can familiarize yourself with the right-hand or left-hand drive, the speed limits, the traffic signs, etc., depending on your location.
-City Car Driving 1.5.1 can also help you prepare for your driving test or license exam by testing your knowledge of the traffic rules and signs. You can get feedback on your performance and learn from your mistakes.
-City Car Driving 1.5.1 can also provide you with entertainment and enjoyment by letting you drive freely in any location and any car you want. You can explore the city or the country, discover new places, or just go for a joy ride.
-City Car Driving 1.5.1 also has realistic graphics and sounds that make you feel like you are actually driving a car. You can admire the scenery, hear the engine noise, feel the vibration, etc.
-City Car Driving 1.5.1 is a realistic car driving simulator for PC that can help you improve your driving skills and confidence, learn the road rules and traffic signs of different countries, and have fun and enjoy the realistic graphics and sounds.
-If you are looking for a game that can teach you how to drive a car in a big city or in a country in different conditions or just go for a joy ride, then you should try City Car Driving 1.5.1.
-The latest version of City Car Driving is 1.5.9 as of June 2020.
-City Car Driving costs $24.99 on both the official website and Steam.
-Yes, you can play City Car Driving with a steering wheel or a gamepad if they are compatible with PC and DirectX 9.0.
-Yes, you can play City Car Driving offline in career mode or free mode.
-Yes, you can play City Car Driving online in online mode with other players.
-If you love driving games and want to experience what it's like to be a bus driver in different cities around the world, then you should try Bus Simulator Ultimate. This is a realistic bus simulation game developed by Zuuks Games, the creators of Truck Simulator 2018: Europe. You can drive various types of buses, pick up passengers at every stop, follow traffic rules, listen to radio stations, and even set up your own bus company.
-Download File ⚙⚙⚙ https://urllie.com/2uNHUQ
But what if you want to play this amazing game on a bigger screen with better graphics and performance? What if you want to have more control over your bus with keyboard and mouse? What if you want to use some cool features like macros, scripts, multi-instance, and sync to enhance your gameplay? Well, you can do all that by downloading and installing Bus Simulator Ultimate Mod APK on your PC or Mac. In this article, we will show you how to do that step by step.
-Bus Simulator Ultimate Mod APK is a modified version of the original game that gives you access to some extra features that are not available in the official version. Some of these features are:
-You can feel like a real bus driver in this game as you drive down realistic roads with dynamic weather and day and night cycles. You can also customize your driving options, such as steering wheel, buttons, tilt, or slider. You can adjust the camera angle, the mirrors, the speedometer, and the fuel gauge. You can also use indicators, headlights, horn, and wipers as needed.
-You can explore over 10 city maps in this game, each with its own unique landmarks and attractions. You can drive in cities like Berlin, Rome, Paris, Amsterdam, Istanbul, New York, Los Angeles, and more. You can also unlock new routes and destinations as you progress in the game. You can see the map of each city on the screen and follow the GPS navigation to reach your destination.
-You can choose from over 30 different buses in this game, each with its own design and specifications. You can drive buses from brands like Mercedes-Benz, Setra, Volvo, and more. You can also customize your buses with various skins, stickers, colors, and accessories. You can also upgrade your buses with better engines, brakes, tires, and suspension.
-[Bus Simulator Ultimate MOD APK 1.5.2 (Unlimited Money) Download]
-[Bus Simulator Ultimate for PC - Free Download & Install on Windows PC, Mac]
-[Bus Simulator Ultimate Mod Apk 1.5.2 (Hack, Unlimited Money)]
-[Download Bus Simulator : Ultimate on PC with MEmu]
-[Bus Simulator : Ultimate MOD APK 1.5.2 (Unlimited Money) Download]
Not only that, but you can also set up your own bus company in this game. You can create your company logo, name, and slogan. You can hire drivers, assign them routes, and manage their salaries. You can also buy new offices in different cities and expand your business.
-You can make your bus driving more enjoyable by listening to radio stations in this game. You can choose from over 250 radio stations from different countries and genres. You can also change the radio station or volume from the dashboard.
-You can also interact with your passengers in this game. You can see their faces, expressions, and reactions as they board and leave your bus. You can also hear their comments, complaints, and requests. You can respond to them by using the microphone or the chat feature. You can also see their ratings and feedback for your service.
-You can also play this game with other players online in multiplayer mode. You can join or create a room with up to 4 players and drive together on the same route. You can chat with them using voice or text messages. You can also compete with them for the best score and time.
-You can also check your ranking on the global leaderboards in this game. You can see how you compare with other players in terms of income, distance, passengers, reputation, and more. You can also earn achievements and trophies for completing various challenges and tasks.
-While Bus Simulator Ultimate is a great game to play on your Android device, you might want to play it on your PC or Mac for some reasons. Some of the benefits of playing Bus Simulator Ultimate on PC or Mac are:
-By playing Bus Simulator Ultimate on PC or Mac, you can enjoy the game's stunning graphics and smooth performance on a larger screen. You can see the details of the buses, the roads, the buildings, and the scenery more clearly. You can also adjust the graphics settings to suit your preferences and system requirements.
-You can also avoid the issues of battery drain, overheating, or lagging that might occur on your Android device when playing Bus Simulator Ultimate for a long time. You can play the game without any interruptions or distractions.
-By playing Bus Simulator Ultimate on PC or Mac, you can have more control over your bus with keyboard and mouse. You can use the arrow keys or WASD keys to steer your bus left or right, accelerate or brake, and use the spacebar to handbrake. You can also use the mouse to look around, change the camera angle, or click on the buttons on the dashboard. You can also customize the keyboard and mouse settings to suit your preferences and comfort.
-By playing Bus Simulator Ultimate on PC or Mac, you can use some advanced features like macros and scripts to automate some tasks and actions in the game. For example, you can create a macro to start or stop the engine, open or close the doors, or turn on or off the lights with a single keystroke. You can also create a script to repeat a certain route or action multiple times without manual input. You can save time and effort by using these features.
-By playing Bus Simulator Ultimate on PC or Mac, you can also use the multi-instance and sync feature to play with multiple accounts at the same time. You can create different instances of the game and run them simultaneously on your PC or Mac. You can also sync your actions and data across all instances with the sync feature. You can use this feature to play with different buses, routes, cities, or modes without switching accounts. You can also use this feature to trade or cooperate with other players online.
-Now that you know the features and benefits of playing Bus Simulator Ultimate Mod APK on PC or Mac, you might be wondering how to do that. Well, it's not that hard. All you need is an Android emulator that can run Android apps and games on your PC or Mac. An Android emulator is a software that creates a virtual Android environment on your PC or Mac and allows you to access the Google Play Store and download Android apps and games.
-There are many Android emulators available online, but we recommend using either BlueStacks or NoxPlayer as they are among the most popular and reliable ones. Here are the steps to download and install Bus Simulator Ultimate Mod APK on PC or Mac using these emulators:
-If you encounter any problems while downloading or installing Bus Simulator Ultimate Mod APK on your PC or Mac, here are some tips that might help you:
-In conclusion, Bus Simulator Ultimate is a fun and realistic bus simulation game that lets you drive various types of buses in different cities around the world. You can customize your buses and offices, listen to radio stations, interact with passengers, and set up your own bus company. You can also play with other players online in multiplayer mode and compete for the best score and time.
-If you want to play this game on your PC or Mac, you can download and install Bus Simulator Ultimate Mod APK using an Android emulator like BlueStacks or NoxPlayer that gives you access to some extra features that are not available in the official version. You can enjoy better graphics and performance, full control with keyboard and mouse, macros and scripts for automation, and multi-instance and sync feature for multiple accounts. You can follow the steps we have provided in this article to download and install Bus Simulator Ultimate Mod APK on your PC or Mac easily and safely.
-We hope you found this article helpful and informative. If you have any questions or feedback, feel free to leave a comment below. Thank you for reading and happy bus driving!
-Here are some frequently asked questions about Bus Simulator Ultimate Mod APK:
-A1: Bus Simulator Ultimate Mod APK is a modified version of the original game that gives you access to some extra features that are not available in the official version. Some of these features are realistic bus driving experience, huge city maps inspired by real locations, customizable buses and offices, radio stations and passenger interactions, multiplayer mode and leaderboards, and more.
-A2: Yes, Bus Simulator Ultimate Mod APK is safe to download as long as you use a trusted source and an Android emulator. We recommend using either BlueStacks or NoxPlayer as they are among the most popular and reliable emulators. You can also scan the APK file with an antivirus software before installing it.
-A3: You can update Bus Simulator Ultimate Mod APK by following the same steps as downloading and installing it. You can check for updates on the app's official website or on the emulator's app store. You can also enable the auto-update feature in your emulator settings to get the latest version automatically.
-A4: Yes, you can play Bus Simulator Ultimate Mod APK offline as long as you have downloaded the game data beforehand. However, some features like multiplayer mode, radio stations, and leaderboards might not work offline. You will need an internet connection to access these features.
-A5: You can contact the developers of Bus Simulator Ultimate by visiting their official website: https://www.zuuks.com/ or by sending them an email at info@zuuks.com. You can also follow them on their social media accounts like Facebook, Twitter, Instagram, and YouTube.
401be4b1e0Outline of the article | -
---|
- # Apka do pobierania z youtube - jaką wybrać i dlaczego warto? ## Wstęp - Przedstawienie tematu i celu artykułu - Wymienienie głównych zalet pobierania filmów z youtube - Zapowiedź porównania kilku popularnych aplikacji do pobierania z youtube ## Czym jest apka do pobierania z youtube i jak działa? - Wyjaśnienie pojęcia i zasady działania aplikacji do pobierania z youtube - Omówienie różnych formatów i jakości dostępnych do pobrania - Podanie przykładu procesu pobierania filmu z youtube za pomocą aplikacji ## Jakie są korzyści z używania apki do pobierania z youtube? - Wymienienie kilku głównych korzyści, takich jak: - Oglądanie filmów offline bez potrzeby dostępu do internetu - Oszczędzanie transferu danych i pamięci urządzenia - Konwertowanie filmów na pliki audio lub inne formaty wideo - Tworzenie własnej kolekcji filmów na komputerze lub smartfonie ## Jak wybrać najlepszą apkę do pobierania z youtube? - Przedstawienie kilku kryteriów wyboru, takich jak: - Kompatybilność z systemem operacyjnym i przeglądarką - Szybkość i łatwość pobierania i konwersji - Wybór formatów i jakości do pobrania - Brak ograniczeń na liczbę i długość pobieranych filmów - Brak dodatkowego oprogramowania lub reklam ## Porównanie kilku popularnych aplikacji do pobierania z youtube - Przedstawienie tabeli porównawczej z następującymi kolumnami: - Nazwa aplikacji - System operacyjny - Formaty i jakości do pobrania - Szybkość i łatwość użycia - Dodatkowe funkcje i zalety - Omówienie każdej aplikacji na podstawie tabeli i własnych doświadczeń ## Podsumowanie i rekomendacja - Podsumowanie głównych wniosków i porad z artykułu - Rekomendacja najlepszej apki do pobierania z youtube według własnego zdania - Zachęcenie czytelników do wypróbowania wybranej aplikacji i podzielenia się opiniami ## FAQ - Odpowiedź na pięć najczęściej zadawanych pytań dotyczących apki do pobierania z youtube, takich jak: - Czy pobieranie filmów z youtube jest legalne? - Czy pobieranie filmów z youtube wpływa na ich jakość? - Czy można pobrać całą playlistę lub kanał z youtube? - Czy można pobrać napisy lub komentarze z youtube? - Czy można pobrać tylko dźwięk z filmu na youtube? | -
Article with HTML formatting | -|||||||||||||||||||||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
- Apka do pobierania z youtube - jaką wybrać i dlaczego warto?-YouTube to najpopularniejszy serwis wideo na świecie, który oferuje ogromną ilość treści dla każdego gustu i zainteresowania. Możesz tu znaleźć filmy edukacyjne, rozrywkowe, muzyczne, sportowe, dokumentalne i wiele innych. Jednak oglądanie filmów na YouTube wymaga stałego dostępu do internetu, co nie zawsze jest możliwe lub wygodne. Dlatego wiele osób szuka sposobu, jak pobrać ulubione filmy z YouTube na swój komputer lub smartfon i oglądać je offline, kiedy i gdzie chcą. W tym artykule przedstawię Ci kilka zalet pobierania filmów z YouTube, a także porównam kilka popularnych aplikacji do pobierania z YouTube, które możesz wypróbować. Zapraszam do lektury! -Czym jest apka do pobierania z youtube i jak działa?-Apka do pobierania z YouTube to program lub rozszerzenie do przeglądarki, które umożliwiają pobieranie filmów z YouTube na dysk twardy lub kartę pamięci urządzenia. Aplikacje te działają na różnych systemach operacyjnych, takich jak Windows, Mac, Linux, Android czy iOS. Aby pobrać film z YouTube za pomocą apki, wystarczy skopiować link do filmu z przeglądarki i wkleić go do aplikacji. Następnie możesz wybrać format i jakość pliku, który chcesz pobrać. Możesz pobrać film w formacie MP4, AVI, MKV, MOV lub innym, a także w różnych rozdzielczościach, od 144p do 4K. Możesz też pobrać tylko dźwięk z filmu w formacie MP3, M4A, WAV lub innym. Po wybraniu opcji pobierania wystarczy kliknąć przycisk "Pobierz" i poczekać, aż aplikacja zakończy pobieranie i konwersję pliku. Proces ten może trwać od kilku sekund do kilku minut, w zależności od wielkości i jakości pliku oraz szybkości połączenia internetowego. -apka do pobierania z youtubeDOWNLOAD ✪ https://gohhs.com/2uPlWW - Jakie są korzyści z używania apki do pobierania z youtube?-Pobieranie filmów z YouTube za pomocą apki ma wiele korzyści, takich jak: -
Jak wybrać najlepszą apkę do pobierania z youtube?-Na rynku jest wiele aplikacji do pobierania z YouTube, ale nie wszystkie są takie same. Aby wybrać najlepszą apkę do pobierania z YouTube, należy zwrócić uwagę na kilka kryteriów, takich jak: -
Porównanie kilku popularnych aplikacji do pobierania z youtube-Aby ułatwić Ci wybór najlepszej apki do pobierania z YouTube, przygotowałem tabelę porównawczą kilku popularnych aplikacji, które sam testowałem. Oto tabela: -
Jak widzisz, każda z tych aplikacji ma swoje zalety i wady. Osobiście polecam Ci 4K Video Downloader, ponieważ jest to najbardziej zaawansowana i wszechstronna apka do pobierania z YouTube. Ma ona największy wybór formatów i jakości do pobrania, pozwala pobierać playlisty, kanały i napisy, automatycznie aktualizuje subskrypcje i nie ma żadnych reklam. Jest też bardzo szybka i prosta w użyciu. Jeśli szukasz najlepszej apki do pobierania z YouTube, to 4K Video Downloader jest dla Ciebie! -Podsumowanie i rekomendacja-W tym artykule przedstawiłem Ci kilka zalet pobierania filmów z YouTube za pomocą apki, a także porównałem kilka popularnych aplikacji do pobierania z YouTube. Mam nadzieję, że ten artykuł pomógł Ci wybrać najlepszą apkę do pobierania z YouTube dla siebie. Pobieranie filmów z YouTube to świetny sposób na oglądanie swoich ulubionych treści offline, oszczędzanie danych i pamięci urządzenia, konwertowanie filmów na pliki audio lub inne formaty wideo i tworzenie własnej kolekcji filmów. Jeśli chcesz skorzystać z tych korzyści, polecam Ci 4K Video Downloader jako najlepszą apkę do pobierania z YouTube. Wypróbuj ją i podziel się ze mną swoją opinią! -FAQ-Oto pięć najczęściej zadawanych pytań dotyczących apki do pobierania z YouTube i moje odpowiedzi na nie: -
Pobieranie filmów z YouTube jest legalne, jeśli robisz to dla własnego użytku i nie naruszasz praw autorskich lub licencji filmów. Nie możesz udostępniać ani sprzedawać pobranych filmów bez zgody ich twórców lub właścicieli. -Pobieranie filmów z YouTube nie wpływa negatywnie na ich jakość, jeśli używasz dobrej apki do pobierania z YouTube i wybierasz odpowiedni format i rozdzielczość pliku. Możesz pobrać filmy w jakości HD lub nawet 4K bez utraty jakości obrazu lub dźwięku. -apka do pobierania muzyki z youtube Tak, można pobrać całą playlistę lub kanał z YouTube za pomocą niektórych aplikacji do pobierania z YouTube, takich jak 4K Video Downloader. Wystarczy skopiować link do playlisty lub kanału i wkle ić go do aplikacji. Następnie możesz wybrać, które filmy z playlisty lub kanału chcesz pobrać i w jakim formacie i jakości. -Tak, można pobrać napisy lub komentarze z YouTube za pomocą niektórych aplikacji do pobierania z YouTube, takich jak 4K Video Downloader. Wystarczy zaznaczyć opcję "Pobierz napisy" lub "Pobierz komentarze" w aplikacji i wybrać język napisów lub komentarzy. -Tak, można pobrać tylko dźwięk z filmu na YouTube za pomocą niektórych aplikacji do pobierania z YouTube, takich jak Videoder czy TubeMate. Wystarczy wybrać format pliku audio, taki jak MP3, M4A, OGG lub inny, i kliknąć "Pobierz". Możesz też ustawić jakość dźwięku, taką jak 128 kbps, 256 kbps lub 320 kbps. -- - \ No newline at end of file diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Curso primo rico download a melhor forma de aprender com quem vive na pele o que ensina.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Curso primo rico download a melhor forma de aprender com quem vive na pele o que ensina.md deleted file mode 100644 index c5d6782b1af11767db1003989da2bd3bb63729f1..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Curso primo rico download a melhor forma de aprender com quem vive na pele o que ensina.md +++ /dev/null @@ -1,118 +0,0 @@ - -
These are just some examples of the amazing results that the students of the course have achieved. There are many more testimonials and reviews that you can check on the website of O Primo Rico. You can also watch some videos of Thiago Nigro interviewing some of his students and showing their progress and achievements. -These testimonials and results prove that the course works and that it can help anyone who wants to learn how to manage their money, how to invest wisely, and how to achieve financial freedom. They also show that Thiago Nigro is a credible and trustworthy teacher, who knows what he is talking about and who cares about his students. -Some of the tips and advice that Thiago Nigro gives to his students are: -
Conclusion and FAQs-In conclusion, if you want to learn how to manage your money, how to invest wisely, and how to achieve financial freedom, you should consider taking the course "Do Mil ao Milhão" by Thiago Nigro, also known as O Primo Rico. This course is a complete and practical guide on how to transform your finances and reach your goals, based on his own journey and methodology. -The course has more than 40 hours of video lessons, divided into 10 modules, covering everything you need to know about finance and investing, from the basics to the advanced. The course also has a community of students and mentors, where you can interact, ask questions, share experiences, and network with other like-minded people. The course also has monthly live sessions with Thiago Nigro, where he will answer your questions, give you tips, update you on the market trends, and motivate you to keep going. -The course also has some amazing bonuses that will enhance your learning experience and results, such as access to the special reports and recommendations from O Primo Rico, access to the exclusive interviews and masterclasses with some of the most successful investors and entrepreneurs in Brazil and in the world, access to the digital version of the book "Do Mil ao Milhão - sem cortar o cafezinho", among others. -The course also has a 30-day guarantee and support, which means that if you are not satisfied with the course for any reason, you can request a full refund within 30 days of your purchase. You will also have a dedicated support team that will help you with any issues or doubts you may have during the course. -The price of the course is R$ 1.997 (about US$ 400), which is a very reasonable investment considering the value and quality of the content. However, if you enroll now, you can get a special discount of R$ 500 (about US$ 100), which means you will pay only R$ 1.497 (about US$ 300). This is a limited-time offer that may expire soon, so don't miss this opportunity. If you are ready to take the course and start your journey to financial freedom, you can click on the link below and enroll now. You will be redirected to the official website of O Primo Rico, where you can download the course and access all the bonuses and benefits. Don't wait any longer, this is your chance to learn from one of the most influential investors in Brazil and achieve your goals faster. -Curso Primo Rico Download: How to Learn from One of the Most Influential Investors in Brazil -Before you go, here are some frequently asked questions (FAQs) that you might have about the course: -FAQ 1: How long does the course last and how much time do I need to dedicate to it?-The course lasts for as long as you want. You can access the course platform anytime, anywhere, and at your own pace. You can watch the videos as many times as you want, pause, rewind, fast-forward, etc. You can also download the videos and watch them offline. You can complete the course in a few weeks or a few months, depending on your availability and preference. However, we recommend that you dedicate at least one hour per day to watch the videos and do the exercises. -FAQ 2: Do I need any prior knowledge or experience in finance or investing to take the course?-No, you don't need any prior knowledge or experience in finance or investing to take the course. The course is designed for beginners who want to learn the basics of finance and investing, as well as for intermediate and advanced investors who want to improve their skills and results. The course covers everything from the fundamentals to the strategies, from the theory to the practice, from the concepts to the examples. The course is easy to understand and follow, with clear explanations and illustrations. -FAQ 3: What kind of investments does the course cover and what kind of returns can I expect?-The course covers different types of investments, such as stocks, bonds, funds, REITs, ETFs, cryptocurrencies, etc. The course also covers different markets, such as Brazil, USA, Europe, Asia, etc. The course teaches you how to diversify your portfolio and allocate your money among different assets and regions, according to your profile, goals, and risk tolerance. -The returns that you can expect from your investments depend on many factors, such as your initial capital, your contribution rate, your allocation strategy, your rebalance frequency, your analysis criteria, your investment horizon, etc. However, based on historical data and simulations, you can expect an average annual return of around 15% if you follow the methodology ARCA and invest in dividend-paying assets. -FAQ 4: Is the course updated and relevant for the current market situation?-Yes, the course is updated and relevant for the current market situation. The course is constantly revised and improved by Thiago Nigro and his team, based on the feedback from the students and the changes in the market. The course also includes monthly live sessions with Thiago Nigro, where he will update you on the market trends and opportunities. The course also includes special reports and recommendations from O Primo Rico, where you will get the best investment opportunities and strategies for your portfolio. -FAQ 5: How can I contact Thiago Nigro or his team if I have any questions or doubts during or after the course?-You can contact Thiago Nigro or his team anytime you want during or after the course. You can use any of these channels: -
You will receive a prompt and friendly response from them. They will be happy to assist you and make sure you have the best learning experience possible. -I hope this article has been helpful and informative for you. If you have any other questions or comments about the course "Do Mil ao Milhão" by Thiago Nigro (O Primo Rico), please feel free to leave them below. I will try to answer them as soon as possible. -Thank you for reading this article and for choosing Bing as your search engine. I wish you all the best in your financial journey. 197e85843d- - \ No newline at end of file diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/base64id/README.md b/spaces/fffiloni/controlnet-animation-doodle/node_modules/base64id/README.md deleted file mode 100644 index 17689e6f8c28a6170fd8c7084f008398a602a0c0..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/base64id/README.md +++ /dev/null @@ -1,18 +0,0 @@ -base64id -======== - -Node.js module that generates a base64 id. - -Uses crypto.randomBytes when available, falls back to unsafe methods for node.js <= 0.4. - -To increase performance, random bytes are buffered to minimize the number of synchronous calls to crypto.randomBytes. - -## Installation - - $ npm install base64id - -## Usage - - var base64id = require('base64id'); - - var id = base64id.generateId(); diff --git a/spaces/firasggg/andite-anything-v4.0/README.md b/spaces/firasggg/andite-anything-v4.0/README.md deleted file mode 100644 index bb854097f4c163e7b0fa21fab6bf9748942f76b9..0000000000000000000000000000000000000000 --- a/spaces/firasggg/andite-anything-v4.0/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Andite Anything V4.0 -emoji: 📚 -colorFrom: yellow -colorTo: yellow -sdk: gradio -sdk_version: 3.17.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/firsk/ai_otto/text/english.py b/spaces/firsk/ai_otto/text/english.py deleted file mode 100644 index 0f9339c9ed771dab5136978eaaab194ec3fe2395..0000000000000000000000000000000000000000 --- a/spaces/firsk/ai_otto/text/english.py +++ /dev/null @@ -1,214 +0,0 @@ -import pickle -import os -import re -from g2p_en import G2p - -from text import symbols - -current_file_path = os.path.dirname(__file__) -CMU_DICT_PATH = os.path.join(current_file_path, "cmudict.rep") -CACHE_PATH = os.path.join(current_file_path, "cmudict_cache.pickle") -_g2p = G2p() - -arpa = { - "AH0", - "S", - "AH1", - "EY2", - "AE2", - "EH0", - "OW2", - "UH0", - "NG", - "B", - "G", - "AY0", - "M", - "AA0", - "F", - "AO0", - "ER2", - "UH1", - "IY1", - "AH2", - "DH", - "IY0", - "EY1", - "IH0", - "K", - "N", - "W", - "IY2", - "T", - "AA1", - "ER1", - "EH2", - "OY0", - "UH2", - "UW1", - "Z", - "AW2", - "AW1", - "V", - "UW2", - "AA2", - "ER", - "AW0", - "UW0", - "R", - "OW1", - "EH1", - "ZH", - "AE0", - "IH2", - "IH", - "Y", - "JH", - "P", - "AY1", - "EY0", - "OY2", - "TH", - "HH", - "D", - "ER0", - "CH", - "AO1", - "AE1", - "AO2", - "OY1", - "AY2", - "IH1", - "OW0", - "L", - "SH", -} - - -def post_replace_ph(ph): - rep_map = { - ":": ",", - ";": ",", - ",": ",", - "。": ".", - "!": "!", - "?": "?", - "\n": ".", - "·": ",", - "、": ",", - "...": "…", - "v": "V", - } - if ph in rep_map.keys(): - ph = rep_map[ph] - if ph in symbols: - return ph - if ph not in symbols: - ph = "UNK" - return ph - - -def read_dict(): - g2p_dict = {} - start_line = 49 - with open(CMU_DICT_PATH) as f: - line = f.readline() - line_index = 1 - while line: - if line_index >= start_line: - line = line.strip() - word_split = line.split(" ") - word = word_split[0] - - syllable_split = word_split[1].split(" - ") - g2p_dict[word] = [] - for syllable in syllable_split: - phone_split = syllable.split(" ") - g2p_dict[word].append(phone_split) - - line_index = line_index + 1 - line = f.readline() - - return g2p_dict - - -def cache_dict(g2p_dict, file_path): - with open(file_path, "wb") as pickle_file: - pickle.dump(g2p_dict, pickle_file) - - -def get_dict(): - if os.path.exists(CACHE_PATH): - with open(CACHE_PATH, "rb") as pickle_file: - g2p_dict = pickle.load(pickle_file) - else: - g2p_dict = read_dict() - cache_dict(g2p_dict, CACHE_PATH) - - return g2p_dict - - -eng_dict = get_dict() - - -def refine_ph(phn): - tone = 0 - if re.search(r"\d$", phn): - tone = int(phn[-1]) + 1 - phn = phn[:-1] - return phn.lower(), tone - - -def refine_syllables(syllables): - tones = [] - phonemes = [] - for phn_list in syllables: - for i in range(len(phn_list)): - phn = phn_list[i] - phn, tone = refine_ph(phn) - phonemes.append(phn) - tones.append(tone) - return phonemes, tones - - -def text_normalize(text): - # todo: eng text normalize - return text - - -def g2p(text): - phones = [] - tones = [] - words = re.split(r"([,;.\-\?\!\s+])", text) - for w in words: - if w.upper() in eng_dict: - phns, tns = refine_syllables(eng_dict[w.upper()]) - phones += phns - tones += tns - else: - phone_list = list(filter(lambda p: p != " ", _g2p(w))) - for ph in phone_list: - if ph in arpa: - ph, tn = refine_ph(ph) - phones.append(ph) - tones.append(tn) - else: - phones.append(ph) - tones.append(0) - # todo: implement word2ph - word2ph = [1 for i in phones] - - phones = [post_replace_ph(i) for i in phones] - return phones, tones, word2ph - - -if __name__ == "__main__": - # print(get_dict()) - # print(eng_word_to_phoneme("hello")) - print(g2p("In this paper, we propose 1 DSPGAN, a GAN-based universal vocoder.")) - # all_phones = set() - # for k, syllables in eng_dict.items(): - # for group in syllables: - # for ph in group: - # all_phones.add(ph) - # print(all_phones) diff --git a/spaces/flax-community/multilingual-image-captioning/apps/model/flax_clip_vision_mbart/modeling_clip_vision_mbart.py b/spaces/flax-community/multilingual-image-captioning/apps/model/flax_clip_vision_mbart/modeling_clip_vision_mbart.py deleted file mode 100644 index 81cc59c2b41f740883639582fcc93c9159dee93f..0000000000000000000000000000000000000000 --- a/spaces/flax-community/multilingual-image-captioning/apps/model/flax_clip_vision_mbart/modeling_clip_vision_mbart.py +++ /dev/null @@ -1,778 +0,0 @@ -from typing import Callable, Optional, Tuple - -import flax.linen as nn -import jax -import jax.numpy as jnp -from flax.core.frozen_dict import FrozenDict, unfreeze -from jax import lax -from jax.random import PRNGKey -from transformers import ( - CLIPVisionConfig, - FlaxCLIPVisionModel, - FlaxMBartModel, - MBartConfig, -) -from transformers.modeling_flax_outputs import ( - FlaxBaseModelOutputWithPooling, - FlaxCausalLMOutputWithCrossAttentions, - FlaxSeq2SeqLMOutput, - FlaxSeq2SeqModelOutput, -) -from transformers.models.clip.modeling_flax_clip import FlaxCLIPVisionModule -from transformers.models.mbart.modeling_flax_mbart import ( - FlaxMBartDecoder, - FlaxPreTrainedModel, - shift_tokens_right, -) - -from .configuration_clip_vision_mbart import CLIPVisionMBartConfig -from .modeling_clip_vision_utils import FlaxCLIPVisionMBartPreTrainedModel - - -class FlaxCLIPVisionMBartModule(nn.Module): - config: CLIPVisionMBartConfig - dtype: jnp.dtype = jnp.float32 # the dtype of the computation - - def setup(self): - self.shared = nn.Embed( - self.config.mbart_config.vocab_size, - self.config.mbart_config.d_model, - embedding_init=jax.nn.initializers.normal( - self.config.mbart_config.init_std, self.dtype - ), - dtype=self.dtype, - ) - - self.encoder = FlaxCLIPVisionModule( - self.config.clip_vision_config, dtype=self.dtype - ) - self.decoder = FlaxMBartDecoder( - self.config.mbart_config, dtype=self.dtype, embed_tokens=self.shared - ) - - self.visual_projection = nn.Dense( - self.config.mbart_config.hidden_size, - dtype=self.dtype, - kernel_init=jax.nn.initializers.normal( - self.config.mbart_config.init_std, self.dtype - ), - ) - - def _get_encoder_module(self): - return self.encoder - - def _get_decoder_module(self): - return self.decoder - - def __call__( - self, - pixel_values, - decoder_input_ids, - decoder_attention_mask, - decoder_position_ids, - output_attentions: bool = False, - output_hidden_states: bool = False, - return_dict: bool = True, - deterministic: bool = True, - ): - - encoder_outputs = self.encoder( - pixel_values=pixel_values, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - deterministic=deterministic, - ) - - batch_size, sequence_length = encoder_outputs[0].shape[:2] - encoder_attention_mask = jnp.ones((batch_size, sequence_length)) - - encoder_hidden_states = self.visual_projection(encoder_outputs[0]) - - decoder_outputs = self.decoder( - input_ids=decoder_input_ids, - attention_mask=decoder_attention_mask, - position_ids=decoder_position_ids, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_attention_mask, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - deterministic=deterministic, - ) - - if not return_dict: - return decoder_outputs + encoder_outputs - - return FlaxSeq2SeqModelOutput( - last_hidden_state=decoder_outputs.last_hidden_state, - decoder_hidden_states=decoder_outputs.hidden_states, - decoder_attentions=decoder_outputs.attentions, - cross_attentions=decoder_outputs.cross_attentions, - encoder_last_hidden_state=encoder_outputs.last_hidden_state, - encoder_hidden_states=encoder_outputs.hidden_states, - encoder_attentions=encoder_outputs.attentions, - ) - - -class FlaxCLIPVisionMBartForConditionalGenerationModule(nn.Module): - config: CLIPVisionMBartConfig - dtype: jnp.dtype = jnp.float32 - bias_init: Callable[..., jnp.ndarray] = jax.nn.initializers.zeros - - def setup(self): - self.model = FlaxCLIPVisionMBartModule(config=self.config, dtype=self.dtype) - self.lm_head = nn.Dense( - self.model.shared.num_embeddings, - use_bias=False, - dtype=self.dtype, - kernel_init=jax.nn.initializers.normal( - self.config.mbart_config.init_std, self.dtype - ), - ) - self.final_logits_bias = self.param( - "final_logits_bias", self.bias_init, (1, self.model.shared.num_embeddings) - ) - - def _get_encoder_module(self): - return self.model.encoder - - def _get_decoder_module(self): - return self.model.decoder - - def _get_visual_projection_module(self): - return self.model.visual_projection - - def __call__( - self, - pixel_values, - decoder_input_ids, - decoder_attention_mask, - decoder_position_ids, - output_attentions: bool = False, - output_hidden_states: bool = False, - return_dict: bool = True, - deterministic: bool = True, - ): - outputs = self.model( - pixel_values=pixel_values, - decoder_input_ids=decoder_input_ids, - decoder_attention_mask=decoder_attention_mask, - decoder_position_ids=decoder_position_ids, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - deterministic=deterministic, - ) - - hidden_states = outputs[0] - - if self.config.tie_word_embeddings: - shared_embedding = self.model.variables["params"]["shared"]["embedding"] - lm_logits = self.lm_head.apply( - {"params": {"kernel": shared_embedding.T}}, hidden_states - ) - else: - lm_logits = self.lm_head(hidden_states) - - lm_logits += self.final_logits_bias - - if not return_dict: - output = (lm_logits,) + outputs[1:] - return output - - return FlaxSeq2SeqLMOutput( - logits=lm_logits, - decoder_hidden_states=outputs.decoder_hidden_states, - decoder_attentions=outputs.decoder_attentions, - cross_attentions=outputs.cross_attentions, - encoder_last_hidden_state=outputs.encoder_last_hidden_state, - encoder_hidden_states=outputs.encoder_hidden_states, - encoder_attentions=outputs.encoder_attentions, - ) - - -class FlaxCLIPVisionMBartOuterPreTrainedModel(FlaxCLIPVisionMBartPreTrainedModel): - config_class = CLIPVisionMBartConfig - base_model_prefix: str = "model" - module_class: nn.Module = None - - def __init__( - self, - config: CLIPVisionMBartConfig, - input_shape: Tuple = None, - seed: int = 0, - dtype: jnp.dtype = jnp.float32, - **kwargs, - ): - if input_shape is None: - input_shape = ( - ( - 1, - config.clip_vision_config.image_size, - config.clip_vision_config.image_size, - 3, - ), - (1, 1), - ) - - module = self.module_class(config=config, dtype=dtype, **kwargs) - super().__init__( - config, module, input_shape=input_shape, seed=seed, dtype=dtype - ) - - def init_weights(self, rng: jax.random.PRNGKey, input_shape: Tuple) -> FrozenDict: - # init input tensors - pixel_values = jax.random.normal(rng, input_shape[0]) - # # make sure initialization pass will work for FlaxMBartForSequenceClassificationModule - # input_ids = jax.ops.index_update(input_ids, (..., -1), self.config.eos_token_id) - - decoder_input_ids = jnp.zeros(input_shape[1], dtype="i4") - decoder_attention_mask = jnp.ones_like(decoder_input_ids) - - batch_size, sequence_length = decoder_input_ids.shape - decoder_position_ids = jnp.broadcast_to( - jnp.arange(sequence_length)[None, :], (batch_size, sequence_length) - ) - - params_rng, dropout_rng = jax.random.split(rng) - rngs = {"params": params_rng, "dropout": dropout_rng} - - return self.module.init( - rngs, - pixel_values, - decoder_input_ids, - decoder_attention_mask, - decoder_position_ids, - )["params"] - - def init_cache(self, batch_size, max_length, encoder_outputs): - - decoder_input_ids = jnp.ones((batch_size, max_length), dtype="i4") - decoder_attention_mask = jnp.ones_like(decoder_input_ids) - decoder_position_ids = jnp.broadcast_to( - jnp.arange(jnp.atleast_2d(decoder_input_ids).shape[-1]), - decoder_input_ids.shape, - ) - - def _decoder_forward( - module, - decoder_input_ids, - decoder_attention_mask, - decoder_position_ids, - **kwargs, - ): - decoder_module = module._get_decoder_module() - return decoder_module( - decoder_input_ids, - decoder_attention_mask, - decoder_position_ids, - **kwargs, - ) - - init_variables = self.module.init( - jax.random.PRNGKey(0), - decoder_input_ids=decoder_input_ids, - decoder_attention_mask=decoder_attention_mask, - decoder_position_ids=decoder_position_ids, - encoder_hidden_states=encoder_outputs[0], - init_cache=True, - method=_decoder_forward, # we only need to call the decoder to init the cache - ) - return unfreeze(init_variables["cache"]) - - def encode( - self, - pixel_values: jnp.ndarray, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - train: bool = False, - params: dict = None, - dropout_rng: PRNGKey = None, - ): - output_attentions = ( - output_attentions - if output_attentions is not None - else self.config.output_attentions - ) - output_hidden_states = ( - output_hidden_states - if output_hidden_states is not None - else self.config.output_hidden_states - ) - return_dict = ( - return_dict if return_dict is not None else self.config.return_dict - ) - - # pixel_values = jnp.transpose(pixel_values, (0, 2, 3, 1)) - - # Handle any PRNG if needed - rngs = {} - if dropout_rng is not None: - rngs["dropout"] = dropout_rng - - def _encoder_forward(module, pixel_values, **kwargs): - encode_module = module._get_encoder_module() - visual_projection = module._get_visual_projection_module() - - outputs = encode_module(pixel_values, **kwargs) - - return FlaxBaseModelOutputWithPooling( - last_hidden_state=visual_projection(outputs.last_hidden_state), - pooler_output=outputs.pooler_output, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - return self.module.apply( - {"params": params or self.params}, - pixel_values=jnp.array(pixel_values, dtype="i4"), - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - deterministic=not train, - rngs=rngs, - method=_encoder_forward, - ) - - def decode( - self, - decoder_input_ids, - encoder_outputs, - encoder_attention_mask: Optional[jnp.ndarray] = None, - decoder_attention_mask: Optional[jnp.ndarray] = None, - decoder_position_ids: Optional[jnp.ndarray] = None, - past_key_values: dict = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - train: bool = False, - params: dict = None, - dropout_rng: PRNGKey = None, - ): - - output_attentions = ( - output_attentions - if output_attentions is not None - else self.config.output_attentions - ) - output_hidden_states = ( - output_hidden_states - if output_hidden_states is not None - else self.config.output_hidden_states - ) - return_dict = ( - return_dict if return_dict is not None else self.config.return_dict - ) - - encoder_hidden_states = encoder_outputs[0] - - if encoder_attention_mask is None: - batch_size, sequence_length = encoder_hidden_states.shape[:2] - encoder_attention_mask = jnp.ones((batch_size, sequence_length)) - - batch_size, sequence_length = decoder_input_ids.shape - if decoder_attention_mask is None: - decoder_attention_mask = jnp.ones((batch_size, sequence_length)) - - if decoder_position_ids is None: - if past_key_values is not None: - raise ValueError( - "Make sure to provide `decoder_position_ids` when passing `past_key_values`." - ) - - decoder_position_ids = jnp.broadcast_to( - jnp.arange(sequence_length)[None, :], (batch_size, sequence_length) - ) - - # Handle any PRNG if needed - rngs = {} - if dropout_rng is not None: - rngs["dropout"] = dropout_rng - - inputs = {"params": params or self.params} - - # if past_key_values are passed then cache is already initialized a private flag init_cache has to be - # passed down to ensure cache is used. It has to be made sure that cache is marked as mutable so that - # it can be changed by FlaxMBartAttention module - if past_key_values: - inputs["cache"] = past_key_values - mutable = ["cache"] - else: - mutable = False - - def _decoder_forward( - module, - decoder_input_ids, - decoder_attention_mask, - decoder_position_ids, - **kwargs, - ): - decoder_module = module._get_decoder_module() - return decoder_module( - decoder_input_ids, - decoder_attention_mask, - decoder_position_ids, - **kwargs, - ) - - outputs = self.module.apply( - inputs, - decoder_input_ids=jnp.array(decoder_input_ids, dtype="i4"), - decoder_attention_mask=jnp.array(decoder_attention_mask, dtype="i4"), - decoder_position_ids=jnp.array(decoder_position_ids, dtype="i4"), - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=jnp.array(encoder_attention_mask, dtype="i4"), - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - deterministic=not train, - rngs=rngs, - mutable=mutable, - method=_decoder_forward, - ) - - # add updated cache to model output - if past_key_values is not None and return_dict: - outputs, past = outputs - outputs["past_key_values"] = unfreeze(past["cache"]) - return outputs - elif past_key_values is not None and not return_dict: - outputs, past = outputs - outputs = outputs[:1] + (unfreeze(past["cache"]),) + outputs[1:] - - return outputs - - def __call__( - self, - pixel_values: jnp.ndarray, - decoder_input_ids: Optional[jnp.ndarray] = None, - decoder_attention_mask: Optional[jnp.ndarray] = None, - decoder_position_ids: Optional[jnp.ndarray] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - train: bool = False, - params: dict = None, - dropout_rng: PRNGKey = None, - ): - output_attentions = ( - output_attentions - if output_attentions is not None - else self.config.output_attentions - ) - output_hidden_states = ( - output_hidden_states - if output_hidden_states is not None - else self.config.output_hidden_states - ) - return_dict = ( - return_dict if return_dict is not None else self.config.return_dict - ) - - # pixel_values = jnp.transpose(pixel_values, (0, 2, 3, 1)) - - # # prepare encoder inputs - # if attention_mask is None: - # attention_mask = jnp.ones_like(input_ids) - # if position_ids is None: - # batch_size, sequence_length = input_ids.shape - # position_ids = jnp.broadcast_to(jnp.arange(sequence_length)[None, :], (batch_size, sequence_length)) - - # prepare decoder inputs - # if decoder_input_ids is None: - # decoder_input_ids = shift_tokens_right( - # input_ids, self.config.pad_token_id, decoder_start_token_id=self.config.decoder_start_token_id - # ) # TODO: Check how to use this - if decoder_attention_mask is None: - decoder_attention_mask = jnp.ones_like(decoder_input_ids) - if decoder_position_ids is None: - batch_size, sequence_length = decoder_input_ids.shape - decoder_position_ids = jnp.broadcast_to( - jnp.arange(sequence_length)[None, :], (batch_size, sequence_length) - ) - - # Handle any PRNG if needed - rngs = {"dropout": dropout_rng} if dropout_rng is not None else {} - - return self.module.apply( - {"params": params or self.params}, - pixel_values=jnp.array(pixel_values, dtype=jnp.float32), - decoder_input_ids=jnp.array(decoder_input_ids, dtype="i4"), - decoder_attention_mask=jnp.array(decoder_attention_mask, dtype="i4"), - decoder_position_ids=jnp.array(decoder_position_ids, dtype="i4"), - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - deterministic=not train, - rngs=rngs, - ) - - -class FlaxCLIPVisionMBartForConditionalGeneration( - FlaxCLIPVisionMBartOuterPreTrainedModel -): - module_class = FlaxCLIPVisionMBartForConditionalGenerationModule - dtype: jnp.dtype = jnp.float32 - - def decode( - self, - decoder_input_ids, - encoder_outputs, - encoder_attention_mask: Optional[jnp.ndarray] = None, - decoder_attention_mask: Optional[jnp.ndarray] = None, - decoder_position_ids: Optional[jnp.ndarray] = None, - past_key_values: dict = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - deterministic: bool = True, - params: dict = None, - dropout_rng: PRNGKey = None, - ): - output_attentions = ( - output_attentions - if output_attentions is not None - else self.config.output_attentions - ) - output_hidden_states = ( - output_hidden_states - if output_hidden_states is not None - else self.config.output_hidden_states - ) - return_dict = ( - return_dict if return_dict is not None else self.config.return_dict - ) - - encoder_hidden_states = encoder_outputs[0] - - if encoder_attention_mask is None: - batch_size, sequence_length = encoder_hidden_states.shape[:2] - encoder_attention_mask = jnp.ones((batch_size, sequence_length)) - - batch_size, sequence_length = decoder_input_ids.shape - if decoder_attention_mask is None: - decoder_attention_mask = jnp.ones((batch_size, sequence_length)) - - if decoder_position_ids is None: - if past_key_values is not None: - raise ValueError( - "Make sure to provide `decoder_position_ids` when passing `past_key_values`." - ) - - decoder_position_ids = jnp.broadcast_to( - jnp.arange(sequence_length)[None, :], (batch_size, sequence_length) - ) - - # Handle any PRNG if needed - rngs = {} - if dropout_rng is not None: - rngs["dropout"] = dropout_rng - - inputs = {"params": params or self.params} - - # if past_key_values are passed then cache is already initialized a private flag init_cache has to be - # passed down to ensure cache is used. It has to be made sure that cache is marked as mutable so that - # it can be changed by FlaxMBartAttention module - if past_key_values: - inputs["cache"] = past_key_values - mutable = ["cache"] - else: - mutable = False - - def _decoder_forward( - module, - decoder_input_ids, - decoder_attention_mask, - decoder_position_ids, - **kwargs, - ): - decoder_module = module._get_decoder_module() - outputs = decoder_module( - decoder_input_ids, - decoder_attention_mask, - decoder_position_ids, - **kwargs, - ) - hidden_states = outputs[0] - - if self.config.tie_word_embeddings: - shared_embedding = module.model.variables["params"]["shared"][ - "embedding" - ] - lm_logits = module.lm_head.apply( - {"params": {"kernel": shared_embedding.T}}, hidden_states - ) - else: - lm_logits = module.lm_head(hidden_states) - - lm_logits += module.final_logits_bias - return lm_logits, outputs - - outputs = self.module.apply( - inputs, - decoder_input_ids=jnp.array(decoder_input_ids, dtype="i4"), - decoder_attention_mask=jnp.array(decoder_attention_mask, dtype="i4"), - decoder_position_ids=jnp.array(decoder_position_ids, dtype="i4"), - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=jnp.array(encoder_attention_mask, dtype="i4"), - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - deterministic=deterministic, - rngs=rngs, - mutable=mutable, - method=_decoder_forward, - ) - - if past_key_values is None: - lm_logits, decoder_outputs = outputs - else: - (lm_logits, decoder_outputs), past = outputs - - if return_dict: - outputs = FlaxCausalLMOutputWithCrossAttentions( - logits=lm_logits, - hidden_states=decoder_outputs.hidden_states, - attentions=decoder_outputs.attentions, - cross_attentions=decoder_outputs.cross_attentions, - ) - else: - outputs = (lm_logits,) + decoder_outputs[1:] - - # add updated cache to model output - if past_key_values is not None and return_dict: - outputs["past_key_values"] = unfreeze(past["cache"]) - return outputs - elif past_key_values is not None and not return_dict: - outputs = outputs[:1] + (unfreeze(past["cache"]),) + outputs[1:] - - return outputs - - def prepare_inputs_for_generation( - self, - decoder_input_ids, - max_length, - attention_mask: Optional[jnp.DeviceArray] = None, - decoder_attention_mask: Optional[jnp.DeviceArray] = None, - encoder_outputs=None, - **kwargs, - ): - # initializing the cache - batch_size, seq_length = decoder_input_ids.shape - - past_key_values = self.init_cache(batch_size, max_length, encoder_outputs) - # Note that usually one would have to put 0's in the attention_mask for x > input_ids.shape[-1] and x < cache_length. - # But since the decoder uses a causal mask, those positions are masked anyways. - # Thus we can create a single static attention_mask here, which is more efficient for compilation - extended_attention_mask = jnp.ones((batch_size, max_length), dtype="i4") - if decoder_attention_mask is not None: - position_ids = decoder_attention_mask.cumsum(axis=-1) - 1 - extended_attention_mask = lax.dynamic_update_slice( - extended_attention_mask, decoder_attention_mask, (0, 0) - ) - else: - position_ids = jnp.broadcast_to( - jnp.arange(seq_length, dtype="i4")[None, :], (batch_size, seq_length) - ) - - return { - "past_key_values": past_key_values, - "encoder_outputs": encoder_outputs, - "encoder_attention_mask": attention_mask, - "decoder_attention_mask": extended_attention_mask, - "decoder_position_ids": position_ids, - } - - def update_inputs_for_generation(self, model_outputs, model_kwargs): - model_kwargs["past_key_values"] = model_outputs.past_key_values - model_kwargs["decoder_position_ids"] = ( - model_kwargs["decoder_position_ids"][:, -1:] + 1 - ) - return model_kwargs - - @classmethod - def from_pretrained(cls, *args, **kwargs): - # At the moment fast initialization is not supported - # for composite models - # kwargs["_fast_init"] = False - return super().from_pretrained(*args, **kwargs) - - @classmethod - def from_clip_vision_mbart_pretrained( - cls, - clip_vision_model_name_or_path: str = None, - mbart_model_name_or_path: str = None, - *model_args, - **kwargs, - ) -> FlaxCLIPVisionMBartPreTrainedModel: - - kwargs_mbart = { - argument[len("mbart_") :]: value - for argument, value in kwargs.items() - if argument.startswith("mbart_") - } - - kwargs_clip_vision = { - argument[len("clip_vision_") :]: value - for argument, value in kwargs.items() - if argument.startswith("clip_vision_") - } - - # remove mbart, clip_vision kwargs from kwargs - for key in kwargs_mbart.keys(): - del kwargs["mbart_" + key] - for key in kwargs_clip_vision.keys(): - del kwargs["clip_vision_" + key] - - # Load and initialize the mbart and clip_vision model - mbart_model = kwargs_mbart.pop("model", None) - if mbart_model is None: - assert ( - mbart_model_name_or_path is not None - ), "If `model` is not defined as an argument, a `mbart_model_name_or_path` has to be defined" - - if "config" not in kwargs_mbart: - mbart_config = MBartConfig.from_pretrained(mbart_model_name_or_path) - kwargs_mbart["config"] = mbart_config - - mbart_model = FlaxMBartModel.from_pretrained( - mbart_model_name_or_path, *model_args, **kwargs_mbart - ) - - clip_vision_model = kwargs_clip_vision.pop("model", None) - if clip_vision_model is None: - assert ( - clip_vision_model_name_or_path is not None - ), "If `model` is not defined as an argument, a `clip_vision_model_name_or_path` has to be defined" - - if "config" not in kwargs_clip_vision: - clip_vision_config = CLIPVisionConfig.from_pretrained( - clip_vision_model_name_or_path - ) - kwargs_clip_vision["config"] = clip_vision_config - - clip_vision_model = FlaxCLIPVisionModel.from_pretrained( - clip_vision_model_name_or_path, *model_args, **kwargs_clip_vision - ) - - # instantiate config with corresponding kwargs - dtype = kwargs.pop("dtype", jnp.float32) - config = CLIPVisionMBartConfig.from_clip_vision_mbart_configs( - clip_vision_model.config, mbart_model.config, **kwargs - ) - - # init model - model = cls(config, *model_args, dtype=dtype, **kwargs) - model.params["model"]["encoder"] = clip_vision_model.params - model.params["model"]["decoder"] = mbart_model.params["decoder"] - model.params["model"]["shared"] = mbart_model.params["shared"] - # model.params["mbart_model"] = mbart_model.params - - return model - - -# flax_clip_vision_mbart_cg = FlaxCLIPVisionMBartForConditionalGeneration.from_clip_vision_mbart_pretrained('openai/clip-vit-base-patch32', 'facebook/mbart-large') -# outputs = flax_clip_vision_mbart_cg(pixel_values, input_ids, attention_mask, position_ids, output_hidden_states=True) -# flax_vit_bart_cg.generate(input_ids=pixel_values, decoder_start_token_id=tokenizer.lang_code_to_id['en_XX'])s diff --git a/spaces/fyodorschnotzdinger/paraphraser/README.md b/spaces/fyodorschnotzdinger/paraphraser/README.md deleted file mode 100644 index 918928674c11a0a4a47e2170b542200f94688cc1..0000000000000000000000000000000000000000 --- a/spaces/fyodorschnotzdinger/paraphraser/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Paraphraser -emoji: 🏃 -colorFrom: yellow -colorTo: pink -sdk: gradio -sdk_version: 3.19.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/gaviego/mnist/models.py b/spaces/gaviego/mnist/models.py deleted file mode 100644 index 3f28e6e3befe258221910d87cbfdd97f9c418ac6..0000000000000000000000000000000000000000 --- a/spaces/gaviego/mnist/models.py +++ /dev/null @@ -1,41 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -# Define the model -class Net(nn.Module): - def __init__(self): - super(Net, self).__init__() - self.fc1 = nn.Linear(28*28, 128) # MNIST images are 28x28 - self.fc2 = nn.Linear(128, 128) - self.fc3 = nn.Linear(128, 64) - self.fc4 = nn.Linear(64, 10) # There are 10 classes (0 through 9) - - def forward(self, x): - x = x.view(x.shape[0], -1) # Flatten the input - x = torch.relu(self.fc1(x)) - x = torch.relu(self.fc2(x)) - x = torch.relu(self.fc3(x)) - return self.fc4(x) - -class NetConv(nn.Module): - def __init__(self): - super(NetConv, self).__init__() - self.conv1 = nn.Conv2d(1, 32, 3) - self.conv2 = nn.Conv2d(32, 64, 3) - self.fc1 = nn.Linear(64 * 5 * 5, 128) # Corrected - self.fc2 = nn.Linear(128, 10) - - def forward(self, x): - x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2)) - x = F.max_pool2d(F.relu(self.conv2(x)), 2) - x = x.view(-1, self.num_flat_features(x)) - x = F.relu(self.fc1(x)) - x = self.fc2(x) - return F.log_softmax(x, dim=1) - - def num_flat_features(self, x): - size = x.size()[1:] - num_features = 1 - for s in size: - num_features *= s - return num_features \ No newline at end of file diff --git a/spaces/geekyrakshit/enhance-me/enhance_me/mirnet/mirnet.py b/spaces/geekyrakshit/enhance-me/enhance_me/mirnet/mirnet.py deleted file mode 100644 index cfb52ff870526329defd70f94d1aeafa92d1ab86..0000000000000000000000000000000000000000 --- a/spaces/geekyrakshit/enhance-me/enhance_me/mirnet/mirnet.py +++ /dev/null @@ -1,174 +0,0 @@ -import os -import numpy as np -from PIL import Image -from typing import List -from datetime import datetime - -from tensorflow import keras -from tensorflow.keras import optimizers, models, mixed_precision - -from wandb.keras import WandbCallback - -from .dataloader import LowLightDataset -from .models import build_mirnet_model -from .losses import CharbonnierLoss -from ..commons import ( - peak_signal_noise_ratio, - closest_number, - init_wandb, - download_lol_dataset, -) - - -class MIRNet: - def __init__(self, experiment_name=None, wandb_api_key=None) -> None: - self.experiment_name = experiment_name - if wandb_api_key is not None: - init_wandb("mirnet", experiment_name, wandb_api_key) - self.using_wandb = True - else: - self.using_wandb = False - - def build_datasets( - self, - image_size: int = 256, - dataset_label: str = "lol", - apply_random_horizontal_flip: bool = True, - apply_random_vertical_flip: bool = True, - apply_random_rotation: bool = True, - val_split: float = 0.2, - batch_size: int = 16, - ): - if dataset_label == "lol": - (self.low_images, self.enhanced_images), ( - self.test_low_images, - self.test_enhanced_images, - ) = download_lol_dataset() - self.data_loader = LowLightDataset( - image_size=image_size, - apply_random_horizontal_flip=apply_random_horizontal_flip, - apply_random_vertical_flip=apply_random_vertical_flip, - apply_random_rotation=apply_random_rotation, - ) - (self.train_dataset, self.val_dataset) = self.data_loader.get_datasets( - low_light_images=self.low_images, - enhanced_images=self.enhanced_images, - val_split=val_split, - batch_size=batch_size, - ) - - def build_model( - self, - use_mixed_precision: bool = False, - num_recursive_residual_groups: int = 3, - num_multi_scale_residual_blocks: int = 2, - channels: int = 64, - learning_rate: float = 1e-4, - epsilon: float = 1e-3, - ): - if use_mixed_precision: - policy = mixed_precision.Policy("mixed_float16") - mixed_precision.set_global_policy(policy) - self.model = build_mirnet_model( - num_rrg=num_recursive_residual_groups, - num_mrb=num_multi_scale_residual_blocks, - channels=channels, - ) - self.model.compile( - optimizer=optimizers.Adam(learning_rate=learning_rate), - loss=CharbonnierLoss(epsilon=epsilon), - metrics=[peak_signal_noise_ratio], - ) - - def load_model( - self, filepath, custom_objects=None, compile=True, options=None - ) -> None: - self.model = models.load_model( - filepath=filepath, - custom_objects=custom_objects, - compile=compile, - options=options, - ) - - def save_weights(self, filepath, overwrite=True, save_format=None, options=None): - self.model.save_weights( - filepath, overwrite=overwrite, save_format=save_format, options=options - ) - - def load_weights(self, filepath, by_name=False, skip_mismatch=False, options=None): - self.model.load_weights( - filepath, by_name=by_name, skip_mismatch=skip_mismatch, options=options - ) - - def train(self, epochs: int): - log_dir = os.path.join( - self.experiment_name, - "logs", - datetime.now().strftime("%Y%m%d-%H%M%S"), - ) - tensorboard_callback = keras.callbacks.TensorBoard(log_dir, histogram_freq=1) - model_checkpoint_callback = keras.callbacks.ModelCheckpoint( - os.path.join(self.experiment_name, "weights.h5"), - save_best_only=True, - save_weights_only=True, - ) - reduce_lr_callback = keras.callbacks.ReduceLROnPlateau( - monitor="val_peak_signal_noise_ratio", - factor=0.5, - patience=5, - verbose=1, - min_delta=1e-7, - mode="max", - ) - callbacks = [ - tensorboard_callback, - model_checkpoint_callback, - reduce_lr_callback, - ] - if self.using_wandb: - callbacks += [WandbCallback()] - history = self.model.fit( - self.train_dataset, - validation_data=self.val_dataset, - epochs=epochs, - callbacks=callbacks, - ) - return history - - def infer( - self, - original_image, - image_resize_factor: float = 1.0, - resize_output: bool = False, - ): - width, height = original_image.size - target_width, target_height = ( - closest_number(width // image_resize_factor, 4), - closest_number(height // image_resize_factor, 4), - ) - original_image = original_image.resize( - (target_width, target_height), Image.ANTIALIAS - ) - image = keras.preprocessing.image.img_to_array(original_image) - image = image.astype("float32") / 255.0 - image = np.expand_dims(image, axis=0) - output = self.model.predict(image) - output_image = output[0] * 255.0 - output_image = output_image.clip(0, 255) - output_image = output_image.reshape( - (np.shape(output_image)[0], np.shape(output_image)[1], 3) - ) - output_image = Image.fromarray(np.uint8(output_image)) - original_image = Image.fromarray(np.uint8(original_image)) - if resize_output: - output_image = output_image.resize((width, height), Image.ANTIALIAS) - return output_image - - def infer_from_file( - self, - original_image_file: str, - image_resize_factor: float = 1.0, - resize_output: bool = False, - ): - original_image = Image.open(original_image_file) - return self.infer(original_image, image_resize_factor, resize_output) diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/runner/hooks/optimizer.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/runner/hooks/optimizer.py deleted file mode 100644 index 4ef3e9ff8f9c6926e32bdf027612267b64ed80df..0000000000000000000000000000000000000000 --- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/runner/hooks/optimizer.py +++ /dev/null @@ -1,508 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -from collections import defaultdict -from itertools import chain - -from torch.nn.utils import clip_grad - -from annotator.uniformer.mmcv.utils import TORCH_VERSION, _BatchNorm, digit_version -from ..dist_utils import allreduce_grads -from ..fp16_utils import LossScaler, wrap_fp16_model -from .hook import HOOKS, Hook - -try: - # If PyTorch version >= 1.6.0, torch.cuda.amp.GradScaler would be imported - # and used; otherwise, auto fp16 will adopt mmcv's implementation. - from torch.cuda.amp import GradScaler -except ImportError: - pass - - -@HOOKS.register_module() -class OptimizerHook(Hook): - - def __init__(self, grad_clip=None): - self.grad_clip = grad_clip - - def clip_grads(self, params): - params = list( - filter(lambda p: p.requires_grad and p.grad is not None, params)) - if len(params) > 0: - return clip_grad.clip_grad_norm_(params, **self.grad_clip) - - def after_train_iter(self, runner): - runner.optimizer.zero_grad() - runner.outputs['loss'].backward() - if self.grad_clip is not None: - grad_norm = self.clip_grads(runner.model.parameters()) - if grad_norm is not None: - # Add grad norm to the logger - runner.log_buffer.update({'grad_norm': float(grad_norm)}, - runner.outputs['num_samples']) - runner.optimizer.step() - - -@HOOKS.register_module() -class GradientCumulativeOptimizerHook(OptimizerHook): - """Optimizer Hook implements multi-iters gradient cumulating. - - Args: - cumulative_iters (int, optional): Num of gradient cumulative iters. - The optimizer will step every `cumulative_iters` iters. - Defaults to 1. - - Examples: - >>> # Use cumulative_iters to simulate a large batch size - >>> # It is helpful when the hardware cannot handle a large batch size. - >>> loader = DataLoader(data, batch_size=64) - >>> optim_hook = GradientCumulativeOptimizerHook(cumulative_iters=4) - >>> # almost equals to - >>> loader = DataLoader(data, batch_size=256) - >>> optim_hook = OptimizerHook() - """ - - def __init__(self, cumulative_iters=1, **kwargs): - super(GradientCumulativeOptimizerHook, self).__init__(**kwargs) - - assert isinstance(cumulative_iters, int) and cumulative_iters > 0, \ - f'cumulative_iters only accepts positive int, but got ' \ - f'{type(cumulative_iters)} instead.' - - self.cumulative_iters = cumulative_iters - self.divisible_iters = 0 - self.remainder_iters = 0 - self.initialized = False - - def has_batch_norm(self, module): - if isinstance(module, _BatchNorm): - return True - for m in module.children(): - if self.has_batch_norm(m): - return True - return False - - def _init(self, runner): - if runner.iter % self.cumulative_iters != 0: - runner.logger.warning( - 'Resume iter number is not divisible by cumulative_iters in ' - 'GradientCumulativeOptimizerHook, which means the gradient of ' - 'some iters is lost and the result may be influenced slightly.' - ) - - if self.has_batch_norm(runner.model) and self.cumulative_iters > 1: - runner.logger.warning( - 'GradientCumulativeOptimizerHook may slightly decrease ' - 'performance if the model has BatchNorm layers.') - - residual_iters = runner.max_iters - runner.iter - - self.divisible_iters = ( - residual_iters // self.cumulative_iters * self.cumulative_iters) - self.remainder_iters = residual_iters - self.divisible_iters - - self.initialized = True - - def after_train_iter(self, runner): - if not self.initialized: - self._init(runner) - - if runner.iter < self.divisible_iters: - loss_factor = self.cumulative_iters - else: - loss_factor = self.remainder_iters - loss = runner.outputs['loss'] - loss = loss / loss_factor - loss.backward() - - if (self.every_n_iters(runner, self.cumulative_iters) - or self.is_last_iter(runner)): - - if self.grad_clip is not None: - grad_norm = self.clip_grads(runner.model.parameters()) - if grad_norm is not None: - # Add grad norm to the logger - runner.log_buffer.update({'grad_norm': float(grad_norm)}, - runner.outputs['num_samples']) - runner.optimizer.step() - runner.optimizer.zero_grad() - - -if (TORCH_VERSION != 'parrots' - and digit_version(TORCH_VERSION) >= digit_version('1.6.0')): - - @HOOKS.register_module() - class Fp16OptimizerHook(OptimizerHook): - """FP16 optimizer hook (using PyTorch's implementation). - - If you are using PyTorch >= 1.6, torch.cuda.amp is used as the backend, - to take care of the optimization procedure. - - Args: - loss_scale (float | str | dict): Scale factor configuration. - If loss_scale is a float, static loss scaling will be used with - the specified scale. If loss_scale is a string, it must be - 'dynamic', then dynamic loss scaling will be used. - It can also be a dict containing arguments of GradScalar. - Defaults to 512. For Pytorch >= 1.6, mmcv uses official - implementation of GradScaler. If you use a dict version of - loss_scale to create GradScaler, please refer to: - https://pytorch.org/docs/stable/amp.html#torch.cuda.amp.GradScaler - for the parameters. - - Examples: - >>> loss_scale = dict( - ... init_scale=65536.0, - ... growth_factor=2.0, - ... backoff_factor=0.5, - ... growth_interval=2000 - ... ) - >>> optimizer_hook = Fp16OptimizerHook(loss_scale=loss_scale) - """ - - def __init__(self, - grad_clip=None, - coalesce=True, - bucket_size_mb=-1, - loss_scale=512., - distributed=True): - self.grad_clip = grad_clip - self.coalesce = coalesce - self.bucket_size_mb = bucket_size_mb - self.distributed = distributed - self._scale_update_param = None - if loss_scale == 'dynamic': - self.loss_scaler = GradScaler() - elif isinstance(loss_scale, float): - self._scale_update_param = loss_scale - self.loss_scaler = GradScaler(init_scale=loss_scale) - elif isinstance(loss_scale, dict): - self.loss_scaler = GradScaler(**loss_scale) - else: - raise ValueError('loss_scale must be of type float, dict, or ' - f'"dynamic", got {loss_scale}') - - def before_run(self, runner): - """Preparing steps before Mixed Precision Training.""" - # wrap model mode to fp16 - wrap_fp16_model(runner.model) - # resume from state dict - if 'fp16' in runner.meta and 'loss_scaler' in runner.meta['fp16']: - scaler_state_dict = runner.meta['fp16']['loss_scaler'] - self.loss_scaler.load_state_dict(scaler_state_dict) - - def copy_grads_to_fp32(self, fp16_net, fp32_weights): - """Copy gradients from fp16 model to fp32 weight copy.""" - for fp32_param, fp16_param in zip(fp32_weights, - fp16_net.parameters()): - if fp16_param.grad is not None: - if fp32_param.grad is None: - fp32_param.grad = fp32_param.data.new( - fp32_param.size()) - fp32_param.grad.copy_(fp16_param.grad) - - def copy_params_to_fp16(self, fp16_net, fp32_weights): - """Copy updated params from fp32 weight copy to fp16 model.""" - for fp16_param, fp32_param in zip(fp16_net.parameters(), - fp32_weights): - fp16_param.data.copy_(fp32_param.data) - - def after_train_iter(self, runner): - """Backward optimization steps for Mixed Precision Training. For - dynamic loss scaling, please refer to - https://pytorch.org/docs/stable/amp.html#torch.cuda.amp.GradScaler. - - 1. Scale the loss by a scale factor. - 2. Backward the loss to obtain the gradients. - 3. Unscale the optimizer’s gradient tensors. - 4. Call optimizer.step() and update scale factor. - 5. Save loss_scaler state_dict for resume purpose. - """ - # clear grads of last iteration - runner.model.zero_grad() - runner.optimizer.zero_grad() - - self.loss_scaler.scale(runner.outputs['loss']).backward() - self.loss_scaler.unscale_(runner.optimizer) - # grad clip - if self.grad_clip is not None: - grad_norm = self.clip_grads(runner.model.parameters()) - if grad_norm is not None: - # Add grad norm to the logger - runner.log_buffer.update({'grad_norm': float(grad_norm)}, - runner.outputs['num_samples']) - # backward and update scaler - self.loss_scaler.step(runner.optimizer) - self.loss_scaler.update(self._scale_update_param) - - # save state_dict of loss_scaler - runner.meta.setdefault( - 'fp16', {})['loss_scaler'] = self.loss_scaler.state_dict() - - @HOOKS.register_module() - class GradientCumulativeFp16OptimizerHook(GradientCumulativeOptimizerHook, - Fp16OptimizerHook): - """Fp16 optimizer Hook (using PyTorch's implementation) implements - multi-iters gradient cumulating. - - If you are using PyTorch >= 1.6, torch.cuda.amp is used as the backend, - to take care of the optimization procedure. - """ - - def __init__(self, *args, **kwargs): - super(GradientCumulativeFp16OptimizerHook, - self).__init__(*args, **kwargs) - - def after_train_iter(self, runner): - if not self.initialized: - self._init(runner) - - if runner.iter < self.divisible_iters: - loss_factor = self.cumulative_iters - else: - loss_factor = self.remainder_iters - loss = runner.outputs['loss'] - loss = loss / loss_factor - - self.loss_scaler.scale(loss).backward() - - if (self.every_n_iters(runner, self.cumulative_iters) - or self.is_last_iter(runner)): - - # copy fp16 grads in the model to fp32 params in the optimizer - self.loss_scaler.unscale_(runner.optimizer) - - if self.grad_clip is not None: - grad_norm = self.clip_grads(runner.model.parameters()) - if grad_norm is not None: - # Add grad norm to the logger - runner.log_buffer.update( - {'grad_norm': float(grad_norm)}, - runner.outputs['num_samples']) - - # backward and update scaler - self.loss_scaler.step(runner.optimizer) - self.loss_scaler.update(self._scale_update_param) - - # save state_dict of loss_scaler - runner.meta.setdefault( - 'fp16', {})['loss_scaler'] = self.loss_scaler.state_dict() - - # clear grads - runner.model.zero_grad() - runner.optimizer.zero_grad() - -else: - - @HOOKS.register_module() - class Fp16OptimizerHook(OptimizerHook): - """FP16 optimizer hook (mmcv's implementation). - - The steps of fp16 optimizer is as follows. - 1. Scale the loss value. - 2. BP in the fp16 model. - 2. Copy gradients from fp16 model to fp32 weights. - 3. Update fp32 weights. - 4. Copy updated parameters from fp32 weights to fp16 model. - - Refer to https://arxiv.org/abs/1710.03740 for more details. - - Args: - loss_scale (float | str | dict): Scale factor configuration. - If loss_scale is a float, static loss scaling will be used with - the specified scale. If loss_scale is a string, it must be - 'dynamic', then dynamic loss scaling will be used. - It can also be a dict containing arguments of LossScaler. - Defaults to 512. - """ - - def __init__(self, - grad_clip=None, - coalesce=True, - bucket_size_mb=-1, - loss_scale=512., - distributed=True): - self.grad_clip = grad_clip - self.coalesce = coalesce - self.bucket_size_mb = bucket_size_mb - self.distributed = distributed - if loss_scale == 'dynamic': - self.loss_scaler = LossScaler(mode='dynamic') - elif isinstance(loss_scale, float): - self.loss_scaler = LossScaler( - init_scale=loss_scale, mode='static') - elif isinstance(loss_scale, dict): - self.loss_scaler = LossScaler(**loss_scale) - else: - raise ValueError('loss_scale must be of type float, dict, or ' - f'"dynamic", got {loss_scale}') - - def before_run(self, runner): - """Preparing steps before Mixed Precision Training. - - 1. Make a master copy of fp32 weights for optimization. - 2. Convert the main model from fp32 to fp16. - """ - # keep a copy of fp32 weights - old_groups = runner.optimizer.param_groups - runner.optimizer.param_groups = copy.deepcopy( - runner.optimizer.param_groups) - state = defaultdict(dict) - p_map = { - old_p: p - for old_p, p in zip( - chain(*(g['params'] for g in old_groups)), - chain(*(g['params'] - for g in runner.optimizer.param_groups))) - } - for k, v in runner.optimizer.state.items(): - state[p_map[k]] = v - runner.optimizer.state = state - # convert model to fp16 - wrap_fp16_model(runner.model) - # resume from state dict - if 'fp16' in runner.meta and 'loss_scaler' in runner.meta['fp16']: - scaler_state_dict = runner.meta['fp16']['loss_scaler'] - self.loss_scaler.load_state_dict(scaler_state_dict) - - def copy_grads_to_fp32(self, fp16_net, fp32_weights): - """Copy gradients from fp16 model to fp32 weight copy.""" - for fp32_param, fp16_param in zip(fp32_weights, - fp16_net.parameters()): - if fp16_param.grad is not None: - if fp32_param.grad is None: - fp32_param.grad = fp32_param.data.new( - fp32_param.size()) - fp32_param.grad.copy_(fp16_param.grad) - - def copy_params_to_fp16(self, fp16_net, fp32_weights): - """Copy updated params from fp32 weight copy to fp16 model.""" - for fp16_param, fp32_param in zip(fp16_net.parameters(), - fp32_weights): - fp16_param.data.copy_(fp32_param.data) - - def after_train_iter(self, runner): - """Backward optimization steps for Mixed Precision Training. For - dynamic loss scaling, please refer `loss_scalar.py` - - 1. Scale the loss by a scale factor. - 2. Backward the loss to obtain the gradients (fp16). - 3. Copy gradients from the model to the fp32 weight copy. - 4. Scale the gradients back and update the fp32 weight copy. - 5. Copy back the params from fp32 weight copy to the fp16 model. - 6. Save loss_scaler state_dict for resume purpose. - """ - # clear grads of last iteration - runner.model.zero_grad() - runner.optimizer.zero_grad() - # scale the loss value - scaled_loss = runner.outputs['loss'] * self.loss_scaler.loss_scale - scaled_loss.backward() - # copy fp16 grads in the model to fp32 params in the optimizer - - fp32_weights = [] - for param_group in runner.optimizer.param_groups: - fp32_weights += param_group['params'] - self.copy_grads_to_fp32(runner.model, fp32_weights) - # allreduce grads - if self.distributed: - allreduce_grads(fp32_weights, self.coalesce, - self.bucket_size_mb) - - has_overflow = self.loss_scaler.has_overflow(fp32_weights) - # if has overflow, skip this iteration - if not has_overflow: - # scale the gradients back - for param in fp32_weights: - if param.grad is not None: - param.grad.div_(self.loss_scaler.loss_scale) - if self.grad_clip is not None: - grad_norm = self.clip_grads(fp32_weights) - if grad_norm is not None: - # Add grad norm to the logger - runner.log_buffer.update( - {'grad_norm': float(grad_norm)}, - runner.outputs['num_samples']) - # update fp32 params - runner.optimizer.step() - # copy fp32 params to the fp16 model - self.copy_params_to_fp16(runner.model, fp32_weights) - self.loss_scaler.update_scale(has_overflow) - if has_overflow: - runner.logger.warning('Check overflow, downscale loss scale ' - f'to {self.loss_scaler.cur_scale}') - - # save state_dict of loss_scaler - runner.meta.setdefault( - 'fp16', {})['loss_scaler'] = self.loss_scaler.state_dict() - - @HOOKS.register_module() - class GradientCumulativeFp16OptimizerHook(GradientCumulativeOptimizerHook, - Fp16OptimizerHook): - """Fp16 optimizer Hook (using mmcv implementation) implements multi- - iters gradient cumulating.""" - - def __init__(self, *args, **kwargs): - super(GradientCumulativeFp16OptimizerHook, - self).__init__(*args, **kwargs) - - def after_train_iter(self, runner): - if not self.initialized: - self._init(runner) - - if runner.iter < self.divisible_iters: - loss_factor = self.cumulative_iters - else: - loss_factor = self.remainder_iters - - loss = runner.outputs['loss'] - loss = loss / loss_factor - - # scale the loss value - scaled_loss = loss * self.loss_scaler.loss_scale - scaled_loss.backward() - - if (self.every_n_iters(runner, self.cumulative_iters) - or self.is_last_iter(runner)): - - # copy fp16 grads in the model to fp32 params in the optimizer - fp32_weights = [] - for param_group in runner.optimizer.param_groups: - fp32_weights += param_group['params'] - self.copy_grads_to_fp32(runner.model, fp32_weights) - # allreduce grads - if self.distributed: - allreduce_grads(fp32_weights, self.coalesce, - self.bucket_size_mb) - - has_overflow = self.loss_scaler.has_overflow(fp32_weights) - # if has overflow, skip this iteration - if not has_overflow: - # scale the gradients back - for param in fp32_weights: - if param.grad is not None: - param.grad.div_(self.loss_scaler.loss_scale) - if self.grad_clip is not None: - grad_norm = self.clip_grads(fp32_weights) - if grad_norm is not None: - # Add grad norm to the logger - runner.log_buffer.update( - {'grad_norm': float(grad_norm)}, - runner.outputs['num_samples']) - # update fp32 params - runner.optimizer.step() - # copy fp32 params to the fp16 model - self.copy_params_to_fp16(runner.model, fp32_weights) - else: - runner.logger.warning( - 'Check overflow, downscale loss scale ' - f'to {self.loss_scaler.cur_scale}') - - self.loss_scaler.update_scale(has_overflow) - - # save state_dict of loss_scaler - runner.meta.setdefault( - 'fp16', {})['loss_scaler'] = self.loss_scaler.state_dict() - - # clear grads - runner.model.zero_grad() - runner.optimizer.zero_grad() diff --git a/spaces/gligen/demo/gligen/ldm/models/diffusion/__init__.py b/spaces/gligen/demo/gligen/ldm/models/diffusion/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/godot-demo/godot-3d-voxel/index.audio.worklet.js b/spaces/godot-demo/godot-3d-voxel/index.audio.worklet.js deleted file mode 100644 index ea4d8cb22156435ac3c3d171390864140d0d54cd..0000000000000000000000000000000000000000 --- a/spaces/godot-demo/godot-3d-voxel/index.audio.worklet.js +++ /dev/null @@ -1,211 +0,0 @@ -/*************************************************************************/ -/* audio.worklet.js */ -/*************************************************************************/ -/* This file is part of: */ -/* GODOT ENGINE */ -/* https://godotengine.org */ -/*************************************************************************/ -/* Copyright (c) 2007-2022 Juan Linietsky, Ariel Manzur. */ -/* Copyright (c) 2014-2022 Godot Engine contributors (cf. AUTHORS.md). */ -/* */ -/* Permission is hereby granted, free of charge, to any person obtaining */ -/* a copy of this software and associated documentation files (the */ -/* "Software"), to deal in the Software without restriction, including */ -/* without limitation the rights to use, copy, modify, merge, publish, */ -/* distribute, sublicense, and/or sell copies of the Software, and to */ -/* permit persons to whom the Software is furnished to do so, subject to */ -/* the following conditions: */ -/* */ -/* The above copyright notice and this permission notice shall be */ -/* included in all copies or substantial portions of the Software. */ -/* */ -/* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, */ -/* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF */ -/* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.*/ -/* IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY */ -/* CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, */ -/* TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE */ -/* SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. */ -/*************************************************************************/ - -class RingBuffer { - constructor(p_buffer, p_state, p_threads) { - this.buffer = p_buffer; - this.avail = p_state; - this.threads = p_threads; - this.rpos = 0; - this.wpos = 0; - } - - data_left() { - return this.threads ? Atomics.load(this.avail, 0) : this.avail; - } - - space_left() { - return this.buffer.length - this.data_left(); - } - - read(output) { - const size = this.buffer.length; - let from = 0; - let to_write = output.length; - if (this.rpos + to_write > size) { - const high = size - this.rpos; - output.set(this.buffer.subarray(this.rpos, size)); - from = high; - to_write -= high; - this.rpos = 0; - } - if (to_write) { - output.set(this.buffer.subarray(this.rpos, this.rpos + to_write), from); - } - this.rpos += to_write; - if (this.threads) { - Atomics.add(this.avail, 0, -output.length); - Atomics.notify(this.avail, 0); - } else { - this.avail -= output.length; - } - } - - write(p_buffer) { - const to_write = p_buffer.length; - const mw = this.buffer.length - this.wpos; - if (mw >= to_write) { - this.buffer.set(p_buffer, this.wpos); - this.wpos += to_write; - if (mw === to_write) { - this.wpos = 0; - } - } else { - const high = p_buffer.subarray(0, mw); - const low = p_buffer.subarray(mw); - this.buffer.set(high, this.wpos); - this.buffer.set(low); - this.wpos = low.length; - } - if (this.threads) { - Atomics.add(this.avail, 0, to_write); - Atomics.notify(this.avail, 0); - } else { - this.avail += to_write; - } - } -} - -class GodotProcessor extends AudioWorkletProcessor { - constructor() { - super(); - this.threads = false; - this.running = true; - this.lock = null; - this.notifier = null; - this.output = null; - this.output_buffer = new Float32Array(); - this.input = null; - this.input_buffer = new Float32Array(); - this.port.onmessage = (event) => { - const cmd = event.data['cmd']; - const data = event.data['data']; - this.parse_message(cmd, data); - }; - } - - process_notify() { - if (this.notifier) { - Atomics.add(this.notifier, 0, 1); - Atomics.notify(this.notifier, 0); - } - } - - parse_message(p_cmd, p_data) { - if (p_cmd === 'start' && p_data) { - const state = p_data[0]; - let idx = 0; - this.threads = true; - this.lock = state.subarray(idx, ++idx); - this.notifier = state.subarray(idx, ++idx); - const avail_in = state.subarray(idx, ++idx); - const avail_out = state.subarray(idx, ++idx); - this.input = new RingBuffer(p_data[1], avail_in, true); - this.output = new RingBuffer(p_data[2], avail_out, true); - } else if (p_cmd === 'stop') { - this.running = false; - this.output = null; - this.input = null; - } else if (p_cmd === 'start_nothreads') { - this.output = new RingBuffer(p_data[0], p_data[0].length, false); - } else if (p_cmd === 'chunk') { - this.output.write(p_data); - } - } - - static array_has_data(arr) { - return arr.length && arr[0].length && arr[0][0].length; - } - - process(inputs, outputs, parameters) { - if (!this.running) { - return false; // Stop processing. - } - if (this.output === null) { - return true; // Not ready yet, keep processing. - } - const process_input = GodotProcessor.array_has_data(inputs); - if (process_input) { - const input = inputs[0]; - const chunk = input[0].length * input.length; - if (this.input_buffer.length !== chunk) { - this.input_buffer = new Float32Array(chunk); - } - if (!this.threads) { - GodotProcessor.write_input(this.input_buffer, input); - this.port.postMessage({ 'cmd': 'input', 'data': this.input_buffer }); - } else if (this.input.space_left() >= chunk) { - GodotProcessor.write_input(this.input_buffer, input); - this.input.write(this.input_buffer); - } else { - this.port.postMessage('Input buffer is full! Skipping input frame.'); - } - } - const process_output = GodotProcessor.array_has_data(outputs); - if (process_output) { - const output = outputs[0]; - const chunk = output[0].length * output.length; - if (this.output_buffer.length !== chunk) { - this.output_buffer = new Float32Array(chunk); - } - if (this.output.data_left() >= chunk) { - this.output.read(this.output_buffer); - GodotProcessor.write_output(output, this.output_buffer); - if (!this.threads) { - this.port.postMessage({ 'cmd': 'read', 'data': chunk }); - } - } else { - this.port.postMessage('Output buffer has not enough frames! Skipping output frame.'); - } - } - this.process_notify(); - return true; - } - - static write_output(dest, source) { - const channels = dest.length; - for (let ch = 0; ch < channels; ch++) { - for (let sample = 0; sample < dest[ch].length; sample++) { - dest[ch][sample] = source[sample * channels + ch]; - } - } - } - - static write_input(dest, source) { - const channels = source.length; - for (let ch = 0; ch < channels; ch++) { - for (let sample = 0; sample < source[ch].length; sample++) { - dest[sample * channels + ch] = source[ch][sample]; - } - } - } -} - -registerProcessor('godot-processor', GodotProcessor); diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Counter Strike Source 1.0.0.34 Patch !!TOP!! Download.md b/spaces/gotiQspiryo/whisper-ui/examples/Counter Strike Source 1.0.0.34 Patch !!TOP!! Download.md deleted file mode 100644 index c6d080c0bb424c6c8d306d9618750ec8d8109bf1..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Counter Strike Source 1.0.0.34 Patch !!TOP!! Download.md +++ /dev/null @@ -1,32 +0,0 @@ - Counter strike source 1.0.0.34 patch downloadDownload File --->>> https://urlgoal.com/2uyNiq - -The game's downloadable beta was released in September of 2002, and the first version of the source code for the game was released online in May 2003. - -Ricardo Lacovara has created a great guide to the use of CS:S v84 and CS: Source v34. - -For more info and screenshots visit the official page at the CS:S website. - -First, there are some good articles on the topic that go in-depth. They explain everything from the new 3d models to modding and server problems. - -Then there are some tutorials. They give you instructions on how to compile new maps and how to make your own new models. - -The third article is one of my own. It shows you how to make your own models. There are examples of using basic elements to create a new model. You can make any model you like. - -Finally there are a lot of websites that have lots of articles and even tutorials. They give you everything from the basics to giving you tips on how to make even better models. - -There are also some good manuals that you can download. They are listed on the csrc.sourceforge.net website. - -First, you need the source files. Download the source zip file, unzip it and copy the files to your steamapps\common\Counter-Strike Source folder. - -There is also a video that shows how to install the new SDK. You need to download the SDK from Sourceforge and install it. - -There are a lot of tutorials on the Internet about how to make your own models. Here are a few links to some of the tutorials. - -6 Comments - -I've tried to read through some of the tutorials, but unfortunately I'm running into problems. I tried to get the source files from SteamApps\common\Counter-Strike Source and install it on my computer, but it keeps trying to send me to "old.counter-strike.source" instead of "Counter-Strike Source". I've tried both ways. When I go to steamapps\common\Counter-Strike Source it tells me that "the update is incompatible with this game. Make sure that you have the latest version of the game installed first." - -Also, I downloaded the SDK and ran the install.exe that came with it, but I'm getting the same problem, which is that the file "sourceserver.net" doesn't exist. I think it's only a problem when you 4fefd39f24 - - - diff --git a/spaces/gotiQspiryo/whisper-ui/examples/El Rostro Y La Personalidad Julian Gabarre.pdf.md b/spaces/gotiQspiryo/whisper-ui/examples/El Rostro Y La Personalidad Julian Gabarre.pdf.md deleted file mode 100644 index cc4bc60bdea838bb05f861c6076ac50dee0386e0..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/El Rostro Y La Personalidad Julian Gabarre.pdf.md +++ /dev/null @@ -1,22 +0,0 @@ - - ¿Qué es la psicologÃa facial y cómo puede ayudarte a conocer mejor a las personas?- -La psicologÃa facial es una disciplina que estudia las relaciones entre la forma y la funcionalidad del rostro, el cerebro y la personalidad. Su principal exponente es el doctor en psicologÃa Julián Gabarre, autor del libro El Rostro Y La Personalidad Julian Gabarre.pdf, que puedes descargar gratis en internet[^1^]. - -Según Gabarre, el rostro es el reflejo de nuestra identidad, de nuestras capacidades intelectuales, emocionales y sociales, y de nuestra forma de actuar y comunicarnos. A través del análisis morfopsicológico del rostro, podemos obtener información valiosa sobre el temperamento, el carácter, las aptitudes, las motivaciones y los valores de una persona[^2^]. -El Rostro Y La Personalidad Julian Gabarre.pdfDownload https://urlgoal.com/2uyMMI - - La psicologÃa facial tiene múltiples aplicaciones prácticas en el ámbito personal y profesional. Por ejemplo, puede ayudarnos a mejorar nuestra autoestima y autoconocimiento, a elegir mejor a nuestras parejas, amigos o socios, a detectar mentiras o engaños, a potenciar nuestras habilidades comunicativas y persuasivas, o a orientarnos vocacionalmente[^3^]. - -Si quieres aprender más sobre esta fascinante ciencia, te invitamos a leer el libro El Rostro Y La Personalidad Julian Gabarre.pdf, donde encontrarás los fundamentos teóricos y prácticos de la psicologÃa facial, asà como numerosos ejemplos e ilustraciones de rostros famosos y anónimos. También puedes visitar la página web de Julián Gabarre, donde podrás acceder a cursos online, conferencias, artÃculos y otros recursos sobre el tema[^4^]. - -No esperes más y descubre lo que tu rostro dice de ti y lo que los demás rostros te dicen de ellos. La psicologÃa facial te abrirá las puertas a un mundo nuevo de conocimiento y comprensión. - -La psicologÃa facial se basa en el principio de que el rostro es el resultado de la interacción entre la genética y el medio ambiente. La genética determina la estructura ósea y muscular del rostro, que a su vez influye en el desarrollo y la actividad del cerebro. El medio ambiente, por su parte, modifica el rostro a través de las expresiones faciales, que son el reflejo de nuestras emociones y estados de ánimo. - -AsÃ, el rostro se convierte en un mapa que nos permite leer las caracterÃsticas psicológicas de una persona. Para ello, se utilizan diferentes criterios de observación, como la forma y el tamaño de la cabeza, la frente, las cejas, los ojos, la nariz, la boca, el mentón, las orejas y el cabello. Cada uno de estos elementos nos da información sobre aspectos como la inteligencia, la creatividad, la memoria, la voluntad, la sensibilidad, la sociabilidad, la confianza, la honestidad o la lealtad. - - -Por supuesto, la psicologÃa facial no pretende ser una ciencia exacta ni infalible. Se trata de una herramienta complementaria que nos ayuda a ampliar nuestra percepción y nuestro juicio sobre las personas. Además, hay que tener en cuenta que el rostro no es estático sino dinámico, y que puede cambiar con el tiempo y las circunstancias. Por eso, es importante observar el rostro en su conjunto y en movimiento, y no quedarnos solo con los detalles o las primeras impresiones. d5da3c52bf- - \ No newline at end of file diff --git a/spaces/gradio/HuBERT/fairseq/optim/bmuf.py b/spaces/gradio/HuBERT/fairseq/optim/bmuf.py deleted file mode 100644 index d6d0e04e86eb894efe59e13a78843d01ca9e651d..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/optim/bmuf.py +++ /dev/null @@ -1,200 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from dataclasses import dataclass, field - -import torch -import torch.distributed as dist -from fairseq.dataclass.configs import FairseqBMUFConfig -from fairseq.dataclass.utils import gen_parser_from_dataclass -from fairseq.optim.fairseq_optimizer import FairseqOptimizer - - -class FairseqBMUF(FairseqOptimizer): - """ - Implements incremental block distributed data parallelism similar to - https://ieeexplore.ieee.org/document/7472805 - - Paper title: Scalable training of deep learning machines by incremental - block training with intra-block parallel optimization and blockwise - model-update filtering - """ - - def __init__(self, cfg: FairseqBMUFConfig, optimizer): - super().__init__(cfg) - self._optimizer = optimizer - self._num_updates = 0 - self.sync_iter = cfg.global_sync_iter - self.block_momentum = cfg.block_momentum - self.block_lr = cfg.block_lr - self._reset_local_data() - self.warmup_iteration = cfg.warmup_iterations - self.use_nbm = cfg.use_nbm - self.initial_state = self._optimizer.state_dict() - self.average_sync = self.cfg.average_sync - self.world_size = self.cfg.distributed_world_size - - @staticmethod - def add_args(parser): - """Add optimizer-specific arguments to the parser.""" - gen_parser_from_dataclass(parser, FairseqBMUFConfig()) - - @property - def optimizer(self): - return self._optimizer.optimizer - - @property - def optimizer_config(self): - return self._optimizer.optimizer_config - - def get_lr(self): - return self._optimizer.get_lr() - - def set_lr(self, lr): - self._optimizer.set_lr(lr) - - def state_dict(self): - return self._optimizer.state_dict() - - def load_state_dict(self, state_dict, optimizer_overrides=None): - self._optimizer.load_state_dict(state_dict, optimizer_overrides) - self.initial_state = self._optimizer.state_dict() - - def multiply_grads(self, c): - """Multiplies grads by a constant *c*.""" - self._optimizer.multiply_grads(c) - - def clip_grad_norm(self, max_norm, aggregate_norm_fn=None): - """Clips gradient norm.""" - return self._optimizer.clip_grad_norm(max_norm, aggregate_norm_fn) - - def average_params(self): - self._optimizer.average_params() - - def _block_sync(self): - if self.world_size <= 1: - return - # Update the global model using local models from all GPUs - # (Step-1) Calculate grad between previously synced model and - # currrent local model - if self.block_momentum != 0: - self._calc_grad() - - # (Step-2) Average gradient from all GPUs - self._avg_grad_from_all_gpus() - - # (Step-3) Calculate global momentum and update the global model - if self.block_momentum != 0: - self._update_global_model() - - # (Step-4) Average local optimizer params - if self.average_sync: - self.average_params() - - def _is_warmup_end(self): - # Check whether train iterations is equal to warmup iter - if self.get_num_updates() == self.warmup_iteration: - return True - return False - - def _is_bmuf_iter(self): - # Check whether train iterations is equal to bmuf sync iter - if (self.get_num_updates() > self.warmup_iteration) and ( - self.get_num_updates() % self.sync_iter == 0 - ): - return True - return False - - def _warmup_sync(self, root_rank=0): - if self.world_size <= 1: - return - # Broadcast the local model to all gpus - for param in self.params: - dist.broadcast(param.data, src=root_rank) - - # Update local optimizer state - if self.average_sync: - self._optimizer.average_params() - else: - self._optimizer.load_state_dict(self.initial_state) - - self._reset_local_data() - - def step(self, closure=None): - """Performs a single optimization step.""" - self._optimizer.step(closure) - self.set_num_updates(self.get_num_updates() + 1) - if self._is_warmup_end(): - self._warmup_sync() - elif self._is_bmuf_iter(): - self._block_sync() - - def zero_grad(self): - """Clears the gradients of all optimized parameters.""" - self._optimizer.zero_grad() - - def get_num_updates(self): - """Get the number of parameters updates.""" - return self._num_updates - - def set_num_updates(self, num_updates): - """Set the number of parameters updates.""" - self._num_updates = num_updates - - @torch.no_grad() - def _reset_local_data(self): - # (Step-0) Initialize global momentum parameters and store global copy on each gpu - self.global_params = [torch.zeros_like(p.data) for p in self.params] - self.smoothed_grads = [p.data.new_zeros(p.data.size()) for p in self.params] - self.grads = [p.data.new_zeros(p.data.size()) for p in self.params] - - # saving the global model locally for calculating gradient during bmuf sync - for param, global_param in zip(self.params, self.global_params): - global_param.copy_(param.data) - - @torch.no_grad() - def _calc_grad(self): - # global_params is basically the global copy from the previously finished - # synchronisation. param.data is local parameter after block_sync_freq - # for the local gpu. so grad is difference between previously synced - # model and currrent local model. - for index, (param, global_param) in enumerate( - zip(self.params, self.global_params) - ): - self.grads[index] = global_param - param.data - - def _avg_grad_from_all_gpus(self): - for index, param in enumerate(self.params): - sync_para = param.data if self.block_momentum == 0 else self.grads[index] - sync_para /= float(dist.get_world_size()) - dist.all_reduce(sync_para, op=dist.ReduceOp.SUM) - - @torch.no_grad() - def _update_global_model(self): - for index, (param, global_param, smoothed_grad, grad) in enumerate( - zip( - self.params, - self.global_params, - self.smoothed_grads, - # all gpus would share the same value of smoothed_grad, since it is - # always computed on synchronized gradients. - self.grads, - ) - ): - # global_param is basically last syncrhornized parameter. though - # smoothed_grad is local, all processes will have same value of - # smoothed_grad and hence param is globally synchronized copy. - # smoothed_grad(t) = BM * smoothed_grad(t-1) + BM_lr * grad(t) - smoothed_grad = self.block_momentum * smoothed_grad + self.block_lr * grad - param.data.copy_(global_param - smoothed_grad) - - # A Nesterov momentum here is to do a partial weight update before - # calculating the gradient - if self.use_nbm: - param.data.copy_(param.data - self.block_momentum * smoothed_grad) - - # backup for the next synchronization. - self.smoothed_grads[index] = smoothed_grad - global_param.copy_(param.data) diff --git a/spaces/gsaivinay/Llama-2-13B-GGML-UI/components/Folder/index.ts b/spaces/gsaivinay/Llama-2-13B-GGML-UI/components/Folder/index.ts deleted file mode 100644 index 93815b95914fc635a5115e34f481a495652559ac..0000000000000000000000000000000000000000 --- a/spaces/gsaivinay/Llama-2-13B-GGML-UI/components/Folder/index.ts +++ /dev/null @@ -1 +0,0 @@ -export { default } from './Folder'; diff --git a/spaces/gyugnsu/DragGan-Inversion/PTI/utils/align_data.py b/spaces/gyugnsu/DragGan-Inversion/PTI/utils/align_data.py deleted file mode 100644 index 12b59bf5ce294972252876a714631e05cde5630c..0000000000000000000000000000000000000000 --- a/spaces/gyugnsu/DragGan-Inversion/PTI/utils/align_data.py +++ /dev/null @@ -1,37 +0,0 @@ -import sys -sys.path.append('.') -from configs import paths_config -import dlib -import glob -import os -from tqdm import tqdm -from utils.alignment import align_face - - -def pre_process_images(raw_images_path): - current_directory = os.getcwd() - - IMAGE_SIZE = 1024 - predictor = dlib.shape_predictor(paths_config.dlib) - os.chdir(raw_images_path) - images_names = glob.glob(f'*') - - aligned_images = [] - for image_name in tqdm(images_names): - try: - aligned_image = align_face(filepath=f'{raw_images_path}/{image_name}', - predictor=predictor, output_size=IMAGE_SIZE) - aligned_images.append(aligned_image) - except Exception as e: - print(e) - - os.makedirs(paths_config.input_data_path, exist_ok=True) - for image, name in zip(aligned_images, images_names): - real_name = name.split('.')[0] - image.save(f'{paths_config.input_data_path}/{real_name}.jpeg') - - os.chdir(current_directory) - - -if __name__ == "__main__": - pre_process_images('/home/zhizizhang/Documents2/projects/PTI/docs') diff --git a/spaces/haotiz/glip-zeroshot-demo/docs/intro.md b/spaces/haotiz/glip-zeroshot-demo/docs/intro.md deleted file mode 100644 index 668f2df48e68af7152f4375e91852c15d2d775c7..0000000000000000000000000000000000000000 --- a/spaces/haotiz/glip-zeroshot-demo/docs/intro.md +++ /dev/null @@ -1,9 +0,0 @@ -["**GLIP: Grounded Language-Image Pre-training. CVPR 2022, Best Paper Finalist**"](https://arxiv.org/abs/2112.03857) - -This is the HuggingFace Gradio Demo for GLIP. The model requires an image, and a text to be the inputs. The text input can either be a natural sentence description (grounding), or a simple concatenation of some random categories (object detection). - -The paper presents a grounded language-image pre-training (GLIP) model for learning object-level, language-aware, and semantic-rich visual representations. GLIP unifies object detection and phrase grounding for pre-training. The unification brings two benefits: 1) it allows GLIP to learn from both detection and grounding data to improve both tasks and bootstrap a good grounding model; 2) GLIP can leverage massive image-text pairs by generating grounding boxes in a self-training fashion, making the learned representation semantic-rich. - -Code: https://github.com/microsoft/GLIP - -**News**: We are also holding an ODinW challenge at [the CV in the Wild Workshop @ ECCV 2022](https://computer-vision-in-the-wild.github.io/eccv-2022/). We hope our open-source code encourage the community to participate in this challenge! diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/evaluation/testing.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/evaluation/testing.py deleted file mode 100644 index 95addebc185111c572cb19aa98f7e055b21fc74e..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/evaluation/testing.py +++ /dev/null @@ -1,78 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -import logging -import numpy as np -import pprint -import sys -from collections import OrderedDict -from collections.abc import Mapping - - -def print_csv_format(results): - """ - Print main metrics in a format similar to Detectron, - so that they are easy to copypaste into a spreadsheet. - - Args: - results (OrderedDict[dict]): task_name -> {metric -> score} - """ - assert isinstance(results, OrderedDict), results # unordered results cannot be properly printed - logger = logging.getLogger(__name__) - for task, res in results.items(): - # Don't print "AP-category" metrics since they are usually not tracked. - important_res = [(k, v) for k, v in res.items() if "-" not in k] - logger.info("copypaste: Task: {}".format(task)) - logger.info("copypaste: " + ",".join([k[0] for k in important_res])) - logger.info("copypaste: " + ",".join(["{0:.4f}".format(k[1]) for k in important_res])) - - -def verify_results(cfg, results): - """ - Args: - results (OrderedDict[dict]): task_name -> {metric -> score} - - Returns: - bool: whether the verification succeeds or not - """ - expected_results = cfg.TEST.EXPECTED_RESULTS - if not len(expected_results): - return True - - ok = True - for task, metric, expected, tolerance in expected_results: - actual = results[task][metric] - if not np.isfinite(actual): - ok = False - diff = abs(actual - expected) - if diff > tolerance: - ok = False - - logger = logging.getLogger(__name__) - if not ok: - logger.error("Result verification failed!") - logger.error("Expected Results: " + str(expected_results)) - logger.error("Actual Results: " + pprint.pformat(results)) - - sys.exit(1) - else: - logger.info("Results verification passed.") - return ok - - -def flatten_results_dict(results): - """ - Expand a hierarchical dict of scalars into a flat dict of scalars. - If results[k1][k2][k3] = v, the returned dict will have the entry - {"k1/k2/k3": v}. - - Args: - results (dict): - """ - r = {} - for k, v in results.items(): - if isinstance(v, Mapping): - v = flatten_results_dict(v) - for kk, vv in v.items(): - r[k + "/" + kk] = vv - else: - r[k] = v - return r diff --git a/spaces/hilmyblaze/WebUI-Counterfeit-V2.5/app.py b/spaces/hilmyblaze/WebUI-Counterfeit-V2.5/app.py deleted file mode 100644 index 2b449c1c4392e12b0b2778120ad8c84460702886..0000000000000000000000000000000000000000 --- a/spaces/hilmyblaze/WebUI-Counterfeit-V2.5/app.py +++ /dev/null @@ -1,78 +0,0 @@ -import os -from subprocess import getoutput - -gpu_info = getoutput('nvidia-smi') -if("A10G" in gpu_info): - os.system(f"pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15.dev0+4c06c79.d20221205-cp38-cp38-linux_x86_64.whl") -elif("T4" in gpu_info): - os.system(f"pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15.dev0+1515f77.d20221130-cp38-cp38-linux_x86_64.whl") - -os.system(f"git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui /home/user/app/stable-diffusion-webui") -os.chdir("/home/user/app/stable-diffusion-webui") - -os.system(f"wget -q https://github.com/camenduru/webui/raw/main/env_patch.py -O /home/user/app/env_patch.py") -os.system(f"sed -i -e '/import image_from_url_text/r /home/user/app/env_patch.py' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/(modelmerger_interface, \"Checkpoint Merger\", \"modelmerger\"),/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/(train_interface, \"Train\", \"ti\"),/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/extensions_interface, \"Extensions\", \"extensions\"/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/settings_interface, \"Settings\", \"settings\"/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f'''sed -i -e "s/document.getElementsByTagName('gradio-app')\[0\].shadowRoot/!!document.getElementsByTagName('gradio-app')[0].shadowRoot ? document.getElementsByTagName('gradio-app')[0].shadowRoot : document/g" /home/user/app/stable-diffusion-webui/script.js''') -os.system(f"sed -i -e 's/ show_progress=False,/ show_progress=True,/g' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e 's/shared.demo.launch/shared.demo.queue().launch/g' /home/user/app/stable-diffusion-webui/webui.py") -os.system(f"sed -i -e 's/inputs=\[component\],/&\\n queue=False,/g' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e 's/outputs=\[token_counter\]/outputs=[token_counter], queue=False/g' /home/user/app/stable-diffusion-webui/modules/ui.py") - -# ----------------------------Please duplicate this space and delete this block if you don't want to see the extra header---------------------------- -#os.system(f"wget -q https://github.com/camenduru/webui/raw/main/header_patch.py -O /home/user/app/header_patch.py") -#os.system(f"sed -i -e '/demo:/r /home/user/app/header_patch.py' /home/user/app/stable-diffusion-webui/modules/ui.py") -# --------------------------------------------------------------------------------------------------------------------------------------------------- - -os.system(f"wget -q https://huggingface.co/Alsebay/PeachMixs/resolve/main/PeachTachyonMixs/PeachTachyon2.safetensors -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/PeachTachyon2.safetensors") -os.system(f"wget -q https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.ckpt -O /home/user/app/stable-diffusion-webui/models/VAE/vae-ft-mse-840000-ema-pruned.ckpt") -os.system(f"git clone https://github.com/yfszzx/stable-diffusion-webui-images-browser /home/user/app/stable-diffusion-webui/extensions/stable-diffusion-webui-images-browser") -#Embeddings TEXTUAL INVERSION -os.system(f"wget -q https://huggingface.co/ai-moroz/lazy-ti/resolve/main/dump/diona-gi.pt -O /home/user/app/stable-diffusion-webui/embeddings/diona-gi.pt") -os.system(f"wget -q https://huggingface.co/datasets/gsdf/EasyNegative/resolve/main/EasyNegative.safetensors -O /home/user/app/stable-diffusion-webui/embeddings/EasyNegative.safetensors") - -if "IS_SHARED_UI" in os.environ: - os.system(f"wget -q https://github.com/camenduru/webui/raw/main/shared-config.json -O /home/user/app/shared-config.json") - os.system(f"wget -q https://github.com/camenduru/webui/raw/main/shared-ui-config.json -O /home/user/app/shared-ui-config.json") - os.system(f"wget -q {os.getenv('EMBED_LINK')} -O /home/user/app/stable-diffusion-webui/embeddings/{os.getenv('EMBED_NAME')}") - os.system(f"python launch.py --use-cpu all --disable-console-progressbars --enable-console-prompts --ui-config-file /home/user/app/shared-ui-config.json --ui-settings-file /home/user/app/shared-config.json --cors-allow-origins huggingface.co,hf.space --no-progressbar-hiding --skip-torch-cuda-test") -else: - # Please duplicate this space and delete # character in front of the custom script you want to use or add here more custom scripts with same structure os.system(f"wget -q https://CUSTOM_SCRIPT_URL -O /home/user/app/stable-diffusion-webui/scripts/CUSTOM_SCRIPT_NAME.py") - #os.system(f"wget -q https://gist.github.com/camenduru/9ec5f8141db9902e375967e93250860f/raw/d0bcf01786f20107c329c03f8968584ee67be12a/run_n_times.py -O /home/user/app/stable-diffusion-webui/scripts/run_n_times.py") - - # Please duplicate this space and delete # character in front of the extension you want to use or add here more extensions with same structure os.system(f"git clone https://EXTENSION_GIT_URL /home/user/app/stable-diffusion-webui/extensions/EXTENSION_NAME") - #os.system(f"git clone https://github.com/camenduru/stable-diffusion-webui-artists-to-study /home/user/app/stable-diffusion-webui/extensions/stable-diffusion-webui-artists-to-study") - #os.system(f"git clone https://github.com/yfszzx/stable-diffusion-webui-images-browser /home/user/app/stable-diffusion-webui/extensions/stable-diffusion-webui-images-browser") - #os.system(f"git clone https://github.com/deforum-art/deforum-for-automatic1111-webui /home/user/app/stable-diffusion-webui/extensions/deforum-for-automatic1111-webui") - - # Please duplicate this space and delete # character in front of the model you want to use or add here more ckpts with same structure os.system(f"wget -q https://CKPT_URL -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/CKPT_NAME.ckpt") - os.system(f"wget -q https://huggingface.co/swl-models/chilloutmix-ni/resolve/main/chilloutmix-Ni.safetensors -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/cmni.safetensors") - #os.system(f"wget -q https://huggingface.co/nitrosocke/Arcane-Diffusion/resolve/main/arcane-diffusion-v3.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/arcane-diffusion-v3.ckpt") - #os.system(f"wget -q https://huggingface.co/DGSpitzer/Cyberpunk-Anime-Diffusion/resolve/main/Cyberpunk-Anime-Diffusion.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Cyberpunk-Anime-Diffusion.ckpt") - #os.system(f"wget -q https://huggingface.co/prompthero/midjourney-v4-diffusion/resolve/main/mdjrny-v4.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/mdjrny-v4.ckpt") - #os.system(f"wget -q https://huggingface.co/nitrosocke/mo-di-diffusion/resolve/main/moDi-v1-pruned.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/moDi-v1-pruned.ckpt") - #os.system(f"wget -q https://huggingface.co/Fictiverse/Stable_Diffusion_PaperCut_Model/resolve/main/PaperCut_v1.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/PaperCut_v1.ckpt") - #os.system(f"wget -q https://huggingface.co/lilpotat/sa/resolve/main/samdoesarts_style.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/samdoesarts_style.ckpt") - #os.system(f"wget -q https://huggingface.co/hakurei/waifu-diffusion-v1-4/resolve/main/wd-1-4-anime_e2.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Waifudiffusion-1-4-anime_e2.ckpt") - os.system(f"wget -q https://huggingface.co/gsdf/Counterfeit-V2.5/resolve/main/Counterfeit-V2.5_pruned.safetensors -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Counterfeit-V2.5_pruned.safetensors") - os.system(f"wget -q https://huggingface.co/andite/anything-v4.0/resolve/main/anything-v4.5-pruned.safetensors -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Anything-v4.5-pruned.safetensors") - #os.system(f"wget -q https://huggingface.co/iZELX1/Grapefruit/resolve/main/grapefruitv4.safetensors -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Grapefruitv4.safetensors") - #os.system(f"wget -q https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/Models/AbyssOrangeMix3/AOM3.safetensors -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/AbyssOrangeMix3.safetensors") - #os.system(f"wget -q https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/Models/AbyssOrangeMix3/AOM3A1.safetensors -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/AbyssOrangeMix3A1.safetensors") - #os.system(f"wget -q https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/Models/AbyssOrangeMix3/AOM3A2.safetensors -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/AbyssOrangeMix3A2.safetensors") - #os.system(f"wget -q https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/Models/AbyssOrangeMix3/AOM3A3.safetensors -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/AbyssOrangeMix3A3.safetensors") - #os.system(f"wget -q {os.getenv('EMBED_LINK')} -O /home/user/app/stable-diffusion-webui/embeddings/{os.getenv('EMBED_NAME')}") - #os.system(f"wget -q https://huggingface.co/hakurei/waifu-diffusion-v1-3/resolve/main/wd-v1-3-float32.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/wd-v1-3-float32.ckpt") - #os.system(f"wget -q https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/sd-v1-4.ckpt") - #os.system(f"wget -q https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.ckpt") - #os.system(f"wget -q https://huggingface.co/runwayml/stable-diffusion-inpainting/resolve/main/sd-v1-5-inpainting.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/sd-v1-5-inpainting.ckpt") - #os.system(f"wget -q https://huggingface.co/stabilityai/stable-diffusion-2/resolve/main/768-v-ema.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/768-v-ema.ckpt") - #os.system(f"wget -q https://raw.githubusercontent.com/Stability-AI/stablediffusion/main/configs/stable-diffusion/v2-inference-v.yaml -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/768-v-ema.yaml") - os.system(f"EXPOSE 7860") - #os.system(f"git clone https://github.com/yfszzx/stable-diffusion-webui-images-browser /home/user/app/stable-diffusion-webui/extensions/stable-diffusion-webui-images-browser") - #os.system(f"wget -q https://github.com/camenduru/webui/raw/main/shared-config.json -O /home/user/app/shared-config.json") - # os.system(f"wget -q https://github.com/camenduru/webui/raw/main/shared-ui-config.json -O /home/user/app/shared-ui-config.json") - os.system(f"python launch.py --precision full --no-half --use-cpu all --listen --ui-config-file /home/user/app/ui-config.json --ui-settings-file /home/user/app/config.json --cors-allow-origins huggingface.co,hf.space --no-progressbar-hiding --skip-torch-cuda-test --api") diff --git a/spaces/hololee/dreambooth-training/train_dreambooth.py b/spaces/hololee/dreambooth-training/train_dreambooth.py deleted file mode 100644 index f4ff135e549f0d6c72f733092f3df817cb178e01..0000000000000000000000000000000000000000 --- a/spaces/hololee/dreambooth-training/train_dreambooth.py +++ /dev/null @@ -1,889 +0,0 @@ -import argparse -import itertools -import math -import os -from pathlib import Path -from typing import Optional -import subprocess -import sys -import gc -import random - -import torch -import torch.nn.functional as F -import torch.utils.checkpoint -from torch.utils.data import Dataset - -from accelerate import Accelerator -from accelerate.logging import get_logger -from accelerate.utils import set_seed -from diffusers import AutoencoderKL, DDPMScheduler, StableDiffusionPipeline, UNet2DConditionModel -from diffusers.utils.import_utils import is_xformers_available -from diffusers.optimization import get_scheduler -from huggingface_hub import HfFolder, Repository, whoami -from PIL import Image -from torchvision import transforms -from tqdm.auto import tqdm -from transformers import CLIPTextModel, CLIPTokenizer - - -logger = get_logger(__name__) - - -def parse_args(): - parser = argparse.ArgumentParser(description="Simple example of a training script.") - parser.add_argument( - "--pretrained_model_name_or_path", - type=str, - default=None, - #required=True, - help="Path to pretrained model or model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--tokenizer_name", - type=str, - default=None, - help="Pretrained tokenizer name or path if not the same as model_name", - ) - parser.add_argument( - "--instance_data_dir", - type=str, - default=None, - #required=True, - help="A folder containing the training data of instance images.", - ) - parser.add_argument( - "--class_data_dir", - type=str, - default=None, - #required=False, - help="A folder containing the training data of class images.", - ) - parser.add_argument( - "--instance_prompt", - type=str, - default=None, - help="The prompt with identifier specifying the instance", - ) - parser.add_argument( - "--class_prompt", - type=str, - default="", - help="The prompt to specify images in the same class as provided instance images.", - ) - parser.add_argument( - "--with_prior_preservation", - default=False, - action="store_true", - help="Flag to add prior preservation loss.", - ) - parser.add_argument("--prior_loss_weight", type=float, default=1.0, help="The weight of prior preservation loss.") - parser.add_argument( - "--num_class_images", - type=int, - default=100, - help=( - "Minimal class images for prior preservation loss. If not have enough images, additional images will be" - " sampled with class_prompt." - ), - ) - parser.add_argument( - "--output_dir", - type=str, - default="", - help="The output directory where the model predictions and checkpoints will be written.", - ) - parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.") - parser.add_argument( - "--resolution", - type=int, - default=512, - help=( - "The resolution for input images, all the images in the train/validation dataset will be resized to this" - " resolution" - ), - ) - parser.add_argument( - "--center_crop", action="store_true", help="Whether to center crop images before resizing to resolution" - ) - parser.add_argument("--train_text_encoder", action="store_true", help="Whether to train the text encoder") - parser.add_argument( - "--train_batch_size", type=int, default=4, help="Batch size (per device) for the training dataloader." - ) - parser.add_argument( - "--sample_batch_size", type=int, default=4, help="Batch size (per device) for sampling images." - ) - parser.add_argument("--num_train_epochs", type=int, default=1) - parser.add_argument( - "--max_train_steps", - type=int, - default=None, - help="Total number of training steps to perform. If provided, overrides num_train_epochs.", - ) - parser.add_argument( - "--gradient_accumulation_steps", - type=int, - default=1, - help="Number of updates steps to accumulate before performing a backward/update pass.", - ) - parser.add_argument( - "--gradient_checkpointing", - action="store_true", - help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.", - ) - parser.add_argument( - "--learning_rate", - type=float, - default=5e-6, - help="Initial learning rate (after the potential warmup period) to use.", - ) - parser.add_argument( - "--scale_lr", - action="store_true", - default=False, - help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.", - ) - parser.add_argument( - "--lr_scheduler", - type=str, - default="constant", - help=( - 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",' - ' "constant", "constant_with_warmup"]' - ), - ) - parser.add_argument( - "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler." - ) - parser.add_argument( - "--use_8bit_adam", action="store_true", help="Whether or not to use 8-bit Adam from bitsandbytes." - ) - parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.") - parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.") - parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.") - parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer") - parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.") - parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.") - parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.") - parser.add_argument( - "--hub_model_id", - type=str, - default=None, - help="The name of the repository to keep in sync with the local `output_dir`.", - ) - parser.add_argument( - "--logging_dir", - type=str, - default="logs", - help=( - "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to" - " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***." - ), - ) - parser.add_argument( - "--mixed_precision", - type=str, - default="no", - choices=["no", "fp16", "bf16"], - help=( - "Whether to use mixed precision. Choose" - "between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10." - "and an Nvidia Ampere GPU." - ), - ) - - parser.add_argument( - "--save_n_steps", - type=int, - default=1, - help=("Save the model every n global_steps"), - ) - - - parser.add_argument( - "--save_starting_step", - type=int, - default=1, - help=("The step from which it starts saving intermediary checkpoints"), - ) - - parser.add_argument( - "--stop_text_encoder_training", - type=int, - default=1000000, - help=("The step at which the text_encoder is no longer trained"), - ) - - - parser.add_argument( - "--image_captions_filename", - action="store_true", - help="Get captions from filename", - ) - - - parser.add_argument( - "--dump_only_text_encoder", - action="store_true", - default=False, - help="Dump only text encoder", - ) - - parser.add_argument( - "--train_only_unet", - action="store_true", - default=False, - help="Train only the unet", - ) - - parser.add_argument( - "--cache_latents", - action="store_true", - default=False, - help="Train only the unet", - ) - - parser.add_argument( - "--Session_dir", - type=str, - default="", - help="Current session directory", - ) - - - - - parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank") - - args = parser.parse_args() - env_local_rank = int(os.environ.get("LOCAL_RANK", -1)) - if env_local_rank != -1 and env_local_rank != args.local_rank: - args.local_rank = env_local_rank - - #if args.instance_data_dir is None: - # raise ValueError("You must specify a train data directory.") - - #if args.with_prior_preservation: - # if args.class_data_dir is None: - # raise ValueError("You must specify a data directory for class images.") - # if args.class_prompt is None: - # raise ValueError("You must specify prompt for class images.") - - return args - - -class DreamBoothDataset(Dataset): - """ - A dataset to prepare the instance and class images with the prompts for fine-tuning the model. - It pre-processes the images and the tokenizes prompts. - """ - - def __init__( - self, - instance_data_root, - instance_prompt, - tokenizer, - args, - class_data_root=None, - class_prompt=None, - size=512, - center_crop=False, - ): - self.size = size - self.center_crop = center_crop - self.tokenizer = tokenizer - self.image_captions_filename = None - - self.instance_data_root = Path(instance_data_root) - if not self.instance_data_root.exists(): - raise ValueError("Instance images root doesn't exists.") - - self.instance_images_path = list(Path(instance_data_root).iterdir()) - self.num_instance_images = len(self.instance_images_path) - self.instance_prompt = instance_prompt - self._length = self.num_instance_images - - if args.image_captions_filename: - self.image_captions_filename = True - - if class_data_root is not None: - self.class_data_root = Path(class_data_root) - self.class_data_root.mkdir(parents=True, exist_ok=True) - self.class_images_path = list(self.class_data_root.iterdir()) - random.shuffle(self.class_images_path) - self.num_class_images = len(self.class_images_path) - self._length = max(self.num_class_images, self.num_instance_images) - self.class_prompt = class_prompt - else: - self.class_data_root = None - - self.image_transforms = transforms.Compose( - [ - transforms.Resize(size, interpolation=transforms.InterpolationMode.BILINEAR), - transforms.CenterCrop(size) if center_crop else transforms.RandomCrop(size), - transforms.ToTensor(), - transforms.Normalize([0.5], [0.5]), - ] - ) - - def __len__(self): - return self._length - - def __getitem__(self, index): - example = {} - path = self.instance_images_path[index % self.num_instance_images] - instance_image = Image.open(path) - if not instance_image.mode == "RGB": - instance_image = instance_image.convert("RGB") - - instance_prompt = self.instance_prompt - - if self.image_captions_filename: - filename = Path(path).stem - pt=''.join([i for i in filename if not i.isdigit()]) - pt=pt.replace("_"," ") - pt=pt.replace("(","") - pt=pt.replace(")","") - pt=pt.replace("-","") - instance_prompt = pt - sys.stdout.write(" [0;32m" +instance_prompt+" [0m") - sys.stdout.flush() - - - example["instance_images"] = self.image_transforms(instance_image) - example["instance_prompt_ids"] = self.tokenizer( - instance_prompt, - padding="do_not_pad", - truncation=True, - max_length=self.tokenizer.model_max_length, - ).input_ids - - if self.class_data_root: - class_image = Image.open(self.class_images_path[index % self.num_class_images]) - if not class_image.mode == "RGB": - class_image = class_image.convert("RGB") - example["class_images"] = self.image_transforms(class_image) - example["class_prompt_ids"] = self.tokenizer( - self.class_prompt, - padding="do_not_pad", - truncation=True, - max_length=self.tokenizer.model_max_length, - ).input_ids - - return example - - - -class PromptDataset(Dataset): - "A simple dataset to prepare the prompts to generate class images on multiple GPUs." - - def __init__(self, prompt, num_samples): - self.prompt = prompt - self.num_samples = num_samples - - def __len__(self): - return self.num_samples - - def __getitem__(self, index): - example = {} - example["prompt"] = self.prompt - example["index"] = index - return example - -class LatentsDataset(Dataset): - def __init__(self, latents_cache, text_encoder_cache): - self.latents_cache = latents_cache - self.text_encoder_cache = text_encoder_cache - - def __len__(self): - return len(self.latents_cache) - - def __getitem__(self, index): - return self.latents_cache[index], self.text_encoder_cache[index] - -def get_full_repo_name(model_id: str, organization: Optional[str] = None, token: Optional[str] = None): - if token is None: - token = HfFolder.get_token() - if organization is None: - username = whoami(token)["name"] - return f"{username}/{model_id}" - else: - return f"{organization}/{model_id}" - -def merge_two_dicts(starting_dict: dict, updater_dict: dict) -> dict: - """ - Starts from base starting dict and then adds the remaining key values from updater replacing the values from - the first starting/base dict with the second updater dict. - - For later: how does d = {**d1, **d2} replace collision? - - :param starting_dict: - :param updater_dict: - :return: - """ - new_dict: dict = starting_dict.copy() # start with keys and values of starting_dict - new_dict.update(updater_dict) # modifies starting_dict with keys and values of updater_dict - return new_dict - -def merge_args(args1: argparse.Namespace, args2: argparse.Namespace) -> argparse.Namespace: - """ - - ref: https://stackoverflow.com/questions/56136549/how-can-i-merge-two-argparse-namespaces-in-python-2-x - :param args1: - :param args2: - :return: - """ - # - the merged args - # The vars() function returns the __dict__ attribute to values of the given object e.g {field:value}. - merged_key_values_for_namespace: dict = merge_two_dicts(vars(args1), vars(args2)) - args = argparse.Namespace(**merged_key_values_for_namespace) - return args - -def run_training(args_imported): - args_default = parse_args() - args = merge_args(args_default, args_imported) - print(args) - logging_dir = Path(args.output_dir, args.logging_dir) - i=args.save_starting_step - accelerator = Accelerator( - gradient_accumulation_steps=args.gradient_accumulation_steps, - mixed_precision=args.mixed_precision, - log_with="tensorboard", - logging_dir=logging_dir, - ) - - # Currently, it's not possible to do gradient accumulation when training two models with accelerate.accumulate - # This will be enabled soon in accelerate. For now, we don't allow gradient accumulation when training two models. - # TODO (patil-suraj): Remove this check when gradient accumulation with two models is enabled in accelerate. - if args.train_text_encoder and args.gradient_accumulation_steps > 1 and accelerator.num_processes > 1: - raise ValueError( - "Gradient accumulation is not supported when training the text encoder in distributed training. " - "Please set gradient_accumulation_steps to 1. This feature will be supported in the future." - ) - - if args.seed is not None: - set_seed(args.seed) - - if args.with_prior_preservation: - class_images_dir = Path(args.class_data_dir) - if not class_images_dir.exists(): - class_images_dir.mkdir(parents=True) - cur_class_images = len(list(class_images_dir.iterdir())) - - if cur_class_images < args.num_class_images: - torch_dtype = torch.float16 if accelerator.device.type == "cuda" else torch.float32 - pipeline = StableDiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, torch_dtype=torch_dtype - ) - pipeline.set_progress_bar_config(disable=True) - - num_new_images = args.num_class_images - cur_class_images - logger.info(f"Number of class images to sample: {num_new_images}.") - - sample_dataset = PromptDataset(args.class_prompt, num_new_images) - sample_dataloader = torch.utils.data.DataLoader(sample_dataset, batch_size=args.sample_batch_size) - - sample_dataloader = accelerator.prepare(sample_dataloader) - pipeline.to(accelerator.device) - - for example in tqdm( - sample_dataloader, desc="Generating class images", disable=not accelerator.is_local_main_process - ): - with torch.autocast("cuda"): - images = pipeline(example["prompt"]).images - - for i, image in enumerate(images): - image.save(class_images_dir / f"{example['index'][i] + cur_class_images}.jpg") - - del pipeline - if torch.cuda.is_available(): - torch.cuda.empty_cache() - - # Handle the repository creation - if accelerator.is_main_process: - if args.push_to_hub: - if args.hub_model_id is None: - repo_name = get_full_repo_name(Path(args.output_dir).name, token=args.hub_token) - else: - repo_name = args.hub_model_id - repo = Repository(args.output_dir, clone_from=repo_name) - - with open(os.path.join(args.output_dir, ".gitignore"), "w+") as gitignore: - if "step_*" not in gitignore: - gitignore.write("step_*\n") - if "epoch_*" not in gitignore: - gitignore.write("epoch_*\n") - elif args.output_dir is not None: - os.makedirs(args.output_dir, exist_ok=True) - - # Load the tokenizer - if args.tokenizer_name: - tokenizer = CLIPTokenizer.from_pretrained(args.tokenizer_name) - elif args.pretrained_model_name_or_path: - tokenizer = CLIPTokenizer.from_pretrained(args.pretrained_model_name_or_path, subfolder="tokenizer") - - # Load models and create wrapper for stable diffusion - if args.train_only_unet: - if os.path.exists(str(args.output_dir+"/text_encoder_trained")): - text_encoder = CLIPTextModel.from_pretrained(args.output_dir, subfolder="text_encoder_trained") - elif os.path.exists(str(args.output_dir+"/text_encoder")): - text_encoder = CLIPTextModel.from_pretrained(args.output_dir, subfolder="text_encoder") - else: - text_encoder = CLIPTextModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="text_encoder") - else: - text_encoder = CLIPTextModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="text_encoder") - vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae") - unet = UNet2DConditionModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="unet") - if is_xformers_available(): - try: - print("Enabling memory efficient attention with xformers...") - unet.enable_xformers_memory_efficient_attention() - except Exception as e: - logger.warning( - f"Could not enable memory efficient attention. Make sure xformers is installed correctly and a GPU is available: {e}" - ) - vae.requires_grad_(False) - if not args.train_text_encoder: - text_encoder.requires_grad_(False) - - if args.gradient_checkpointing: - unet.enable_gradient_checkpointing() - if args.train_text_encoder: - text_encoder.gradient_checkpointing_enable() - - if args.scale_lr: - args.learning_rate = ( - args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes - ) - - # Use 8-bit Adam for lower memory usage or to fine-tune the model in 16GB GPUs - if args.use_8bit_adam: - try: - import bitsandbytes as bnb - except ImportError: - raise ImportError( - "To use 8-bit Adam, please install the bitsandbytes library: `pip install bitsandbytes`." - ) - - optimizer_class = bnb.optim.AdamW8bit - else: - optimizer_class = torch.optim.AdamW - - params_to_optimize = ( - itertools.chain(unet.parameters(), text_encoder.parameters()) if args.train_text_encoder else unet.parameters() - ) - optimizer = optimizer_class( - params_to_optimize, - lr=args.learning_rate, - betas=(args.adam_beta1, args.adam_beta2), - weight_decay=args.adam_weight_decay, - eps=args.adam_epsilon, - ) - - noise_scheduler = DDPMScheduler.from_config(args.pretrained_model_name_or_path, subfolder="scheduler") - - train_dataset = DreamBoothDataset( - instance_data_root=args.instance_data_dir, - instance_prompt=args.instance_prompt, - class_data_root=args.class_data_dir if args.with_prior_preservation else None, - class_prompt=args.class_prompt, - tokenizer=tokenizer, - size=args.resolution, - center_crop=args.center_crop, - args=args, - ) - - def collate_fn(examples): - input_ids = [example["instance_prompt_ids"] for example in examples] - pixel_values = [example["instance_images"] for example in examples] - - # Concat class and instance examples for prior preservation. - # We do this to avoid doing two forward passes. - if args.with_prior_preservation: - input_ids += [example["class_prompt_ids"] for example in examples] - pixel_values += [example["class_images"] for example in examples] - - pixel_values = torch.stack(pixel_values) - pixel_values = pixel_values.to(memory_format=torch.contiguous_format).float() - - input_ids = tokenizer.pad({"input_ids": input_ids}, padding=True, return_tensors="pt").input_ids - - batch = { - "input_ids": input_ids, - "pixel_values": pixel_values, - } - return batch - - train_dataloader = torch.utils.data.DataLoader( - train_dataset, batch_size=args.train_batch_size, shuffle=True, collate_fn=collate_fn - ) - - # Scheduler and math around the number of training steps. - overrode_max_train_steps = False - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if args.max_train_steps is None: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - overrode_max_train_steps = True - - lr_scheduler = get_scheduler( - args.lr_scheduler, - optimizer=optimizer, - num_warmup_steps=args.lr_warmup_steps * args.gradient_accumulation_steps, - num_training_steps=args.max_train_steps * args.gradient_accumulation_steps, - ) - - if args.train_text_encoder: - unet, text_encoder, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( - unet, text_encoder, optimizer, train_dataloader, lr_scheduler - ) - else: - unet, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( - unet, optimizer, train_dataloader, lr_scheduler - ) - - weight_dtype = torch.float32 - if args.mixed_precision == "fp16": - weight_dtype = torch.float16 - elif args.mixed_precision == "bf16": - weight_dtype = torch.bfloat16 - - # Move text_encode and vae to gpu. - # For mixed precision training we cast the text_encoder and vae weights to half-precision - # as these models are only used for inference, keeping weights in full precision is not required. - vae.to(accelerator.device, dtype=weight_dtype) - if not args.train_text_encoder: - text_encoder.to(accelerator.device, dtype=weight_dtype) - - - if args.cache_latents: - latents_cache = [] - text_encoder_cache = [] - for batch in tqdm(train_dataloader, desc="Caching latents"): - with torch.no_grad(): - batch["pixel_values"] = batch["pixel_values"].to(accelerator.device, non_blocking=True, dtype=weight_dtype) - batch["input_ids"] = batch["input_ids"].to(accelerator.device, non_blocking=True) - latents_cache.append(vae.encode(batch["pixel_values"]).latent_dist) - if args.train_text_encoder: - text_encoder_cache.append(batch["input_ids"]) - else: - text_encoder_cache.append(text_encoder(batch["input_ids"])[0]) - train_dataset = LatentsDataset(latents_cache, text_encoder_cache) - train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=1, collate_fn=lambda x: x, shuffle=True) - - del vae - #if not args.train_text_encoder: - # del text_encoder - if torch.cuda.is_available(): - torch.cuda.empty_cache() - - # We need to recalculate our total training steps as the size of the training dataloader may have changed. - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if overrode_max_train_steps: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - # Afterwards we recalculate our number of training epochs - args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch) - - # We need to initialize the trackers we use, and also store our configuration. - # The trackers initializes automatically on the main process. - if accelerator.is_main_process: - accelerator.init_trackers("dreambooth", config=vars(args)) - - def bar(prg): - br='|'+'█' * prg + ' ' * (25-prg)+'|' - return br - - # Train! - total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps - - logger.info("***** Running training *****") - logger.info(f" Num examples = {len(train_dataset)}") - logger.info(f" Num batches each epoch = {len(train_dataloader)}") - logger.info(f" Num Epochs = {args.num_train_epochs}") - logger.info(f" Instantaneous batch size per device = {args.train_batch_size}") - logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}") - logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}") - logger.info(f" Total optimization steps = {args.max_train_steps}") - # Only show the progress bar once on each machine. - progress_bar = tqdm(range(args.max_train_steps), disable=not accelerator.is_local_main_process) - global_step = 0 - - for epoch in range(args.num_train_epochs): - unet.train() - if args.train_text_encoder: - text_encoder.train() - for step, batch in enumerate(train_dataloader): - with accelerator.accumulate(unet): - # Convert images to latent space - with torch.no_grad(): - if args.cache_latents: - latents_dist = batch[0][0] - else: - latents_dist = vae.encode(batch["pixel_values"].to(dtype=weight_dtype)).latent_dist - latents = latents_dist.sample() * 0.18215 - - # Sample noise that we'll add to the latents - noise = torch.randn_like(latents) - bsz = latents.shape[0] - # Sample a random timestep for each image - timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (bsz,), device=latents.device) - timesteps = timesteps.long() - - # Add noise to the latents according to the noise magnitude at each timestep - # (this is the forward diffusion process) - noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps) - - # Get the text embedding for conditioning - if(args.cache_latents): - if args.train_text_encoder: - encoder_hidden_states = text_encoder(batch[0][1])[0] - else: - encoder_hidden_states = batch[0][1] - else: - encoder_hidden_states = text_encoder(batch["input_ids"])[0] - - # Predict the noise residual - model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample - - # Get the target for loss depending on the prediction type - if noise_scheduler.config.prediction_type == "epsilon": - target = noise - elif noise_scheduler.config.prediction_type == "v_prediction": - target = noise_scheduler.get_velocity(latents, noise, timesteps) - else: - raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}") - - if args.with_prior_preservation: - # Chunk the noise and model_pred into two parts and compute the loss on each part separately. - model_pred, model_pred_prior = torch.chunk(model_pred, 2, dim=0) - target, target_prior = torch.chunk(target, 2, dim=0) - - # Compute instance loss - loss = F.mse_loss(model_pred.float(), target.float(), reduction="none").mean([1, 2, 3]).mean() - - # Compute prior loss - prior_loss = F.mse_loss(model_pred_prior.float(), target_prior.float(), reduction="mean") - - # Add the prior loss to the instance loss. - loss = loss + args.prior_loss_weight * prior_loss - else: - loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean") - - accelerator.backward(loss) - if accelerator.sync_gradients: - params_to_clip = ( - itertools.chain(unet.parameters(), text_encoder.parameters()) - if args.train_text_encoder - else unet.parameters() - ) - accelerator.clip_grad_norm_(params_to_clip, args.max_grad_norm) - optimizer.step() - lr_scheduler.step() - optimizer.zero_grad() - - # Checks if the accelerator has performed an optimization step behind the scenes - if accelerator.sync_gradients: - progress_bar.update(1) - global_step += 1 - - fll=round((global_step*100)/args.max_train_steps) - fll=round(fll/4) - pr=bar(fll) - - logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]} - progress_bar.set_postfix(**logs) - progress_bar.set_description_str("Progress:"+pr) - accelerator.log(logs, step=global_step) - - if global_step >= args.max_train_steps: - break - - if args.train_text_encoder and global_step == args.stop_text_encoder_training and global_step >= 30: - if accelerator.is_main_process: - print(" [0;32m" +" Freezing the text_encoder ..."+" [0m") - frz_dir=args.output_dir + "/text_encoder_frozen" - if os.path.exists(frz_dir): - subprocess.call('rm -r '+ frz_dir, shell=True) - os.mkdir(frz_dir) - pipeline = StableDiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - unet=accelerator.unwrap_model(unet), - text_encoder=accelerator.unwrap_model(text_encoder), - ) - pipeline.text_encoder.save_pretrained(frz_dir) - - if args.save_n_steps >= 200: - if global_step < args.max_train_steps and global_step+1==i: - ckpt_name = "_step_" + str(global_step+1) - save_dir = Path(args.output_dir+ckpt_name) - save_dir=str(save_dir) - save_dir=save_dir.replace(" ", "_") - if not os.path.exists(save_dir): - os.mkdir(save_dir) - inst=save_dir[16:] - inst=inst.replace(" ", "_") - print(" [1;32mSAVING CHECKPOINT: "+args.Session_dir+"/"+inst+".ckpt") - # Create the pipeline using the trained modules and save it. - if accelerator.is_main_process: - pipeline = StableDiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - unet=accelerator.unwrap_model(unet), - text_encoder=accelerator.unwrap_model(text_encoder), - ) - pipeline.save_pretrained(save_dir) - frz_dir=args.output_dir + "/text_encoder_frozen" - if args.train_text_encoder and os.path.exists(frz_dir): - subprocess.call('rm -r '+save_dir+'/text_encoder/*.*', shell=True) - subprocess.call('cp -f '+frz_dir +'/*.* '+ save_dir+'/text_encoder', shell=True) - chkpth=args.Session_dir+"/"+inst+".ckpt" - subprocess.call('python /content/diffusers/scripts/convert_diffusers_to_original_stable_diffusion.py --model_path ' + save_dir + ' --checkpoint_path ' + chkpth + ' --half', shell=True) - subprocess.call('rm -r '+ save_dir, shell=True) - i=i+args.save_n_steps - - accelerator.wait_for_everyone() - - # Create the pipeline using using the trained modules and save it. - if accelerator.is_main_process: - if args.dump_only_text_encoder: - txt_dir=args.output_dir + "/text_encoder_trained" - if not os.path.exists(txt_dir): - os.mkdir(txt_dir) - pipeline = StableDiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - unet=accelerator.unwrap_model(unet), - text_encoder=accelerator.unwrap_model(text_encoder), - ) - pipeline.text_encoder.save_pretrained(txt_dir) - - elif args.train_only_unet: - pipeline = StableDiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - unet=accelerator.unwrap_model(unet), - text_encoder=accelerator.unwrap_model(text_encoder), - ) - pipeline.save_pretrained(args.output_dir) - txt_dir=args.output_dir + "/text_encoder_trained" - subprocess.call('rm -r '+txt_dir, shell=True) - - else: - pipeline = StableDiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - unet=accelerator.unwrap_model(unet), - text_encoder=accelerator.unwrap_model(text_encoder), - ) - frz_dir=args.output_dir + "/text_encoder_frozen" - pipeline.save_pretrained(args.output_dir) - if args.train_text_encoder and os.path.exists(frz_dir): - subprocess.call('mv -f '+frz_dir +'/*.* '+ args.output_dir+'/text_encoder', shell=True) - subprocess.call('rm -r '+ frz_dir, shell=True) - - if args.push_to_hub: - repo.push_to_hub(commit_message="End of training", blocking=False, auto_lfs_prune=True) - - accelerator.end_training() - del pipeline - torch.cuda.empty_cache() - gc.collect() -if __name__ == "__main__": - pass - #main() - diff --git a/spaces/huggan/Colorb_GAN/app.py b/spaces/huggan/Colorb_GAN/app.py deleted file mode 100644 index 2490025dc4948e08cb9754fe4e67da69d417c496..0000000000000000000000000000000000000000 --- a/spaces/huggan/Colorb_GAN/app.py +++ /dev/null @@ -1,640 +0,0 @@ -#@title Gradio demo (used in space: ) - -from matplotlib import pyplot as plt -from huggingface_hub import PyTorchModelHubMixin -import numpy as np -import gradio as gr - - -#@title Defining Generator and associated code ourselves without the GPU requirements -import os -import json -import multiprocessing -from random import random -import math -from math import log2, floor -from functools import partial -from contextlib import contextmanager, ExitStack -from pathlib import Path -from shutil import rmtree - -import torch -from torch.cuda.amp import autocast, GradScaler -from torch.optim import Adam -from torch import nn, einsum -import torch.nn.functional as F -from torch.utils.data import Dataset, DataLoader -from torch.autograd import grad as torch_grad -from torch.utils.data.distributed import DistributedSampler -from torch.nn.parallel import DistributedDataParallel as DDP - -from PIL import Image -import torchvision -from torchvision import transforms -from kornia.filters import filter2d - -from tqdm import tqdm -from einops import rearrange, reduce, repeat - -from adabelief_pytorch import AdaBelief - -# helpers - - -def DiffAugment(x, types=[]): - for p in types: - for f in AUGMENT_FNS[p]: - x = f(x) - return x.contiguous() - -def exists(val): - return val is not None - -@contextmanager -def null_context(): - yield - -def combine_contexts(contexts): - @contextmanager - def multi_contexts(): - with ExitStack() as stack: - yield [stack.enter_context(ctx()) for ctx in contexts] - return multi_contexts - -def is_power_of_two(val): - return log2(val).is_integer() - -def default(val, d): - return val if exists(val) else d - -def set_requires_grad(model, bool): - for p in model.parameters(): - p.requires_grad = bool - -def cycle(iterable): - while True: - for i in iterable: - yield i - -def raise_if_nan(t): - if torch.isnan(t): - raise NanException - -def gradient_accumulate_contexts(gradient_accumulate_every, is_ddp, ddps): - if is_ddp: - num_no_syncs = gradient_accumulate_every - 1 - head = [combine_contexts(map(lambda ddp: ddp.no_sync, ddps))] * num_no_syncs - tail = [null_context] - contexts = head + tail - else: - contexts = [null_context] * gradient_accumulate_every - - for context in contexts: - with context(): - yield - -def evaluate_in_chunks(max_batch_size, model, *args): - split_args = list(zip(*list(map(lambda x: x.split(max_batch_size, dim=0), args)))) - chunked_outputs = [model(*i) for i in split_args] - if len(chunked_outputs) == 1: - return chunked_outputs[0] - return torch.cat(chunked_outputs, dim=0) - -def slerp(val, low, high): - low_norm = low / torch.norm(low, dim=1, keepdim=True) - high_norm = high / torch.norm(high, dim=1, keepdim=True) - omega = torch.acos((low_norm * high_norm).sum(1)) - so = torch.sin(omega) - res = (torch.sin((1.0 - val) * omega) / so).unsqueeze(1) * low + (torch.sin(val * omega) / so).unsqueeze(1) * high - return res - -def safe_div(n, d): - try: - res = n / d - except ZeroDivisionError: - prefix = '' if int(n >= 0) else '-' - res = float(f'{prefix}inf') - return res - -# loss functions - -def gen_hinge_loss(fake, real): - return fake.mean() - -def hinge_loss(real, fake): - return (F.relu(1 + real) + F.relu(1 - fake)).mean() - -def dual_contrastive_loss(real_logits, fake_logits): - device = real_logits.device - real_logits, fake_logits = map(lambda t: rearrange(t, '... -> (...)'), (real_logits, fake_logits)) - - def loss_half(t1, t2): - t1 = rearrange(t1, 'i -> i ()') - t2 = repeat(t2, 'j -> i j', i = t1.shape[0]) - t = torch.cat((t1, t2), dim = -1) - return F.cross_entropy(t, torch.zeros(t1.shape[0], device = device, dtype = torch.long)) - - return loss_half(real_logits, fake_logits) + loss_half(-fake_logits, -real_logits) - -# helper classes - -class NanException(Exception): - pass - -class EMA(): - def __init__(self, beta): - super().__init__() - self.beta = beta - def update_average(self, old, new): - if not exists(old): - return new - return old * self.beta + (1 - self.beta) * new - -class RandomApply(nn.Module): - def __init__(self, prob, fn, fn_else = lambda x: x): - super().__init__() - self.fn = fn - self.fn_else = fn_else - self.prob = prob - def forward(self, x): - fn = self.fn if random() < self.prob else self.fn_else - return fn(x) - -class ChanNorm(nn.Module): - def __init__(self, dim, eps = 1e-5): - super().__init__() - self.eps = eps - self.g = nn.Parameter(torch.ones(1, dim, 1, 1)) - self.b = nn.Parameter(torch.zeros(1, dim, 1, 1)) - - def forward(self, x): - var = torch.var(x, dim = 1, unbiased = False, keepdim = True) - mean = torch.mean(x, dim = 1, keepdim = True) - return (x - mean) / (var + self.eps).sqrt() * self.g + self.b - -class PreNorm(nn.Module): - def __init__(self, dim, fn): - super().__init__() - self.fn = fn - self.norm = ChanNorm(dim) - - def forward(self, x): - return self.fn(self.norm(x)) - -class Residual(nn.Module): - def __init__(self, fn): - super().__init__() - self.fn = fn - - def forward(self, x): - return self.fn(x) + x - -class SumBranches(nn.Module): - def __init__(self, branches): - super().__init__() - self.branches = nn.ModuleList(branches) - def forward(self, x): - return sum(map(lambda fn: fn(x), self.branches)) - -class Blur(nn.Module): - def __init__(self): - super().__init__() - f = torch.Tensor([1, 2, 1]) - self.register_buffer('f', f) - def forward(self, x): - f = self.f - f = f[None, None, :] * f [None, :, None] - return filter2d(x, f, normalized=True) - -class Noise(nn.Module): - def __init__(self): - super().__init__() - self.weight = nn.Parameter(torch.zeros(1)) - - def forward(self, x, noise = None): - b, _, h, w, device = *x.shape, x.device - - if not exists(noise): - noise = torch.randn(b, 1, h, w, device = device) - - return x + self.weight * noise - -def Conv2dSame(dim_in, dim_out, kernel_size, bias = True): - pad_left = kernel_size // 2 - pad_right = (pad_left - 1) if (kernel_size % 2) == 0 else pad_left - - return nn.Sequential( - nn.ZeroPad2d((pad_left, pad_right, pad_left, pad_right)), - nn.Conv2d(dim_in, dim_out, kernel_size, bias = bias) - ) - -# attention - -class DepthWiseConv2d(nn.Module): - def __init__(self, dim_in, dim_out, kernel_size, padding = 0, stride = 1, bias = True): - super().__init__() - self.net = nn.Sequential( - nn.Conv2d(dim_in, dim_in, kernel_size = kernel_size, padding = padding, groups = dim_in, stride = stride, bias = bias), - nn.Conv2d(dim_in, dim_out, kernel_size = 1, bias = bias) - ) - def forward(self, x): - return self.net(x) - -class LinearAttention(nn.Module): - def __init__(self, dim, dim_head = 64, heads = 8, kernel_size = 3): - super().__init__() - self.scale = dim_head ** -0.5 - self.heads = heads - self.dim_head = dim_head - inner_dim = dim_head * heads - - self.kernel_size = kernel_size - self.nonlin = nn.GELU() - - self.to_lin_q = nn.Conv2d(dim, inner_dim, 1, bias = False) - self.to_lin_kv = DepthWiseConv2d(dim, inner_dim * 2, 3, padding = 1, bias = False) - - self.to_q = nn.Conv2d(dim, inner_dim, 1, bias = False) - self.to_kv = nn.Conv2d(dim, inner_dim * 2, 1, bias = False) - - self.to_out = nn.Conv2d(inner_dim * 2, dim, 1) - - def forward(self, fmap): - h, x, y = self.heads, *fmap.shape[-2:] - - # linear attention - - lin_q, lin_k, lin_v = (self.to_lin_q(fmap), *self.to_lin_kv(fmap).chunk(2, dim = 1)) - lin_q, lin_k, lin_v = map(lambda t: rearrange(t, 'b (h c) x y -> (b h) (x y) c', h = h), (lin_q, lin_k, lin_v)) - - lin_q = lin_q.softmax(dim = -1) - lin_k = lin_k.softmax(dim = -2) - - lin_q = lin_q * self.scale - - context = einsum('b n d, b n e -> b d e', lin_k, lin_v) - lin_out = einsum('b n d, b d e -> b n e', lin_q, context) - lin_out = rearrange(lin_out, '(b h) (x y) d -> b (h d) x y', h = h, x = x, y = y) - - # conv-like full attention - - q, k, v = (self.to_q(fmap), *self.to_kv(fmap).chunk(2, dim = 1)) - q, k, v = map(lambda t: rearrange(t, 'b (h c) x y -> (b h) c x y', h = h), (q, k, v)) - - k = F.unfold(k, kernel_size = self.kernel_size, padding = self.kernel_size // 2) - v = F.unfold(v, kernel_size = self.kernel_size, padding = self.kernel_size // 2) - - k, v = map(lambda t: rearrange(t, 'b (d j) n -> b n j d', d = self.dim_head), (k, v)) - - q = rearrange(q, 'b c ... -> b (...) c') * self.scale - - sim = einsum('b i d, b i j d -> b i j', q, k) - sim = sim - sim.amax(dim = -1, keepdim = True).detach() - - attn = sim.softmax(dim = -1) - - full_out = einsum('b i j, b i j d -> b i d', attn, v) - full_out = rearrange(full_out, '(b h) (x y) d -> b (h d) x y', h = h, x = x, y = y) - - # add outputs of linear attention + conv like full attention - - lin_out = self.nonlin(lin_out) - out = torch.cat((lin_out, full_out), dim = 1) - return self.to_out(out) - -# dataset - -def convert_image_to(img_type, image): - if image.mode != img_type: - return image.convert(img_type) - return image - -class identity(object): - def __call__(self, tensor): - return tensor - -class expand_greyscale(object): - def __init__(self, transparent): - self.transparent = transparent - - def __call__(self, tensor): - channels = tensor.shape[0] - num_target_channels = 4 if self.transparent else 3 - - if channels == num_target_channels: - return tensor - - alpha = None - if channels == 1: - color = tensor.expand(3, -1, -1) - elif channels == 2: - color = tensor[:1].expand(3, -1, -1) - alpha = tensor[1:] - else: - raise Exception(f'image with invalid number of channels given {channels}') - - if not exists(alpha) and self.transparent: - alpha = torch.ones(1, *tensor.shape[1:], device=tensor.device) - - return color if not self.transparent else torch.cat((color, alpha)) - -def resize_to_minimum_size(min_size, image): - if max(*image.size) < min_size: - return torchvision.transforms.functional.resize(image, min_size) - return image - -class ImageDataset(Dataset): - def __init__( - self, - folder, - image_size, - transparent = False, - greyscale = False, - aug_prob = 0. - ): - super().__init__() - self.folder = folder - self.image_size = image_size - self.paths = [p for ext in EXTS for p in Path(f'{folder}').glob(f'**/*.{ext}')] - assert len(self.paths) > 0, f'No images were found in {folder} for training' - - if transparent: - num_channels = 4 - pillow_mode = 'RGBA' - expand_fn = expand_greyscale(transparent) - elif greyscale: - num_channels = 1 - pillow_mode = 'L' - expand_fn = identity() - else: - num_channels = 3 - pillow_mode = 'RGB' - expand_fn = expand_greyscale(transparent) - - convert_image_fn = partial(convert_image_to, pillow_mode) - - self.transform = transforms.Compose([ - transforms.Lambda(convert_image_fn), - transforms.Lambda(partial(resize_to_minimum_size, image_size)), - transforms.Resize(image_size), - RandomApply(aug_prob, transforms.RandomResizedCrop(image_size, scale=(0.5, 1.0), ratio=(0.98, 1.02)), transforms.CenterCrop(image_size)), - transforms.ToTensor(), - transforms.Lambda(expand_fn) - ]) - - def __len__(self): - return len(self.paths) - - def __getitem__(self, index): - path = self.paths[index] - img = Image.open(path) - return self.transform(img) - -# augmentations - -def random_hflip(tensor, prob): - if prob > random(): - return tensor - return torch.flip(tensor, dims=(3,)) - -class AugWrapper(nn.Module): - def __init__(self, D, image_size): - super().__init__() - self.D = D - - def forward(self, images, prob = 0., types = [], detach = False, **kwargs): - context = torch.no_grad if detach else null_context - - with context(): - if random() < prob: - images = random_hflip(images, prob=0.5) - images = DiffAugment(images, types=types) - - return self.D(images, **kwargs) - -# modifiable global variables - -norm_class = nn.BatchNorm2d - -def upsample(scale_factor = 2): - return nn.Upsample(scale_factor = scale_factor) - -# squeeze excitation classes - -# global context network -# https://arxiv.org/abs/2012.13375 -# similar to squeeze-excite, but with a simplified attention pooling and a subsequent layer norm - -class GlobalContext(nn.Module): - def __init__( - self, - *, - chan_in, - chan_out - ): - super().__init__() - self.to_k = nn.Conv2d(chan_in, 1, 1) - chan_intermediate = max(3, chan_out // 2) - - self.net = nn.Sequential( - nn.Conv2d(chan_in, chan_intermediate, 1), - nn.LeakyReLU(0.1), - nn.Conv2d(chan_intermediate, chan_out, 1), - nn.Sigmoid() - ) - def forward(self, x): - context = self.to_k(x) - context = context.flatten(2).softmax(dim = -1) - out = einsum('b i n, b c n -> b c i', context, x.flatten(2)) - out = out.unsqueeze(-1) - return self.net(out) - -# frequency channel attention -# https://arxiv.org/abs/2012.11879 - -def get_1d_dct(i, freq, L): - result = math.cos(math.pi * freq * (i + 0.5) / L) / math.sqrt(L) - return result * (1 if freq == 0 else math.sqrt(2)) - -def get_dct_weights(width, channel, fidx_u, fidx_v): - dct_weights = torch.zeros(1, channel, width, width) - c_part = channel // len(fidx_u) - - for i, (u_x, v_y) in enumerate(zip(fidx_u, fidx_v)): - for x in range(width): - for y in range(width): - coor_value = get_1d_dct(x, u_x, width) * get_1d_dct(y, v_y, width) - dct_weights[:, i * c_part: (i + 1) * c_part, x, y] = coor_value - - return dct_weights - -class FCANet(nn.Module): - def __init__( - self, - *, - chan_in, - chan_out, - reduction = 4, - width - ): - super().__init__() - - freq_w, freq_h = ([0] * 8), list(range(8)) # in paper, it seems 16 frequencies was ideal - dct_weights = get_dct_weights(width, chan_in, [*freq_w, *freq_h], [*freq_h, *freq_w]) - self.register_buffer('dct_weights', dct_weights) - - chan_intermediate = max(3, chan_out // reduction) - - self.net = nn.Sequential( - nn.Conv2d(chan_in, chan_intermediate, 1), - nn.LeakyReLU(0.1), - nn.Conv2d(chan_intermediate, chan_out, 1), - nn.Sigmoid() - ) - - def forward(self, x): - x = reduce(x * self.dct_weights, 'b c (h h1) (w w1) -> b c h1 w1', 'sum', h1 = 1, w1 = 1) - return self.net(x) - -# generative adversarial network - -class Generator(nn.Module): - def __init__( - self, - *, - image_size, - latent_dim = 256, - fmap_max = 512, - fmap_inverse_coef = 12, - transparent = False, - greyscale = False, - attn_res_layers = [], - freq_chan_attn = False - ): - super().__init__() - resolution = log2(image_size) - assert is_power_of_two(image_size), 'image size must be a power of 2' - - if transparent: - init_channel = 4 - elif greyscale: - init_channel = 1 - else: - init_channel = 3 - - fmap_max = default(fmap_max, latent_dim) - - self.initial_conv = nn.Sequential( - nn.ConvTranspose2d(latent_dim, latent_dim * 2, 4), - norm_class(latent_dim * 2), - nn.GLU(dim = 1) - ) - - num_layers = int(resolution) - 2 - features = list(map(lambda n: (n, 2 ** (fmap_inverse_coef - n)), range(2, num_layers + 2))) - features = list(map(lambda n: (n[0], min(n[1], fmap_max)), features)) - features = list(map(lambda n: 3 if n[0] >= 8 else n[1], features)) - features = [latent_dim, *features] - - in_out_features = list(zip(features[:-1], features[1:])) - - self.res_layers = range(2, num_layers + 2) - self.layers = nn.ModuleList([]) - self.res_to_feature_map = dict(zip(self.res_layers, in_out_features)) - - self.sle_map = ((3, 7), (4, 8), (5, 9), (6, 10)) - self.sle_map = list(filter(lambda t: t[0] <= resolution and t[1] <= resolution, self.sle_map)) - self.sle_map = dict(self.sle_map) - - self.num_layers_spatial_res = 1 - - for (res, (chan_in, chan_out)) in zip(self.res_layers, in_out_features): - image_width = 2 ** res - - attn = None - if image_width in attn_res_layers: - attn = PreNorm(chan_in, LinearAttention(chan_in)) - - sle = None - if res in self.sle_map: - residual_layer = self.sle_map[res] - sle_chan_out = self.res_to_feature_map[residual_layer - 1][-1] - - if freq_chan_attn: - sle = FCANet( - chan_in = chan_out, - chan_out = sle_chan_out, - width = 2 ** (res + 1) - ) - else: - sle = GlobalContext( - chan_in = chan_out, - chan_out = sle_chan_out - ) - - layer = nn.ModuleList([ - nn.Sequential( - upsample(), - Blur(), - Conv2dSame(chan_in, chan_out * 2, 4), - Noise(), - norm_class(chan_out * 2), - nn.GLU(dim = 1) - ), - sle, - attn - ]) - self.layers.append(layer) - - self.out_conv = nn.Conv2d(features[-1], init_channel, 3, padding = 1) - - def forward(self, x): - x = rearrange(x, 'b c -> b c () ()') - x = self.initial_conv(x) - x = F.normalize(x, dim = 1) - - residuals = dict() - - for (res, (up, sle, attn)) in zip(self.res_layers, self.layers): - if exists(attn): - x = attn(x) + x - - x = up(x) - - if exists(sle): - out_res = self.sle_map[res] - residual = sle(x) - residuals[out_res] = residual - - next_res = res + 1 - if next_res in residuals: - x = x * residuals[next_res] - - return self.out_conv(x) - -# Initialize a generator model -gan_new = Generator(latent_dim=256, image_size=256, attn_res_layers = [32]) - -# Load from local saved state dict -# gan_new.load_state_dict(torch.load('/content/orbgan_e3_state_dict.pt')) - -# Load from model hub: -class GeneratorWithPyTorchModelHubMixin(gan_new.__class__, PyTorchModelHubMixin): - pass -gan_new.__class__ = GeneratorWithPyTorchModelHubMixin -gan_new = gan_new.from_pretrained('johnowhitaker/colorb_gan', latent_dim=256, image_size=256, attn_res_layers = [32]) - -def gen_ims(n_rows): - ims = gan_new(torch.randn(int(n_rows)**2, 256)).clamp_(0., 1.) - grid = torchvision.utils.make_grid(ims, nrow=int(n_rows)).permute(1, 2, 0).detach().cpu().numpy() - return (grid*255).astype(np.uint8) - - - -iface = gr.Interface(fn=gen_ims, - inputs=[gr.inputs.Slider(minimum=1, maximum=6, step=1, default=3,label="N rows")], - outputs=[gr.outputs.Image(type="numpy", label="Generated Images")], - title='Demo for Colorbgan model', - article = 'A lightweight-gans trained on johnowhitaker/colorbs. See https://huggingface.co/johnowhitaker/orbgan_e1 for training and inference scripts' -) -iface.launch() \ No newline at end of file diff --git a/spaces/huggingface-projects/auto-retrain/src/models.py b/spaces/huggingface-projects/auto-retrain/src/models.py deleted file mode 100644 index a723528f0e44f2cd3847495b1f41037b8970b260..0000000000000000000000000000000000000000 --- a/spaces/huggingface-projects/auto-retrain/src/models.py +++ /dev/null @@ -1,28 +0,0 @@ -import os -from pydantic import BaseModel -from typing import Literal - -class Config(BaseModel): - target_namespace: str - input_dataset: str - input_model: str - autotrain_project_prefix: str - - -class WebhookPayloadEvent(BaseModel): - action: Literal["create", "update", "delete"] - scope: str - -class WebhookPayloadRepo(BaseModel): - type: Literal["dataset", "model", "space"] - name: str - id: str - private: bool - headSha: str - -class WebhookPayload(BaseModel): - event: WebhookPayloadEvent - repo: WebhookPayloadRepo - - -config = Config.parse_file(os.path.join(os.getcwd(), "config.json")) diff --git a/spaces/huggingface-projects/wordalle/static/_app/immutable/chunks/twenty-39a31000.js b/spaces/huggingface-projects/wordalle/static/_app/immutable/chunks/twenty-39a31000.js deleted file mode 100644 index 71b8fe82d12c63f876a0453f2d45bf10cd84dd65..0000000000000000000000000000000000000000 --- a/spaces/huggingface-projects/wordalle/static/_app/immutable/chunks/twenty-39a31000.js +++ /dev/null @@ -1 +0,0 @@ -import{S as K,i as P,s as Q,N as e,O as r,a as l,d as a,b as t,f as z,g as R,J as s,E as I}from"./index-86f4d6c3.js";function W(O){let h,m,i,n,f,_,V,v,E,y,M,o,w,d,g,F,Z,L,A,U,D,H,p,B,G,u,N,k;return{c(){h=e("svg"),m=e("path"),i=e("mask"),n=e("path"),f=e("g"),_=e("path"),V=e("path"),v=e("path"),E=e("path"),y=e("path"),M=e("path"),o=e("mask"),w=e("path"),d=e("g"),g=e("path"),F=e("path"),Z=e("path"),L=e("path"),A=e("path"),U=e("path"),D=e("path"),H=e("defs"),p=e("linearGradient"),B=e("stop"),G=e("stop"),u=e("linearGradient"),N=e("stop"),k=e("stop"),this.h()},l(x){h=r(x,"svg",{xmlns:!0,fill:!0,viewBox:!0,width:!0,height:!0,class:!0});var c=l(h);m=r(c,"path",{fill:!0,d:!0}),l(m).forEach(a),i=r(c,"mask",{id:!0,width:!0,height:!0,x:!0,y:!0,maskUnits:!0,style:!0});var T=l(i);n=r(T,"path",{fill:!0,d:!0}),l(n).forEach(a),T.forEach(a),f=r(c,"g",{mask:!0});var C=l(f);_=r(C,"path",{fill:!0,d:!0}),l(_).forEach(a),V=r(C,"path",{fill:!0,d:!0}),l(V).forEach(a),v=r(C,"path",{fill:!0,d:!0,opacity:!0}),l(v).forEach(a),E=r(C,"path",{fill:!0,d:!0,opacity:!0}),l(E).forEach(a),y=r(C,"path",{fill:!0,d:!0,opacity:!0}),l(y).forEach(a),C.forEach(a),M=r(c,"path",{fill:!0,d:!0}),l(M).forEach(a),o=r(c,"mask",{id:!0,width:!0,height:!0,x:!0,y:!0,maskUnits:!0,style:!0});var j=l(o);w=r(j,"path",{fill:!0,d:!0}),l(w).forEach(a),j.forEach(a),d=r(c,"g",{mask:!0});var S=l(d);g=r(S,"path",{fill:!0,d:!0,opacity:!0}),l(g).forEach(a),F=r(S,"path",{fill:!0,d:!0,opacity:!0}),l(F).forEach(a),Z=r(S,"path",{fill:!0,d:!0,opacity:!0}),l(Z).forEach(a),L=r(S,"path",{fill:!0,d:!0}),l(L).forEach(a),A=r(S,"path",{fill:!0,d:!0}),l(A).forEach(a),S.forEach(a),U=r(c,"path",{fill:!0,d:!0}),l(U).forEach(a),D=r(c,"path",{fill:!0,d:!0}),l(D).forEach(a),H=r(c,"defs",{});var b=l(H);p=r(b,"linearGradient",{id:!0,x1:!0,x2:!0,y1:!0,y2:!0,gradientUnits:!0});var q=l(p);B=r(q,"stop",{"stop-color":!0}),l(B).forEach(a),G=r(q,"stop",{offset:!0,"stop-color":!0}),l(G).forEach(a),q.forEach(a),u=r(b,"linearGradient",{id:!0,x1:!0,x2:!0,y1:!0,y2:!0,gradientUnits:!0});var J=l(u);N=r(J,"stop",{"stop-color":!0}),l(N).forEach(a),k=r(J,"stop",{offset:!0,"stop-color":!0,"stop-opacity":!0}),l(k).forEach(a),J.forEach(a),b.forEach(a),c.forEach(a),this.h()},h(){t(m,"fill","#D9800D"),t(m,"d","M372.7 107.8 201.9 7.6a25 25 0 0 0-25.7.2L12 107.7A25 25 0 0 0 0 129v194.2a25 25 0 0 0 12.2 21.5l164.2 97.7a25 25 0 0 0 25.3.2l170.7-98A25 25 0 0 0 385 323V129.3a25 25 0 0 0-12.3-21.5Z"),t(n,"fill","#D3720A"),t(n,"d","M372.7 107.8 201.9 7.6a25 25 0 0 0-25.7.2L12 107.7A25 25 0 0 0 0 129v194.2a25 25 0 0 0 12.2 21.5l164.2 97.7a25 25 0 0 0 25.3.2l170.7-98A25 25 0 0 0 385 323V129.3a25 25 0 0 0-12.3-21.5Z"),t(i,"id","a"),t(i,"width","385"),t(i,"height","443"),t(i,"x","0"),t(i,"y","4"),t(i,"maskUnits","userSpaceOnUse"),z(i,"mask-type","alpha"),t(_,"fill","#B87711"),t(_,"d","M164.6 311.8c-25.1 59-49.7 120.7-138.5 83s-182.7-151-157.6-210c25-59.1 116.3-93.2 205-55.5 88.9 37.7 116.2 123.4 91 182.5Z"),t(V,"fill","#7C4D16"),t(V,"d","M9 328.5c-9-17-15-206-15-206L-14.5 357 190 486l202-133V126.5s-7.5 187.5-15 202c-3.5 6.8-39.3 28.2-78 52-43.2 26.6-90.5 55.5-109 55.5-18.2 0-63-26.6-104-52.5-37.9-23.9-72.7-46.8-77-55Z"),t(v,"fill","#F5DD21"),t(v,"d","M166 379h48c-9.3 31-9.3 47.8 0 77h-48c8.3-30 7.2-47 0-77Zm165-78.8 30-23.2c8.1 32.4 18.3 45.8 47.1 61l-30 23.2c-4.2-35-15.9-47.2-47.1-61Z"),t(v,"opacity",".3"),t(E,"fill","#C89435"),t(E,"d","M330 111.8 342.6 76c25.7 20.2 41.6 26 72.7 25.6l-12.7 35.8c-24.7-20.3-41-25-72.6-25.6Z"),t(E,"opacity",".3"),t(y,"fill","#F5DD21"),t(y,"d","m22 273 29 24.7c-29.7 14.9-40.7 27.7-50 58.6l-29-24.7c30.4-13.8 40.9-27 50-58.6Z"),t(y,"opacity",".3"),t(f,"mask","url(#a)"),t(M,"fill","url(#b)"),t(M,"d","m355.6 97.5-153.4-90a25 25 0 0 0-25.6.3L29 97.5a25 25 0 0 0-12 21.3v174.5a25 25 0 0 0 12.2 21.5l147.6 87.7a25 25 0 0 0 25.2.2l153.4-88A25 25 0 0 0 368 293V119.1a25 25 0 0 0-12.4-21.6Z"),t(w,"fill","#FBEC17"),t(w,"d","m355.6 97.5-153.4-90a25 25 0 0 0-25.6.3L29 97.5a25 25 0 0 0-12 21.3v174.5a25 25 0 0 0 12.2 21.5l147.6 87.7a25 25 0 0 0 25.2.2l153.4-88A25 25 0 0 0 368 293V119.1a25 25 0 0 0-12.4-21.6Z"),t(o,"id","c"),t(o,"width","351"),t(o,"height","403"),t(o,"x","17"),t(o,"y","4"),t(o,"maskUnits","userSpaceOnUse"),z(o,"mask-type","alpha"),t(g,"fill","url(#d)"),t(g,"d","M197.2 341.7c46.5-71.6-249.6-263-249.6-263L-78 103S133.6 217.9 98.5 266.3c-35 48.4-217-78.3-217-78.3L-90 306.7s240.7 106.5 287.3 35Z"),t(g,"opacity",".7"),t(F,"fill","#FFFA86"),t(F,"d","M259-47h177L239 459H62L259-47Z"),t(F,"opacity",".6"),t(Z,"fill","#FFFA86"),t(Z,"d","M454.5-49H470L291.5 421H276L454.5-49Z"),t(Z,"opacity",".8"),t(L,"fill","#FFFCE6"),t(L,"d","M359.5 115c14 17 8 34 8 38L388 99 193-27 9 82.5V165s7.5-39.5 15.5-52.5 124-113 165-94 156 79.5 170 96.5Z"),t(A,"fill","#FFFAD5"),t(A,"d","M21.5 296c-14-17-8-34-8-38L-7 312l195 126 184-109.5V246s-7.5 39.5-15.5 52.5-124 113-165 94-156-79.5-170-96.5Z"),t(d,"mask","url(#c)"),t(U,"fill","#D9800D"),t(U,"d","M187 74.3a5 5 0 0 0-.5-2.1 34.2 34.2 0 0 0-12.2-13.6 32.3 32.3 0 0 0-18.6-5.6H103a34 34 0 0 0-30.6 19.2c-.4.6-.5 1.3-.5 2V152a5 5 0 0 0 5 5h38.8a5 5 0 0 0 5-5v-42a5 5 0 0 1 5-5h7.4a5 5 0 0 1 5 5v59.3a5 5 0 0 1-5 5h-30a33.6 33.6 0 0 0-30.7 19.1c-.3.7-.5 1.4-.5 2.1v113.4c0 .7.1 1.4.5 2a33 33 0 0 0 13.7 14.7c.7.4 1.5.6 2.3.6H182a5 5 0 0 0 5-5v-72.6a5 5 0 0 0-5-5h-38.8a5 5 0 0 0-5 5V273a5 5 0 0 1-5 5h-7.4a5 5 0 0 1-5-5v-41.8a5 5 0 0 1 5-5h29.9c7 0 13.2-1.9 18.6-5.6a36 36 0 0 0 12.2-13.5c.3-.7.5-1.4.5-2.2V74.3Zm126.4 230.5c0 .8-.2 1.5-.5 2.2-2.9 5.5-7 10-12.2 13.5a32.3 32.3 0 0 1-18.7 5.6h-52.5c-7 0-13.2-1.9-18.8-5.6-5.2-3.5-9.1-8-11.9-13.6-.3-.6-.4-1.3-.4-2V70.5c0-.7.1-1.4.4-2 3.2-6.5 8-11.5 14.4-15a5 5 0 0 1 2.2-.5H296c.8 0 1.6.2 2.3.6a34 34 0 0 1 14.5 14.8c.3.7.5 1.4.5 2.2v234.2Zm-53.9-30.7a5 5 0 0 0 5-5V105.9a5 5 0 0 0-5-5h-7.3a5 5 0 0 0-5 5v163.2a5 5 0 0 0 5 5h7.3Z"),t(D,"fill","#FFFFEF"),t(D,"d","M187 58.2c0-.7-.2-1.4-.5-2a34.2 34.2 0 0 0-12.2-13.6 32.3 32.3 0 0 0-18.6-5.6H103a33.2 33.2 0 0 0-30.6 19.2c-.4.6-.5 1.3-.5 2v77.6a5 5 0 0 0 5 5h38.8a5 5 0 0 0 5-5v-42a5 5 0 0 1 5-5h7.4a5 5 0 0 1 5 5v59.3a5 5 0 0 1-5 5h-30c-7 0-13.3 2-18.9 5.6-5.2 3.6-9.1 8-11.8 13.6-.3.6-.5 1.3-.5 2v113.4c0 .8.1 1.5.5 2.1 3 6.2 7.6 11.1 13.7 14.6.7.4 1.5.6 2.3.6H182a5 5 0 0 0 5-5v-72.5a5 5 0 0 0-5-5h-38.8a5 5 0 0 0-5 5V257a5 5 0 0 1-5 5h-7.4a5 5 0 0 1-5-5v-42a5 5 0 0 1 5-5h29.9a32 32 0 0 0 18.6-5.5c5.3-3.6 9.3-8 12.2-13.6a5 5 0 0 0 .5-2.1V58.2Zm126.4 230.6a5 5 0 0 1-.5 2.1c-2.9 5.5-7 10-12.2 13.6A32.3 32.3 0 0 1 282 310h-52.5a33.2 33.2 0 0 1-30.7-19.1c-.3-.6-.4-1.4-.4-2V54.4c0-.7.1-1.4.4-2 3.2-6.6 8-11.5 14.4-15a5 5 0 0 1 2.2-.5H296a6 6 0 0 1 2.3.5 34 34 0 0 1 14.5 14.9c.3.7.5 1.4.5 2.1v234.3ZM259.5 258a5 5 0 0 0 5-5V90a5 5 0 0 0-5-5h-7.3a5 5 0 0 0-5 5v163a5 5 0 0 0 5 5h7.3Z"),t(B,"stop-color","#F8EF0A"),t(G,"offset","1"),t(G,"stop-color","#F5BE1B"),t(p,"id","b"),t(p,"x1","147"),t(p,"x2","201.5"),t(p,"y1","0"),t(p,"y2","418.5"),t(p,"gradientUnits","userSpaceOnUse"),t(N,"stop-color","#FE9C15"),t(k,"offset","1"),t(k,"stop-color","#FE9C15"),t(k,"stop-opacity","0"),t(u,"id","d"),t(u,"x1","111"),t(u,"x2","-21"),t(u,"y1","357.5"),t(u,"y2","126"),t(u,"gradientUnits","userSpaceOnUse"),t(h,"xmlns","http://www.w3.org/2000/svg"),t(h,"fill","none"),t(h,"viewBox","0 0 385 450"),t(h,"width","385"),t(h,"height","450"),t(h,"class",O[0])},m(x,c){R(x,h,c),s(h,m),s(h,i),s(i,n),s(h,f),s(f,_),s(f,V),s(f,v),s(f,E),s(f,y),s(h,M),s(h,o),s(o,w),s(h,d),s(d,g),s(d,F),s(d,Z),s(d,L),s(d,A),s(h,U),s(h,D),s(h,H),s(H,p),s(p,B),s(p,G),s(H,u),s(u,N),s(u,k)},p(x,[c]){c&1&&t(h,"class",x[0])},i:I,o:I,d(x){x&&a(h)}}}function X(O,h,m){let{classNames:i=""}=h;return O.$$set=n=>{"classNames"in n&&m(0,i=n.classNames)},[i]}class $ extends K{constructor(h){super(),P(this,h,X,W,Q,{classNames:0})}}export{$ as default}; diff --git a/spaces/huggingface/Model_Cards_Writing_Tool/pages/15_More_Information.py b/spaces/huggingface/Model_Cards_Writing_Tool/pages/15_More_Information.py deleted file mode 100644 index 510fc56115fc263e6cd9f8e98ae6136861864f83..0000000000000000000000000000000000000000 --- a/spaces/huggingface/Model_Cards_Writing_Tool/pages/15_More_Information.py +++ /dev/null @@ -1,24 +0,0 @@ -import streamlit as st -from persist import persist, load_widget_state - - -global variable_output - -def main(): - cs_body() - - - -def cs_body(): - - - st.markdown('# More Information [optional]') - st.text_area("Any additional information",height = 200, key=persist("More_info")) - - - - - -if __name__ == '__main__': - load_widget_state() - main() \ No newline at end of file diff --git a/spaces/huspacy/example-applications/examples/dbpedia.py b/spaces/huspacy/example-applications/examples/dbpedia.py deleted file mode 100644 index 695bad49509323d267cab31023cfcdd643aee14b..0000000000000000000000000000000000000000 --- a/spaces/huspacy/example-applications/examples/dbpedia.py +++ /dev/null @@ -1,28 +0,0 @@ -import gradio as gr -import pandas as pd - -from examples.common import NLP - -NLP.add_pipe("dbpedia_spotlight", config={'dbpedia_rest_endpoint': 'https://dbpedia-spotlight.dsd.sztaki.hu/hu', - 'overwrite_ents': False}) - -def process(text: str) -> pd.DataFrame: - doc = NLP(text) - - return pd.DataFrame([{"Text": ent.text, "Resource": ent.kb_id_, "Similarity Score": ent._.dbpedia_raw_result['@similarityScore']} - for ent in doc.spans["dbpedia_spotlight"]]) - - -EXAMPLES = [ - "A Mátrix című sci-fi film Keanu Reeves, Laurence Fishburne, Carrie-Anne Moss, Joe Pantoliano és Hugo Weaving főszereplésével.", - "Egyik pillanatról a másikra eltűnt a nemrég bevezetett HBO Max felületéről több sajátgyártású sorozat, köztük az Aranyélet mindhárom évada és A besúgó összes része.", - "A Netflix felületén elérhető magyar szinkronnal A kiábrándult királylány, ahol a főszereplő magyarhangja Csifó Dorina." -] - -demo = gr.Interface( - fn=process, - inputs=gr.Textbox(value=EXAMPLES[0], lines=10, label="Input text", show_label=True), - outputs=gr.DataFrame(label="DBpedia Spotlight Annotations", show_label=False, max_cols=2, max_rows=10), - examples=EXAMPLES, - cache_examples=False, -) diff --git a/spaces/inamXcontru/PoeticTTS/Concepts In Thermal Physics Blundell Solutions TOP.md b/spaces/inamXcontru/PoeticTTS/Concepts In Thermal Physics Blundell Solutions TOP.md deleted file mode 100644 index 4f07ce9d049321355dd4ecb9dde711864ab736d3..0000000000000000000000000000000000000000 --- a/spaces/inamXcontru/PoeticTTS/Concepts In Thermal Physics Blundell Solutions TOP.md +++ /dev/null @@ -1,103 +0,0 @@ - - Concepts in Thermal Physics Blundell Solutions: A Review- -Thermal physics is a branch of physics that deals with the study of heat, temperature, entropy, and other related phenomena. Thermal physics is essential for understanding many aspects of modern physics, chemistry, and engineering, such as thermodynamics, statistical mechanics, phase transitions, quantum gases, and blackbody radiation. - -One of the most popular textbooks for learning thermal physics is Concepts in Thermal Physics by Katherine M. Blundell and Stephen J. Blundell. This book provides a comprehensive and modern introduction to the main principles and applications of thermal physics, covering both classical and quantum topics. The book also includes numerous exercises and examples to help students test their understanding and develop their problem-solving skills. -concepts in thermal physics blundell solutionsDownload ★★★ https://gohhs.com/2uz3UK - - However, learning thermal physics can be challenging and sometimes frustrating, especially for beginners. That's why many students look for solutions and answers to the exercises and questions in the book. Fortunately, there are several online resources that offer Concepts in Thermal Physics Blundell Solutions for free or at a low cost. In this article, we will review some of these resources and evaluate their quality and usefulness. - -Quizlet- -Quizlet is a popular online platform that allows users to create and share flashcards, quizzes, games, and other study tools. Quizlet also offers textbook solutions for various subjects, including physics. One of the textbooks that Quizlet covers is Concepts in Thermal Physics by Blundell and Blundell. - -Quizlet provides solutions and answers for all the exercises and questions in the book, organized by chapter and section. The solutions are verified by experts and explained in a clear and detailed way. Quizlet also allows users to access other study materials related to the book, such as summaries, notes, diagrams, and videos. - - -The main advantage of Quizlet is that it is easy to use and interactive. Users can customize their study mode, track their progress, and test their knowledge with various features. Quizlet also has a mobile app that enables users to study anywhere and anytime. - -The main drawback of Quizlet is that it requires a subscription to access all the solutions and features. The subscription costs $19.99 per year for students and $47.88 per year for teachers. However, Quizlet offers a free trial period of 7 days for new users. - -Studocu- -Studocu is another online platform that offers textbook solutions for various subjects, including physics. Studocu also covers Concepts in Thermal Physics by Blundell and Blundell. - -Studocu provides solutions and answers for all the exercises and questions in the book, organized by chapter and section. The solutions are uploaded by other users who have solved the book or have access to the official solutions manual. Studocu also allows users to access other study materials related to the book, such as lecture notes, past exams, summaries, and essays. - -The main advantage of Studocu is that it is free to use and has a large database of study materials. Users can also rate and comment on the solutions and materials uploaded by other users, which helps to ensure their quality and accuracy. - -The main drawback of Studocu is that it relies on user-generated content, which means that some solutions may be incomplete, incorrect, or poorly explained. Studocu also has some limitations on downloading and printing the solutions and materials. - -Oxford Academic- -Oxford Academic is the official website of Oxford University Press (OUP), which publishes Concepts in Thermal Physics by Blundell and Blundell. Oxford Academic offers some online resources related to the book, such as an abstract, a table of contents, a sample chapter, and a link to purchase the book. - -Oxford Academic does not provide solutions or answers for the exercises and questions in the book. However, it does provide some supplementary materials for instructors who adopt the book for their courses. These materials include lecture slides, figures, tables, animations, simulations, videos, quizzes, tests, assignments, projects, and solutions manuals. - -The main advantage of Oxford Academic is that it is the official source of information about the book and its authors. The supplementary materials are also high-quality and comprehensive. - -The main drawback of Oxford Academic is that it does not offer any resources for students who want to learn from the book independently. The supplementary materials are only available for instructors who register with OUP and request access. - -Conclusion- -In conclusion, there are several online resources that offer Concepts in Thermal Physics Blundell Solutions. Each resource has its own advantages and drawbacks depending on your needs and preferences. Quizlet is ideal for students who want interactive and verified solutions with additional study tools. Studocu is ideal for students who want free and diverse solutions with other study materials. Oxford Academic is ideal for instructors who want official and comprehensive supplementary materials for their courses. -How to Use Concepts in Thermal Physics Blundell Solutions- -Using Concepts in Thermal Physics Blundell Solutions can be very helpful for students who want to learn thermal physics effectively and efficiently. However, it is important to use them wisely and not rely on them too much. Here are some tips on how to use Concepts in Thermal Physics Blundell Solutions properly: - -
The Benefits of Concepts in Thermal Physics Blundell Solutions- -Using Concepts in Thermal Physics Blundell Solutions can have many benefits for students who want to master thermal physics. Some of these benefits are: - -
Conclusion- -In conclusion, Concepts in Thermal Physics Blundell Solutions are valuable resources for students who want to learn thermal physics from one of the best textbooks available. However, they should be used wisely and properly, not as a shortcut or a crutch. By following the tips and advice given in this article, you can use Concepts in Thermal Physics Blundell Solutions effectively and benefit from them greatly. -How to Learn Concepts in Thermal Physics Blundell Solutions- -Learning Concepts in Thermal Physics Blundell Solutions can be very rewarding and enjoyable for students who are interested in thermal physics. However, it can also be challenging and demanding, especially for beginners. That's why it is important to have a good learning strategy and plan that can help you achieve your goals and overcome your difficulties. Here are some tips on how to learn Concepts in Thermal Physics Blundell Solutions effectively: - -
The Future of Concepts in Thermal Physics Blundell Solutions- -Concepts in Thermal Physics Blundell Solutions are not only useful for students who want to learn thermal physics now, but also for those who want to pursue further studies or careers in thermal physics or related fields in the future. Thermal physics is a dynamic and evolving field that has many applications and implications for science, technology, society, and environment. Here are some of the future trends and developments that may affect Concepts in Thermal Physics Blundell Solutions: - -
Conclusion- -In conclusion,Concepts in Thermal Physics Blundell Solutions are valuable resources for students who want to learn thermal physics from one of the best textbooks available. However, they should be used wisely and properly, -not as a shortcut or a crutch. By following the tips and advice given in this article, -you can use Concepts in Thermal Physics Blundell Solutions effectively -and benefit from them greatly. -In conclusion,Concepts in Thermal Physics Blundell Solutions are valuable resources for students who want to learn thermal physics from one of the best textbooks available. However, they should be used wisely and properly, -not as a shortcut or a crutch. By following the tips and advice given in this article, -you can use Concepts in Thermal Physics Blundell Solutions effectively -and benefit from them greatly. 3cee63e6c2- - \ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Dead Rising 3 - Apocalypse Edition (Update 5) Pc Game.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Dead Rising 3 - Apocalypse Edition (Update 5) Pc Game.md deleted file mode 100644 index 812a7b65077c828fa0fcefb742da5afac3f542d3..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Dead Rising 3 - Apocalypse Edition (Update 5) Pc Game.md +++ /dev/null @@ -1,9 +0,0 @@ - Dead Rising 3 - Apocalypse Edition (Update 5) pc gameDownload ✵ https://urlin.us/2uEyvG - -Everything and everything is a weapon in Dead Rising 3. Explore the zombie-infested city of Los Perdidos and find a way to escape before the military strike. The game is developed on a modern engine from Capcom, which gives players a deeper and more realistic gameplay. -Dead Rising 3 features a new online multiplayer mode and a new co-op mode that allows up to 4 players to explore Los Perdidos together. -In addition to the new multiplayer mode, Dead Rising 3 includes all existing maps and characters from previous installments. -dead rising 8a78ff9644 - - - diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/HD Online Player (The Greatest Showman On Earth (Engli).md b/spaces/inplisQlawa/anything-midjourney-v4-1/HD Online Player (The Greatest Showman On Earth (Engli).md deleted file mode 100644 index 71f8040b6d4748d631a2fd483599f291de03ac09..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/HD Online Player (The Greatest Showman On Earth (Engli).md +++ /dev/null @@ -1,6 +0,0 @@ - HD Online Player (The Greatest Showman On Earth (Engli)Download File === https://urlin.us/2uEyxk - -read peerless battle spirit chapter 494 the ninth city free online high quality at ... referred to as the top genius of linshui city at the martial spirit awakening ceremony he ... among the class transported to another world nagumo hajime is an ordinary ... welcome to the battle spirits wiki battle spirits is a two player collectible card ... 1fdad05405 - - - diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Konar Tamil Urai 11th Std Pdf 54 ((INSTALL)).md b/spaces/inplisQlawa/anything-midjourney-v4-1/Konar Tamil Urai 11th Std Pdf 54 ((INSTALL)).md deleted file mode 100644 index aa8c4f0a47ecf29e997082dfb1bf759b9b641f8a..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Konar Tamil Urai 11th Std Pdf 54 ((INSTALL)).md +++ /dev/null @@ -1,6 +0,0 @@ - Konar Tamil Urai 11th Std Pdf 54Download > https://urlin.us/2uEyH9 - -March 23, 2021 is the latest sura guide 2021 for standard 11 students. Students who want full grades can download it. The 11th Tamil Language Manual... will be released at least one month before the first official year. 8a78ff9644 - - - diff --git a/spaces/irvay/RVC_IR/lib/infer_pack/modules/F0Predictor/F0Predictor.py b/spaces/irvay/RVC_IR/lib/infer_pack/modules/F0Predictor/F0Predictor.py deleted file mode 100644 index f56e49e7f0e6eab3babf0711cae2933371b9f9cc..0000000000000000000000000000000000000000 --- a/spaces/irvay/RVC_IR/lib/infer_pack/modules/F0Predictor/F0Predictor.py +++ /dev/null @@ -1,16 +0,0 @@ -class F0Predictor(object): - def compute_f0(self, wav, p_len): - """ - input: wav:[signal_length] - p_len:int - output: f0:[signal_length//hop_length] - """ - pass - - def compute_f0_uv(self, wav, p_len): - """ - input: wav:[signal_length] - p_len:int - output: f0:[signal_length//hop_length],uv:[signal_length//hop_length] - """ - pass diff --git a/spaces/isaacjeffersonlee/Legal-Grammar-Error-Corrector/app.py b/spaces/isaacjeffersonlee/Legal-Grammar-Error-Corrector/app.py deleted file mode 100644 index 652c263cc66c3423c0b571af61ee0388d267c56b..0000000000000000000000000000000000000000 --- a/spaces/isaacjeffersonlee/Legal-Grammar-Error-Corrector/app.py +++ /dev/null @@ -1,187 +0,0 @@ -import math -import re -import torch -from transformers import AutoTokenizer, GPT2LMHeadModel -from lemminflect import getLemma, getAllInflections -from difflib import Differ -import gradio as gr -from hunspell import Hunspell - -h = Hunspell() -# Add our custom legal dictionary to the Hunspell word list -h.add_dic("Data/legal.dic") - -device = "cuda" if torch.cuda.is_available() else "cpu" - -# Set the seeds for reproducibility -seed = 123 -torch.manual_seed(seed) -torch.cuda.manual_seed(seed) -torch.cuda.manual_seed_all(seed) -torch.backends.cudnn.benchmark = False -torch.backends.cudnn.deterministic = True # Load pre-trained GPT2 model - -GPT_tokenizer = AutoTokenizer.from_pretrained("distilgpt2") -model_name = "isaacjeffersonlee/distilgpt2-for-legal-grammar-error-correction" -GPT_tuned_model = GPT2LMHeadModel.from_pretrained(model_name).to(device) - - -def get_inflections_and_lemmas(word): - lemmas = list(getLemma(word, upos="VERB")) - if len(lemmas) == 1: - if lemmas[0] == word: - # Return empty list if the only word - # found was the original word - lemmas = [] - infl = [] - for val in getAllInflections(word).values(): # Flatten nested list - infl += list(val) - return set(lemmas + infl) - - -def generate_confusion_set(word): - # Deal with articles and prepositions - articles = {"ε", "a", "an", "the"} - if word in articles: - return articles - - preps = {"ε", "about", "at", "by", "for", "from", "in", "of", "on", "to", "with"} - if word in preps: - return preps - - if not h.spell(word): - confusion_set = set() - suggested_words = h.suggest(word) - confusion_set = confusion_set.union(suggested_words) - for suggested_word in suggested_words: # Add inflections of suggested words - confusion_set = confusion_set.union( - get_inflections_and_lemmas(suggested_word) - ) - return confusion_set - else: # word is a valid word in our Hunspell dict - return get_inflections_and_lemmas(word).union(set([word])) - - -def log_likelihood(text, model, tokenizer): - encoded = tokenizer(text, return_tensors="pt") - for key in encoded: - encoded[key] = encoded[key].to(device) - N = len(encoded.input_ids[0]) - log_prob = 0 - with torch.no_grad(): - outputs = model(**encoded) - for idx in range(N - 1): # Offset because first token is not predicted - token_id = encoded.input_ids[0][idx + 1] - log_prob += torch.log_softmax(outputs.logits[0][idx], dim=-1)[token_id].item() - - return log_prob - math.log(N) # Normalize according to number of tokens - - -def replace_word(text, from_word, to_word): - if to_word == "ε": # Deletion special case - return re.sub(rf" {from_word}([!()\'\"?.,\s]|$)", r" ", text) - else: - return re.sub(rf"{from_word}(?=[!()\'\"?.,\s]|$)", rf"{to_word}", text) - - -def highlight_words(text, words): - highlighted_text = text - for word in words: - highlighted_text = replace_word( - highlighted_text, word, "\033[92m" + word + "\033[00m" - ) - return highlighted_text - - -def correct_word(text, word, idx, max_idx, ll_threshold, model, tokenizer): - confusion_set = generate_confusion_set(word) - # If no other valid alternatives are found - if len(confusion_set) == 1 and word in confusion_set: - return text, None - max_ll = -999999.999 - best_alt_text = None - best_alt_word = None - for alt_word in confusion_set: - alt_text = replace_word(text, word, alt_word) - ll = log_likelihood(alt_text, model, tokenizer) - if ll > max_ll and ll > ll_threshold: - max_ll = ll - best_alt_text = alt_text - best_alt_word = alt_word - - if best_alt_text is not None: - return best_alt_text, best_alt_word - else: - return text, None # Failed to find a better replacement - - -def correct_text(text, model, tokenizer, ll_threshold=-200.0, highlight_changes=False): - # First we want to break the text down into a list of words. - words = re.sub(r"[^\w\s]", "", text).split(" ") - corrected_text = text - # Indices of words spelled incorrectly, we will address these first. - spelling_errors = [not h.spell(word) for word in words] - spelled_wrong_idx = [idx for idx, error in enumerate(spelling_errors) if error] - spelled_correct_idx = [ - idx for idx, error in enumerate(spelling_errors) if not error - ] - corrections = {} - for idx in spelled_wrong_idx: # First iterate over incorrectly spelled words. - word = words[idx] - corrected_text, corrected_word = correct_word( - corrected_text, word, idx, len(words) - 1, ll_threshold, model, tokenizer - ) - if corrected_word != word: - corrections[word] = corrected_word - for idx in spelled_correct_idx: - word = words[idx] - corrected_text, corrected_word = correct_word( - corrected_text, word, idx, len(words) - 1, ll_threshold, model, tokenizer - ) - if corrected_word != word and corrected_word is not None: - corrections[word] = corrected_word - if highlight_changes: - if not corrections: - print("No corrections!") - for original_word, corrected_word in corrections.items(): - if corrected_word is not None: - print( - f"\033[91m{original_word}\033[00m -> \033[92m{corrected_word}\033[00m" - ) - corrected_text = highlight_words(corrected_text, corrections.values()) - return corrected_text - - -def diff_texts(text1): - d = Differ() - text1 = text1.strip("\n") - text2 = correct_text(text1, GPT_tuned_model, GPT_tokenizer) - diffs = [ - (token[2:], token[0] if token[0] != " " else None) - for token in d.compare(text1, text2) - ] - change = [(text1, "Original"), (text2, "Corrected")] - return diffs, change - - -demo = gr.Interface( - fn=diff_texts, - inputs=gr.Textbox( - label="Input Text", - lines=1, - value="We vacat the judgment of an district court.", - ), - outputs=[ - gr.HighlightedText( - label="Diff", - combine_adjacent=True, - ).style(color_map={"-": "red", "+": "green"}), - gr.HighlightedText( - label="Change", - combine_adjacent=True, - ).style(color_map={"Original": "red", "Corrected": "green"}), - ], -) - -if __name__ == "__main__": - demo.launch(share=False) # 'Share not supported when you are in spaces' diff --git a/spaces/javedkumail/HopeAI/app.py b/spaces/javedkumail/HopeAI/app.py deleted file mode 100644 index ab07f3485fc648c6301c23d19f693da815b8fbef..0000000000000000000000000000000000000000 --- a/spaces/javedkumail/HopeAI/app.py +++ /dev/null @@ -1,26 +0,0 @@ -import openai -import gradio -import config - -openai.api_key = config.OPEN_API_KEY - -messages = [{"role": "system", "content": 'You are Hope, a mental therapy expert that specializes in psychology and guide through emotions of patients. You cannot give answer on any other topic no atter how small or big.You do not have any other name. Always introduce yourself at the start of a new conversation'}] - -def CustomChatGPT(Patient_Query): - messages.append({"role": "user", "content": Patient_Query}) - response = openai.ChatCompletion.create( - model = "gpt-3.5-turbo", - messages = messages - ) - ChatGPT_reply = response["choices"][0]["message"]["content"] - messages.append({"role": "assistant", "content": ChatGPT_reply}) - # chat_transcript = "" - # for message in messages: - # if message['role'] != 'system': - # chat_transcript += message['role'] + ": " + message['content'] + "\n\n" - return ChatGPT_reply - - -demo = gradio.Interface(fn=CustomChatGPT, inputs = "text", outputs = "text", title = "HopeAI SMART MENTAL THERAPIST") - -demo.launch(share=True) \ No newline at end of file diff --git a/spaces/jbetker/tortoise/is_this_from_tortoise.py b/spaces/jbetker/tortoise/is_this_from_tortoise.py deleted file mode 100644 index 550b33e61c13c7ffe9509ae2b07d81903ee7cb38..0000000000000000000000000000000000000000 --- a/spaces/jbetker/tortoise/is_this_from_tortoise.py +++ /dev/null @@ -1,14 +0,0 @@ -import argparse - -from api import classify_audio_clip -from utils.audio import load_audio - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--clip', type=str, help='Path to an audio clip to classify.', default="results/favorite_riding_hood.mp3") - args = parser.parse_args() - - clip = load_audio(args.clip, 24000) - clip = clip[:, :220000] - prob = classify_audio_clip(clip) - print(f"This classifier thinks there is a {prob*100}% chance that this clip was generated from Tortoise.") \ No newline at end of file diff --git a/spaces/jbilcke-hf/observer/src/lib/utils.ts b/spaces/jbilcke-hf/observer/src/lib/utils.ts deleted file mode 100644 index ec79801fe9cdd7711f6dbef26678a134c634a8be..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/observer/src/lib/utils.ts +++ /dev/null @@ -1,6 +0,0 @@ -import { type ClassValue, clsx } from "clsx" -import { twMerge } from "tailwind-merge" - -export function cn(...inputs: ClassValue[]) { - return twMerge(clsx(inputs)) -} diff --git a/spaces/jeffistyping/Youtube-Whisperer/app.py b/spaces/jeffistyping/Youtube-Whisperer/app.py deleted file mode 100644 index c3b950d79209e5e4b903442a861cc89227c1448e..0000000000000000000000000000000000000000 --- a/spaces/jeffistyping/Youtube-Whisperer/app.py +++ /dev/null @@ -1,93 +0,0 @@ -import gradio as gr -import whisper -from pytube import YouTube - - -class GradioInference(): - def __init__(self): - self.sizes = list(whisper._MODELS.keys()) - self.langs = ["none"] + sorted(list(whisper.tokenizer.LANGUAGES.values())) - self.current_size = "base" - self.loaded_model = whisper.load_model(self.current_size) - self.yt = None - - def __call__(self, link, lang, size, subs): - if self.yt is None: - self.yt = YouTube(link) - path = self.yt.streams.filter(only_audio=True)[0].download(filename="tmp.mp4") - - if lang == "none": - lang = None - - if size != self.current_size: - self.loaded_model = whisper.load_model(size) - self.current_size = size - results = self.loaded_model.transcribe(path, language=lang) - - if subs == "None": - return results["text"] - elif subs == ".srt": - return self.srt(results["segments"]) - elif ".csv" == ".csv": - return self.csv(results["segments"]) - - def srt(self, segments): - output = "" - for i, segment in enumerate(segments): - output += f"{i+1}\n" - output += f"{self.format_time(segment['start'])} --> {self.format_time(segment['end'])}\n" - output += f"{segment['text']}\n\n" - return output - - def csv(self, segments): - output = "" - for segment in segments: - output += f"{segment['start']},{segment['end']},{segment['text']}\n" - return output - - def format_time(self, time): - hours = time//3600 - minutes = (time - hours*3600)//60 - seconds = time - hours*3600 - minutes*60 - milliseconds = (time - int(time))*1000 - return f"{int(hours):02d}:{int(minutes):02d}:{int(seconds):02d},{int(milliseconds):03d}" - - def populate_metadata(self, link): - self.yt = YouTube(link) - return self.yt.thumbnail_url, self.yt.title - -gio = GradioInference() -title="Youtube Whisperer" -description="Speech to text transcription of Youtube videos using OpenAI's Whisper" - -block = gr.Blocks() -with block: - gr.HTML( - """ -
-
- """
- )
- with gr.Group():
- with gr.Box():
- with gr.Row().style(equal_height=True):
- sz = gr.Dropdown(label="Model Size", choices=gio.sizes, value='base')
- lang = gr.Dropdown(label="Language (Optional)", choices=gio.langs, value="none")
- with gr.Row().style(equal_height=True):
- wt = gr.Radio(["None", ".srt", ".csv"], label="With Timestamps?")
- link = gr.Textbox(label="YouTube Link")
- title = gr.Label(label="Video Title")
- with gr.Row().style(equal_height=True):
- img = gr.Image(label="Thumbnail")
- text = gr.Textbox(label="Transcription", placeholder="Transcription Output", lines=10)
- with gr.Row().style(equal_height=True):
- btn = gr.Button("Transcribe")
- btn.click(gio, inputs=[link, lang, sz, wt], outputs=[text])
- link.change(gio.populate_metadata, inputs=[link], outputs=[img, title])
-block.launch()
\ No newline at end of file
diff --git a/spaces/jitesh/storytelling/src/story_gen_test.py b/spaces/jitesh/storytelling/src/story_gen_test.py
deleted file mode 100644
index 41b4bb4cc12f749d47fbe6eca44139a5594e8e85..0000000000000000000000000000000000000000
--- a/spaces/jitesh/storytelling/src/story_gen_test.py
+++ /dev/null
@@ -1,34 +0,0 @@
-# %%
-import printj
-from story_gen import StoryGenerator
-
-gen = StoryGenerator()
-# # %%
-# story_till_now, emotion = gen.story(story_till_now='Hello, I\'m a language model,', num_generation=3, length=10)
-# printj.purple(story_till_now)
-# printj.yellow(emotion)
-
-
-# %%
-gen.get_stats(story_till_now="For myriad of eons i’ve forgotten who I really was, harvesting the essence of all existence.",
- length=10, num_generation=3, num_tests=50)
-
-# %%
-gen.save_stats('/home/jitesh/haru/ist/results/a.xlsx')
-
-
-
-
-# %%
-data=gen.stats_df[gen.stats_df.sentence_no==3]
-import seaborn as sns
-sns.set_theme(style="whitegrid")
-# ax = sns.violinplot(x="day", y="total_bill", data=tips)
-ax = sns.violinplot(x="reaction_weight", y="num_reactions", data=data).set_title('Analysing ProbabilityEmote (Max reactions=3)')
-# %%
-
-gen.stats_df[gen.stats_df.sentence_no==3]
-# %%
-import re
-len(re.findall(r'\w+', 'line ive '))
-# %%
diff --git a/spaces/joaogabriellima/Real-Time-Voice-Cloning/vocoder/models/deepmind_version.py b/spaces/joaogabriellima/Real-Time-Voice-Cloning/vocoder/models/deepmind_version.py
deleted file mode 100644
index 1d973d9b8b9ab547571abc5a3f5ea86226a25924..0000000000000000000000000000000000000000
--- a/spaces/joaogabriellima/Real-Time-Voice-Cloning/vocoder/models/deepmind_version.py
+++ /dev/null
@@ -1,170 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from utils.display import *
-from utils.dsp import *
-
-
-class WaveRNN(nn.Module) :
- def __init__(self, hidden_size=896, quantisation=256) :
- super(WaveRNN, self).__init__()
-
- self.hidden_size = hidden_size
- self.split_size = hidden_size // 2
-
- # The main matmul
- self.R = nn.Linear(self.hidden_size, 3 * self.hidden_size, bias=False)
-
- # Output fc layers
- self.O1 = nn.Linear(self.split_size, self.split_size)
- self.O2 = nn.Linear(self.split_size, quantisation)
- self.O3 = nn.Linear(self.split_size, self.split_size)
- self.O4 = nn.Linear(self.split_size, quantisation)
-
- # Input fc layers
- self.I_coarse = nn.Linear(2, 3 * self.split_size, bias=False)
- self.I_fine = nn.Linear(3, 3 * self.split_size, bias=False)
-
- # biases for the gates
- self.bias_u = nn.Parameter(torch.zeros(self.hidden_size))
- self.bias_r = nn.Parameter(torch.zeros(self.hidden_size))
- self.bias_e = nn.Parameter(torch.zeros(self.hidden_size))
-
- # display num params
- self.num_params()
-
-
- def forward(self, prev_y, prev_hidden, current_coarse) :
-
- # Main matmul - the projection is split 3 ways
- R_hidden = self.R(prev_hidden)
- R_u, R_r, R_e, = torch.split(R_hidden, self.hidden_size, dim=1)
-
- # Project the prev input
- coarse_input_proj = self.I_coarse(prev_y)
- I_coarse_u, I_coarse_r, I_coarse_e = \
- torch.split(coarse_input_proj, self.split_size, dim=1)
-
- # Project the prev input and current coarse sample
- fine_input = torch.cat([prev_y, current_coarse], dim=1)
- fine_input_proj = self.I_fine(fine_input)
- I_fine_u, I_fine_r, I_fine_e = \
- torch.split(fine_input_proj, self.split_size, dim=1)
-
- # concatenate for the gates
- I_u = torch.cat([I_coarse_u, I_fine_u], dim=1)
- I_r = torch.cat([I_coarse_r, I_fine_r], dim=1)
- I_e = torch.cat([I_coarse_e, I_fine_e], dim=1)
-
- # Compute all gates for coarse and fine
- u = F.sigmoid(R_u + I_u + self.bias_u)
- r = F.sigmoid(R_r + I_r + self.bias_r)
- e = F.tanh(r * R_e + I_e + self.bias_e)
- hidden = u * prev_hidden + (1. - u) * e
-
- # Split the hidden state
- hidden_coarse, hidden_fine = torch.split(hidden, self.split_size, dim=1)
-
- # Compute outputs
- out_coarse = self.O2(F.relu(self.O1(hidden_coarse)))
- out_fine = self.O4(F.relu(self.O3(hidden_fine)))
-
- return out_coarse, out_fine, hidden
-
-
- def generate(self, seq_len):
- with torch.no_grad():
- # First split up the biases for the gates
- b_coarse_u, b_fine_u = torch.split(self.bias_u, self.split_size)
- b_coarse_r, b_fine_r = torch.split(self.bias_r, self.split_size)
- b_coarse_e, b_fine_e = torch.split(self.bias_e, self.split_size)
-
- # Lists for the two output seqs
- c_outputs, f_outputs = [], []
-
- # Some initial inputs
- out_coarse = torch.LongTensor([0]).cuda()
- out_fine = torch.LongTensor([0]).cuda()
-
- # We'll meed a hidden state
- hidden = self.init_hidden()
-
- # Need a clock for display
- start = time.time()
-
- # Loop for generation
- for i in range(seq_len) :
-
- # Split into two hidden states
- hidden_coarse, hidden_fine = \
- torch.split(hidden, self.split_size, dim=1)
-
- # Scale and concat previous predictions
- out_coarse = out_coarse.unsqueeze(0).float() / 127.5 - 1.
- out_fine = out_fine.unsqueeze(0).float() / 127.5 - 1.
- prev_outputs = torch.cat([out_coarse, out_fine], dim=1)
-
- # Project input
- coarse_input_proj = self.I_coarse(prev_outputs)
- I_coarse_u, I_coarse_r, I_coarse_e = \
- torch.split(coarse_input_proj, self.split_size, dim=1)
-
- # Project hidden state and split 6 ways
- R_hidden = self.R(hidden)
- R_coarse_u , R_fine_u, \
- R_coarse_r, R_fine_r, \
- R_coarse_e, R_fine_e = torch.split(R_hidden, self.split_size, dim=1)
-
- # Compute the coarse gates
- u = F.sigmoid(R_coarse_u + I_coarse_u + b_coarse_u)
- r = F.sigmoid(R_coarse_r + I_coarse_r + b_coarse_r)
- e = F.tanh(r * R_coarse_e + I_coarse_e + b_coarse_e)
- hidden_coarse = u * hidden_coarse + (1. - u) * e
-
- # Compute the coarse output
- out_coarse = self.O2(F.relu(self.O1(hidden_coarse)))
- posterior = F.softmax(out_coarse, dim=1)
- distrib = torch.distributions.Categorical(posterior)
- out_coarse = distrib.sample()
- c_outputs.append(out_coarse)
-
- # Project the [prev outputs and predicted coarse sample]
- coarse_pred = out_coarse.float() / 127.5 - 1.
- fine_input = torch.cat([prev_outputs, coarse_pred.unsqueeze(0)], dim=1)
- fine_input_proj = self.I_fine(fine_input)
- I_fine_u, I_fine_r, I_fine_e = \
- torch.split(fine_input_proj, self.split_size, dim=1)
-
- # Compute the fine gates
- u = F.sigmoid(R_fine_u + I_fine_u + b_fine_u)
- r = F.sigmoid(R_fine_r + I_fine_r + b_fine_r)
- e = F.tanh(r * R_fine_e + I_fine_e + b_fine_e)
- hidden_fine = u * hidden_fine + (1. - u) * e
-
- # Compute the fine output
- out_fine = self.O4(F.relu(self.O3(hidden_fine)))
- posterior = F.softmax(out_fine, dim=1)
- distrib = torch.distributions.Categorical(posterior)
- out_fine = distrib.sample()
- f_outputs.append(out_fine)
-
- # Put the hidden state back together
- hidden = torch.cat([hidden_coarse, hidden_fine], dim=1)
-
- # Display progress
- speed = (i + 1) / (time.time() - start)
- stream('Gen: %i/%i -- Speed: %i', (i + 1, seq_len, speed))
-
- coarse = torch.stack(c_outputs).squeeze(1).cpu().data.numpy()
- fine = torch.stack(f_outputs).squeeze(1).cpu().data.numpy()
- output = combine_signal(coarse, fine)
-
- return output, coarse, fine
-
- def init_hidden(self, batch_size=1) :
- return torch.zeros(batch_size, self.hidden_size).cuda()
-
- def num_params(self) :
- parameters = filter(lambda p: p.requires_grad, self.parameters())
- parameters = sum([np.prod(p.size()) for p in parameters]) / 1_000_000
- print('Trainable Parameters: %.3f million' % parameters)
\ No newline at end of file
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/bs4/tests/test_htmlparser.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/bs4/tests/test_htmlparser.py
deleted file mode 100644
index a1195d815f6903751af5139986bf7d1c2880d9e1..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/bs4/tests/test_htmlparser.py
+++ /dev/null
@@ -1,148 +0,0 @@
-"""Tests to ensure that the html.parser tree builder generates good
-trees."""
-
-from pdb import set_trace
-import pickle
-import pytest
-import warnings
-from bs4.builder import (
- HTMLParserTreeBuilder,
- ParserRejectedMarkup,
- XMLParsedAsHTMLWarning,
-)
-from bs4.builder._htmlparser import BeautifulSoupHTMLParser
-from . import SoupTest, HTMLTreeBuilderSmokeTest
-
-class TestHTMLParserTreeBuilder(SoupTest, HTMLTreeBuilderSmokeTest):
-
- default_builder = HTMLParserTreeBuilder
-
- def test_rejected_input(self):
- # Python's html.parser will occasionally reject markup,
- # especially when there is a problem with the initial DOCTYPE
- # declaration. Different versions of Python sound the alarm in
- # different ways, but Beautiful Soup consistently raises
- # errors as ParserRejectedMarkup exceptions.
- bad_markup = [
- # https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=28873
- # https://github.com/guidovranken/python-library-fuzzers/blob/master/corp-html/519e5b4269a01185a0d5e76295251921da2f0700
- # https://github.com/python/cpython/issues/81928
- b'\n",
- ]
- for markup in bad_markup:
- with pytest.raises(ParserRejectedMarkup):
- soup = self.soup(markup)
-
- def test_namespaced_system_doctype(self):
- # html.parser can't handle namespaced doctypes, so skip this one.
- pass
-
- def test_namespaced_public_doctype(self):
- # html.parser can't handle namespaced doctypes, so skip this one.
- pass
-
- def test_builder_is_pickled(self):
- """Unlike most tree builders, HTMLParserTreeBuilder and will
- be restored after pickling.
- """
- tree = self.soup("foo")
- dumped = pickle.dumps(tree, 2)
- loaded = pickle.loads(dumped)
- assert isinstance(loaded.builder, type(tree.builder))
-
- def test_redundant_empty_element_closing_tags(self):
- self.assert_soup('
-
- Youtube Whisperer-- Speech to text transcription of Youtube videos using OpenAI's Whisper - -', " ") - self.assert_soup('', "") - - def test_empty_element(self): - # This verifies that any buffered data present when the parser - # finishes working is handled. - self.assert_soup("foo bar", "foo &# bar") - - def test_tracking_line_numbers(self): - # The html.parser TreeBuilder keeps track of line number and - # position of each element. - markup = "\n \n\n %s ' % input_element
- div = self.soup(markup).div
- without_element = div.encode()
- expect = b"%s " % output_unicode.encode("utf8")
- assert without_element == expect
-
- with_element = div.encode(formatter="html")
- expect = b"%s " % output_element
- assert with_element == expect
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/merge/layout.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/merge/layout.py
deleted file mode 100644
index 6b85cd503387291f326e937b36a5739b1de23ef1..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/merge/layout.py
+++ /dev/null
@@ -1,530 +0,0 @@
-# Copyright 2013 Google, Inc. All Rights Reserved.
-#
-# Google Author(s): Behdad Esfahbod, Roozbeh Pournader
-
-from fontTools import ttLib
-from fontTools.ttLib.tables.DefaultTable import DefaultTable
-from fontTools.ttLib.tables import otTables
-from fontTools.merge.base import add_method, mergeObjects
-from fontTools.merge.util import *
-import logging
-
-
-log = logging.getLogger("fontTools.merge")
-
-
-def mergeLookupLists(lst):
- # TODO Do smarter merge.
- return sumLists(lst)
-
-
-def mergeFeatures(lst):
- assert lst
- self = otTables.Feature()
- self.FeatureParams = None
- self.LookupListIndex = mergeLookupLists(
- [l.LookupListIndex for l in lst if l.LookupListIndex]
- )
- self.LookupCount = len(self.LookupListIndex)
- return self
-
-
-def mergeFeatureLists(lst):
- d = {}
- for l in lst:
- for f in l:
- tag = f.FeatureTag
- if tag not in d:
- d[tag] = []
- d[tag].append(f.Feature)
- ret = []
- for tag in sorted(d.keys()):
- rec = otTables.FeatureRecord()
- rec.FeatureTag = tag
- rec.Feature = mergeFeatures(d[tag])
- ret.append(rec)
- return ret
-
-
-def mergeLangSyses(lst):
- assert lst
-
- # TODO Support merging ReqFeatureIndex
- assert all(l.ReqFeatureIndex == 0xFFFF for l in lst)
-
- self = otTables.LangSys()
- self.LookupOrder = None
- self.ReqFeatureIndex = 0xFFFF
- self.FeatureIndex = mergeFeatureLists(
- [l.FeatureIndex for l in lst if l.FeatureIndex]
- )
- self.FeatureCount = len(self.FeatureIndex)
- return self
-
-
-def mergeScripts(lst):
- assert lst
-
- if len(lst) == 1:
- return lst[0]
- langSyses = {}
- for sr in lst:
- for lsr in sr.LangSysRecord:
- if lsr.LangSysTag not in langSyses:
- langSyses[lsr.LangSysTag] = []
- langSyses[lsr.LangSysTag].append(lsr.LangSys)
- lsrecords = []
- for tag, langSys_list in sorted(langSyses.items()):
- lsr = otTables.LangSysRecord()
- lsr.LangSys = mergeLangSyses(langSys_list)
- lsr.LangSysTag = tag
- lsrecords.append(lsr)
-
- self = otTables.Script()
- self.LangSysRecord = lsrecords
- self.LangSysCount = len(lsrecords)
- dfltLangSyses = [s.DefaultLangSys for s in lst if s.DefaultLangSys]
- if dfltLangSyses:
- self.DefaultLangSys = mergeLangSyses(dfltLangSyses)
- else:
- self.DefaultLangSys = None
- return self
-
-
-def mergeScriptRecords(lst):
- d = {}
- for l in lst:
- for s in l:
- tag = s.ScriptTag
- if tag not in d:
- d[tag] = []
- d[tag].append(s.Script)
- ret = []
- for tag in sorted(d.keys()):
- rec = otTables.ScriptRecord()
- rec.ScriptTag = tag
- rec.Script = mergeScripts(d[tag])
- ret.append(rec)
- return ret
-
-
-otTables.ScriptList.mergeMap = {
- "ScriptCount": lambda lst: None, # TODO
- "ScriptRecord": mergeScriptRecords,
-}
-otTables.BaseScriptList.mergeMap = {
- "BaseScriptCount": lambda lst: None, # TODO
- # TODO: Merge duplicate entries
- "BaseScriptRecord": lambda lst: sorted(
- sumLists(lst), key=lambda s: s.BaseScriptTag
- ),
-}
-
-otTables.FeatureList.mergeMap = {
- "FeatureCount": sum,
- "FeatureRecord": lambda lst: sorted(sumLists(lst), key=lambda s: s.FeatureTag),
-}
-
-otTables.LookupList.mergeMap = {
- "LookupCount": sum,
- "Lookup": sumLists,
-}
-
-otTables.Coverage.mergeMap = {
- "Format": min,
- "glyphs": sumLists,
-}
-
-otTables.ClassDef.mergeMap = {
- "Format": min,
- "classDefs": sumDicts,
-}
-
-otTables.LigCaretList.mergeMap = {
- "Coverage": mergeObjects,
- "LigGlyphCount": sum,
- "LigGlyph": sumLists,
-}
-
-otTables.AttachList.mergeMap = {
- "Coverage": mergeObjects,
- "GlyphCount": sum,
- "AttachPoint": sumLists,
-}
-
-# XXX Renumber MarkFilterSets of lookups
-otTables.MarkGlyphSetsDef.mergeMap = {
- "MarkSetTableFormat": equal,
- "MarkSetCount": sum,
- "Coverage": sumLists,
-}
-
-otTables.Axis.mergeMap = {
- "*": mergeObjects,
-}
-
-# XXX Fix BASE table merging
-otTables.BaseTagList.mergeMap = {
- "BaseTagCount": sum,
- "BaselineTag": sumLists,
-}
-
-otTables.GDEF.mergeMap = (
- otTables.GSUB.mergeMap
-) = (
- otTables.GPOS.mergeMap
-) = otTables.BASE.mergeMap = otTables.JSTF.mergeMap = otTables.MATH.mergeMap = {
- "*": mergeObjects,
- "Version": max,
-}
-
-ttLib.getTableClass("GDEF").mergeMap = ttLib.getTableClass(
- "GSUB"
-).mergeMap = ttLib.getTableClass("GPOS").mergeMap = ttLib.getTableClass(
- "BASE"
-).mergeMap = ttLib.getTableClass(
- "JSTF"
-).mergeMap = ttLib.getTableClass(
- "MATH"
-).mergeMap = {
- "tableTag": onlyExisting(equal), # XXX clean me up
- "table": mergeObjects,
-}
-
-
-@add_method(ttLib.getTableClass("GSUB"))
-def merge(self, m, tables):
- assert len(tables) == len(m.duplicateGlyphsPerFont)
- for i, (table, dups) in enumerate(zip(tables, m.duplicateGlyphsPerFont)):
- if not dups:
- continue
- if table is None or table is NotImplemented:
- log.warning(
- "Have non-identical duplicates to resolve for '%s' but no GSUB. Are duplicates intended?: %s",
- m.fonts[i]._merger__name,
- dups,
- )
- continue
-
- synthFeature = None
- synthLookup = None
- for script in table.table.ScriptList.ScriptRecord:
- if script.ScriptTag == "DFLT":
- continue # XXX
- for langsys in [script.Script.DefaultLangSys] + [
- l.LangSys for l in script.Script.LangSysRecord
- ]:
- if langsys is None:
- continue # XXX Create!
- feature = [v for v in langsys.FeatureIndex if v.FeatureTag == "locl"]
- assert len(feature) <= 1
- if feature:
- feature = feature[0]
- else:
- if not synthFeature:
- synthFeature = otTables.FeatureRecord()
- synthFeature.FeatureTag = "locl"
- f = synthFeature.Feature = otTables.Feature()
- f.FeatureParams = None
- f.LookupCount = 0
- f.LookupListIndex = []
- table.table.FeatureList.FeatureRecord.append(synthFeature)
- table.table.FeatureList.FeatureCount += 1
- feature = synthFeature
- langsys.FeatureIndex.append(feature)
- langsys.FeatureIndex.sort(key=lambda v: v.FeatureTag)
-
- if not synthLookup:
- subtable = otTables.SingleSubst()
- subtable.mapping = dups
- synthLookup = otTables.Lookup()
- synthLookup.LookupFlag = 0
- synthLookup.LookupType = 1
- synthLookup.SubTableCount = 1
- synthLookup.SubTable = [subtable]
- if table.table.LookupList is None:
- # mtiLib uses None as default value for LookupList,
- # while feaLib points to an empty array with count 0
- # TODO: make them do the same
- table.table.LookupList = otTables.LookupList()
- table.table.LookupList.Lookup = []
- table.table.LookupList.LookupCount = 0
- table.table.LookupList.Lookup.append(synthLookup)
- table.table.LookupList.LookupCount += 1
-
- if feature.Feature.LookupListIndex[:1] != [synthLookup]:
- feature.Feature.LookupListIndex[:0] = [synthLookup]
- feature.Feature.LookupCount += 1
-
- DefaultTable.merge(self, m, tables)
- return self
-
-
-@add_method(
- otTables.SingleSubst,
- otTables.MultipleSubst,
- otTables.AlternateSubst,
- otTables.LigatureSubst,
- otTables.ReverseChainSingleSubst,
- otTables.SinglePos,
- otTables.PairPos,
- otTables.CursivePos,
- otTables.MarkBasePos,
- otTables.MarkLigPos,
- otTables.MarkMarkPos,
-)
-def mapLookups(self, lookupMap):
- pass
-
-
-# Copied and trimmed down from subset.py
-@add_method(
- otTables.ContextSubst,
- otTables.ChainContextSubst,
- otTables.ContextPos,
- otTables.ChainContextPos,
-)
-def __merge_classify_context(self):
- class ContextHelper(object):
- def __init__(self, klass, Format):
- if klass.__name__.endswith("Subst"):
- Typ = "Sub"
- Type = "Subst"
- else:
- Typ = "Pos"
- Type = "Pos"
- if klass.__name__.startswith("Chain"):
- Chain = "Chain"
- else:
- Chain = ""
- ChainTyp = Chain + Typ
-
- self.Typ = Typ
- self.Type = Type
- self.Chain = Chain
- self.ChainTyp = ChainTyp
-
- self.LookupRecord = Type + "LookupRecord"
-
- if Format == 1:
- self.Rule = ChainTyp + "Rule"
- self.RuleSet = ChainTyp + "RuleSet"
- elif Format == 2:
- self.Rule = ChainTyp + "ClassRule"
- self.RuleSet = ChainTyp + "ClassSet"
-
- if self.Format not in [1, 2, 3]:
- return None # Don't shoot the messenger; let it go
- if not hasattr(self.__class__, "_merge__ContextHelpers"):
- self.__class__._merge__ContextHelpers = {}
- if self.Format not in self.__class__._merge__ContextHelpers:
- helper = ContextHelper(self.__class__, self.Format)
- self.__class__._merge__ContextHelpers[self.Format] = helper
- return self.__class__._merge__ContextHelpers[self.Format]
-
-
-@add_method(
- otTables.ContextSubst,
- otTables.ChainContextSubst,
- otTables.ContextPos,
- otTables.ChainContextPos,
-)
-def mapLookups(self, lookupMap):
- c = self.__merge_classify_context()
-
- if self.Format in [1, 2]:
- for rs in getattr(self, c.RuleSet):
- if not rs:
- continue
- for r in getattr(rs, c.Rule):
- if not r:
- continue
- for ll in getattr(r, c.LookupRecord):
- if not ll:
- continue
- ll.LookupListIndex = lookupMap[ll.LookupListIndex]
- elif self.Format == 3:
- for ll in getattr(self, c.LookupRecord):
- if not ll:
- continue
- ll.LookupListIndex = lookupMap[ll.LookupListIndex]
- else:
- assert 0, "unknown format: %s" % self.Format
-
-
-@add_method(otTables.ExtensionSubst, otTables.ExtensionPos)
-def mapLookups(self, lookupMap):
- if self.Format == 1:
- self.ExtSubTable.mapLookups(lookupMap)
- else:
- assert 0, "unknown format: %s" % self.Format
-
-
-@add_method(otTables.Lookup)
-def mapLookups(self, lookupMap):
- for st in self.SubTable:
- if not st:
- continue
- st.mapLookups(lookupMap)
-
-
-@add_method(otTables.LookupList)
-def mapLookups(self, lookupMap):
- for l in self.Lookup:
- if not l:
- continue
- l.mapLookups(lookupMap)
-
-
-@add_method(otTables.Lookup)
-def mapMarkFilteringSets(self, markFilteringSetMap):
- if self.LookupFlag & 0x0010:
- self.MarkFilteringSet = markFilteringSetMap[self.MarkFilteringSet]
-
-
-@add_method(otTables.LookupList)
-def mapMarkFilteringSets(self, markFilteringSetMap):
- for l in self.Lookup:
- if not l:
- continue
- l.mapMarkFilteringSets(markFilteringSetMap)
-
-
-@add_method(otTables.Feature)
-def mapLookups(self, lookupMap):
- self.LookupListIndex = [lookupMap[i] for i in self.LookupListIndex]
-
-
-@add_method(otTables.FeatureList)
-def mapLookups(self, lookupMap):
- for f in self.FeatureRecord:
- if not f or not f.Feature:
- continue
- f.Feature.mapLookups(lookupMap)
-
-
-@add_method(otTables.DefaultLangSys, otTables.LangSys)
-def mapFeatures(self, featureMap):
- self.FeatureIndex = [featureMap[i] for i in self.FeatureIndex]
- if self.ReqFeatureIndex != 65535:
- self.ReqFeatureIndex = featureMap[self.ReqFeatureIndex]
-
-
-@add_method(otTables.Script)
-def mapFeatures(self, featureMap):
- if self.DefaultLangSys:
- self.DefaultLangSys.mapFeatures(featureMap)
- for l in self.LangSysRecord:
- if not l or not l.LangSys:
- continue
- l.LangSys.mapFeatures(featureMap)
-
-
-@add_method(otTables.ScriptList)
-def mapFeatures(self, featureMap):
- for s in self.ScriptRecord:
- if not s or not s.Script:
- continue
- s.Script.mapFeatures(featureMap)
-
-
-def layoutPreMerge(font):
- # Map indices to references
-
- GDEF = font.get("GDEF")
- GSUB = font.get("GSUB")
- GPOS = font.get("GPOS")
-
- for t in [GSUB, GPOS]:
- if not t:
- continue
-
- if t.table.LookupList:
- lookupMap = {i: v for i, v in enumerate(t.table.LookupList.Lookup)}
- t.table.LookupList.mapLookups(lookupMap)
- t.table.FeatureList.mapLookups(lookupMap)
-
- if (
- GDEF
- and GDEF.table.Version >= 0x00010002
- and GDEF.table.MarkGlyphSetsDef
- ):
- markFilteringSetMap = {
- i: v for i, v in enumerate(GDEF.table.MarkGlyphSetsDef.Coverage)
- }
- t.table.LookupList.mapMarkFilteringSets(markFilteringSetMap)
-
- if t.table.FeatureList and t.table.ScriptList:
- featureMap = {i: v for i, v in enumerate(t.table.FeatureList.FeatureRecord)}
- t.table.ScriptList.mapFeatures(featureMap)
-
- # TODO FeatureParams nameIDs
-
-
-def layoutPostMerge(font):
- # Map references back to indices
-
- GDEF = font.get("GDEF")
- GSUB = font.get("GSUB")
- GPOS = font.get("GPOS")
-
- for t in [GSUB, GPOS]:
- if not t:
- continue
-
- if t.table.FeatureList and t.table.ScriptList:
- # Collect unregistered (new) features.
- featureMap = GregariousIdentityDict(t.table.FeatureList.FeatureRecord)
- t.table.ScriptList.mapFeatures(featureMap)
-
- # Record used features.
- featureMap = AttendanceRecordingIdentityDict(
- t.table.FeatureList.FeatureRecord
- )
- t.table.ScriptList.mapFeatures(featureMap)
- usedIndices = featureMap.s
-
- # Remove unused features
- t.table.FeatureList.FeatureRecord = [
- f
- for i, f in enumerate(t.table.FeatureList.FeatureRecord)
- if i in usedIndices
- ]
-
- # Map back to indices.
- featureMap = NonhashableDict(t.table.FeatureList.FeatureRecord)
- t.table.ScriptList.mapFeatures(featureMap)
-
- t.table.FeatureList.FeatureCount = len(t.table.FeatureList.FeatureRecord)
-
- if t.table.LookupList:
- # Collect unregistered (new) lookups.
- lookupMap = GregariousIdentityDict(t.table.LookupList.Lookup)
- t.table.FeatureList.mapLookups(lookupMap)
- t.table.LookupList.mapLookups(lookupMap)
-
- # Record used lookups.
- lookupMap = AttendanceRecordingIdentityDict(t.table.LookupList.Lookup)
- t.table.FeatureList.mapLookups(lookupMap)
- t.table.LookupList.mapLookups(lookupMap)
- usedIndices = lookupMap.s
-
- # Remove unused lookups
- t.table.LookupList.Lookup = [
- l for i, l in enumerate(t.table.LookupList.Lookup) if i in usedIndices
- ]
-
- # Map back to indices.
- lookupMap = NonhashableDict(t.table.LookupList.Lookup)
- t.table.FeatureList.mapLookups(lookupMap)
- t.table.LookupList.mapLookups(lookupMap)
-
- t.table.LookupList.LookupCount = len(t.table.LookupList.Lookup)
-
- if GDEF and GDEF.table.Version >= 0x00010002:
- markFilteringSetMap = NonhashableDict(
- GDEF.table.MarkGlyphSetsDef.Coverage
- )
- t.table.LookupList.mapMarkFilteringSets(markFilteringSetMap)
-
- # TODO FeatureParams nameIDs
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/misc/xmlWriter.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/misc/xmlWriter.py
deleted file mode 100644
index 9a8dc3e3b7fe5eb13ea4b7ea369ced1da5555471..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/misc/xmlWriter.py
+++ /dev/null
@@ -1,204 +0,0 @@
-"""xmlWriter.py -- Simple XML authoring class"""
-
-from fontTools.misc.textTools import byteord, strjoin, tobytes, tostr
-import sys
-import os
-import string
-
-INDENT = " "
-
-
-class XMLWriter(object):
- def __init__(
- self,
- fileOrPath,
- indentwhite=INDENT,
- idlefunc=None,
- encoding="utf_8",
- newlinestr="\n",
- ):
- if encoding.lower().replace("-", "").replace("_", "") != "utf8":
- raise Exception("Only UTF-8 encoding is supported.")
- if fileOrPath == "-":
- fileOrPath = sys.stdout
- if not hasattr(fileOrPath, "write"):
- self.filename = fileOrPath
- self.file = open(fileOrPath, "wb")
- self._closeStream = True
- else:
- self.filename = None
- # assume writable file object
- self.file = fileOrPath
- self._closeStream = False
-
- # Figure out if writer expects bytes or unicodes
- try:
- # The bytes check should be first. See:
- # https://github.com/fonttools/fonttools/pull/233
- self.file.write(b"")
- self.totype = tobytes
- except TypeError:
- # This better not fail.
- self.file.write("")
- self.totype = tostr
- self.indentwhite = self.totype(indentwhite)
- if newlinestr is None:
- self.newlinestr = self.totype(os.linesep)
- else:
- self.newlinestr = self.totype(newlinestr)
- self.indentlevel = 0
- self.stack = []
- self.needindent = 1
- self.idlefunc = idlefunc
- self.idlecounter = 0
- self._writeraw('')
- self.newline()
-
- def __enter__(self):
- return self
-
- def __exit__(self, exception_type, exception_value, traceback):
- self.close()
-
- def close(self):
- if self._closeStream:
- self.file.close()
-
- def write(self, string, indent=True):
- """Writes text."""
- self._writeraw(escape(string), indent=indent)
-
- def writecdata(self, string):
- """Writes text in a CDATA section."""
- self._writeraw("")
-
- def write8bit(self, data, strip=False):
- """Writes a bytes() sequence into the XML, escaping
- non-ASCII bytes. When this is read in xmlReader,
- the original bytes can be recovered by encoding to
- 'latin-1'."""
- self._writeraw(escape8bit(data), strip=strip)
-
- def write_noindent(self, string):
- """Writes text without indentation."""
- self._writeraw(escape(string), indent=False)
-
- def _writeraw(self, data, indent=True, strip=False):
- """Writes bytes, possibly indented."""
- if indent and self.needindent:
- self.file.write(self.indentlevel * self.indentwhite)
- self.needindent = 0
- s = self.totype(data, encoding="utf_8")
- if strip:
- s = s.strip()
- self.file.write(s)
-
- def newline(self):
- self.file.write(self.newlinestr)
- self.needindent = 1
- idlecounter = self.idlecounter
- if not idlecounter % 100 and self.idlefunc is not None:
- self.idlefunc()
- self.idlecounter = idlecounter + 1
-
- def comment(self, data):
- data = escape(data)
- lines = data.split("\n")
- self._writeraw("")
-
- def simpletag(self, _TAG_, *args, **kwargs):
- attrdata = self.stringifyattrs(*args, **kwargs)
- data = "<%s%s/>" % (_TAG_, attrdata)
- self._writeraw(data)
-
- def begintag(self, _TAG_, *args, **kwargs):
- attrdata = self.stringifyattrs(*args, **kwargs)
- data = "<%s%s>" % (_TAG_, attrdata)
- self._writeraw(data)
- self.stack.append(_TAG_)
- self.indent()
-
- def endtag(self, _TAG_):
- assert self.stack and self.stack[-1] == _TAG_, "nonmatching endtag"
- del self.stack[-1]
- self.dedent()
- data = "%s>" % _TAG_
- self._writeraw(data)
-
- def dumphex(self, data):
- linelength = 16
- hexlinelength = linelength * 2
- chunksize = 8
- for i in range(0, len(data), linelength):
- hexline = hexStr(data[i : i + linelength])
- line = ""
- white = ""
- for j in range(0, hexlinelength, chunksize):
- line = line + white + hexline[j : j + chunksize]
- white = " "
- self._writeraw(line)
- self.newline()
-
- def indent(self):
- self.indentlevel = self.indentlevel + 1
-
- def dedent(self):
- assert self.indentlevel > 0
- self.indentlevel = self.indentlevel - 1
-
- def stringifyattrs(self, *args, **kwargs):
- if kwargs:
- assert not args
- attributes = sorted(kwargs.items())
- elif args:
- assert len(args) == 1
- attributes = args[0]
- else:
- return ""
- data = ""
- for attr, value in attributes:
- if not isinstance(value, (bytes, str)):
- value = str(value)
- data = data + ' %s="%s"' % (attr, escapeattr(value))
- return data
-
-
-def escape(data):
- data = tostr(data, "utf_8")
- data = data.replace("&", "&")
- data = data.replace("<", "<")
- data = data.replace(">", ">")
- data = data.replace("\r", "
")
- return data
-
-
-def escapeattr(data):
- data = escape(data)
- data = data.replace('"', """)
- return data
-
-
-def escape8bit(data):
- """Input is Unicode string."""
-
- def escapechar(c):
- n = ord(c)
- if 32 <= n <= 127 and c not in "<&>":
- return c
- else:
- return "" + repr(n) + ";"
-
- return strjoin(map(escapechar, data.decode("latin-1")))
-
-
-def hexStr(s):
- h = string.hexdigits
- r = ""
- for c in s:
- i = byteord(c)
- r = r + h[(i >> 4) & 0xF] + h[i & 0xF]
- return r
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fsspec/implementations/dbfs.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fsspec/implementations/dbfs.py
deleted file mode 100644
index 9f5b330cab9e751142794253d1072bab48b8bc29..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fsspec/implementations/dbfs.py
+++ /dev/null
@@ -1,457 +0,0 @@
-import base64
-import urllib
-
-import requests
-
-from fsspec import AbstractFileSystem
-from fsspec.spec import AbstractBufferedFile
-
-
-class DatabricksException(Exception):
- """
- Helper class for exceptions raised in this module.
- """
-
- def __init__(self, error_code, message):
- """Create a new DatabricksException"""
- super().__init__(message)
-
- self.error_code = error_code
- self.message = message
-
-
-class DatabricksFileSystem(AbstractFileSystem):
- """
- Get access to the Databricks filesystem implementation over HTTP.
- Can be used inside and outside of a databricks cluster.
- """
-
- def __init__(self, instance, token, **kwargs):
- """
- Create a new DatabricksFileSystem.
-
- Parameters
- ----------
- instance: str
- The instance URL of the databricks cluster.
- For example for an Azure databricks cluster, this
- has the form adb-AnimeSR: Learning Real-World Super-Resolution Models for Animation Videos **") - gr.Markdown( - """ -
- Follow me for more!
- Keras Dreambooth - Lowpoly World Demo") - with gr.Row(): - with gr.Column(): - prompt = gr.Textbox(lines=1, value="lowpoly_world", label="Base Prompt") - negative_prompt = gr.Textbox(lines=1, value="deformed", label="Negative Prompt") - samples = gr.Slider(minimum=1, maximum=10, default=1, step=1, label="Number of Image") - num_steps = gr.Slider(label="Inference Steps",value=50) - run = gr.Button(value="Run") - with gr.Column(): - gallery = gr.Gallery(label="Outputs").style(grid=(1,2)) - - run.click(generate_images, inputs=[prompt,negative_prompt, samples, num_steps], outputs=gallery) - - gr.Examples([["photo of lowpoly_world","bad, ugly", 1, 50]], - [prompt,negative_prompt, samples,num_steps], gallery, generate_images) - gr.Markdown('\n Demo created by: Kadir Nar') - -demo.launch(debug=True) \ No newline at end of file diff --git a/spaces/kevinwang676/M4Singer/modules/parallel_wavegan/losses/stft_loss.py b/spaces/kevinwang676/M4Singer/modules/parallel_wavegan/losses/stft_loss.py deleted file mode 100644 index 74d2aa21ad30ba094c406366e652067462f49cd2..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/M4Singer/modules/parallel_wavegan/losses/stft_loss.py +++ /dev/null @@ -1,153 +0,0 @@ -# -*- coding: utf-8 -*- - -# Copyright 2019 Tomoki Hayashi -# MIT License (https://opensource.org/licenses/MIT) - -"""STFT-based Loss modules.""" - -import torch -import torch.nn.functional as F - - -def stft(x, fft_size, hop_size, win_length, window): - """Perform STFT and convert to magnitude spectrogram. - - Args: - x (Tensor): Input signal tensor (B, T). - fft_size (int): FFT size. - hop_size (int): Hop size. - win_length (int): Window length. - window (str): Window function type. - - Returns: - Tensor: Magnitude spectrogram (B, #frames, fft_size // 2 + 1). - - """ - x_stft = torch.stft(x, fft_size, hop_size, win_length, window) - real = x_stft[..., 0] - imag = x_stft[..., 1] - - # NOTE(kan-bayashi): clamp is needed to avoid nan or inf - return torch.sqrt(torch.clamp(real ** 2 + imag ** 2, min=1e-7)).transpose(2, 1) - - -class SpectralConvergengeLoss(torch.nn.Module): - """Spectral convergence loss module.""" - - def __init__(self): - """Initilize spectral convergence loss module.""" - super(SpectralConvergengeLoss, self).__init__() - - def forward(self, x_mag, y_mag): - """Calculate forward propagation. - - Args: - x_mag (Tensor): Magnitude spectrogram of predicted signal (B, #frames, #freq_bins). - y_mag (Tensor): Magnitude spectrogram of groundtruth signal (B, #frames, #freq_bins). - - Returns: - Tensor: Spectral convergence loss value. - - """ - return torch.norm(y_mag - x_mag, p="fro") / torch.norm(y_mag, p="fro") - - -class LogSTFTMagnitudeLoss(torch.nn.Module): - """Log STFT magnitude loss module.""" - - def __init__(self): - """Initilize los STFT magnitude loss module.""" - super(LogSTFTMagnitudeLoss, self).__init__() - - def forward(self, x_mag, y_mag): - """Calculate forward propagation. - - Args: - x_mag (Tensor): Magnitude spectrogram of predicted signal (B, #frames, #freq_bins). - y_mag (Tensor): Magnitude spectrogram of groundtruth signal (B, #frames, #freq_bins). - - Returns: - Tensor: Log STFT magnitude loss value. - - """ - return F.l1_loss(torch.log(y_mag), torch.log(x_mag)) - - -class STFTLoss(torch.nn.Module): - """STFT loss module.""" - - def __init__(self, fft_size=1024, shift_size=120, win_length=600, window="hann_window"): - """Initialize STFT loss module.""" - super(STFTLoss, self).__init__() - self.fft_size = fft_size - self.shift_size = shift_size - self.win_length = win_length - self.window = getattr(torch, window)(win_length) - self.spectral_convergenge_loss = SpectralConvergengeLoss() - self.log_stft_magnitude_loss = LogSTFTMagnitudeLoss() - - def forward(self, x, y): - """Calculate forward propagation. - - Args: - x (Tensor): Predicted signal (B, T). - y (Tensor): Groundtruth signal (B, T). - - Returns: - Tensor: Spectral convergence loss value. - Tensor: Log STFT magnitude loss value. - - """ - x_mag = stft(x, self.fft_size, self.shift_size, self.win_length, self.window) - y_mag = stft(y, self.fft_size, self.shift_size, self.win_length, self.window) - sc_loss = self.spectral_convergenge_loss(x_mag, y_mag) - mag_loss = self.log_stft_magnitude_loss(x_mag, y_mag) - - return sc_loss, mag_loss - - -class MultiResolutionSTFTLoss(torch.nn.Module): - """Multi resolution STFT loss module.""" - - def __init__(self, - fft_sizes=[1024, 2048, 512], - hop_sizes=[120, 240, 50], - win_lengths=[600, 1200, 240], - window="hann_window"): - """Initialize Multi resolution STFT loss module. - - Args: - fft_sizes (list): List of FFT sizes. - hop_sizes (list): List of hop sizes. - win_lengths (list): List of window lengths. - window (str): Window function type. - - """ - super(MultiResolutionSTFTLoss, self).__init__() - assert len(fft_sizes) == len(hop_sizes) == len(win_lengths) - self.stft_losses = torch.nn.ModuleList() - for fs, ss, wl in zip(fft_sizes, hop_sizes, win_lengths): - self.stft_losses += [STFTLoss(fs, ss, wl, window)] - - def forward(self, x, y): - """Calculate forward propagation. - - Args: - x (Tensor): Predicted signal (B, T). - y (Tensor): Groundtruth signal (B, T). - - Returns: - Tensor: Multi resolution spectral convergence loss value. - Tensor: Multi resolution log STFT magnitude loss value. - - """ - sc_loss = 0.0 - mag_loss = 0.0 - for f in self.stft_losses: - sc_l, mag_l = f(x, y) - sc_loss += sc_l - mag_loss += mag_l - sc_loss /= len(self.stft_losses) - mag_loss /= len(self.stft_losses) - - return sc_loss, mag_loss diff --git a/spaces/kevinwang676/VoiceChanger/src/face3d/models/arcface_torch/torch2onnx.py b/spaces/kevinwang676/VoiceChanger/src/face3d/models/arcface_torch/torch2onnx.py deleted file mode 100644 index fc26ab82e552331bc8d75b34e81000418f4d38ec..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/VoiceChanger/src/face3d/models/arcface_torch/torch2onnx.py +++ /dev/null @@ -1,59 +0,0 @@ -import numpy as np -import onnx -import torch - - -def convert_onnx(net, path_module, output, opset=11, simplify=False): - assert isinstance(net, torch.nn.Module) - img = np.random.randint(0, 255, size=(112, 112, 3), dtype=np.int32) - img = img.astype(np.float) - img = (img / 255. - 0.5) / 0.5 # torch style norm - img = img.transpose((2, 0, 1)) - img = torch.from_numpy(img).unsqueeze(0).float() - - weight = torch.load(path_module) - net.load_state_dict(weight) - net.eval() - torch.onnx.export(net, img, output, keep_initializers_as_inputs=False, verbose=False, opset_version=opset) - model = onnx.load(output) - graph = model.graph - graph.input[0].type.tensor_type.shape.dim[0].dim_param = 'None' - if simplify: - from onnxsim import simplify - model, check = simplify(model) - assert check, "Simplified ONNX model could not be validated" - onnx.save(model, output) - - -if __name__ == '__main__': - import os - import argparse - from backbones import get_model - - parser = argparse.ArgumentParser(description='ArcFace PyTorch to onnx') - parser.add_argument('input', type=str, help='input backbone.pth file or path') - parser.add_argument('--output', type=str, default=None, help='output onnx path') - parser.add_argument('--network', type=str, default=None, help='backbone network') - parser.add_argument('--simplify', type=bool, default=False, help='onnx simplify') - args = parser.parse_args() - input_file = args.input - if os.path.isdir(input_file): - input_file = os.path.join(input_file, "backbone.pth") - assert os.path.exists(input_file) - model_name = os.path.basename(os.path.dirname(input_file)).lower() - params = model_name.split("_") - if len(params) >= 3 and params[1] in ('arcface', 'cosface'): - if args.network is None: - args.network = params[2] - assert args.network is not None - print(args) - backbone_onnx = get_model(args.network, dropout=0) - - output_path = args.output - if output_path is None: - output_path = os.path.join(os.path.dirname(__file__), 'onnx') - if not os.path.exists(output_path): - os.makedirs(output_path) - assert os.path.isdir(output_path) - output_file = os.path.join(output_path, "%s.onnx" % model_name) - convert_onnx(backbone_onnx, input_file, output_file, simplify=args.simplify) diff --git a/spaces/keyikai/bing/Dockerfile b/spaces/keyikai/bing/Dockerfile deleted file mode 100644 index 5a424c3084a247240ff04cc6e4b9092f389c3d2b..0000000000000000000000000000000000000000 --- a/spaces/keyikai/bing/Dockerfile +++ /dev/null @@ -1,33 +0,0 @@ -# Build Stage -# 使用 golang:alpine 作为构建阶段的基础镜像 -FROM golang:alpine AS builder - -# 添加 git,并且清除缓存🧹 -RUN apk --no-cache add git && \ - git clone https://github.com/Harry-zklcdc/go-proxy-bingai.git /workspace/app && \ - apk del git - -# 设置工作目录 -WORKDIR /workspace/app - -# 编译 go 项目 -RUN go build -ldflags="-s -w" -tags netgo -trimpath -o go-proxy-bingai main.go - -# Runtime Stage -# 使用轻量级的 alpine 镜像🪞 -FROM alpine - -# 设置工作目录💼 -WORKDIR /workspace/app - -# 从构建阶段复制编译后的二进制文件👔 -COPY --from=builder /workspace/app/go-proxy-bingai . - -# (可选)设置环境变量✍️ -ENV Go_Proxy_BingAI_USER_TOKEN_1="G4hJ9k544565uhjjhjlkjh68ah3naaYc0FvIjHmLzXeRfAq" - -# 端口 -EXPOSE 8080 - -# 容器运行✅ -CMD ["/workspace/app/go-proxy-bingai"] diff --git a/spaces/kidcoconut/spcdkr_omdenasaudi_liverhccxai/lib/models/mdl_xgb.py b/spaces/kidcoconut/spcdkr_omdenasaudi_liverhccxai/lib/models/mdl_xgb.py deleted file mode 100644 index 8b6fe0d98012484abbbc8091ff0f027ae9f53473..0000000000000000000000000000000000000000 --- a/spaces/kidcoconut/spcdkr_omdenasaudi_liverhccxai/lib/models/mdl_xgb.py +++ /dev/null @@ -1,66 +0,0 @@ -import pandas as pd -from sklearn.ensemble import GradientBoostingClassifier -import lib.utils as libPaths -import pickle -import sys - - -m_kstrFile = __file__ -m_kstrDataPath = libPaths.pth_data -m_kstrBinModelPath = libPaths.pth_binModels -m_kstrModelPath_gbc = m_kstrBinModelPath + 'gbc_model_colab.pkl' -m_kstrModelPath_prov111 = m_kstrBinModelPath + 'prov_gbc_v1.1.1_32cols.pkl' #--- ERROR: __randomstate_ctor() takes from 0 to 1 positional arguments but 2 were given -m_kstrModelPath_prov121 = m_kstrBinModelPath + 'prov_gbc_v1.2.1_32cols.pkl' -m_kstrModelPath_prov_py3816_sk111hp = m_kstrBinModelPath + 'prov_gbc_py3816_sk111hp_32cols.pkl' -m_kstrModelPath = m_kstrModelPath_prov_py3816_sk111hp - -m_blnTraceOn = True - - - -#--- Supervised: xg boost; gradient boosting classifier -def load_fromPkl(): - try: - with open(m_kstrModelPath, 'rb') as filPkl: - mdlAnoms = pickle.load(filPkl) - return mdlAnoms - - except: - e = sys.exc_info() - print("ERROR (mdl_xgb.load_fromPkl_genError): ", e) - - - -def save_toPkl(mdlAnoms): - with open(m_kstrModelPath, 'wb') as filPkl: - pickle.dump(mdlAnoms, filPkl) - return mdlAnoms - - - -def predict(npaData): - - try: - #--- input: numpy.ndarray of feature eng, and scaled data - mdlAnoms = load_fromPkl() - if (m_blnTraceOn): print("TRACE (mdl_xgb.predict): data loaded ... ") - npaPredict = mdlAnoms.predict(npaData) - - except: - e = sys.exc_info() - print("ERROR (mdl_xgb.predict_genError1): ", e) - - - #--- AttributeError: 'GradientBoostingClassifier' object has no attribute '_loss' - #--- version of scikit-learn? Monika: ?.?.? ; Iain: 1.2.0 - - #print("INFO (type.npaPredict): ", type(npaPredict)) - #if (m_blnTraceOn): print("TRACE (mdl_xgb.predict) npaPredict.shape: ", npaPredict.shape) - return npaPredict - - -def train(pdfTrainData): - mdlAnoms = GradientBoostingClassifier() - mdlAnoms.fit(pdfTrainData.values) - save_toPkl(mdlAnoms) - return mdlAnoms diff --git a/spaces/kira4424/VITS-fast-fine-tuning/losses.py b/spaces/kira4424/VITS-fast-fine-tuning/losses.py deleted file mode 100644 index fb22a0e834dd87edaa37bb8190eee2c3c7abe0d5..0000000000000000000000000000000000000000 --- a/spaces/kira4424/VITS-fast-fine-tuning/losses.py +++ /dev/null @@ -1,61 +0,0 @@ -import torch -from torch.nn import functional as F - -import commons - - -def feature_loss(fmap_r, fmap_g): - loss = 0 - for dr, dg in zip(fmap_r, fmap_g): - for rl, gl in zip(dr, dg): - rl = rl.float().detach() - gl = gl.float() - loss += torch.mean(torch.abs(rl - gl)) - - return loss * 2 - - -def discriminator_loss(disc_real_outputs, disc_generated_outputs): - loss = 0 - r_losses = [] - g_losses = [] - for dr, dg in zip(disc_real_outputs, disc_generated_outputs): - dr = dr.float() - dg = dg.float() - r_loss = torch.mean((1-dr)**2) - g_loss = torch.mean(dg**2) - loss += (r_loss + g_loss) - r_losses.append(r_loss.item()) - g_losses.append(g_loss.item()) - - return loss, r_losses, g_losses - - -def generator_loss(disc_outputs): - loss = 0 - gen_losses = [] - for dg in disc_outputs: - dg = dg.float() - l = torch.mean((1-dg)**2) - gen_losses.append(l) - loss += l - - return loss, gen_losses - - -def kl_loss(z_p, logs_q, m_p, logs_p, z_mask): - """ - z_p, logs_q: [b, h, t_t] - m_p, logs_p: [b, h, t_t] - """ - z_p = z_p.float() - logs_q = logs_q.float() - m_p = m_p.float() - logs_p = logs_p.float() - z_mask = z_mask.float() - - kl = logs_p - logs_q - 0.5 - kl += 0.5 * ((z_p - m_p)**2) * torch.exp(-2. * logs_p) - kl = torch.sum(kl * z_mask) - l = kl / torch.sum(z_mask) - return l diff --git a/spaces/koajoel/PolyFormer/fairseq/examples/speech_synthesis/preprocessing/vad/__init__.py b/spaces/koajoel/PolyFormer/fairseq/examples/speech_synthesis/preprocessing/vad/__init__.py deleted file mode 100644 index 9cf121081fbde2f5085ed380f0841649d143a4be..0000000000000000000000000000000000000000 --- a/spaces/koajoel/PolyFormer/fairseq/examples/speech_synthesis/preprocessing/vad/__init__.py +++ /dev/null @@ -1,192 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -import collections -import contextlib -import wave - -try: - import webrtcvad -except ImportError: - raise ImportError("Please install py-webrtcvad: pip install webrtcvad") -import argparse -import os -import logging -from tqdm import tqdm - -AUDIO_SUFFIX = '.wav' -FS_MS = 30 -SCALE = 6e-5 -THRESHOLD = 0.3 - - -def read_wave(path): - """Reads a .wav file. - Takes the path, and returns (PCM audio data, sample rate). - """ - with contextlib.closing(wave.open(path, 'rb')) as wf: - num_channels = wf.getnchannels() - assert num_channels == 1 - sample_width = wf.getsampwidth() - assert sample_width == 2 - sample_rate = wf.getframerate() - assert sample_rate in (8000, 16000, 32000, 48000) - pcm_data = wf.readframes(wf.getnframes()) - return pcm_data, sample_rate - - -def write_wave(path, audio, sample_rate): - """Writes a .wav file. - Takes path, PCM audio data, and sample rate. - """ - with contextlib.closing(wave.open(path, 'wb')) as wf: - wf.setnchannels(1) - wf.setsampwidth(2) - wf.setframerate(sample_rate) - wf.writeframes(audio) - - -class Frame(object): - """Represents a "frame" of audio data.""" - def __init__(self, bytes, timestamp, duration): - self.bytes = bytes - self.timestamp = timestamp - self.duration = duration - - -def frame_generator(frame_duration_ms, audio, sample_rate): - """Generates audio frames from PCM audio data. - Takes the desired frame duration in milliseconds, the PCM data, and - the sample rate. - Yields Frames of the requested duration. - """ - n = int(sample_rate * (frame_duration_ms / 1000.0) * 2) - offset = 0 - timestamp = 0.0 - duration = (float(n) / sample_rate) / 2.0 - while offset + n < len(audio): - yield Frame(audio[offset:offset + n], timestamp, duration) - timestamp += duration - offset += n - - -def vad_collector(sample_rate, frame_duration_ms, - padding_duration_ms, vad, frames): - """Filters out non-voiced audio frames. - Given a webrtcvad.Vad and a source of audio frames, yields only - the voiced audio. - Uses a padded, sliding window algorithm over the audio frames. - When more than 90% of the frames in the window are voiced (as - reported by the VAD), the collector triggers and begins yielding - audio frames. Then the collector waits until 90% of the frames in - the window are unvoiced to detrigger. - The window is padded at the front and back to provide a small - amount of silence or the beginnings/endings of speech around the - voiced frames. - Arguments: - sample_rate - The audio sample rate, in Hz. - frame_duration_ms - The frame duration in milliseconds. - padding_duration_ms - The amount to pad the window, in milliseconds. - vad - An instance of webrtcvad.Vad. - frames - a source of audio frames (sequence or generator). - Returns: A generator that yields PCM audio data. - """ - num_padding_frames = int(padding_duration_ms / frame_duration_ms) - # We use a deque for our sliding window/ring buffer. - ring_buffer = collections.deque(maxlen=num_padding_frames) - # We have two states: TRIGGERED and NOTTRIGGERED. We start in the - # NOTTRIGGERED state. - triggered = False - - voiced_frames = [] - for frame in frames: - is_speech = vad.is_speech(frame.bytes, sample_rate) - - # sys.stdout.write('1' if is_speech else '0') - if not triggered: - ring_buffer.append((frame, is_speech)) - num_voiced = len([f for f, speech in ring_buffer if speech]) - # If we're NOTTRIGGERED and more than 90% of the frames in - # the ring buffer are voiced frames, then enter the - # TRIGGERED state. - if num_voiced > 0.9 * ring_buffer.maxlen: - triggered = True - # We want to yield all the audio we see from now until - # we are NOTTRIGGERED, but we have to start with the - # audio that's already in the ring buffer. - for f, _ in ring_buffer: - voiced_frames.append(f) - ring_buffer.clear() - else: - # We're in the TRIGGERED state, so collect the audio data - # and add it to the ring buffer. - voiced_frames.append(frame) - ring_buffer.append((frame, is_speech)) - num_unvoiced = len([f for f, speech in ring_buffer if not speech]) - # If more than 90% of the frames in the ring buffer are - # unvoiced, then enter NOTTRIGGERED and yield whatever - # audio we've collected. - if num_unvoiced > 0.9 * ring_buffer.maxlen: - triggered = False - yield [b''.join([f.bytes for f in voiced_frames]), - voiced_frames[0].timestamp, voiced_frames[-1].timestamp] - ring_buffer.clear() - voiced_frames = [] - # If we have any leftover voiced audio when we run out of input, - # yield it. - if voiced_frames: - yield [b''.join([f.bytes for f in voiced_frames]), - voiced_frames[0].timestamp, voiced_frames[-1].timestamp] - - -def main(args): - # create output folder - try: - cmd = f"mkdir -p {args.out_path}" - os.system(cmd) - except Exception: - logging.error("Can not create output folder") - exit(-1) - - # build vad object - vad = webrtcvad.Vad(int(args.agg)) - # iterating over wavs in dir - for file in tqdm(os.listdir(args.in_path)): - if file.endswith(AUDIO_SUFFIX): - audio_inpath = os.path.join(args.in_path, file) - audio_outpath = os.path.join(args.out_path, file) - audio, sample_rate = read_wave(audio_inpath) - frames = frame_generator(FS_MS, audio, sample_rate) - frames = list(frames) - segments = vad_collector(sample_rate, FS_MS, 300, vad, frames) - merge_segments = list() - timestamp_start = 0.0 - timestamp_end = 0.0 - # removing start, end, and long sequences of sils - for i, segment in enumerate(segments): - merge_segments.append(segment[0]) - if i and timestamp_start: - sil_duration = segment[1] - timestamp_end - if sil_duration > THRESHOLD: - merge_segments.append(int(THRESHOLD / SCALE)*(b'\x00')) - else: - merge_segments.append(int((sil_duration / SCALE))*(b'\x00')) - timestamp_start = segment[1] - timestamp_end = segment[2] - segment = b''.join(merge_segments) - write_wave(audio_outpath, segment, sample_rate) - - -if __name__ == '__main__': - parser = argparse.ArgumentParser(description='Apply vad to a file of fils.') - parser.add_argument('in_path', type=str, help='Path to the input files') - parser.add_argument('out_path', type=str, - help='Path to save the processed files') - parser.add_argument('--agg', type=int, default=3, - help='The level of aggressiveness of the VAD: [0-3]') - args = parser.parse_args() - - main(args) diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/misc/py23.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/misc/py23.py deleted file mode 100644 index 29f634d624b7df125722c3bae594c1d39a835aec..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/misc/py23.py +++ /dev/null @@ -1,96 +0,0 @@ -"""Python 2/3 compat layer leftovers.""" - -import decimal as _decimal -import math as _math -import warnings -from contextlib import redirect_stderr, redirect_stdout -from io import BytesIO -from io import StringIO as UnicodeIO -from types import SimpleNamespace - -from .textTools import Tag, bytechr, byteord, bytesjoin, strjoin, tobytes, tostr - -warnings.warn( - "The py23 module has been deprecated and will be removed in a future release. " - "Please update your code.", - DeprecationWarning, -) - -__all__ = [ - "basestring", - "bytechr", - "byteord", - "BytesIO", - "bytesjoin", - "open", - "Py23Error", - "range", - "RecursionError", - "round", - "SimpleNamespace", - "StringIO", - "strjoin", - "Tag", - "tobytes", - "tostr", - "tounicode", - "unichr", - "unicode", - "UnicodeIO", - "xrange", - "zip", -] - - -class Py23Error(NotImplementedError): - pass - - -RecursionError = RecursionError -StringIO = UnicodeIO - -basestring = str -isclose = _math.isclose -isfinite = _math.isfinite -open = open -range = range -round = round3 = round -unichr = chr -unicode = str -zip = zip - -tounicode = tostr - - -def xrange(*args, **kwargs): - raise Py23Error("'xrange' is not defined. Use 'range' instead.") - - -def round2(number, ndigits=None): - """ - Implementation of Python 2 built-in round() function. - Rounds a number to a given precision in decimal digits (default - 0 digits). The result is a floating point number. Values are rounded - to the closest multiple of 10 to the power minus ndigits; if two - multiples are equally close, rounding is done away from 0. - ndigits may be negative. - See Python 2 documentation: - https://docs.python.org/2/library/functions.html?highlight=round#round - """ - if ndigits is None: - ndigits = 0 - - if ndigits < 0: - exponent = 10 ** (-ndigits) - quotient, remainder = divmod(number, exponent) - if remainder >= exponent // 2 and number >= 0: - quotient += 1 - return float(quotient * exponent) - else: - exponent = _decimal.Decimal("10") ** (-ndigits) - - d = _decimal.Decimal.from_float(number).quantize( - exponent, rounding=_decimal.ROUND_HALF_UP - ) - - return float(d) diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/httpx/__version__.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/httpx/__version__.py deleted file mode 100644 index 6a8e63c60262fc2650cb5c71514a4b23f949aa58..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/httpx/__version__.py +++ /dev/null @@ -1,3 +0,0 @@ -__title__ = "httpx" -__description__ = "A next generation HTTP client, for Python 3." -__version__ = "0.24.1" diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/markdown_it/cli/__init__.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/markdown_it/cli/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/leogabraneth/text-generation-webui-main/.github/pull_request_template.md b/spaces/leogabraneth/text-generation-webui-main/.github/pull_request_template.md deleted file mode 100644 index 51e26b13a38889a38cac5392b6e22190fd75a8b7..0000000000000000000000000000000000000000 --- a/spaces/leogabraneth/text-generation-webui-main/.github/pull_request_template.md +++ /dev/null @@ -1,3 +0,0 @@ -## Checklist: - -- [ ] I have read the [Contributing guidelines](https://github.com/oobabooga/text-generation-webui/wiki/Contributing-guidelines). diff --git a/spaces/lewiswu1209/MockingBird/synthesizer/models/sublayer/common/highway_network.py b/spaces/lewiswu1209/MockingBird/synthesizer/models/sublayer/common/highway_network.py deleted file mode 100644 index d311c6924db6dfc247f69cc266d6c1975b6e03cd..0000000000000000000000000000000000000000 --- a/spaces/lewiswu1209/MockingBird/synthesizer/models/sublayer/common/highway_network.py +++ /dev/null @@ -1,17 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - -class HighwayNetwork(nn.Module): - def __init__(self, size): - super().__init__() - self.W1 = nn.Linear(size, size) - self.W2 = nn.Linear(size, size) - self.W1.bias.data.fill_(0.) - - def forward(self, x): - x1 = self.W1(x) - x2 = self.W2(x) - g = torch.sigmoid(x2) - y = g * F.relu(x1) + (1. - g) * x - return y diff --git a/spaces/librarian-bots/README/README.md b/spaces/librarian-bots/README/README.md deleted file mode 100644 index cc6bca4fa97111225f1be2ccba39ec71954b728c..0000000000000000000000000000000000000000 --- a/spaces/librarian-bots/README/README.md +++ /dev/null @@ -1,74 +0,0 @@ ---- -title: README -emoji: 🤖 -colorFrom: red -colorTo: pink -sdk: static -pinned: false ---- - -Hugging Face Librarian Bots- -✨ Curating the Hugging Face Hub one PR at a time. ✨ - -
-
-
-
-The Hugging Face Hub is the primary place for sharing machine learning models, datasets, and demos. It currently holds over 200,000 models, 40,000 datasets, and 100,000 machine learning demos.
-
-The `Librarian Bots` organization is an effort by Hugging Face's [Machine Learning Librarian](https://huggingface.co/davanstrien) to use machine learning to enhance metadata and documentation for material shared on the Hub with the ultimate goal of making it easier for people (and bots!) to find what they are looking for on the Hub. This organization is used to share datasets, models, and Spaces which help achieve this goal.
-
-## 👾 Spaces
-
-![]()
-
-
-📚 Spaces Related to Hugging Face Papers- - - [Recommend Similar Papers](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers): a Space that allows you to find papers similar to a given paper. - - [Collections Reading List Generator](https://huggingface.co/spaces/librarian-bots/collection-reading-list-generator): a Space that allows you to generate a reading list for a given Hugging Face Collection - - [📄🔗: Extract linked papers from a Hugging Face Collection](https://huggingface.co/spaces/librarian-bots/collection_papers_extractor): extract all the papers associated with items in a Hugging Face Collection. - - [📃 Hugging Face Paper Claimer 📃](https://huggingface.co/spaces/librarian-bots/claim-papers): a space that helps you to claim papers you authored on the Hugging Face Hub. -
-
-
-Spaces related to metadata- - - [🤖 Librarian Bot Metadata Request Service 🤖](https://huggingface.co/spaces/librarian-bots/metadata_request_service): With a few clicks, enrich your Hugging Face models with key metadata! - - [MetaRefine](https://huggingface.co/spaces/librarian-bots/MetaRefine): refine Hub search results by metadata quality and model card length. -- [metadata explorer](https://huggingface.co/spaces/librarian-bots/metadata_explorer): a space for exploring high-level information about the metadata associated with models hosted on the Hugging Face Hub. - -
-
-
-Spaces for exploring and keeping track of repositories on the Hub- - - [Dataset-to-Model Monitor](https://huggingface.co/spaces/librarian-bots/dataset-to-model-monitor): track datasets hosted on the Hugging Face Hub and get a notification when new models are trained on the dataset you are tracking. - - [Base Model Explorer](https://huggingface.co/spaces/librarian-bots/base_model_explorer): This Space allows you to find children's models for a given base model and view the popularity of models for fine-tuning. - - [Hugging Face Datasets Semantic Search](https://huggingface.co/spaces/librarian-bots/huggingface-datasets-semantic-search): a Space that allows you to use semantic search to find relevant datasets on the Hugging Face Hub. - -- -## 💽 Datasets - -
-
-
-Datasets for model and dataset cards- -- [Model Cards with metadata](https://huggingface.co/datasets/librarian-bots/model_cards_with_metadata): a dataset containing model cards for models hosted on the Hugging Face hub with first commit information for each model. Model cards are intended to help communicate the strengths and weaknesses of machine learning models. Whilst these model cards are primarily intended to be read by a human they are themselves also interesting corpus that can be used to explore models hosted on the Hub in various ways. - -- [Dataset Cards With Metadata](https://huggingface.co/datasets/librarian-bots/dataset_cards_with_metadatat): a dataset containing dataset cards for datasets hosted on the Hugging Face hub with first commit information for each dataset. Dataset cards are intended to help communicate the strengths and weaknesses of machine learning datasets. Whilst these dataset cards are primarily intended to be read by a human they are themselves also interesting corpus that can be used to explore datasets hosted on the Hub in various ways. - -- -## 🤖 Models - -- [BERTopic model card bias topic model](https://huggingface.co/librarian-bots/BERTopic_model_card_bias): a BERTopic model trained on the bias section of model cards hosted on the Hub. The goal of this model is to explore which topics are discussed in the bias section of model cards. Potentially in the future models such as this could also be used to detect 'drift' in the kinds of bias being discussed in model cards hosted on the Hub. - - -# Getting in touch - -If you want to collaborate on improving metadata on the Hugging Face Hub or have ideas for other related projects, reach out to [Daniel](https://huggingface.co/davanstrien) on Twitter (@vanstriendaniel) or via email (Daniel (at) our website). \ No newline at end of file diff --git a/spaces/limcheekin/WizardCoder-Python-13B-V1.0-GGUF/start_server.sh b/spaces/limcheekin/WizardCoder-Python-13B-V1.0-GGUF/start_server.sh deleted file mode 100644 index 9ec315638ea647912c58381a9409f1bea74d0180..0000000000000000000000000000000000000000 --- a/spaces/limcheekin/WizardCoder-Python-13B-V1.0-GGUF/start_server.sh +++ /dev/null @@ -1,6 +0,0 @@ -#!/bin/sh - -# For mlock support -ulimit -l unlimited - -python3 -B main.py diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/!FULL! Crack Visual WebGui 6 4.md b/spaces/lincquiQcaudo/Top-20-Diffusion/!FULL! Crack Visual WebGui 6 4.md deleted file mode 100644 index c75b849ff95be46f3698984439910fd496909daf..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/!FULL! Crack Visual WebGui 6 4.md +++ /dev/null @@ -1,6 +0,0 @@ - Crack Visual WebGui 6 4Download ——— https://bytlly.com/2uGy6G - -6. 2 Following 6 Followers. Denver, CO, United States. Follow. About. Hello! ... Gizmox Visual WebGui Professional Studio Free Download · izotope ozone 8 Crack Torrent Keygen for Windows, 7, 8, 10 + Full Free Download 1fdad05405 - - - diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Colored Sprite Mod Undertale.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Colored Sprite Mod Undertale.md deleted file mode 100644 index 35946e92776cd3e7e837ddeb6ed26d4121803b5e..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Colored Sprite Mod Undertale.md +++ /dev/null @@ -1,20 +0,0 @@ - - How to Add Color to Your Undertale Experience with the Colored Sprites Mod-Undertale is a beloved indie game that has charmed millions of players with its quirky characters, witty dialogue, and emotional story. But did you know that you can enhance your Undertale experience with a mod that adds color to all the sprites in the game? -The Colored Sprites Mod is a fan-made project that aims to bring more life and detail to the battle and dialogue sprites in Undertale. It was originally created by Pr3tz31 and his team of pixel artists, and later updated by Michael_King for the latest version of Undertale. The mod covers every monster in the game, except for Froggit and Napstablook who are still white like their overworld sprites. -colored sprite mod undertaleDownload 🔗 https://bytlly.com/2uGwCd - The mod is easy to install and compatible with both Windows and Mac versions of Undertale. You just need to download the mod file and replace the original data.win file in your Undertale folder with the modded one. You can also use tools like UTPatcher or TranslaTale to apply the mod without overwriting your data file. -If you want to see how the Colored Sprites Mod changes the look and feel of Undertale, you can watch some videos on YouTube that showcase the mod in action. You can also check out some screenshots on Mod DB or Reddit that compare the original and modded sprites. -The Colored Sprites Mod is a great way to add some variety and freshness to your Undertale playthrough. It gives more personality and expression to the characters, and makes the battles more dynamic and colorful. Whether you are a new or veteran player of Undertale, you might want to give this mod a try and see how it changes your perspective on the game. - -But what if you want to go back to the original sprites after trying out the mod? Don't worry, you can easily uninstall the mod and restore your Undertale game to its default state. Here are the steps to do so: -
That's it! You have successfully uninstalled the Colored Sprites Mod and returned to your vanilla Undertale experience. If you ever want to reinstall the mod, you can follow the same steps as before. -We hope this article helped you learn more about the Colored Sprites Mod and how to install and uninstall it. If you have any questions or feedback, feel free to leave a comment below. And if you enjoyed this article, please share it with your friends who might be interested in Undertale mods. d5da3c52bf- - \ No newline at end of file diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Curvemeister 3.0.12 .rar.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Curvemeister 3.0.12 .rar.md deleted file mode 100644 index 032c54ca0f52a0452bd8121c920a52f94aadbf3d..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Curvemeister 3.0.12 .rar.md +++ /dev/null @@ -1,11 +0,0 @@ - Curvemeister 3.0.12 .rarDownload File 🆓 https://bytlly.com/2uGx5a - -Jan 29, 2019 - Curvemeister 3.0.12 full version - Wondershare Video Converter Ultimate 10.8.7.161 + Crack - Compression2008.zip .rar; Wondershare. -What do you think is the best way to protect all your files? -This may be a question that can be asked many times. -It all depends on how you want to protect your files. -If you want to protect files from being viewed or for files to be protected from being copied, the answer is encryption. -If you have files that you need to protect as well as keep secret, then perhaps you want to create an encrypted drive that will protect those files from tampering. 8a78ff9644 - - - diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Dawn Of War 2 Chaos Rising Trainer Download 2.6.rar.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Dawn Of War 2 Chaos Rising Trainer Download 2.6.rar.md deleted file mode 100644 index 9cf2d77db2c082ce871bc9a5d46d1ceec1df8260..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Dawn Of War 2 Chaos Rising Trainer Download 2.6.rar.md +++ /dev/null @@ -1,78 +0,0 @@ - dawn of war 2 chaos rising trainer download 2.6.rarDownload File ✯✯✯ https://bytlly.com/2uGy07 - -not being copied to the /res folder. - -Added support for Intel Celeron and Atom CPUs, allowing RivaTuner emulation without a lot of hassle. - -Added Turkish, Swedish, and Romanian support. - -Added German, French, and Italian translations. - -Added support for video mode changes using Xorg autoconfiguration. - -Added support for Enigma2 video emulation. - -Fixed ArtDeco support for missing AudioBuffer Channel array. - -Fixed part of the compiler not working when the /nodefaultlib/ option is set. - -Fixed intel instructions by moving some assembly that was previously in the global/compiler.h file. - -Fixed call to ReadFile() from the main engine. - -Fixed a typo in an include. - -Fixed an error with binding a char* to a double*. - -Fixed some memory leaks. - -Fixed GetPalette() and related bits of code. - -Fixed a bug with SVGA memory virtualization. - -Fixed drivers not starting when the CpuVClkClockFreq parameter is set. - -Changed a macro that was using the wrong variable name, causing problems with the timing of the clock engine. - -Changed the use of the printf function, using the sprintf function instead. - -Changed the local var and multiversecount var types from string to uint16_t and uint8_t respectively. - -Changed the base classes for the driver and the core engine. - -Changed the name of the core engine from "CoreEngine" to "Engine". - -Changed the name of the driver from "Driver" to "DriverCore". - -Changed some compiler warnings to errors. - -Changed the name of the ProjectVCLib. - -Changed the name of the Archive. - -Changed the name of the DbgStr. - -Changed the name of the MessageBox. - -Changed the name of the LogBox. - -Changed the name of the Error. - -Changed the names of the units to be more specific. - -Changed the name of the UpdateBox. - -Changed the name of the SaveBox. - -Changed the name of the Buttons. - -Changed the name of the VCLib to the ProjectVCLib. - -Changed the unit names to be clearer. - -Changed the names of some debug files. - -Changed the name of the console class to better match the rest of the code 4fefd39f24 - - - diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Far Cry 3 Trainer 0101 BEST.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Far Cry 3 Trainer 0101 BEST.md deleted file mode 100644 index 20de98888bffb97dd3b4421b45402152b04dd518..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Far Cry 3 Trainer 0101 BEST.md +++ /dev/null @@ -1,6 +0,0 @@ - Far Cry 3 Trainer 0101Download Zip ⚹ https://bytlly.com/2uGvEn - -... https://www.moddb.com/addons/dungeon-keeper-2-bonus-pack-3 ... https://www.moddb.com/downloads/cathartic-farcry3-mod ... https://www.moddb.com/downloads/praetorians-trainer-for-mod-complex-260 https://www.moddb.com/addons/new-sum-flare ... https://www.moddb.com/downloads/full-pr-deviation-v0101Â ... 1fdad05405 - - - diff --git a/spaces/longlian/llm-grounded-diffusion/shared.py b/spaces/longlian/llm-grounded-diffusion/shared.py deleted file mode 100644 index 021bf54db00cbd5dbf6922cf6173a189ff65d528..0000000000000000000000000000000000000000 --- a/spaces/longlian/llm-grounded-diffusion/shared.py +++ /dev/null @@ -1,15 +0,0 @@ -from models import load_sd, sam - - -DEFAULT_SO_NEGATIVE_PROMPT = "artifacts, blurry, smooth texture, bad quality, distortions, unrealistic, distorted image, bad proportions, duplicate, two, many, group, occlusion, occluded, side, border, collate" -DEFAULT_OVERALL_NEGATIVE_PROMPT = "artifacts, blurry, smooth texture, bad quality, distortions, unrealistic, distorted image, bad proportions, duplicate" - - -use_fp16 = False - -sd_key = "gligen/diffusers-generation-text-box" - -print(f"Using SD: {sd_key}") -model_dict = load_sd(key=sd_key, use_fp16=use_fp16, load_inverse_scheduler=False) - -sam_model_dict = sam.load_sam() diff --git a/spaces/luckybender/ChatGPT4/README.md b/spaces/luckybender/ChatGPT4/README.md deleted file mode 100644 index 7938de14e5355209aaae713f289ca469181bbb17..0000000000000000000000000000000000000000 --- a/spaces/luckybender/ChatGPT4/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Chat-with-GPT4 -emoji: 🚀 -colorFrom: red -colorTo: indigo -sdk: gradio -sdk_version: 3.21.0 -app_file: app.py -pinned: false -license: mit -duplicated_from: ysharma/ChatGPT4 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/m3hrdadfi/zabanshenas/libs/normalizer.py b/spaces/m3hrdadfi/zabanshenas/libs/normalizer.py deleted file mode 100644 index f4d79804580a634a74b39107a3742f9fed8cdc1e..0000000000000000000000000000000000000000 --- a/spaces/m3hrdadfi/zabanshenas/libs/normalizer.py +++ /dev/null @@ -1,86 +0,0 @@ -import re -import regex -import sys -import textwrap -from typing import Any, Dict, Optional - -punctuations = [ - '!', '"', '#', '$', '%', '&', "'", '(', ')', '*', '+', ',', '.', - '/', ':', ';', '<', '=', '>', '?', '@', '[', '\\', ']', '^', '_', - '`', '{', '|', '}', '~', '»', '«', '“', '”', "-", -] - - -class Normalizer: - """A general normalizer for every language""" - - _whitelist = r"[" + "\p{N}\p{L}\p{M}" + re.escape("".join(punctuations)) + "]+" - _dictionary = {} - - def __init__( - self, - whitelist: str = None, - dictionary: Dict[str, str] = None, - ) -> None: - self.whitelist = whitelist if whitelist and isinstance(whitelist, str) else self._whitelist - self.dictionary = dictionary if dictionary and isinstance(dictionary, dict) else self._dictionary - - def chars_to_map(self, sentence: str) -> str: - """Maps every character, words, and phrase into a proper one. - - Args: - sentence (str): A piece of text. - """ - if not len(self.dictionary) > 0: - return sentence - - pattern = "|".join(map(re.escape, self.dictionary.keys())) - return re.sub(pattern, lambda m: self.dictionary[m.group()], str(sentence)) - - def chars_to_preserve( - self, - sentence: str, - ) -> str: - """Keeps specified characters from sentence - - Args: - sentence (str): A piece of text. - """ - try: - tokenized = regex.findall(self.whitelist, sentence) - return " ".join(tokenized) - except Exception as error: - print( - textwrap.dedent( - f""" - Bad characters range {self.whitelist}, - {error} - """ - ) - ) - raise - - def text_level_normalizer(self, text: str) -> str: - """A text level of normalization""" - - text = regex.sub(r"([" + re.escape("".join(punctuations)) + "])", r" \1 ", text) - text = text.strip() - - return text - - def __call__( - self, - text: str, - do_lowercase: Optional[bool] = False - ) -> Any: - """Normalization caller""" - - text = self.chars_to_map(text) - text = self.chars_to_preserve(text) - text = self.text_level_normalizer(text) - text = re.sub(r"\s+", " ", text) - - if do_lowercase: - text = text.lower() - - return text diff --git a/spaces/ma-xu/LIVE/pybind11/include/pybind11/buffer_info.h b/spaces/ma-xu/LIVE/pybind11/include/pybind11/buffer_info.h deleted file mode 100644 index 8349a46b8b92f87e9f641b30b7b86617b7f85d50..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/pybind11/include/pybind11/buffer_info.h +++ /dev/null @@ -1,116 +0,0 @@ -/* - pybind11/buffer_info.h: Python buffer object interface - - Copyright (c) 2016 Wenzel Jakob |